
self-improvement

Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks. For CI-only/headless learning capture, use self-improvement-ci.

Overall score: 85

Quality: 72% - Does it follow best practices?

Impact: 93% (1.86x) - Average score across 6 eval scenarios

Security (by Snyk): Passed - No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/self-improvement/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent completeness and trigger term quality. The 'Use when' clause is thorough with six well-defined scenarios using natural language patterns. The main weakness is that the core capability description ('captures learnings, errors, and corrections') is somewhat abstract and could benefit from more concrete action verbs describing what the skill actually does mechanically.

Suggestions

Replace the abstract 'captures learnings, errors, and corrections' with more concrete actions like 'Logs error details, records user corrections, and stores discovered best practices in a knowledge base to enable continuous improvement.'

Dimension / Reasoning / Score

Specificity

The description names the domain ('learnings, errors, and corrections') and the general action ('captures... to enable continuous improvement'), but the specific concrete actions are somewhat vague — 'captures learnings' is abstract. It doesn't list specific operations like 'logs error messages', 'creates correction entries', or 'updates knowledge base'.

2 / 3

Completeness

Clearly answers both 'what' (captures learnings, errors, and corrections for continuous improvement) and 'when' with an explicit, detailed 'Use when:' clause listing six specific trigger scenarios plus an additional instruction to review learnings before major tasks.

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms users would actually say: 'fails unexpectedly', 'No, that's wrong...', 'Actually...', 'doesn't exist', 'outdated or incorrect', 'better approach'. These closely match real user language patterns and correction phrases. Also includes the related skill reference 'self-improvement-ci' for disambiguation.

3 / 3

Distinctiveness / Conflict Risk

The skill occupies a clear niche around self-improvement and learning capture, which is distinct from typical task-execution skills. It further disambiguates itself from a related skill ('self-improvement-ci') by explicitly noting when to use that alternative instead.

3 / 3

Total: 11 / 12 - Passed

Implementation

55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is highly actionable with excellent concrete templates, commands, and workflows, but it is severely undermined by its verbosity and poor progressive disclosure. At 566 lines, it tries to be both a quick reference and a comprehensive manual in one file, including agent-specific setup for four platforms, hook configurations, skill extraction workflows, and detection trigger lists that could all live in reference files. The content quality is good, but the packaging wastes a significant share of the context window budget.

Suggestions

Move logging templates (Learning Entry, Error Entry, Feature Request Entry) to a separate `references/templates.md` file and link from the quick reference table

Extract agent-specific setup (Claude Code, Codex CLI, GitHub Copilot, OpenClaw sections) into `references/agent-setup.md` - the main skill should just say 'See references/agent-setup.md for agent-specific configuration'

Move the Automatic Skill Extraction section (extraction criteria, workflow, detection triggers, quality gates) to a separate `references/skill-extraction.md` file

Remove the Detection Triggers section entirely - Claude already knows how to recognize corrections, feature requests, and errors from the Quick Reference table without needing example phrases listed
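Taken together, the first three suggestions describe a single refactor. A minimal sketch of that split is below; the target file names come from the suggestions above, but the section-to-file mapping and placeholder contents are assumptions, not the maintainer's actual layout:

```shell
# Sketch of the suggested split; contents are illustrative stand-ins.
skill=skills/self-improvement
mkdir -p "$skill/references"

# Each inlined section moves to its own reference file.
printf '# Logging templates\n'         > "$skill/references/templates.md"
printf '# Agent-specific setup\n'      > "$skill/references/agent-setup.md"
printf '# Skill extraction workflow\n' > "$skill/references/skill-extraction.md"

ls "$skill/references"
```

The main SKILL.md would then link each file from the quick reference table instead of inlining the sections.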

Dimension / Reasoning / Score

Conciseness

Extremely verbose at 566 lines. Includes extensive template boilerplate, multiple promotion examples, agent-specific setup for four different agents, gitignore options, hook integration details, skill extraction workflows, and quality gate checklists. Much of this could live in reference files. Claude doesn't need explanations of what corrections or feature requests sound like, or detailed priority/area tag tables.

1 / 3

Actionability

Highly actionable with concrete markdown templates for each entry type, specific bash commands for searching/reviewing, exact JSON configurations for hooks, and copy-paste ready formats with field-level guidance. The logging formats and grep commands are immediately executable.

3 / 3

Workflow Clarity

Clear multi-step workflows throughout: logging workflow (detect → log → link → promote), resolution workflow (update status → add resolution block), promotion workflow (distill → add → update status), extraction workflow (identify → run helper → customize → update → verify). Includes validation checkpoints like skill quality gates and promotion criteria thresholds (Recurrence-Count >= 3).

3 / 3

Progressive Disclosure

Nearly everything is inlined in one massive file. Hook setup details, agent-specific configurations, skill extraction workflows, template formats, and openclaw integration details are all in the main SKILL.md. Only two references are mentioned (references/hooks-setup.md and references/openclaw-integration.md) but the bulk of content that should be in reference files remains inline. The quick reference table at the top is good but the rest is a monolithic wall.

1 / 3

Total: 8 / 12 - Passed
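The grep-based review commands and entry templates praised under Actionability are not quoted in this report, so the snippet below is only a plausible reconstruction: the log file name (LEARNINGS.md), the field names, and the entry layout are all assumptions. It illustrates the review-before-major-tasks and promotion-threshold (Recurrence-Count >= 3) checks the reasoning describes:

```shell
# Hypothetical learnings log with the kind of fields the review implies.
cat > LEARNINGS.md <<'EOF'
## Error Entry
Status: open
Recurrence-Count: 4
Summary: git rebase fails on protected branch

## Learning Entry
Status: resolved
Recurrence-Count: 1
Summary: prefer --force-with-lease over --force
EOF

# Review open items before starting a major task.
grep -n 'Status: open' LEARNINGS.md

# Find entries that meet the promotion threshold (Recurrence-Count >= 3).
awk -F': ' '/^Recurrence-Count:/ && $2 >= 3' LEARNINGS.md
```

Both commands are plain POSIX tools, which is what makes this style of log immediately executable by an agent without extra tooling.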

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (566 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11 - Passed
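The validator's internals aren't shown here, but the skill_md_line_count warning above amounts to a check like the following. The 500-line threshold is an assumption (the validator may use a different cutoff), and the generated SKILL.md is a stand-in sized to match the 566 lines reported:

```shell
# Create a stand-in SKILL.md for illustration (566 lines, as reported above).
seq 566 | sed 's/.*/line/' > SKILL.md

limit=500  # threshold is an assumption, not the validator's documented value
lines=$(wc -l < SKILL.md | tr -d ' ')
if [ "$lines" -gt "$limit" ]; then
  echo "Warning: SKILL.md is long ($lines lines); consider splitting into references/"
fi
```

A warning rather than a failure matches the result above: the skill still passes validation overall.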

Repository: pskoett/pskoett-ai-skills (Reviewed)
