
self-improvement

Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks. For CI-only/headless learning capture, use self-improvement-ci.

Overall score: 81 (1.78x)

Quality: 72%
Does it follow best practices?

Impact: 100% (1.78x)
Average score across 3 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/self-improvement/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent trigger term coverage and completeness, clearly specifying both what the skill does and when to use it with six detailed trigger scenarios. Its main weakness is that the core capability description ('captures learnings, errors, and corrections') is somewhat abstract and could benefit from more concrete action verbs describing what specifically happens when the skill is invoked. The disambiguation with the CI variant is a nice touch.

Suggestions

Add more specific concrete actions to the 'what' portion, e.g., 'Logs error details, records user corrections, and stores discovered best practices in a learnings file' instead of the vaguer 'captures learnings, errors, and corrections'.
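Applied to the skill's YAML frontmatter, that sharper wording might read as follows (the exact phrasing below is illustrative, not the author's):

```markdown
---
name: self-improvement
description: Logs error details, records user corrections, and stores discovered best practices in a learnings file. Use when a command fails unexpectedly, the user corrects Claude, a requested capability doesn't exist, an external API or tool fails, knowledge turns out to be outdated, or a better approach is found for a recurring task. For CI-only/headless capture, use self-improvement-ci.
---
```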

Specificity: 2 / 3
The description names the domain ('learnings, errors, and corrections') and the general action ('captures... to enable continuous improvement'), but the specific concrete actions are somewhat vague: 'captures learnings' is abstract. It doesn't list specific operations like 'logs error messages', 'creates correction entries', or 'updates knowledge base'.

Completeness: 3 / 3
Clearly answers both 'what' (captures learnings, errors, and corrections for continuous improvement) and 'when' with an explicit, detailed 'Use when:' clause listing six specific trigger scenarios, plus an additional guidance note about reviewing learnings before major tasks and a pointer to a related CI-specific skill.

Trigger Term Quality: 3 / 3
Excellent coverage of natural trigger terms users would actually say: 'fails unexpectedly', 'No, that's wrong...', 'Actually...', 'doesn't exist', 'outdated or incorrect', 'better approach'. These closely match real user language patterns and correction phrases. Also includes the related skill reference 'self-improvement-ci' for disambiguation.

Distinctiveness / Conflict Risk: 3 / 3
This skill occupies a clear niche (self-improvement and learning capture from errors/corrections) that is unlikely to conflict with other skills. The explicit disambiguation with 'self-improvement-ci' for headless environments further reduces conflict risk.

Total: 11 / 12 (Passed)

Implementation: 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable, with excellent templates, concrete commands, and clear workflows including validation steps. However, it is severely bloated: installation instructions, multi-agent setup guides, gitignore options, hook JSON configs, skill extraction workflows, and basic concept explanations (priority levels, area tags) all compete for token budget. The content desperately needs progressive disclosure: the core logging workflow should be ~50-80 lines, with everything else in reference files.

Suggestions

Move hook integration, multi-agent support, skill extraction, and gitignore options into separate reference files (e.g., references/hooks-setup.md, references/multi-agent.md, references/skill-extraction.md) and link to them from a concise overview section.

Remove the installation/setup section entirely; this is operational metadata, not skill content that teaches Claude how to perform the task.

Delete the Area Tags and Priority Guidelines tables; Claude already understands these concepts and can infer appropriate values from context.

Trim the Detection Triggers section to a single sentence ('Log corrections, feature requests, knowledge gaps, and errors as they occur') since Claude can recognize these situations without example phrases.
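The split proposed in the first suggestion could look roughly like this (the reference file names come from the suggestion itself; the line budget is the reviewer's estimate):

```
skills/self-improvement/
├── SKILL.md                   # ~50-80 lines: core logging workflow + links
└── references/
    ├── hooks-setup.md         # hook JSON configs and installation
    ├── multi-agent.md         # multi-agent setup and gitignore options
    └── skill-extraction.md    # promotion and skill extraction workflows
```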

Conciseness: 1 / 3
Extremely verbose at ~400+ lines. Includes installation instructions, gitignore options, multi-agent setup guides, hook configuration JSON, skill extraction workflows, and extensive metadata field explanations that bloat the skill far beyond what's needed. Much of this (e.g., explaining what areas like 'frontend' and 'backend' mean, priority level definitions, detection trigger phrases) is knowledge Claude already has.

Actionability: 3 / 3
Provides concrete, copy-paste-ready templates for every entry type (learning, error, feature request), executable bash commands for setup and review, specific JSON configurations for hooks, and clear examples of promotion from a verbose learning to a concise CLAUDE.md entry.

Workflow Clarity: 3 / 3
Multi-step processes are clearly sequenced with explicit validation checkpoints. The promotion workflow has clear when/how steps, the simplify-and-harden ingestion workflow has a numbered dedup process, the skill extraction workflow includes dry-run verification and quality-gate checklists, and the resolution workflow has explicit status transitions.

Progressive Disclosure: 1 / 3
This is a monolithic wall of text with everything inline. The hook setup details, multi-agent configuration, OpenClaw integration, skill extraction workflows, and detailed templates should be split into reference files. Only two references are mentioned (references/hooks-setup.md and references/openclaw-integration.md), but the vast majority of content that could be offloaded remains inline, making the skill overwhelming to consume.

Total: 8 / 12 (Passed)
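As a sense of what the 'copy-paste-ready templates' and bash commands praised under Actionability enable, a minimal logging sketch might look like this (the file name, entry fields, and tag values below are assumptions, not the skill's actual template):

```shell
# Hypothetical sketch: append a dated correction entry to a learnings log.
# The file name, header fields, and tag values are illustrative assumptions.
LOG="learnings.md"
{
  printf '## %s | correction | area: backend | priority: high\n' "$(date +%F)"
  printf '%s\n' '- Context: user corrected the assumed default branch name'
  printf '%s\n' '- Learning: this repo uses main, not master'
  printf '\n'
} >> "$LOG"
```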

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 validation checks passed.

Validation for skill structure

skill_md_line_count: Warning
SKILL.md is long (573 lines); consider splitting into references/ and linking.

Total: 10 / 11 (Passed)
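The line-count warning is easy to reproduce locally; a quick check along these lines (the 500-line threshold and the generated stand-in file are illustrative) flags oversized skill files:

```shell
# Create a 573-line stand-in file so the sketch is self-contained;
# in practice you would point this at the real SKILL.md.
printf 'line\n%.0s' $(seq 1 573) > SKILL.md

# Warn when the file exceeds an illustrative 500-line budget.
lines=$(wc -l < SKILL.md)
if [ "$lines" -gt 500 ]; then
  echo "SKILL.md is $lines lines; consider splitting into references/"
fi
```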

Repository: pskoett/pskoett-ai-skills (Reviewed)
