Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
Install with Tessl CLI
npx tessl i github:jdrhyne/agent-skills --skill self-improvement

Overall score: 87%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly articulates its purpose (capturing learnings for continuous improvement) and provides comprehensive, explicit trigger conditions. The numbered list of six specific scenarios makes it very clear when Claude should invoke this skill, and the natural language examples ('No, that's wrong...', 'Actually...') help match real user interactions.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Captures learnings, errors, and corrections' with detailed scenarios like command failures, user corrections, API failures, and discovering better approaches. | 3 / 3 |
| Completeness | Clearly answers both what ('Captures learnings, errors, and corrections to enable continuous improvement') and when with explicit numbered triggers covering six specific scenarios plus guidance to review before major tasks. | 3 / 3 |
| Trigger Term Quality | Includes natural phrases users would say: 'No, that's wrong...', 'Actually...', plus technical but common terms like 'command fails', 'API fails', 'outdated', making it easy to match real user interactions. | 3 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused on learning from errors and corrections, with distinct triggers like user corrections and failed operations that are unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a highly actionable and well-structured skill with excellent workflow clarity and concrete, executable guidance throughout. The main weakness is length: at ~400 lines with detailed templates and configurations inline, it could benefit from better progressive disclosure by moving reference material to separate files. The content is valuable but somewhat verbose for a SKILL.md overview.
Suggestions
Move the detailed markdown templates (Learning Entry, Error Entry, Feature Request Entry) to a separate `assets/TEMPLATES.md` file and reference it from the main skill
Extract the Multi-Agent Support section to a separate `references/multi-agent-setup.md` file since it's configuration-heavy and not needed for basic usage
Consolidate the Quick Reference table and Detection Triggers section to reduce redundancy in describing when to log
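The restructuring suggested above can be sketched as follows (the directory names follow the `assets/` and `references/` conventions the skill already references; the exact file names are illustrative, not the skill's actual layout):

```shell
# Hypothetical split of inline reference material out of SKILL.md
mkdir -p assets references

# Entry templates (Learning, Error, Feature Request) move to a single assets file
touch assets/TEMPLATES.md

# Configuration-heavy multi-agent setup moves to its own reference file
touch references/multi-agent-setup.md

# SKILL.md would then link to these instead of inlining them
ls assets references
```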
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some redundancy (e.g., multiple similar tables, repeated explanations of when to log). The templates and examples are valuable but could be more condensed. Some sections like 'Best Practices' restate what's already covered elsewhere. | 2 / 3 |
| Actionability | Excellent actionability with complete, copy-paste ready templates for all entry types, concrete bash commands for setup and querying, specific JSON configurations for hooks, and clear examples throughout. Every section provides executable guidance. | 3 / 3 |
| Workflow Clarity | Clear multi-step workflows with explicit validation checkpoints. The skill extraction workflow includes dry-run verification, the resolution process has clear status transitions, and the periodic review section provides concrete grep commands for status checks. Quality gates provide explicit checklists. | 3 / 3 |
| Progressive Disclosure | The skill is quite long (~400 lines) and could benefit from splitting detailed content (templates, hook configurations, multi-agent setup) into separate reference files. References to 'references/hooks-setup.md' and 'assets/' are good, but much inline content could be externalized. | 2 / 3 |
| Total | | 10 / 12 Passed |
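The periodic status check praised in the Workflow Clarity row can be sketched as a one-liner like the one below (the `LEARNINGS.md` file name and the `Status:` field format are assumptions for illustration, not the skill's documented layout):

```shell
# Build a tiny sample learnings log, then count entries still open
printf 'Status: open\nStatus: resolved\nStatus: open\n' > LEARNINGS.md

# grep -c prints the number of matching lines
grep -c 'Status: open' LEARNINGS.md
```

Run periodically, a count above zero signals unresolved learnings worth reviewing before the next major task.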
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (501 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | 13 / 16 Passed | |
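The `metadata_version` and `license_field` warnings above could be cleared with SKILL.md frontmatter along these lines (field names and values are a sketch; check the skill spec for the exact schema your validator expects):

```yaml
---
name: self-improvement
description: Captures learnings, errors, and corrections to enable continuous improvement.
# 'license' was missing entirely
license: MIT
# 'metadata' must be a dictionary, not a scalar
metadata:
  version: 1.0.0
---
```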
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.