Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
- **Quality:** 47% (Does it follow best practices?)
- **Impact:** 100%; 2.27x average score across 3 eval scenarios
- **Rating:** Risky. Do not use without reviewing.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/continuous-learning/SKILL.md`

## Quality
### Discovery: 57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description communicates a clear and distinctive purpose—extracting reusable patterns from sessions and saving them as skills—but lacks explicit trigger guidance ('Use when...') and could benefit from more specific concrete actions and natural user-facing keywords. It is reasonably distinguishable from other skills but falls short on completeness and trigger term coverage.
**Suggestions**

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to save a pattern, create a skill, remember a workflow, or extract learnings from a session.'
- Include more natural trigger terms users might say, such as 'save skill', 'SKILL.md', 'remember this', 'create a reusable pattern', or 'skill file'.
- List more specific concrete actions, e.g., 'Analyzes session transcripts, identifies repeatable workflows, generates SKILL.md files with frontmatter and instructions.'
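Taken together, these suggestions might produce frontmatter along the following lines (a hypothetical sketch; the exact wording is up to the skill's maintainer):

```yaml
---
name: continuous-learning
description: >
  Extract reusable patterns from Claude Code sessions and save them as
  learned skills. Analyzes session transcripts, identifies repeatable
  workflows, and generates SKILL.md files with frontmatter and
  instructions. Use when the user asks to save a pattern, create a
  skill, remember a workflow, or extract learnings from a session.
---
```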
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (extracting patterns from Claude Code sessions) and a general action (save as learned skills), but doesn't list multiple specific concrete actions like what kinds of patterns, how extraction works, or what formats are saved. | 2 / 3 |
| Completeness | Describes what it does (extract reusable patterns and save as skills) but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'Claude Code sessions', and 'learned skills', but misses natural user phrases like 'save skill', 'create skill file', 'SKILL.md', 'skill extraction', or 'remember this for later'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Extracting patterns from Claude Code sessions and saving them as learned skills is a fairly unique niche that is unlikely to conflict with other skills like general code generation or document processing. | 3 / 3 |
| **Total** | | **9 / 12 (Passed)** |
### Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides good configuration examples and hook setup instructions, but critically lacks the actual implementation details (the evaluate-session.sh script content or pattern extraction logic). The workflow is too high-level with no validation steps for an operation that automatically writes files. The lengthy Homunculus comparison section adds bulk without helping Claude execute the skill.
**Suggestions**

- Include the actual evaluate-session.sh script content, or at minimum describe its expected inputs, outputs, and core logic so Claude can implement or debug it.
- Add validation steps to the workflow: how to verify extracted skills are well-formed, how to handle extraction failures, and how to review/approve patterns before saving.
- Move the 'Comparison Notes (Research: Jan 2025)' section to a separate file (e.g., docs/comparison-notes.md) and link to it, keeping SKILL.md focused on execution.
- Remove the 'Why Stop Hook?' section: Claude doesn't need justification for architectural decisions, just the setup instructions.
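Absent the real script, a minimal sketch of what the core of evaluate-session.sh might describe is shown below. The transcript argument, the learned-skills directory, and the short-session guard are all assumptions, and the actual extraction step is left as a placeholder:

```shell
# Hypothetical core of evaluate-session.sh: called by the Stop hook with
# a session transcript; decides whether extraction is worth attempting.
evaluate_session() {
    transcript="$1"                      # assumed input: transcript path
    skills_dir="${2:-./skills/learned}"  # assumed output directory

    # Guard: skip trivially short sessions unlikely to contain patterns.
    if [ "$(wc -l < "$transcript")" -lt 20 ]; then
        echo "session too short; skipping extraction" >&2
        return 0
    fi

    mkdir -p "$skills_dir"
    # Placeholder for the real extraction logic (e.g. a model call that
    # drafts a SKILL.md from the transcript). Here we only log the run.
    echo "evaluated $(basename "$transcript")" >> "$skills_dir/extraction.log"
}
```

Even documenting this much (inputs, outputs, and failure behavior) in SKILL.md would let Claude implement or debug the script.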
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes useful configuration and setup details, but the Homunculus comparison section is lengthy and tangential: research notes and v2 speculation don't help Claude execute the current skill. The 'Why Stop Hook?' section explains obvious tradeoffs Claude would already understand. | 2 / 3 |
| Actionability | The hook setup JSON is concrete and copy-paste ready, and the config.json is specific. However, the core mechanism, the evaluate-session.sh script, is never shown or described; Claude wouldn't know how to implement the pattern extraction or what the script does. | 2 / 3 |
| Workflow Clarity | The 3-step 'How It Works' is extremely high-level, with no validation checkpoints, no error handling, and no feedback loop. There's no guidance on what happens if extraction fails, how to verify extracted skills are correct, or how to handle edge cases. For a system that automatically writes skill files, missing validation is a significant gap. | 1 / 3 |
| Progressive Disclosure | References to docs/continuous-learning-v2-spec.md and the learned-skills directory are present, but the comparison/research section is inlined when it should be in a separate file. Mixing the main skill content with the research notes makes navigation harder. | 2 / 3 |
| **Total** | | **7 / 12 (Passed)** |
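One way to close the validation gap flagged under Workflow Clarity is a structural check before any generated file is accepted. A sketch, where the required frontmatter keys are assumptions based on typical SKILL.md files:

```shell
# Hypothetical pre-save check for a generated SKILL.md: verifies the
# file opens with a frontmatter block that contains a name and a
# description before it is written into the skills directory.
validate_skill() {
    file="$1"
    # Frontmatter must start on line 1.
    [ "$(head -n 1 "$file")" = "---" ] || { echo "missing frontmatter" >&2; return 1; }
    # Required keys (assumed): name and description.
    frontmatter=$(sed -n '2,/^---$/p' "$file")
    echo "$frontmatter" | grep -q '^name:' || { echo "missing name" >&2; return 1; }
    echo "$frontmatter" | grep -q '^description:' || { echo "missing description" >&2; return 1; }
    return 0
}
```

A failed check could route the draft to a review/approve step instead of silently writing it.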
### Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed.
| Criteria | Description | Result |
|---|---|---|
| `frontmatter_unknown_keys` | Unknown frontmatter key(s) found; consider removing or moving to `metadata` | Warning |
| **Total** | 10 / 11 passed | |
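Per the warning's own suggestion, the remaining check can usually be cleared by moving nonstandard keys under `metadata`. A hypothetical example; the report doesn't name the offending key(s), so `version` here is a stand-in:

```yaml
---
name: continuous-learning
description: Extract reusable patterns from Claude Code sessions and save them as learned skills.
metadata:
  version: "1.0"   # hypothetical stand-in for the unknown key(s)
---
```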