Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
Overall score: 35%. Does it follow best practices?

Impact: — (no eval scenarios have been run)

Validation: Passed. No known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/continuous-learning/SKILL.md`

Quality
Discovery
32%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description conveys the general purpose of the skill but lacks explicit trigger guidance ('Use when...'), specific concrete actions, and natural user-facing keywords. It would benefit from listing specific capabilities and adding clear trigger conditions to help Claude distinguish when to select this skill.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to save a pattern, create a skill, or extract learnings from a session.'
List more specific concrete actions, e.g., 'Analyzes conversation history, identifies reusable workflows, generates SKILL.md files with proper frontmatter and instructions.'
Include natural trigger terms users might say, such as 'save this as a skill', 'remember this pattern', 'create a reusable skill', 'SKILL.md'.
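Taken together, these suggestions could be folded into the skill's frontmatter description roughly like this (a hypothetical sketch; the wording below is illustrative, not the skill's actual description):

```yaml
# Illustrative SKILL.md frontmatter incorporating the suggestions above
name: continuous-learning
description: >
  Extracts reusable patterns from Claude Code sessions: analyzes conversation
  history, identifies reusable workflows, and generates SKILL.md files with
  proper frontmatter and instructions. Use when the user asks to save a
  pattern, create a reusable skill, remember this pattern, or extract
  learnings from a session.
```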
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (extracting patterns from Claude Code sessions) and a general action (save as learned skills), but doesn't list multiple specific concrete actions like what kinds of patterns, how extraction works, or what formats are saved. | 2 / 3 |
| Completeness | Describes what it does (extract patterns and save as skills) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric should cap completeness at 2, and the 'what' is also somewhat vague, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'Claude Code sessions', 'learned skills', but misses natural user phrases like 'save skill', 'create skill', 'remember this', 'skill extraction', or 'SKILL.md'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of extracting reusable patterns from sessions is somewhat specific, but 'patterns' and 'skills' are broad terms that could overlap with other meta-learning or documentation skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation
37%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable overview of a continuous learning system with some concrete configuration examples, but falls short on actionability—the core script and output format are never shown. The workflow lacks validation steps for a system that automatically generates files, and the lengthy comparison section with Homunculus adds bulk without helping Claude execute the task. The skill reads more like a design document than an operational guide.
Suggestions
Add the actual `evaluate-session.sh` script content or at minimum show what an extracted skill file looks like (input session → output skill markdown), so Claude can verify correct operation.
Add explicit validation steps: how to review extracted skills, what a good vs bad extraction looks like, and how to handle/rollback incorrect extractions.
Move the Homunculus comparison and v2 enhancement notes to a separate `docs/comparison.md` or `docs/v2-roadmap.md` file—they don't help Claude operate the v1 system.
Remove the 'Why Stop Hook?' section—Claude doesn't need architectural justification, just the setup instructions.
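For context on what the review calls the "hook setup JSON": a minimal Stop-hook registration in Claude Code's `settings.json` might look like the sketch below. The script path is taken from the skill's own references; the surrounding structure follows Claude Code's documented hooks format, but treat the exact shape as an assumption rather than the skill's verbatim config.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash ./skills/continuous-learning/evaluate-session.sh"
          }
        ]
      }
    ]
  }
}
```

With a registration like this, the script runs at the end of each session, which is why the review stresses showing the script's contents and the shape of its output.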
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some useful content but has unnecessary sections like the 'Why Stop Hook?' rationale (Claude doesn't need to be convinced), the lengthy comparison table with Homunculus, and research notes that don't contribute to actionable guidance. The pattern types table is borderline redundant given the config already lists them. | 2 / 3 |
| Actionability | The hook setup JSON and config JSON are concrete and copy-paste ready, which is good. However, the core mechanism—the actual `evaluate-session.sh` script—is never shown or described, and there's no guidance on what the extracted skill files look like, how to review them, or how to handle the auto_approve flow. The skill describes a system but doesn't fully equip Claude to implement or operate it. | 2 / 3 |
| Workflow Clarity | The three-step 'How It Works' section is extremely high-level with no validation checkpoints. There's no guidance on what happens when extraction fails, how to verify extracted skills are correct, how to curate or prune bad extractions, or any feedback loop. For a system that automatically writes skill files, missing validation is a significant gap. | 1 / 3 |
| Progressive Disclosure | The skill references `docs/continuous-learning-v2-spec.md` and `evaluate-session.sh` but no bundle files are provided, so these references are unverifiable. The content is reasonably structured with clear sections, but the comparison/research notes section is lengthy inline content that could be in a separate file, and the referenced paths may be dead links. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
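The `frontmatter_unknown_keys` warning can typically be resolved by nesting nonstandard keys under `metadata`, as the check itself suggests. The report doesn't say which key triggered the warning, so `version` below is a hypothetical stand-in:

```yaml
# Before: a nonstandard top-level key triggers the warning
name: continuous-learning
description: Extracts reusable patterns from Claude Code sessions.
version: "2.0"   # hypothetical unknown key

# After: the unknown key is moved under metadata
name: continuous-learning
description: Extracts reusable patterns from Claude Code sessions.
metadata:
  version: "2.0"
```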
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.