Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
- Quality: 27% (Does it follow best practices?)
- Impact: not scored; no eval scenarios have been run.
- Validation: Passed; no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./docs/zh-TW/skills/continuous-learning/SKILL.md`

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description conveys the general purpose of the skill but lacks explicit trigger guidance ('Use when...'), specific concrete actions, and natural user-facing keywords. It would benefit from listing specific capabilities and adding clear trigger conditions to help Claude distinguish when to select this skill.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to save a pattern, create a skill, or extract learnings from a session.'
- List more specific concrete actions, e.g., 'Analyzes conversation history, identifies reusable workflows, generates SKILL.md files with proper frontmatter and instructions.'
- Include natural trigger terms users might say, such as 'save this as a skill', 'remember this pattern', 'create a reusable skill', 'SKILL.md'.
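Folding these suggestions together, the skill's frontmatter might look like the following sketch (the field values and wording are illustrative, not the skill's actual metadata):

```yaml
---
name: continuous-learning
description: >
  Extracts reusable patterns from Claude Code sessions and saves them as
  learned skills. Analyzes conversation history, identifies repeated
  workflows, and generates SKILL.md files with proper frontmatter and
  instructions. Use when the user asks to save a pattern, create a
  reusable skill, remember this workflow, or extract learnings from a
  session.
---
```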
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (extracting patterns from Claude Code sessions) and a general action (save as learned skills), but doesn't list multiple specific concrete actions, such as what kinds of patterns, how extraction works, or what formats are saved. | 2 / 3 |
| Completeness | Describes what it does (extract patterns and save as skills) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2; the 'what' is also somewhat vague, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'Claude Code sessions', and 'learned skills', but misses natural user phrases like 'save skill', 'create skill', 'remember this', 'skill extraction', or 'SKILL.md'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of extracting reusable patterns from sessions is somewhat specific, but 'patterns' and 'skills' are broad terms that could overlap with other meta-learning or documentation skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a design document or README than an actionable skill for Claude. It describes what the system should do but lacks the actual implementation details, executable scripts, and validation steps needed to make it work. The comparison notes and v2 enhancement sections consume significant token budget without adding actionable value.
Suggestions
- Remove the comparison notes and v2 enhancements sections entirely, or move them to a separate docs/research.md file — they consume ~40% of the content without helping Claude execute the skill.
- Include the actual evaluate-session.sh script content, or at minimum show the core pattern detection logic with executable code.
- Add validation steps: how to verify an extracted skill is valid, what to do if pattern detection produces low-quality results, and how to handle sessions with insufficient content.
- Add a concrete example showing a sample session pattern and the resulting extracted skill file, so Claude knows exactly what output format to produce.
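For the hook setup the review alludes to, a Claude Code Stop-hook registration in settings.json might look like the following sketch (the script path and file layout are assumptions, not the skill's actual configuration):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/skills/continuous-learning/evaluate-session.sh"
          }
        ]
      }
    ]
  }
}
```

With a registration like this, the evaluation script runs each time a session stops, which is the natural point to decide whether the session contains a pattern worth extracting.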
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains significant unnecessary content: the comparison notes section with Homunculus is research/discussion material that doesn't help Claude execute the skill. The 'Why Stop Hook?' section explains rationale Claude doesn't need. The 'Potential v2 Enhancements' section is speculative roadmap content. Nearly half the document is non-actionable padding. | 1 / 3 |
| Actionability | The skill provides concrete JSON configuration examples and a hook setup snippet, which are useful. However, it lacks the actual implementation — there's no evaluate-session.sh script content, no code showing how pattern detection works, and no example of what an extracted skill looks like. The core functionality is described rather than implemented. | 2 / 3 |
| Workflow Clarity | The three-step workflow (evaluate → detect → extract) is mentioned but not elaborated with any validation or error handling. There's no guidance on what happens if extraction fails, how to verify extracted skills are correct, or how to handle edge cases. For a system that automatically writes files to disk, missing validation steps is a significant gap. | 1 / 3 |
| Progressive Disclosure | The document references external files like `docs/continuous-learning-v2-spec.md` and `config.json`, but no bundle files are provided. The main content mixes overview, configuration, comparison research, and future roadmap in a single file without clear separation. The comparison section should be in a separate reference document. | 2 / 3 |
| Total | | 6 / 12 — Passed |
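A minimal sketch of the detection step the review finds missing, assuming session commands arrive one per line on stdin (the function name, the `MIN_REPEATS` threshold, and the transcript format are all hypothetical, not the skill's actual implementation):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: flag a session as skill-worthy when the same command
# line appears at least MIN_REPEATS times. Not the skill's real script.
set -euo pipefail

MIN_REPEATS="${MIN_REPEATS:-3}"

# Print each command that repeats MIN_REPEATS or more times in the
# transcript given on stdin (one command per line).
detect_repeated_commands() {
  sort | uniq -c | awk -v min="$MIN_REPEATS" '$1 >= min { $1=""; sub(/^ /, ""); print }'
}

printf 'npm test\nnpm test\nnpm test\ngit status\n' | detect_repeated_commands
# prints: npm test
```

A real implementation would also need the validation the table asks for: a check that the extracted SKILL.md parses, and a bail-out path when no command clears the threshold.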
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Skill structure validation: no warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.