Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
- Quality: 48% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./docs/zh-TW/skills/continuous-learning/SKILL.md`

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description communicates the core purpose of pattern extraction and skill creation but lacks explicit trigger guidance for when Claude should select this skill. It uses appropriate third-person voice and avoids vague fluff, but would benefit from natural trigger terms and a clear 'Use when...' clause to help Claude distinguish when to apply this skill.
Suggestions

- Add a 'Use when...' clause with explicit triggers like 'when the user asks to save a workflow', 'learn from this session', or 'create a skill from what we just did'
- Include natural trigger terms users might say: 'remember this', 'save as template', 'learn this pattern', 'extract skill'
- Specify what types of patterns are extracted (e.g., 'command sequences, code transformations, multi-step workflows') to improve specificity
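As a sketch of how these suggestions could combine, a hypothetical revision of the skill's frontmatter description might read as follows (the wording is illustrative, not the skill's actual metadata):

```yaml
---
name: continuous-learning
description: >
  Automatically extract reusable patterns (command sequences, code
  transformations, multi-step workflows) from Claude Code sessions and
  save them as learned skills. Use when the user asks to save a workflow,
  learn from this session, remember this pattern, or create a skill from
  what was just done.
---
```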
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Claude Code sessions, learned skills) and describes the core action (extract reusable patterns, save as skills), but lacks comprehensive detail about what specific patterns are extracted or what 'learned skills' entails. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'Claude Code sessions', and 'skills', but misses common variations users might say such as 'learn from session', 'save workflow', 'remember how to', or 'create skill from'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of extracting patterns from sessions is somewhat specific, but 'reusable patterns' and 'skills' are broad terms that could overlap with other automation, templating, or learning-related skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides good actionable configuration examples and clear pattern categorization, making it easy to implement. However, it suffers from including research/comparison content that belongs in separate files, and lacks validation steps for the learning workflow. The core functionality is well-documented but the skill would benefit from trimming non-essential content.
Suggestions

- Move the 'Comparison Notes' section to a separate RESEARCH.md or COMPARISON.md file, keeping only a brief reference link in the main skill
- Add validation steps: how to verify extracted skills are valid, what to do if extraction fails, and how to review/approve learned patterns
- Add an example of what a learned skill output looks like (sample file in ~/.claude/skills/learned/)
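To illustrate the last suggestion, a minimal learned-skill file might look like this (the path, skill name, and frontmatter values are assumptions for the example, not output from the actual extractor):

```markdown
<!-- ~/.claude/skills/learned/fix-failing-tests/SKILL.md (hypothetical) -->
---
name: fix-failing-tests
description: >
  Run the test suite, locate the first failing test, and apply a minimal
  fix. Use when the user asks to fix failing tests.
---

# Fix failing tests

1. Run the test suite and capture the first failure.
2. Open the failing test and the code under test.
3. Apply the smallest change that makes the test pass, then re-run the suite.
```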
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary sections like the comparison notes and research that could be in a separate file. The core content is well-organized but the v2 enhancement discussion adds bulk without immediate actionability. | 2 / 3 |
| Actionability | Provides concrete, copy-paste ready JSON configurations for both config.json and settings.json. The hook setup command path is specific and the pattern types table gives clear, actionable categories. | 3 / 3 |
| Workflow Clarity | The 3-step workflow (evaluate → detect → extract) is clear but lacks validation checkpoints. No guidance on what happens if extraction fails, how to verify learned skills are valid, or how to handle edge cases. | 2 / 3 |
| Progressive Disclosure | References external resources (Longform Guide, /learn command, v2 spec file) but the comparison notes section is inline when it should be in a separate RESEARCH.md or COMPARISON.md file. The main skill content is well-structured. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
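The hook configuration the Actionability row refers to could look something like the sketch below, assuming Claude Code's `settings.json` hook schema with a `Stop` event; the evaluation script path is hypothetical:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npx tsx ~/.claude/skills/continuous-learning/scripts/evaluate-session.ts"
          }
        ]
      }
    ]
  }
}
```

A `Stop` hook fires when Claude finishes responding, which is a natural point to evaluate the completed session for extractable patterns.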
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.