Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
Install with Tessl CLI
npx tessl i github:affaan-m/everything-claude-code --skill continuous-learning

Overall score
61%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
33%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description communicates the core purpose of extracting and saving patterns from Claude Code sessions, but lacks explicit trigger guidance that would help Claude know when to select this skill. It uses appropriate third-person voice and avoids vague fluff, but would benefit significantly from a 'Use when...' clause with natural user trigger terms.
Suggestions
- Add a 'Use when...' clause with explicit triggers such as 'when the user asks to save a workflow', 'remember this approach', 'create a skill from this session', or 'learn this pattern'.
- Include natural user phrases that would trigger this skill, such as 'save for later', 'make this reusable', 'turn this into a template', or 'extract a skill'.
- Specify what types of patterns are extracted (e.g., 'code workflows, command sequences, problem-solving approaches') to improve specificity and distinctiveness.
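Applied together, these suggestions could yield a description like the following sketch (the frontmatter follows SKILL.md conventions; the exact wording is illustrative, not the skill's actual description):

```yaml
---
name: continuous-learning
description: >
  Extracts reusable patterns (code workflows, command sequences, and
  problem-solving approaches) from Claude Code sessions and saves them as
  learned skills. Use when the user asks to save a workflow, remember this
  approach, make this reusable, turn this into a template, or create a
  skill from the current session.
---
```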
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Claude Code sessions, learned skills) and describes the core action (extract reusable patterns, save as skills), but lacks detail about which specific patterns are extracted or what 'learned skills' entails. | 2 / 3 |
| Completeness | Describes what the skill does but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, missing explicit trigger guidance caps this at 2, and the 'when' is entirely absent. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'Claude Code sessions', and 'skills', but misses natural user phrases like 'save this for later', 'remember this', 'create a skill', or 'learn from this session'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of extracting patterns from sessions is somewhat specific, but 'reusable patterns' and 'skills' are broad enough to overlap with other learning, templating, or automation skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation
65%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides good actionable configuration examples and clear hook setup instructions. However, it suffers from including research/comparison content that belongs in a separate file, and lacks validation steps for the extraction workflow. The core functionality is well-documented but the skill would benefit from trimming the comparison section and adding error handling guidance.
Suggestions
- Move the 'Comparison Notes (Research: Jan 2025)' section to a separate RESEARCH.md or COMPARISON.md file and link to it.
- Add validation steps: how to verify patterns were extracted correctly, what to do if evaluate-session.sh fails, and how to review and approve extracted skills.
- Add an example of what a learned skill looks like after extraction (sample output in ~/.claude/skills/learned/).
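To make the last suggestion concrete, here is a sketch of what an extracted skill file might look like; the path, name, and contents are hypothetical, and the actual output format is defined by the skill itself:

```markdown
<!-- hypothetical example: ~/.claude/skills/learned/parallel-test-triage/SKILL.md -->
---
name: parallel-test-triage
description: >
  Isolates and reruns failing tests as a saved command sequence.
  Use when the user asks to debug recurring test failures.
---

## Pattern
1. Rerun the failing suite to confirm the failure reproduces.
2. Bisect the test set to isolate the failing case.
3. Record the fix as a reusable command sequence.
```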
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary content, like the detailed comparison table with Homunculus and research notes that could live in a separate file. The core functionality is explained concisely, but the 'Comparison Notes' section adds significant length. | 2 / 3 |
| Actionability | Provides concrete, copy-paste-ready JSON configurations for both config.json and settings.json hook setup. The pattern types table and configuration options are specific and executable. | 3 / 3 |
| Workflow Clarity | The three-step workflow (Session Evaluation → Pattern Detection → Skill Extraction) is clear but lacks validation checkpoints. No guidance on what to do if extraction fails, how to verify patterns were saved correctly, or how to handle edge cases. | 2 / 3 |
| Progressive Disclosure | Has some structure with clear sections, but the 'Comparison Notes' research section (50+ lines) should be in a separate file. References to external resources exist, but the main file contains too much detail that could be split out. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation
91%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
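To clear the frontmatter_unknown_keys warning, unrecognized top-level keys can be nested under metadata, as the warning suggests. A minimal before/after sketch (the key name category is hypothetical, standing in for whatever unknown key the validator flagged):

```yaml
# Before: unknown top-level key triggers the warning
name: continuous-learning
category: learning

# After: unknown keys moved under metadata
name: continuous-learning
metadata:
  category: learning
```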
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.