Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
Install with Tessl CLI
npx tessl i github:haniakrim21/everything-claude-code --skill continuous-learning72
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
N/A
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides good, actionable configuration examples and clear documentation of pattern types. However, it is weakened by inline research/comparison notes that belong in separate files, and it lacks validation steps for a system that automatically generates new skills. The core functionality is clear, but the document tries to serve as both a skill guide and research notes.
Suggestions
Move the 'Comparison Notes' section to a separate RESEARCH.md or COMPARISON.md file and link to it
Add validation steps: how to verify extracted skills are correct, how to review before auto-approve, what to do if extraction produces invalid output
Add an example of what a learned skill output looks like in ~/.claude/skills/learned/
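As a sketch of that last suggestion, a learned skill saved under ~/.claude/skills/learned/ might look like the following. The directory name, frontmatter fields, and content are illustrative assumptions based on the standard Claude Code SKILL.md layout, not output confirmed by this skill:

```markdown
<!-- hypothetical path: ~/.claude/skills/learned/retry-with-backoff/SKILL.md -->
---
name: retry-with-backoff
description: Retry flaky network calls with exponential backoff. Extracted from a session pattern.
---

## When to use
Apply when a tool call fails intermittently with transient network errors.

## Steps
1. Wrap the failing call in a retry loop.
2. Double the delay after each failure, up to a maximum.
3. Surface the final error if all retries are exhausted.
```

A concrete example like this would let maintainers review extracted skills before approving them.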
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary content, like the comparison notes section, which adds bulk without being essential for using the skill. The Chinese text is appropriate for the target audience, but the research notes could live in a separate file. | 2 / 3 |
| Actionability | Provides concrete, copy-paste-ready JSON configurations for both config.json and the settings.json hook setup. The pattern types table and file paths are specific and actionable. | 3 / 3 |
| Workflow Clarity | The three-step workflow (evaluate → detect → extract) is mentioned but lacks validation checkpoints. No guidance on what to do if extraction fails or how to verify learned skills are correct. For a system that auto-generates skills, validation is important. | 2 / 3 |
| Progressive Disclosure | References external resources (Longform Guide, /learn command, v2 spec file), but the comparison notes section is inline when it should be in a separate RESEARCH.md or COMPARISON.md file. The main skill content is well structured but bloated by research notes. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
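For reference, the settings.json hook setup the Actionability row refers to typically looks something like the sketch below. The event name and script path here are illustrative assumptions, not values copied from the skill itself:

```json
{
  "hooks": {
    "SessionEnd": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "node ~/.claude/skills/continuous-learning/scripts/extract-patterns.js"
          }
        ]
      }
    ]
  }
}
```

A command hook of this shape runs the extraction script when the event fires, which is how a session-analysis skill can trigger pattern extraction automatically.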
Validation
100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.