Install: `tessl i github:ysyecust/everything-claude-code --skill continuous-learning`

Automatically extract reusable patterns from Claude Code sessions and save them as learned skills for future use.
Validation
Score: 75%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| body_output_format | No obvious output/return/format terms detected; consider specifying expected outputs | Warning |
| Total | 12 / 16 Passed | |
Implementation
Score: 50%

This skill provides a solid conceptual overview of continuous learning with good configuration examples, but falls short on actionability by not showing the actual extraction logic. The comparison section on Homunculus, while informative, bloats the document and should be moved to a separate reference file. The lack of validation steps for reviewing extracted patterns weakens the workflow.
Suggestions
- Move the 'Comparison Notes' and 'Potential v2 Enhancements' sections to a separate RESEARCH.md or COMPARISON.md file.
- Add the actual evaluate-session.sh script content, or at minimum show the core extraction logic.
- Add validation steps: how to review extracted skills, what happens when auto_approve is false, and how to verify skill quality.
- Include an example of what a learned skill output looks like in ~/.claude/skills/learned/ (see the sketch after this list).
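To ground the last suggestion, here is a minimal sketch of what a learned skill under ~/.claude/skills/learned/ might look like. The directory name, the example workflow, and the source_session / auto_approved metadata keys are illustrative assumptions rather than this skill's documented output; only the SKILL.md-plus-frontmatter layout follows the general Claude Code skill convention.

```markdown
<!-- ~/.claude/skills/learned/compose-service-restart/SKILL.md (hypothetical) -->
---
name: compose-service-restart
description: >
  Restart and inspect a failing docker-compose service. Use when a
  compose stack starts but one container keeps exiting.
metadata:
  source_session: "2024-06-14"   # assumed provenance field
  auto_approved: false           # kept pending manual review
---

## Steps
1. Identify the failing service with `docker compose ps`.
2. Check its recent logs with `docker compose logs --tail 100 <service>`.
3. Recreate only that service with `docker compose up -d --force-recreate <service>`.
```

Even a single example of this shape would let readers judge extraction quality before enabling auto_approve.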
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes a lengthy comparison section with another tool (Homunculus) and research notes that add context but aren't essential for using the skill. The core content is lean, but the comparison bloats the document. | 2 / 3 |
| Actionability | Provides concrete JSON configuration examples and hook setup, but the actual extraction logic is delegated to an external script (evaluate-session.sh) without showing what it does (see the hook sketch after this table). The skill describes what happens but doesn't provide the executable implementation. | 2 / 3 |
| Workflow Clarity | The 3-step process (Evaluation → Detection → Extraction) is listed but lacks validation checkpoints. No guidance on what to do if extraction fails, how to verify learned skills are valid, or how to review/approve patterns when auto_approve is false. | 2 / 3 |
| Progressive Disclosure | Has reasonable structure with sections, but the comparison notes and v2 enhancements should be in a separate file rather than inline. References to external resources exist but the main document contains too much tangential content. | 2 / 3 |
| Total | | 8 / 12 |
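For reference on the hook setup noted in the Actionability row, registering a session-evaluation script in Claude Code comes down to a small hooks entry in settings.json. The sketch below is an assumption about how such wiring could look: the Stop event and the ~/.claude/scripts/ path are guesses, and evaluate-session.sh is the script name the review mentions, not a published file.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/scripts/evaluate-session.sh"
          }
        ]
      }
    ]
  }
}
```

Even with the hook wired this way, the gap identified above remains: without the script body, users can trigger the evaluation but cannot inspect or adapt what it extracts.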
Activation
Score: 33%

The description communicates the general purpose of pattern extraction and skill creation but lacks the explicit trigger guidance essential for Claude to know when to select this skill. It uses appropriate third-person voice but relies on abstract terms like 'reusable patterns' without concrete examples of what gets extracted or saved.
Suggestions
- Add a 'Use when...' clause with explicit triggers such as 'when the user asks to save a workflow', 'create a skill from this session', or 'remember this pattern' (see the example frontmatter after this list).
- Specify concrete examples of what patterns are extracted (e.g., 'command sequences, code snippets, multi-step workflows').
- Include natural user phrases that would trigger this skill, such as 'learn from this', 'save this for later', or 'make this reusable'.
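As one concrete way to apply these suggestions, the frontmatter sketch below rewrites the description with an explicit 'Use when...' clause and natural trigger phrases. The wording is illustrative, not the skill's current metadata.

```yaml
# Hypothetical revised SKILL.md frontmatter for this skill
name: continuous-learning
description: >
  Extracts reusable patterns (command sequences, multi-step workflows,
  code snippets) from Claude Code sessions and saves them as learned
  skills. Use when the user says "save this workflow", "remember this
  pattern", "create a skill from this session", or "make this reusable".
```

A description along these lines would directly address the Completeness and Trigger Term Quality gaps scored in the table below.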
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Claude Code sessions, learned skills) and describes the core action (extract patterns, save as skills), but lacks comprehensive detail about which specific patterns are extracted or what 'reusable patterns' means concretely. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2; here the 'when' is entirely absent, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'patterns', 'skills', and 'Claude Code sessions', but misses natural user phrases like 'save this workflow', 'remember how to', 'create a skill', or 'learn from this'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The concept of extracting patterns from sessions is somewhat specific, but 'reusable patterns' and 'learned skills' are vague enough that this could overlap with documentation, template-creation, or general automation skills. | 2 / 3 |
| Total | | 7 / 12 |