Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.
Install with Tessl CLI
npx tessl i github:ysyecust/everything-claude-code --skill continuous-learning-v252
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
17%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description focuses on internal system mechanics rather than user-facing capabilities. It uses technical jargon that users wouldn't naturally use and completely lacks trigger guidance for when Claude should select this skill. The description reads more like an architecture document than a skill selection guide.
Suggestions
Add an explicit 'Use when...' clause with natural trigger terms users would actually say (e.g., 'Use when the user wants to automate repetitive tasks, create custom workflows, or have Claude learn from their patterns').
Replace technical jargon with user-facing language - instead of 'atomic instincts with confidence scoring', describe what benefit users get (e.g., 'learns your preferences and suggests improvements').
Clarify concrete outcomes users can expect rather than internal mechanisms (e.g., 'Automatically creates reusable commands from repeated actions').
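Taken together, the suggestions above might produce frontmatter along these lines. This is a hypothetical sketch only; the skill name and wording are illustrative, not the maintainer's actual description:

```yaml
# Hypothetical SKILL.md frontmatter -- name and wording are illustrative
name: continuous-learning
description: >
  Learns your preferences from repeated actions and suggests reusable
  commands, skills, and agents. Use when the user wants to automate
  repetitive tasks, create custom workflows, or have Claude learn
  from their patterns.
```

Note how the rewrite leads with user-facing benefits and ends with an explicit "Use when..." clause built from terms users would actually say.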
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (learning system) and some actions (observes sessions, creates instincts, evolves into skills/commands/agents), but uses abstract terms like 'instinct-based' and 'confidence scoring' without explaining concrete user-facing capabilities. | 2 / 3 |
| Completeness | Describes what the system does internally but completely lacks a 'Use when...' clause or any explicit trigger guidance. Does not answer when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('hooks', 'atomic instincts', 'confidence scoring') that users would not naturally say. Missing natural trigger terms; users wouldn't ask for 'instinct-based learning' or 'confidence scoring'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The 'instinct-based learning' and 'hooks' terminology is somewhat distinctive, but 'evolves into skills/commands/agents' is vague enough to potentially overlap with other automation or learning-related skills. | 2 / 3 |
| Total | | 6 / 12 Passed |
Implementation
64%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a comprehensive overview of an instinct-based learning system with good actionable examples (JSON configs, bash commands, YAML templates). However, it's somewhat verbose for a SKILL.md overview, mixing quick-start content with detailed explanations that could be in separate files. The workflow lacks explicit validation steps to confirm the system is working correctly after setup.
Suggestions
Add validation steps after quick start (e.g., 'Verify hooks are working: check that observations.jsonl receives entries after your next tool use')
Move detailed sections (confidence scoring explanation, backward compatibility, privacy notes) to separate reference files and link to them
Add a troubleshooting section or validation command to confirm the observer agent is running correctly
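A minimal validation check along these lines could be added to the quick start. The log path below is an assumption; the actual location depends on how the skill's hooks are configured:

```shell
#!/bin/sh
# Hedged sketch: confirm the observer hook is writing events.
# OBS_LOG is an assumed default -- point it at wherever the skill's
# hook configuration actually writes its observation log.
OBS_LOG="${OBS_LOG:-$HOME/.claude/observations.jsonl}"

check_observations() {
  log="$1"
  if [ ! -s "$log" ]; then
    echo "no observations yet: $log"
    return 1
  fi
  # Count recorded events; each line is one JSON observation.
  count=$(wc -l < "$log")
  echo "ok: $count observation(s) in $log"
}
```

Run after the next tool use: a non-zero count confirms the hook fired, while "no observations yet" points at either the hook wiring or the path.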
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient with good use of tables and diagrams, but includes some unnecessary explanations (e.g., explaining why hooks vs skills, backward compatibility details) that could be trimmed or moved to separate files. | 2 / 3 |
| Actionability | Provides concrete, copy-paste ready configuration JSON, bash commands for initialization, and clear YAML examples for instinct format. The quick start section has executable steps. | 3 / 3 |
| Workflow Clarity | The ASCII diagram shows the overall flow well, and quick start has numbered steps, but lacks explicit validation checkpoints. No verification steps after hook setup or directory initialization to confirm things are working. | 2 / 3 |
| Progressive Disclosure | Content is reasonably structured with clear sections, but the file is quite long (~200 lines) with detailed configuration and explanations that could be split into separate reference files. The 'Related' section links to external resources, but the internal documentation structure could be better. | 2 / 3 |
| Total | | 9 / 12 Passed |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
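The one warning is straightforward to clear. Per the check's own advice, unrecognized top-level keys can be moved under a `metadata` block. The key names below are hypothetical, for illustration only:

```yaml
# Before (hypothetical unknown keys at the frontmatter top level):
#   author: ysyecust
#   version: "2.5.2"
#
# After -- unrecognized keys tucked under metadata:
name: continuous-learning-v252
description: ...
metadata:
  author: ysyecust
  version: "2.5.2"
```

Re-running `npx tessl skill review` after the change should confirm whether the warning is resolved.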
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.