Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.
Install with the Tessl CLI:

```
npx tessl i github:affaan-m/everything-claude-code --skill continuous-learning-v259
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it with the Tessl CLI to improve its score:

```
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description reads like internal system documentation rather than a user-facing skill description. It explains the mechanism (instincts, hooks, confidence scoring) but fails to communicate practical use cases or include natural trigger terms users would actually say. The complete absence of 'when to use' guidance makes it nearly impossible for Claude to correctly select this skill.
Suggestions
Add an explicit 'Use when...' clause with natural trigger terms like 'learn from my workflow', 'remember how I do this', 'automate repetitive tasks', or 'improve based on my patterns'.
Replace technical jargon with user-facing language - instead of 'atomic instincts with confidence scoring', describe the benefit: 'learns your preferences and patterns over time'.
Clarify concrete outcomes users can expect, such as 'automatically suggests improvements based on observed patterns' or 'creates reusable automations from repeated actions'.
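Applying these suggestions, an improved description might look like the following frontmatter sketch (the wording and field values are illustrative, not the skill's actual metadata):

```yaml
---
name: continuous-learning
description: >
  Learns your preferences and repeated workflows over time and turns them
  into reusable automations (skills, commands, agents). Use when you want
  Claude to "learn from my workflow", "remember how I do this", "automate
  repetitive tasks", or "improve based on my patterns".
---
```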
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (learning system) and some actions (observes sessions, creates instincts, evolves into skills/commands/agents), but uses abstract terms like 'instinct-based' and 'confidence scoring' without explaining concrete user-facing capabilities. | 2 / 3 |
| Completeness | Describes what the system does internally but completely lacks a 'Use when...' clause or any explicit trigger guidance. Users cannot determine when this skill should be invoked. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('hooks', 'atomic instincts', 'confidence scoring') that users would never naturally say. No common user-facing keywords like 'learn', 'remember', 'automate', or 'improve' are included. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of 'instinct-based learning' and 'evolving into skills' is somewhat unique, but the vague language around 'skills/commands/agents' could overlap with other automation or learning-related skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, actionable guidance for setting up an instinct-based learning system, with good structure and clear examples. The main weaknesses are verbosity in the explanatory sections and the absence of validation steps in the workflow: users have no way to confirm the system is working correctly after setup.
Suggestions
Add a verification step after hook setup (e.g., 'Run a simple command and check that observations.jsonl contains an entry')
Remove or condense the 'Why Hooks vs Skills' section - the comparison table already conveys this information
Add troubleshooting guidance: what to check if instincts aren't being created, how to verify the observer agent is running
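The suggested verification step could be sketched as a small shell check. The default log path is an assumption; point `OBS_LOG` at wherever your hook configuration actually writes `observations.jsonl`:

```shell
# Post-setup sanity check: did the hooks record any observations?
check_observations() {
  log="$1"
  if [ -s "$log" ]; then
    # Count JSONL entries; tr strips wc's leading whitespace on some systems
    count=$(wc -l < "$log" | tr -d ' ')
    echo "hooks OK: $count observation(s) logged in $log"
    return 0
  fi
  echo "hooks NOT capturing: $log is missing or empty" >&2
  return 1
}

# Default path is a placeholder; override with OBS_LOG=/actual/path
check_observations "${OBS_LOG:-.claude/observations.jsonl}" || true
```

Run a trivial command in a session first, then run this check; a non-empty count confirms the observer hook fired.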
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content includes useful information but has some verbosity: the v1 vs v2 comparison table, extensive explanations of the instinct model, and the 'Why Hooks vs Skills' section explain concepts that could be more concise. The marketing-style tagline at the end is unnecessary. | 2 / 3 |
| Actionability | Provides fully executable guidance with concrete JSON configurations, bash commands for directory setup, and specific file paths. The hook configuration examples are copy-paste ready, with both plugin and manual installation variants. | 3 / 3 |
| Workflow Clarity | The Quick Start provides a clear 3-step sequence, and the ASCII diagram shows the data flow well. However, there are no validation checkpoints: no way to verify hooks are working, no feedback loop for troubleshooting failed observations, and no verification that instincts are being created correctly. | 2 / 3 |
| Progressive Disclosure | Well-structured, with clear sections progressing from overview to quick start to detailed configuration. References to external resources (Skill Creator, Longform Guide) are one level deep and clearly signaled. Content is appropriately split between overview and detailed config. | 3 / 3 |
| Total | | 10 / 12 — Passed |
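The hook wiring the review refers to can be sketched as a Claude Code settings fragment. The event name, matcher, and script path below are assumptions for illustration, not the skill's actual configuration:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "*",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/observe.sh" }
        ]
      }
    ]
  }
}
```

Here `observe.sh` stands in for whatever script appends tool-use events to the observation log.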
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.