
continuous-learning-v2

Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents. v2.1 adds project-scoped instincts to prevent cross-project contamination.


Quality: 22% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/continuous-learning-v2/SKILL.md

Quality

Discovery

17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like a technical architecture summary or release note rather than a functional skill description. It lacks natural trigger terms users would say, has no explicit 'Use when...' guidance, and relies heavily on internal jargon ('atomic instincts', 'confidence scoring', 'hooks') that obscures what the skill actually does for the user. The version number reference (v2.1) adds noise without aiding skill selection.

Suggestions

Add an explicit 'Use when...' clause describing the scenarios that should trigger this skill, e.g., 'Use when the user wants to automatically learn patterns from sessions and generate reusable skills or commands.'

Replace jargon with natural language trigger terms users would actually say, such as 'learn from usage', 'auto-generate skills', 'pattern recognition', 'session learning'.

Rewrite the description to lead with concrete user-facing actions rather than architectural details, e.g., 'Automatically learns recurring patterns from coding sessions and generates reusable skills, commands, and agents. Supports project-scoped learning to keep patterns isolated per project.'
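
Taken together, these suggestions might yield frontmatter along these lines (a sketch only; the wording and the 'Use when' phrasing are illustrative, not taken from the skill itself):

```yaml
---
name: continuous-learning-v2
description: >
  Automatically learns recurring patterns from coding sessions and generates
  reusable skills, commands, and agents. Supports project-scoped learning to
  keep patterns isolated per project. Use when the user wants to learn from
  usage, auto-generate skills, or set up session-based pattern recognition.
---
```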

Dimension / Reasoning / Score

Specificity

Names some actions like 'observes sessions via hooks', 'creates atomic instincts with confidence scoring', and 'evolves them into skills/commands/agents', but these are domain-specific jargon rather than concrete user-facing actions. The description is more about internal architecture than what it concretely does for the user.

2 / 3

Completeness

While there is a partial 'what' (though expressed in abstract/architectural terms), there is no 'Use when...' clause or any explicit guidance on when Claude should select this skill. The description reads more like a changelog entry than a skill description.

1 / 3

Trigger Term Quality

The description uses highly technical jargon ('atomic instincts', 'confidence scoring', 'hooks', 'cross-project contamination') that users would almost never naturally say. There are no natural trigger terms a user would use when needing this skill.

1 / 3

Distinctiveness Conflict Risk

The concept of 'instinct-based learning' is fairly unique and unlikely to conflict with most other skills, but the mention of 'skills/commands/agents' is broad enough to potentially overlap with other meta-skills or automation tools.

2 / 3

Total: 6 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill content reads more like a product README or documentation page than an actionable skill for Claude. It's heavily padded with version comparison tables, conceptual explanations, and marketing-style content that doesn't help Claude execute tasks. The core actionable content (hook setup, commands, file structure) is buried in verbose surrounding material that could be cut by 60%+ or split into reference files.

Suggestions

Cut the document to ~80 lines: remove version comparison tables, the 'Why Hooks vs Skills' rationale section, the philosophical quote, and the marketing tagline. Claude doesn't need to be sold on the approach.

Move the Scope Decision Guide, Confidence Scoring details, File Structure tree, and Backward Compatibility notes into separate reference files (e.g., SCOPE.md, REFERENCE.md) and link to them from the main skill.

Add validation checkpoints: after hook setup, include a step to verify observations are being captured (e.g., 'Check that observations.jsonl exists and has entries after a few tool calls'). Add error recovery for common failures.

Remove explanations of concepts Claude already understands (what confidence scoring means, why deterministic hooks are better than probabilistic skills) and replace with just the actionable rules.
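
The validation-checkpoint suggestion could be made concrete with a short check like this (the observations path is an assumption; the real location depends on the skill's file structure):

```shell
# Assumed path -- adjust to wherever the skill's observer writes its log.
OBS_FILE="$HOME/.claude/observations.jsonl"

# After hook setup and a few tool calls, verify observations are captured.
if [ -s "$OBS_FILE" ]; then
  echo "OK: $(wc -l < "$OBS_FILE") observation(s) captured"
else
  echo "WARNING: no observations found; check hook registration" >&2
fi
```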

Dimension / Reasoning / Score

Conciseness

The document is extremely verbose at ~300+ lines. It includes extensive version comparison tables (v1 vs v2, v2 vs v2.1), explains concepts Claude already knows (why hooks are better than skills, what confidence scoring means conceptually), includes a philosophical quote, marketing tagline, and multiple sections that could be dramatically condensed. The 'Why Hooks vs Skills' section explains something Claude doesn't need justified.

1 / 3

Actionability

The Quick Start section provides concrete JSON configuration and bash commands, and the commands table is clear. However, much of the document is descriptive rather than instructive — the instinct model explanation, scope decision guide, and confidence scoring sections describe concepts rather than providing executable guidance. The CLI commands shown (python3 instinct-cli.py promote) are concrete but lack complete context for setup.

2 / 3

Workflow Clarity

The Quick Start provides a 3-step setup sequence, and the architecture diagram shows the data flow clearly. However, there are no validation checkpoints — no way to verify hooks are working, no way to confirm observations are being captured, no error recovery steps if the observer fails. For a system involving background agents and file manipulation, this is a significant gap.

2 / 3

Progressive Disclosure

This is a monolithic wall of text with no references to supporting files despite being a complex multi-file system. The file structure, scope decision guide, confidence scoring details, backward compatibility notes, and version comparison tables could all be in separate reference files. No bundle files are provided, and no external references are made to detailed documentation within the skill's own structure.

1 / 3

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
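
A typical resolution for the frontmatter warning above (the `version` key here is hypothetical; the actual offending keys come from the skill's own frontmatter) is to nest nonstandard keys under `metadata`:

```yaml
---
name: continuous-learning-v2
description: Instinct-based learning system ...
# Unknown top-level keys trigger the validator warning; nest them instead:
metadata:
  version: "2.1"
---
```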

Repository: affaan-m/everything-claude-code (Reviewed)
