
continuous-learning-v2

Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents. v2.1 adds project-scoped instincts to prevent cross-project contamination.

Quality: 26%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Critical
Do not install without reviewing

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/continuous-learning-v2/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like a technical feature summary or release note rather than a skill description designed for selection. It lacks any 'Use when...' guidance, uses heavy internal jargon instead of natural user-facing trigger terms, and fails to communicate what practical problem a user would be solving when this skill should be activated.

Suggestions

Add an explicit 'Use when...' clause describing the situations that should trigger this skill, e.g., 'Use when the user wants to automatically learn patterns from their workflow and generate reusable skills or commands.'

Replace jargon like 'atomic instincts', 'confidence scoring', and 'cross-project contamination' with natural language terms users would actually say, such as 'learning from usage patterns', 'auto-generating workflows', or 'project-specific automation'.

Reframe the description around user-facing outcomes rather than internal mechanisms — what does the user get out of this system, and what would they ask for that should trigger it?
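Taken together, these suggestions point toward frontmatter along the following lines. The wording is only a sketch of the direction, not the maintainer's copy; `name` and `description` are the standard SKILL.md frontmatter fields.

```yaml
---
name: continuous-learning-v2
description: >
  Learns recurring patterns from your coding sessions and turns them into
  reusable skills, commands, and agents. Use when the user wants to capture
  workflow patterns automatically, generate project-specific automation, or
  review what has been learned so far.
---
```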

Dimension / Reasoning / Score

Specificity

Names some actions like 'observes sessions via hooks', 'creates atomic instincts with confidence scoring', and 'evolves them into skills/commands/agents', but these are domain-specific jargon rather than concrete user-facing actions. The description talks about internal mechanisms rather than what a user would ask it to do.

2 / 3

Completeness

While there is a partial 'what' (observes sessions, creates instincts, evolves into skills), there is no 'when' clause or any explicit trigger guidance for when Claude should select this skill. The description reads more like a changelog than a skill description.

1 / 3

Trigger Term Quality

The keywords used ('instinct-based learning system', 'atomic instincts', 'confidence scoring', 'cross-project contamination') are highly technical internal jargon that no user would naturally say when requesting this functionality. There are no natural trigger terms a user would use.

1 / 3

Distinctiveness Conflict Risk

The concept of 'instinct-based learning' and 'atomic instincts' is fairly niche and unlikely to conflict with common skills, but the mention of 'skills/commands/agents' is broad enough to potentially overlap with other meta-skills or automation tools.

2 / 3

Total: 6 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in scope but severely over-documented for a SKILL.md file. It reads more like a README/documentation page than an actionable skill, with extensive version comparison tables, conceptual explanations, and marketing-style content that inflate token usage without adding actionable value. The core setup instructions are buried among explanatory content, and critical validation steps are missing from the workflow.

Suggestions

Cut version comparison tables, 'Why Hooks vs Skills' section, backward compatibility notes, and the marketing tagline — move these to a separate CHANGELOG.md or README.md if needed.

Add validation checkpoints: after hook setup, show how to verify observations are being recorded (e.g., 'Run a command, then check `cat ~/.claude/homunculus/projects/<hash>/observations.jsonl | tail -1`').

Move the file structure diagram, scope decision guide, and confidence scoring table to separate reference files and link to them from the main skill.

Show the actual content or key logic of observe.sh so Claude can debug or adapt it, rather than just referencing the script path.
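To make the last two suggestions concrete: the review never sees inside observe.sh, so the following is only an illustrative sketch of what an observation-logging hook of this kind can look like, not the skill's actual script. Claude Code hook commands receive the event payload as JSON on stdin; the per-project hashing scheme shown here is an assumption for illustration.

```bash
#!/usr/bin/env bash
# Hypothetical observe.sh: append each hook event to a per-project JSONL log.
set -euo pipefail
project_hash=$(pwd | shasum | cut -d' ' -f1)            # assumed project scoping
log_dir="$HOME/.claude/homunculus/projects/$project_hash"
mkdir -p "$log_dir"
cat >> "$log_dir/observations.jsonl"                     # one JSON event per line
```

With something like that in place, the validation checkpoint suggested above stays a one-liner:

```bash
# Run any tool call through Claude, then confirm an observation was recorded.
tail -n 1 ~/.claude/homunculus/projects/*/observations.jsonl
```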

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~300+ lines. Includes extensive version comparison tables (v1 vs v2, v2 vs v2.1), explanations of concepts Claude already understands (what hooks are, why deterministic > probabilistic), a marketing tagline, backward compatibility notes, and a 'Why Hooks vs Skills' section that explains obvious architectural decisions. Much of this could be cut without losing actionability.

1 / 3

Actionability

Provides concrete JSON configuration for hooks and CLI commands with real arguments, but the core observation/analysis pipeline relies on scripts (observe.sh, instinct-cli.py) whose contents are never shown. The instinct YAML format is shown as an example but there's no executable code for creating or processing instincts. Commands like `/instinct-status` and `/evolve` are listed but their implementation is opaque.

2 / 3

Workflow Clarity

The Quick Start provides a 3-step setup sequence, and the ASCII flow diagram shows the overall pipeline clearly. However, there are no validation checkpoints — no way to verify hooks are firing correctly, no way to confirm observations are being recorded, no error recovery steps if the observer agent fails or if project detection is wrong.

2 / 3

Progressive Disclosure

The content is a monolithic wall of text with many sections that could be split into separate files (scope decision guide, confidence scoring details, version comparison tables, file structure reference). There are external links at the bottom but no references to companion files for detailed topics. The inline content is far too long for a SKILL.md overview.

2 / 3

Total: 7 / 12 (Passed)
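For readers unfamiliar with the configuration the Actionability note refers to, Claude Code hooks are registered as JSON in settings. A generic example of that shape follows; the event name, matcher, and script path are placeholders, not the skill's actual configuration.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "~/.claude/homunculus/observe.sh" }
        ]
      }
    ]
  }
}
```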

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
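The single warning is straightforward to act on: any frontmatter keys outside the spec (the report does not name the offending key) can be nested under `metadata`, which is where the validator's message suggests they live. A hypothetical example:

```yaml
---
name: continuous-learning-v2
description: ...
metadata:
  version: 2.1   # hypothetical key that previously sat at the top level
---
```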

Repository: affaan-m/everything-claude-code (Reviewed)

Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.