
continuous-learning-v2

Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.


Quality: 33%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./docs/zh-TW/skills/continuous-learning-v2/SKILL.md

Quality

Discovery: 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description focuses on internal system mechanics rather than user-facing capabilities. It uses technical jargon that users wouldn't naturally use and completely lacks trigger guidance for when Claude should select this skill. The description reads more like an architecture document than a skill selection guide.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms users would actually say (e.g., 'Use when the user wants to automate repetitive tasks, create custom workflows, or have Claude learn from their patterns').

Replace technical jargon with user-facing language - instead of 'atomic instincts with confidence scoring', describe what benefit users get (e.g., 'learns your preferences and suggests improvements').

Clarify concrete outcomes users can expect rather than internal mechanisms (e.g., 'Automatically creates reusable commands from repeated actions').
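Applied to this skill, the suggestions above might produce frontmatter along these lines. This is an illustrative sketch only: the keys follow common SKILL.md frontmatter conventions, and the wording is invented here rather than taken from the skill itself.

```yaml
# Hypothetical rewrite of the skill's frontmatter (a sketch, not the actual file)
name: continuous-learning-v2
description: >
  Learns your preferences and repeated patterns from past Claude Code sessions
  and turns them into reusable skills, commands, and agents. Use when the user
  wants Claude to learn from their workflow, automate repetitive tasks, or
  create custom commands from actions they perform often.
```

Note how the last sentence supplies the 'Use when...' trigger clause and replaces internal mechanics ('atomic instincts', 'confidence scoring') with user-facing outcomes.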

Dimension / Reasoning / Score

Specificity

Names the domain (learning system) and some actions (observes sessions, creates instincts, evolves into skills/commands/agents), but uses abstract terms like 'instinct-based' and 'confidence scoring' without explaining concrete user-facing capabilities.

2 / 3

Completeness

Describes what the system does internally but completely lacks a 'Use when...' clause or any explicit trigger guidance. Does not answer when Claude should select this skill.

1 / 3

Trigger Term Quality

Uses technical jargon ('hooks', 'atomic instincts', 'confidence scoring') that users would not naturally say. Missing natural trigger terms - users wouldn't ask for 'instinct-based learning' or 'confidence scoring'.

1 / 3

Distinctiveness / Conflict Risk

The 'instinct-based learning' and 'hooks' terminology is somewhat distinctive, but 'evolves into skills/commands/agents' is vague enough to potentially overlap with other automation or learning-related skills.

2 / 3

Total: 6 / 12 (Passed)

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive conceptual overview of an instinct-based learning system with good visual diagrams and structured tables. However, it falls short on actionability by referencing scripts and commands without providing their implementations, and lacks validation steps to confirm the system is working correctly. The content would benefit from splitting detailed reference material into separate files and providing complete, executable code.

Suggestions

Provide the actual content of observe.sh and start-observer.sh scripts, or clearly indicate where users can find/generate them

Add validation checkpoints after setup steps (e.g., 'Verify hooks are working: run `cat ~/.claude/homunculus/observations.jsonl` after a session')

Split detailed configuration options, confidence scoring rules, and file structure documentation into separate reference files

Include a minimal working example showing the complete flow from observation to instinct creation
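As a rough illustration of the 'minimal working example' suggestion, the observation step could look something like the sketch below. The script name, log path, and JSON shape are assumptions inferred from the review's own references (observe.sh, `~/.claude/homunculus/observations.jsonl`); the skill does not actually ship this code.

```shell
# Hypothetical observe.sh-style hook logger. The skill references observe.sh
# without providing it, so everything here is an illustrative assumption.
LOG_DIR="${HOMUNCULUS_DIR:-$HOME/.claude/homunculus}"
LOG_FILE="$LOG_DIR/observations.jsonl"
mkdir -p "$LOG_DIR"

# A Claude Code hook receives the event as JSON on stdin; append it to the
# log as one timestamped JSON line per observation.
observe() {
  event=$(cat)
  printf '{"ts":"%s","event":%s}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$event" >> "$LOG_FILE"
}

# Demo with a synthetic event (in a real hook, stdin comes from Claude Code):
echo '{"tool":"Bash","command":"ls"}' | observe

# Validation checkpoint, as suggested above: confirm observations are landing.
tail -n 1 "$LOG_FILE"
```

A later instinct-creation step would then read observations.jsonl and look for repeated patterns; since the skill does not document how its confidence scoring works, that part is not sketched here.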

Dimension / Reasoning / Score

Conciseness

The skill is reasonably efficient with good use of tables and diagrams, but includes some unnecessary explanations (e.g., explaining why hooks vs skills, backward compatibility details) that could be trimmed or moved to separate files.

2 / 3

Actionability

Provides concrete JSON config and bash commands for setup, but critical components are missing: the actual observe.sh hook script content, the start-observer.sh script, and the command implementations are not provided—only referenced.

2 / 3

Workflow Clarity

The ASCII diagram shows the overall flow clearly, and setup steps are numbered, but there are no validation checkpoints (e.g., how to verify hooks are working, how to confirm observer is running correctly, what to do if observations aren't being captured).

2 / 3

Progressive Disclosure

Good structure with tables and sections, but the document is monolithic—detailed config, file structure, confidence scoring, and integration details could be split into separate reference files. External links exist but internal documentation structure is flat.

2 / 3

Total: 8 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)
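The frontmatter_unknown_keys warning is typically resolved by moving nonstandard keys under a metadata block, as the warning text itself suggests. The review does not say which key triggered it, so the key below is purely hypothetical:

```yaml
# Before (hypothetical): an unrecognized top-level key triggers the warning
name: continuous-learning-v2
version: "2.0"   # unknown frontmatter key

# After: nest it under metadata so validation passes cleanly
name: continuous-learning-v2
metadata:
  version: "2.0"
```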

Repository: haniakrim21/everything-claude-code (Reviewed)
