`tessl i github:ysyecust/everything-claude-code --skill continuous-learning-v2`

Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents.
Validation
63%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_output_format | No obvious output/return/format terms detected; consider specifying expected outputs | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 10 / 16 Passed |
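For reference, a minimal frontmatter sketch that would clear the license and metadata warnings above; the license and version values are assumptions rather than values read from the repository, and the description warning is covered under Activation below.

```yaml
---
name: continuous-learning-v2
description: Instinct-based learning system that observes sessions via hooks, creates atomic instincts with confidence scoring, and evolves them into skills/commands/agents. # reworded with a 'Use when...' hint under Activation
license: MIT        # assumed; use the repository's actual license
metadata:           # a dictionary, as the metadata_version check expects
  version: 2.0.0    # illustrative value
  # move any non-standard frontmatter keys here to clear frontmatter_unknown_keys
---
```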
Implementation
65%

This skill provides good actionable setup instructions with concrete configuration examples and clear command references. However, it lacks validation checkpoints for the multi-step setup process and includes some unnecessary context (v1 comparison, lengthy explanations of the instinct model). The workflow would benefit from verification steps to confirm the system is working correctly.
Suggestions
- Add validation steps after hook configuration (e.g., 'Verify hooks are active: run a command and check observations.jsonl for new entries'); a hedged sketch of such a check follows this list.
- Remove or minimize the v1 vs v2 comparison table - Claude doesn't need historical context.
- Add troubleshooting guidance: what to check if instincts aren't being created.
- Move the detailed instinct model explanation and architecture diagram to a separate ARCHITECTURE.md file, keeping only essential quick-start info in SKILL.md.
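A minimal verification sketch for the first suggestion; the observations.jsonl path is an assumption about where the skill writes its log, not a path confirmed from the repository.

```bash
# After configuring hooks, run any command in a session, then check whether
# the observer appended a new entry. The path below is an assumed default;
# adjust it to wherever this skill actually writes its log.
OBS_LOG="$HOME/.claude/skills/continuous-learning-v2/observations.jsonl"

if [ -f "$OBS_LOG" ]; then
  echo "Last observation recorded:"
  tail -n 1 "$OBS_LOG"
else
  echo "No observations.jsonl found at $OBS_LOG - hooks may not be active." >&2
fi
```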
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary elements like the v1 vs v2 comparison table (Claude doesn't need this context) and explanatory prose that could be trimmed. The diagrams and tables add value but some sections are verbose. | 2 / 3 |
| Actionability | Provides concrete, copy-paste ready configuration JSON, bash commands for directory setup, and specific command invocations. The hook configuration examples are complete and executable for both installation methods (a hedged sketch of such a hook entry follows this table). | 3 / 3 |
| Workflow Clarity | The Quick Start provides a clear 3-step sequence, but lacks validation checkpoints. There's no guidance on verifying hooks are working, no error recovery steps if observation fails, and no feedback loop for confirming instincts are being created correctly. | 2 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections, but everything is inline in one file. References to config.json and hook scripts exist but aren't linked. The instinct model explanation and architecture diagram could be in separate reference files. | 2 / 3 |
| Total | | 9 / 12 Passed |
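For orientation, a rough sketch of the general shape of a hooks entry in a Claude Code settings.json; the event name, matcher, and observer script path are assumptions for illustration, since the skill's actual hook configuration is not reproduced in this review.

```jsonc
{
  "hooks": {
    "PostToolUse": [                      // assumed event; the skill may hook others
      {
        "matcher": "Bash|Edit|Write",     // assumed matcher
        "hooks": [
          {
            "type": "command",
            // hypothetical path; point this at the skill's actual observer script
            "command": "~/.claude/skills/continuous-learning-v2/hooks/observe.sh"
          }
        ]
      }
    ]
  }
}
```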
Activation
17%

This description focuses heavily on internal system architecture ('hooks', 'atomic instincts', 'confidence scoring') rather than user-facing capabilities and triggers. It lacks explicit guidance on when Claude should select this skill and uses technical jargon instead of natural language users would employ. The description would benefit from concrete examples of what users can accomplish and clear trigger conditions.
Suggestions
- Add a 'Use when...' clause with natural trigger terms like 'learn from my workflow', 'remember how I do things', 'automate patterns', or 'create shortcuts from my habits'.
- Replace technical jargon with user-facing outcomes, e.g., 'Learns from your repeated actions to suggest automations' instead of 'creates atomic instincts with confidence scoring'.
- Include specific examples of what gets created, e.g., 'Creates reusable commands, skills, or agents based on observed patterns in your sessions'. A hedged rewrite along these lines follows this list.
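One possible rewrite that folds in these suggestions; the wording is illustrative, not the skill author's.

```yaml
description: >
  Learns from your repeated actions during sessions and turns recurring
  patterns into reusable commands, skills, or agents. Use when you want
  Claude to learn from your workflow, remember how you do things, or
  automate repetitive tasks.
```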
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (learning system) and some actions (observes sessions, creates instincts, evolves into skills/commands/agents), but uses abstract terms like 'instinct-based' and 'confidence scoring' without explaining concrete user-facing capabilities. | 2 / 3 |
| Completeness | Describes what the system does internally but completely lacks a 'Use when...' clause or any explicit trigger guidance. Does not tell Claude when to select this skill. | 1 / 3 |
| Trigger Term Quality | Uses technical jargon ('hooks', 'atomic instincts', 'confidence scoring') that users would not naturally say. Missing natural trigger terms like 'learn from my behavior', 'remember patterns', or 'automate repetitive tasks'. | 1 / 3 |
| Distinctiveness Conflict Risk | The concept of 'instinct-based learning' and 'evolves into skills/commands/agents' is somewhat distinctive, but the vague language could overlap with other automation, learning, or agent-related skills. | 2 / 3 |
| Total | | 6 / 12 Passed |