Skill description: "Operate and extend NanoClaw v2, ECC's zero-dependency session-aware REPL built on claude -p."
Overall score: 50%

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Quality: Passed (no known issues)
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description clearly identifies a unique, named tool (NanoClaw v2) which makes it highly distinctive, but it fails to explain what concrete actions the skill enables or when Claude should select it. The lack of a 'Use when...' clause and the reliance on project-specific jargon significantly limit its effectiveness for skill selection among many options.
Suggestions
Add a 'Use when...' clause specifying trigger scenarios, e.g., 'Use when the user asks about NanoClaw, session-aware REPL workflows, or extending the claude -p pipeline.'
List specific concrete actions the skill covers, e.g., 'Start and manage REPL sessions, add custom commands, handle session persistence, debug pipeline issues.'
Include natural language synonyms a user might use, such as 'interactive shell', 'command-line session', or 'REPL tool'.
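Taken together, the suggestions above might yield a description like the following sketch. The wording, trigger phrases, and the `name` value are illustrative assumptions, not the skill's actual frontmatter:

```yaml
# SKILL.md frontmatter (illustrative rewrite of the description field)
name: nanoclaw          # hypothetical; the skill's real name is not shown in this review
description: >-
  Operate and extend NanoClaw v2, ECC's zero-dependency session-aware REPL
  built on claude -p. Use when the user asks about NanoClaw, interactive
  shell or REPL sessions, session persistence, adding custom slash commands,
  or debugging the claude -p pipeline.
```

This keeps the distinctive identifiers that scored well below while adding the missing 'Use when...' clause and natural-language synonyms.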
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (NanoClaw v2, REPL) and mentions two actions ('operate' and 'extend'), but doesn't list specific concrete actions like 'create sessions', 'manage commands', or 'debug pipelines'. The description is more of a label than a capability list. | 2 / 3 |
| Completeness | It partially answers 'what' (operate and extend NanoClaw v2) but provides no 'when' clause or explicit trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also quite thin, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'NanoClaw', 'REPL', 'session-aware', and 'claude -p', which would match users who know the tool by name. However, it lacks natural language terms a user might say (e.g., 'interactive shell', 'command loop', 'session management') and relies heavily on project-specific jargon. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is highly specific to a named tool ('NanoClaw v2') with distinctive identifiers like 'ECC's zero-dependency session-aware REPL built on claude -p'. This is unlikely to conflict with any other skill. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is admirably concise and well-organized but severely lacks actionability—it reads more like a feature list than an instructional guide. There are no concrete examples of command usage, no sample session workflows, and no references to deeper documentation for the numerous commands listed.
Suggestions
Add concrete usage examples for key commands (e.g., show actual `/branch`, `/compact`, `/search` invocations with expected input/output)
Define a clear workflow sequence for common tasks, such as: start session → work → branch before risky change → validate → compact → export
Reference deeper documentation files for command details (e.g., 'See [COMMANDS.md](COMMANDS.md) for full command reference') or inline brief usage patterns for each slash command
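To illustrate the kind of usage example the suggestions call for, a SKILL.md snippet might look like the sketch below. The command names come from this review (`/branch`, `/compact`, `/search`), but the exact syntax, arguments, and behavior are assumptions, since the skill's actual command reference is not shown here:

```markdown
## Example: branch before a risky change

    /branch pre-refactor     # hypothetical syntax: snapshot the session first
    (make the risky edit, run checks)
    /compact                 # compact the session after the milestone
    /search refactor         # confirm the change is findable in history
```

Even two or three examples in this style would move Actionability and Workflow Clarity off their current 1 / 3 scores.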
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient. Every line serves a purpose: listing capabilities, operating guidance, and extension rules without explaining what a REPL is or how sessions work. No unnecessary padding. | 3 / 3 |
| Actionability | The skill lists slash commands and general guidance but provides no concrete examples, executable code, or specific command invocations. 'Branch before high-risk changes' and 'Compact after major milestones' are vague directives with no illustration of how to actually use these commands. | 1 / 3 |
| Workflow Clarity | The operating guidance lists four loosely ordered tips but doesn't define a clear workflow sequence, lacks validation checkpoints, and provides no feedback loops. There's no guidance on what to do when things go wrong or how steps relate to each other. | 1 / 3 |
| Progressive Disclosure | The content is well-structured with clear sections and is appropriately brief for a SKILL.md overview. However, it doesn't reference any deeper documentation (e.g., the actual claw.js source, examples, or detailed command references) despite listing 8+ commands that likely need more explanation. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation: 90% (10 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
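To clear the remaining frontmatter_unknown_keys warning, the check's own message suggests moving unknown keys under a metadata mapping rather than deleting them. A hedged sketch follows; the offending key names are hypothetical, since the actual frontmatter is not shown in this report:

```yaml
# Before: an unrecognized top-level key triggers the warning
name: nanoclaw
version: "2.0"        # hypothetical unknown key

# After: nest it under metadata, as the warning message suggests
name: nanoclaw
metadata:
  version: "2.0"
```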