Add specs, conventions, constraints, or learnings to project guidelines interactively or automatically
Overall score: 43%

Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
To optimize this skill with Tessl, run:

`npx tessl skill review --optimize ./.codex/skills/spec-add/SKILL.md`

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description provides a reasonable sense of what the skill does—adding various types of information to project guidelines—but lacks explicit trigger guidance ('Use when...') and misses common natural language terms users would employ. The abstract nature of terms like 'specs' and 'learnings' reduces clarity, and the absence of when-to-use guidance significantly hurts completeness.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user wants to update project guidelines, add coding conventions, record lessons learned, or modify CLAUDE.md.'
- Include more natural trigger terms users would say, such as 'project rules', 'coding standards', 'style guide', 'CLAUDE.md', 'project documentation', or 'best practices'.
- Clarify what 'interactively or automatically' means in practice—e.g., 'Prompts the user for input or automatically extracts patterns from code reviews to update guidelines.'
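Putting those suggestions together, the revised frontmatter description might read along these lines (a sketch only; the `name` field and the exact frontmatter schema are assumed from the standard SKILL.md format, not prescribed by the review):

```yaml
---
name: spec-add
description: >-
  Add specs, conventions, constraints, or learnings to project guidelines
  (e.g. CLAUDE.md). Use when the user wants to update project guidelines or
  project rules, add coding conventions or standards, or record lessons
  learned. Runs as an interactive wizard that prompts for input, or applies
  the change automatically from the request.
---
```

The folded block scalar (`>-`) keeps the longer description readable in source while still yielding a single-line value for discovery matching.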
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists some actions ('add specs, conventions, constraints, or learnings') and names the domain ('project guidelines'), but the terms are somewhat abstract—what exactly are 'specs' or 'learnings' in this context? It also mentions 'interactively or automatically' which adds some detail but remains vague about concrete operations. | 2 / 3 |
| Completeness | Describes what the skill does (adds specs/conventions/constraints/learnings to project guidelines) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also only moderately clear, this scores at 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'guidelines', 'conventions', 'constraints', and 'learnings', but misses common natural phrases a user might say such as 'project rules', 'coding standards', 'style guide', 'CLAUDE.md', or 'project config'. The terms are somewhat domain-specific but not comprehensive. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'project guidelines' provides some specificity, but terms like 'specs', 'conventions', and 'constraints' could overlap with skills related to documentation, project configuration, or code review. Without clearer scoping, there's moderate conflict risk. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable and has excellent workflow clarity with complete executable code and clear step-by-step processes. However, it is extremely verbose—embedding hundreds of lines of implementation code, redundant tables, and detailed examples all inline—when much of this could be split into referenced files or condensed significantly. Claude doesn't need full JavaScript implementations spelled out in a skill file; a concise specification of behavior, file paths, and key logic would suffice.
Suggestions
- Extract the full JavaScript implementation code into separate bundle files (e.g., spec-add.js) and reference them from SKILL.md, keeping only the behavioral specification and key decision logic inline.
- Remove redundant information: the execution process flowchart, the implementation steps, and the examples all describe the same logic three times—consolidate into one authoritative representation.
- Trim parameter documentation to essentials; Claude can infer validation rules and default behaviors from concise descriptions rather than exhaustive tables plus code plus examples.
- Move the detailed interactive wizard prompt configurations to a separate reference file, keeping only the high-level flow description in SKILL.md.
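As a sketch of the suggested split (the file and directory names here are illustrative assumptions, not prescribed by the review):

```
spec-add/
├── SKILL.md               # behavioral spec, parameters, high-level flow
└── references/
    ├── spec-add.js        # full JavaScript implementation
    └── wizard-prompts.md  # interactive wizard prompt configurations
```

SKILL.md then links to the files under references/, so the agent loads the heavy implementation detail only when it actually needs it.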
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400+ lines. Massive amounts of implementation detail (full JavaScript parsing code, interactive wizard prompts, file system operations) that Claude could generate from a concise specification. The parameter tables, type subcategory tables, and workflow stage tables repeat information. The full execution process flowchart duplicates what the code already shows. | 1 / 3 |
| Actionability | Highly actionable with complete, executable JavaScript code for every step including argument parsing, auto-detection functions, file writing, and interactive wizard flows. CLI examples with expected outputs are concrete and copy-paste ready. | 3 / 3 |
| Workflow Clarity | Clear multi-step process with explicit branching (interactive vs direct mode), validation at each step (input validation, duplicate checking, file existence checks), and a well-defined execution flowchart. Error handling section covers recovery scenarios including file backup before modification. | 3 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files despite the content being long enough to warrant splitting. Implementation details, parameter references, examples, and error handling are all inline. The full JavaScript implementation code should be in separate files, with SKILL.md serving as an overview. | 1 / 3 |
| Total | | 8 / 12 (Passed) |
Validation — 72%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

8 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (621 lines); consider splitting into references/ and linking | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 (Passed) |
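The two frontmatter warnings could be resolved along these lines (a sketch under assumptions: the tool names and the `metadata` key follow common skill frontmatter conventions, so check your agent's documented tool list before adopting them):

```yaml
---
name: spec-add
description: Add specs, conventions, constraints, or learnings to project guidelines
# Keep only canonical tool names the agent actually exposes:
allowed-tools: Read, Write, Edit
# Move any nonstandard top-level keys under metadata rather than
# leaving them as unknown frontmatter keys:
metadata:
  author: your-team
---
```

This keeps the top level of the frontmatter limited to recognized keys, which is what the `frontmatter_unknown_keys` check appears to be flagging.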