Prevent semantic code duplication with capability index and check-before-write
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/code-deduplication/SKILL.md`

Quality
Discovery — 17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too terse and relies on internal jargon ('capability index', 'check-before-write') that users would not naturally use. It lacks a 'Use when...' clause, concrete action verbs, and natural trigger terms, making it difficult for Claude to reliably select this skill from a pool of alternatives.
Suggestions
Add a 'Use when...' clause with natural trigger scenarios, e.g., 'Use when writing new functions, adding features, or when the user asks about avoiding duplicate code, DRY principles, or code reuse.'
Replace jargon with natural keywords users would say, such as 'duplicate code', 'redundant functions', 'code reuse', 'DRY', 'already implemented'.
List specific concrete actions the skill performs, e.g., 'Checks existing codebase for semantically similar functions before writing new code, maintains an index of existing capabilities, and suggests reusing existing implementations instead of creating duplicates.'
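Taken together, the suggestions above could yield a description along these lines (hypothetical wording, not the skill's actual frontmatter):

```yaml
# Hypothetical revised SKILL.md frontmatter -- illustrative only
name: code-deduplication
description: >
  Prevents duplicate and redundant code by checking the existing codebase for
  semantically similar functions before writing new ones, maintaining an index
  of existing capabilities, and suggesting reuse of existing implementations.
  Use when writing new functions, adding features, or when the user mentions
  duplicate code, DRY, code reuse, redundant functions, or code that may
  already exist.
```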
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names a domain (semantic code duplication) and mentions two mechanisms ('capability index' and 'check-before-write'), but doesn't list concrete actions like 'scan codebase for duplicates', 'maintain an index of existing functions', or 'suggest reuse of existing code'. | 2 / 3 |
| Completeness | The description only partially addresses 'what' (prevent duplication) and completely lacks a 'when' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Terms like 'capability index' and 'check-before-write' are internal/technical jargon unlikely to be used by a user naturally. Missing natural trigger terms like 'duplicate code', 'DRY', 'code reuse', 'redundant functions', or 'already exists'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The concept of preventing semantic code duplication is a somewhat specific niche, but the vague phrasing could overlap with general code quality, refactoring, or linting skills without clear boundaries. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill communicates a valuable concept—checking for existing code before writing new code—but massively over-explains it. The extensive hypothetical CODE_INDEX.md example, ASCII art boxes, and optional Vector DB integration bloat the document far beyond what's needed. The core actionable workflow is buried under illustrative examples that Claude doesn't need to see every time this skill is loaded.
Suggestions
Cut the CODE_INDEX.md example to 1-2 categories (10 lines max) instead of 7 full categories with 40+ entries—Claude understands the pattern from a small example.
Move the Vector DB integration, common duplication patterns, and audit output format into separate referenced files (e.g., VECTOR_DB.md, PATTERNS.md) to reduce the main skill to under 100 lines.
Remove the ASCII box diagrams and decision tree art—replace with a simple numbered list that conveys the same workflow in 1/4 the tokens.
Define or remove the slash commands (/update-code-index, /audit-duplicates)—currently they're referenced but never implemented, making key workflow steps non-actionable.
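The core check-before-write step the suggestions point toward can be sketched in a few lines of standard-library Python. This is purely illustrative: it approximates "semantically similar" with name similarity via `difflib`, and does not implement the skill's actual index or the undefined slash commands.

```python
# Sketch of a "check before write" step: before adding a new function,
# scan existing source for functions with similar names and flag them
# as candidates for reuse instead of duplication.
import ast
import difflib


def existing_functions(source: str) -> list[str]:
    """Collect names of functions already defined in a module's source."""
    tree = ast.parse(source)
    return [node.name for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))]


def similar_existing(new_name: str, source: str, cutoff: float = 0.6) -> list[str]:
    """Return existing function names that look like near-duplicates."""
    return difflib.get_close_matches(new_name, existing_functions(source),
                                     n=3, cutoff=cutoff)


module = """
def format_date(d): ...
def parse_config(path): ...
"""

# A proposed "format_dates" helper is flagged against the existing "format_date".
print(similar_existing("format_dates", module))  # → ['format_date']
```

A real implementation would compare documented capabilities (or embeddings) rather than names, but the workflow shape is the same: look up first, write only if nothing close exists.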
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~350+ lines. The massive CODE_INDEX.md example with dozens of hypothetical functions (Button, Modal, Toast, etc.) is illustrative padding that Claude doesn't need. The ASCII box diagrams, decision trees, and extensive anti-pattern examples all repeat the same simple concept: check before writing. The Vector DB section adds significant bulk for an 'optional' feature. | 1 / 3 |
| Actionability | Provides concrete examples of CODE_INDEX.md format, file headers, and function documentation templates, plus executable ChromaDB/LanceDB code. However, the core workflow relies on slash commands (/update-code-index, /audit-duplicates) that are never defined or implemented, making the key actions non-executable. The grep suggestion is actionable but basic. | 2 / 3 |
| Workflow Clarity | The check-before-write process and decision tree are clearly sequenced, and the 'Claude Instructions' section provides session-level workflow. However, there are no validation checkpoints for the index update process itself—no way to verify the index is accurate or complete, and the audit process lacks concrete implementation beyond a checklist template. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inline. The Vector DB section, the extensive CODE_INDEX.md example, the audit output format, and the common duplication patterns could all be separate referenced files. Instead, everything is crammed into one massive document with no references to external files for detailed content. | 1 / 3 |
| Total | | 6 / 12 — Passed |
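The progressive-disclosure fix amounts to a layout roughly like the following (file names beyond those already suggested are hypothetical):

```
skills/code-deduplication/
├── SKILL.md          # core check-before-write workflow, under 100 lines
└── references/
    ├── VECTOR_DB.md  # optional ChromaDB/LanceDB integration
    ├── PATTERNS.md   # common duplication patterns
    └── AUDIT.md      # audit output format and checklist
```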
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (545 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
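The frontmatter_unknown_keys warning suggests moving non-standard keys under a metadata block. A before/after sketch (the key names here are hypothetical, not taken from the skill):

```yaml
# Before: unknown top-level keys trigger frontmatter_unknown_keys
name: code-deduplication
version: 1.2.0        # unknown top-level key
maintainer: jane      # unknown top-level key

# After: non-standard keys nested under metadata
name: code-deduplication
metadata:
  version: 1.2.0
  maintainer: jane
```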