Use when auditing a codebase for semantic duplication - functions that do the same thing but have different names or implementations. Especially useful for LLM-generated codebases where new functions are often created rather than reusing existing ones.
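As an illustration of the kind of duplication the skill targets, here is a minimal, hypothetical Python example (not taken from the skill itself): two functions with different names and different implementations that produce identical results, so a purely name-based or textual search would never connect them.

```python
# Hypothetical illustration: two semantic duplicates with different
# names and styles but identical behavior.

def get_user_ids(users):
    """Imperative version, perhaps written first."""
    result = []
    for user in users:
        result.append(user["id"])
    return result

def extract_ids(records):
    """Comprehension version added later instead of reusing get_user_ids."""
    return [record["id"] for record in records]

# Both produce the same output; only a semantic audit catches the overlap.
rows = [{"id": 1}, {"id": 2}]
assert get_user_ids(rows) == extract_ids(rows) == [1, 2]
```

This is exactly the failure mode common in LLM-generated codebases: a new function is written from scratch because the existing one was not found.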
Install with Tessl CLI

```shell
npx tessl i github:obra/superpowers-lab --skill finding-duplicate-functions91
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong completeness with an explicit 'Use when' clause and good distinctiveness for its specific niche. However, it could benefit from more concrete action verbs describing what the skill actually does (beyond 'auditing') and additional natural trigger terms users might use when seeking this functionality.
Suggestions

- Add specific concrete actions like 'identifies duplicate functions, suggests consolidations, maps redundant implementations'
- Include additional natural trigger terms users might say: 'duplicate code', 'redundant functions', 'DRY violations', 'code cleanup', 'refactor duplicates'
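Folding both suggestions into the existing description, a revised frontmatter entry might read as follows. This is illustrative only; the exact wording is up to the maintainer, and the `description` field name assumes standard YAML skill frontmatter.

```yaml
# Illustrative revision incorporating the suggested action verbs
# and trigger terms; not the skill's actual frontmatter.
description: >
  Use when auditing a codebase for semantic duplication, duplicate code,
  or DRY violations. Identifies duplicate functions, maps redundant
  implementations, and suggests consolidations. Especially useful for
  LLM-generated codebases, where new functions are often created instead
  of reusing existing ones.
```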
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (semantic duplication auditing) and describes the general action (auditing for functions that do the same thing), but doesn't list multiple concrete actions like 'identify duplicates, suggest consolidations, generate reports'. | 2 / 3 |
| Completeness | Clearly answers both what (auditing for semantic duplication: functions doing the same thing with different names/implementations) and when (explicitly starts with 'Use when' and provides specific trigger scenarios, including LLM-generated codebases). | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'auditing', 'codebase', 'semantic duplication', 'LLM-generated', but misses common variations users might say, like 'duplicate code', 'redundant functions', 'code cleanup', 'DRY violations'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Has a clear niche focused specifically on semantic duplication rather than general code quality or refactoring. The mention of 'LLM-generated codebases' further distinguishes it from generic code analysis skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 100%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is an excellent skill file that demonstrates best practices: a clear quick reference table, concrete executable commands, well-sequenced multi-phase workflow with appropriate model selection, and practical guidance on high-risk zones and common mistakes. The content respects Claude's intelligence while providing all necessary specifics for execution.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose; no padding about what semantic duplicates are or why they matter beyond the brief context-setting. | 3 / 3 |
| Actionability | Provides concrete, executable commands with specific flags and options. Each phase has clear bash commands, file paths, and expected outputs. The quick reference table makes the workflow immediately actionable. | 3 / 3 |
| Workflow Clarity | Excellent multi-step workflow with clear sequencing (6 phases), explicit tool/model choices per phase, and validation guidance in Phase 6. The dot diagram visualizes the flow, and the 'Common Mistakes' section provides implicit validation checkpoints. | 3 / 3 |
| Progressive Disclosure | Well-structured with a quick reference table for overview, detailed phases for depth, and clear references to external scripts and prompt files. Content is appropriately split between the skill file and referenced scripts/prompts. | 3 / 3 |
| Total | | 12 / 12 Passed |
Validation — 87%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 14 / 16 checks passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 14 / 16 Passed |
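Both warnings point at frontmatter fields. A plausible fix, assuming the skill uses YAML frontmatter in its skill file (the field shapes below are inferred from the warning text, not from Tessl's published schema, and the license value is illustrative):

```yaml
# Hypothetical frontmatter fragment addressing the two warnings.
license: MIT        # adds the missing 'license' field
metadata:           # 'metadata' as a dictionary rather than a scalar
  version: "1.0.0"
```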
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.