Identify and analyze unused or redundant code including unused functions/methods, unused variables/imports, unreachable code, and redundant conditions. Use when cleaning up codebases, improving maintainability, reducing technical debt, or conducting code quality audits. Analyzes Python code using AST analysis and produces markdown reports listing dead code locations with line numbers, severity ratings, and recommendations. Triggers when users ask to find dead code, remove unused code, identify unused imports, find unreachable code, or clean up redundant logic.
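The skill's bundled scripts are not reproduced on this page, but the kind of AST analysis the description refers to can be sketched in a few lines. The example below is hypothetical (not the skill's actual implementation) and flags imports that are never referenced; a production tool would also need to handle `__all__`, re-exports, and dynamic attribute access:

```python
import ast

source = """
import os
import sys

def main():
    print(sys.argv)
"""

tree = ast.parse(source)

# Collect every name bound by an import statement.
imported = set()
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        for alias in node.names:
            imported.add(alias.asname or alias.name.split(".")[0])
    elif isinstance(node, ast.ImportFrom):
        for alias in node.names:
            imported.add(alias.asname or alias.name)

# Collect every name actually referenced anywhere in the module.
used = {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

unused = imported - used
print(sorted(unused))  # → ['os']
```

`sys` survives because `sys.argv` references it, while `os` is bound but never read, which is exactly the unused-import case the skill reports with line numbers and severity.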
Overall score: 81

- Quality: 75% (does it follow best practices?)
- Impact: 2.06x, with a 93% average score across 3 eval scenarios
- Status: Passed
- No known issues

Run `npx tessl skill review --optimize ./skills/dead-code-eliminator/SKILL.md` to optimize this skill with Tessl.

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific capabilities (dead code types, AST analysis, markdown reports), comprehensive trigger terms that users would naturally say, explicit 'Use when' and 'Triggers when' clauses, and a clear niche that distinguishes it from general code analysis skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions ('unused functions/methods, unused variables/imports, unreachable code, redundant conditions') and specifies outputs such as 'markdown reports listing dead code locations with line numbers, severity ratings, and recommendations.' | 3 / 3 |
| Completeness | Clearly answers both what (analyze unused/redundant code, produce markdown reports with line numbers and severity) and when (explicit 'Use when...' clause plus 'Triggers when...' clause with specific user phrases). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'find dead code', 'remove unused code', 'identify unused imports', 'find unreachable code', 'clean up redundant logic', plus domain terms like 'technical debt', 'code quality audits', and 'maintainability.' | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on dead code analysis in Python using AST; distinctive triggers like 'dead code', 'unused imports', and 'unreachable code' are unlikely to conflict with general code review or other analysis skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable, executable guidance with clear tool commands and detection strategies. However, it is severely verbose: the inline report template and redundant explanations bloat the content significantly. The workflow is logical but lacks explicit validation gates, and content that belongs in reference files is embedded inline.
Suggestions
- Move the entire 'Generate Report' template section to a separate reference file (e.g., references/report-template.md) and link to it.
- Remove explanatory text about what dead code types are (e.g., 'Code that can never execute'); Claude already knows this.
- Consolidate the 'Common False Positives' section with the 'Verify Findings' section to eliminate redundancy.
- Add an explicit validation checkpoint: 'Before generating report, confirm at least 3 findings have been verified manually.'
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 400+ lines with significant redundancy. The report template alone is ~150 lines of example output. Explains concepts Claude knows (what unreachable code is, what unused imports are), and the 'Common False Positives' section repeats information already covered in the verification steps. | 1 / 3 |
| Actionability | Provides fully executable bash commands and Python code snippets throughout. Tool installation, usage commands, and grep patterns are all copy-paste ready. The bundled scripts have clear invocation syntax with examples. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered (1-6) in a logical sequence, but there are no explicit validation checkpoints. The 'Verify Findings' step exists but has no feedback loop for when verification fails, and no explicit 'stop and check' gates before report generation. | 2 / 3 |
| Progressive Disclosure | References an external file (dead-code-patterns.md) appropriately, but the main skill file is monolithic, with the massive report template inline. The report template and common false positives sections should live in separate reference files. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
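As an illustration of the copy-paste-ready detection strategies the review credits, here is a minimal, hypothetical sketch of one such check (statements that can never execute because they follow a terminating statement in the same block); the skill's actual bundled scripts are not shown on this page and will differ:

```python
import ast

source = """
def f(x):
    return x * 2
    print("never runs")
"""

tree = ast.parse(source)

# Flag any statement that directly follows a return/raise/break/continue
# inside the same body list -- it can never execute.
findings = []
for node in ast.walk(tree):
    body = getattr(node, "body", None)
    if not isinstance(body, list):
        continue
    for prev, stmt in zip(body, body[1:]):
        if isinstance(prev, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
            findings.append((stmt.lineno, "unreachable statement"))

print(findings)  # → [(4, 'unreachable statement')]
```

The `lineno` attribute is what lets a report like this skill's cite exact locations; severity and recommendations would be layered on top of findings like these.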
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (564 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |