Technical debt detection and remediation. Run at session end to find duplicated code, dead imports, security issues, and complexity hotspots. Triggers: 'find tech debt', 'scan for issues', 'check code quality', 'wrap up session', 'ready to commit', 'before merge', 'code review prep'. Always uses parallel subagents for fast analysis.
Overall: 78

- Quality: 70% (Does it follow best practices?)
- Impact: 87% (1.35x average score across 3 eval scenarios)
- Status: Passed, no known issues
Quality
Discovery
92%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that clearly communicates specific capabilities and provides extensive trigger terms covering multiple user scenarios. The explicit listing of trigger phrases and the 'when' context (session end, before merge) make it highly actionable for skill selection. The only weakness is potential overlap with more specialized code quality or security scanning skills due to the breadth of its scope.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'find duplicated code, dead imports, security issues, and complexity hotspots.' Also mentions parallel subagents for fast analysis, adding implementation detail. | 3 / 3 |
| Completeness | Clearly answers both 'what' (technical debt detection and remediation: duplicated code, dead imports, security issues, complexity hotspots) and 'when' (session end, before merge, code review prep, with explicit trigger phrases listed). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would actually say: 'find tech debt', 'scan for issues', 'check code quality', 'wrap up session', 'ready to commit', 'before merge', 'code review prep'. These cover multiple natural phrasings and contexts. | 3 / 3 |
| Distinctiveness / Conflict Risk | While the specific triggers like 'find tech debt' and 'complexity hotspots' are fairly distinctive, terms like 'check code quality', 'code review prep', and 'scan for issues' could overlap with linting skills, security scanning skills, or general code review skills. The scope is broad enough to potentially conflict with more focused tools. | 2 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation
47%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill has a well-structured workflow with clear steps and good safety constraints for the auto-fix mode, but it is severely over-engineered and verbose for a SKILL.md file. It explains many concepts Claude already knows (security patterns, complexity metrics, dead code detection) in exhaustive detail, and much of the inline content (detection patterns, integration examples, troubleshooting) should be in referenced files. The actionability is moderate—while CLI interfaces and config formats are shown, the core implementation details for subagent spawning and actual analysis are conceptual rather than executable.
Suggestions
- Reduce content by 60-70%: move detection patterns, language support tables, integration patterns, and advanced usage into separate reference files, keeping only a concise overview with links in the main SKILL.md
- Remove explanations of well-known concepts (what cyclomatic complexity is, what SQL injection is, etc.) and replace them with just the thresholds and patterns Claude should use
- Provide actual executable subagent spawning code or tool invocations rather than conceptual templates with placeholders
- Cut the emoji decorations, benefits bullet list, and best practices section, which add no actionable information for Claude
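To make the "executable rather than conceptual" suggestion concrete: the skill's actual subagent mechanism is Claude-specific and not shown in the source, but the parallel fan-out pattern the skill describes can be sketched in plain Python. The detector functions below (`scan_duplicates`, `scan_dead_imports`) are hypothetical stubs standing in for real subagent or CLI invocations:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical detector stubs; a real skill would invoke subagents or
# CLI tools (e.g. ast-grep, a linter) here instead.
def scan_duplicates(paths):
    # Toy heuristic: flag files whose name suggests a copy.
    return [{"type": "duplicate", "file": p} for p in paths if p.endswith("_copy.py")]

def scan_dead_imports(paths):
    # Stub: a real detector would parse each file and report unused imports.
    return []

def run_scans(paths):
    # Fan out each detector concurrently, then merge findings into one list
    # for the consolidation/deduplication step the workflow describes.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, paths) for f in (scan_duplicates, scan_dead_imports)]
        findings = []
        for fut in futures:
            findings.extend(fut.result())
        return findings

print(run_scans(["app.py", "app_copy.py"]))
# → [{'type': 'duplicate', 'file': 'app_copy.py'}]
```

The point is not this exact code but the shape: each detector is a callable that returns structured findings, so the consolidation step can deduplicate and rank severity over a uniform list.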
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~350+ lines. Includes extensive explanations of concepts Claude already knows (what cyclomatic complexity is, what dead code is, how security patterns work), detailed ASCII architecture diagrams, emoji decorations, and lengthy tables of well-known patterns. The report template, detection patterns, and integration sections could be dramatically condensed. | 1 / 3 |
| Actionability | Provides concrete CLI commands and configuration examples, but the core scanning logic relies on conceptual descriptions rather than executable code. The subagent instructions are templates with placeholders, and the actual implementation of how to spawn subagents, run ast-grep queries, or calculate complexity is never shown with real executable code. The pre-commit hook and CI/CD examples reference non-existent commands ('claude skill techdebt'). | 2 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with explicit validation and safety rules. The auto-fix mode includes safety constraints (never auto-fix security issues), interactive confirmation, and the consolidation step includes deduplication and severity ranking. The feedback loop for fix mode (confirm → apply → show diff → prompt commit) is well-defined. | 3 / 3 |
| Progressive Disclosure | References two external files (references/patterns.md, references/severity-guide.md) and mentions configuration files, but the main document is a monolithic wall of content. The detection patterns, language support, integration patterns, advanced usage, and troubleshooting sections contain extensive inline detail that should be split into separate reference files, with only summaries in the main skill. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
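As an example of the kind of "real executable code" the Actionability row asks for, one of the skill's detectors (dead imports) fits in a few lines of standard-library Python. This is an illustrative sketch, not the skill's actual implementation; it uses `ast` to compare imported names against names referenced in the module:

```python
import ast

def unused_imports(source: str) -> list[str]:
    """Return imported names that are never referenced in the module."""
    tree = ast.parse(source)
    imported, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # `import a.b` binds the top-level name `a` unless aliased.
                imported.add(alias.asname or alias.name.split(".")[0])
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported.add(alias.asname or alias.name)
        elif isinstance(node, ast.Name):
            used.add(node.id)
    return sorted(imported - used)

src = "import os\nimport sys\nprint(sys.argv)\n"
print(unused_imports(src))  # → ['os']
```

Note this simple version misses dynamic uses such as `__all__` entries or string references, which is exactly the kind of threshold/limitation a condensed SKILL.md should state instead of re-explaining what dead code is.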
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.