You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncover debt, assess its impact, and create acti…
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill codebase-cleanup-tech-debt50
Quality: 30%
Does it follow best practices?

Impact: 80%
1.66x average score across 3 eval scenarios
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/codebase-cleanup-tech-debt/SKILL.md

Discovery: 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (technical debt analysis) and lists relevant capabilities, but it is significantly weakened by the lack of explicit 'Use when...' guidance and appears to be truncated. It uses second-person voice ('You are'), which violates the third-person requirement, and the actions described remain somewhat abstract rather than concrete.
Suggestions
Add an explicit 'Use when...' clause with trigger terms like 'technical debt', 'tech debt', 'code quality assessment', 'refactoring priorities', or 'legacy code analysis'
Rewrite in third person voice (e.g., 'Identifies, quantifies, and prioritizes technical debt...') instead of 'You are a technical debt expert'
Complete the truncated description and list specific concrete outputs such as 'generates debt inventory reports, calculates remediation costs, produces prioritized refactoring roadmaps'
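Taken together, these suggestions imply frontmatter along the following lines. This is a hedged sketch only: the field names follow common skill-frontmatter conventions and may differ from the Tessl spec, and the listed outputs are taken from the suggestions above rather than from the skill itself.

```yaml
name: codebase-cleanup-tech-debt
description: >
  Identifies, quantifies, and prioritizes technical debt in a codebase.
  Generates debt inventory reports, calculates remediation costs, and
  produces prioritized refactoring roadmaps. Use when asked about
  technical debt, tech debt, code quality assessment, refactoring
  priorities, or legacy code analysis.
```

Note the third-person verb forms, the concrete outputs, and the explicit trigger terms in the 'Use when...' clause, addressing all three suggestions at once.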
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (technical debt) and lists some actions (identifying, quantifying, prioritizing, analyze, assess, create), but uses somewhat abstract language like 'uncover debt' and 'assess its impact' rather than concrete specific actions. | 2 / 3 |
| Completeness | Describes what it does (analyze codebase for technical debt) but completely lacks a 'Use when...' clause or any explicit trigger guidance. The description also appears truncated ('create acti...'). | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'technical debt', 'codebase', and 'prioritizing', but misses common variations users might say such as 'code quality', 'refactoring', 'legacy code', 'code smell', or 'tech debt'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Technical debt analysis is a reasonably specific niche, but without explicit triggers it could overlap with general code review or code quality skills. The focus on 'software projects' and 'codebase' is somewhat generic. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation: 27%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive but bloated tutorial on technical debt rather than a lean, actionable guide. It explains concepts Claude already understands (what code duplication is, what testing debt means) and provides template-style examples rather than executable analysis tools. The content would benefit from aggressive trimming and splitting into focused reference documents.
Suggestions
Remove explanatory content about what the technical debt types are; Claude already knows this. Focus only on project-specific analysis commands and output formats.
Split into SKILL.md (overview + quick reference) with separate files for METRICS_TEMPLATES.md, REFACTORING_PATTERNS.md, and COMMUNICATION_TEMPLATES.md.
Add explicit validation checkpoints: 'Before refactoring, run X to baseline metrics. After each change, verify Y before proceeding.'
Replace illustrative pseudocode with actual executable scripts or tool commands (e.g., specific SonarQube queries, actual complexity analysis commands).
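To illustrate the last suggestion, here is a minimal, stdlib-only sketch of the kind of executable analysis the skill could ship instead of pseudocode: a crude per-function cyclomatic-complexity estimate built on Python's `ast` module. The branch-node set and the sample source are illustrative assumptions, not taken from the skill; a real skill would more likely invoke an established tool such as radon or SonarQube.

```python
import ast

def cyclomatic_complexity(source: str) -> dict:
    """Crude per-function complexity: 1 + the number of branching nodes."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count constructs that add an independent path through the code.
            branches = sum(
                isinstance(n, (ast.If, ast.For, ast.While,
                               ast.BoolOp, ast.ExceptHandler))
                for n in ast.walk(node)
            )
            scores[node.name] = 1 + branches
    return scores

# Hypothetical sample input for demonstration.
sample = """
def flat(x):
    return x

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(cyclomatic_complexity(sample))  # → {'flat': 1, 'branchy': 4}
```

Because it runs and returns concrete numbers, a checkpoint like 'flag any function scoring above 10' becomes directly verifiable rather than illustrative.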
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400 lines, with extensive explanations of things Claude already knows (what technical debt is, basic refactoring patterns, standard metrics). The skill explains concepts like cyclomatic complexity, code duplication, and testing debt in tutorial fashion rather than providing actionable guidance. | 1 / 3 |
| Actionability | Contains some concrete examples (Python code snippets, YAML configs, cost calculations), but much is pseudocode or template-style content. The examples are illustrative rather than executable: they show patterns but aren't copy-paste ready for actual analysis. | 2 / 3 |
| Workflow Clarity | Has numbered sections (1-8) providing sequence, but lacks explicit validation checkpoints or feedback loops. For a skill involving potentially destructive refactoring operations, there is no 'validate before proceeding' pattern or error recovery guidance. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of text with no references to external files. All content is inline, including detailed examples, metrics templates, and implementation strategies that could be split into separate reference documents (e.g., METRICS.md, REFACTORING_PATTERNS.md). | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
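The single warning above can typically be cleared by moving unrecognized top-level frontmatter keys under a nested container, as the check itself suggests. A hedged sketch follows; the offending key and the container name (`metadata`) are assumptions, and the exact allowed keys depend on the Tessl skill spec.

```yaml
# Before: a hypothetical unrecognized top-level key triggers the warning
name: codebase-cleanup-tech-debt
category: cleanup

# After: the same value tucked under a metadata container
name: codebase-cleanup-tech-debt
metadata:
  category: cleanup
```

With the warning resolved, the validation table would read 11 / 11 Passed.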