tessl i github:sickn33/antigravity-awesome-skills --skill code-refactoring-tech-debt

You are a technical debt expert specializing in identifying, quantifying, and prioritizing technical debt in software projects. Analyze the codebase to uncover debt, assess its impact, and create acti
Activation
33%

The description establishes a clear domain focus on technical debt analysis, but it is truncated and lacks explicit trigger guidance. It uses second-person voice ('You are'), which violates the third-person requirement, and provides no 'Use when...' clause to help Claude know when to select this skill over others.
Suggestions
Add a 'Use when...' clause with trigger terms like 'technical debt', 'code quality assessment', 'refactoring priorities', 'legacy code analysis'
Rewrite in third person voice (e.g., 'Identifies, quantifies, and prioritizes technical debt in software projects') instead of 'You are'
Complete the truncated description and add specific deliverables (e.g., 'generates debt inventory reports, estimates remediation effort, recommends prioritized cleanup tasks')
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (technical debt) and lists some actions (identifying, quantifying, prioritizing, analyze, assess, create), but uses somewhat abstract language like 'uncover debt' and 'assess its impact' rather than concrete, specific actions. | 2 / 3 |
| Completeness | Describes what it does (analyze codebase for technical debt) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. The description also appears truncated ('create acti'). | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'technical debt', 'codebase', 'prioritizing', but misses common variations users might say, such as 'code quality', 'refactoring', 'legacy code', 'code smell', or 'cleanup'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on 'technical debt' provides some distinctiveness, but terms like 'analyze the codebase' and 'software projects' are generic enough to potentially overlap with code review or code analysis skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation
27%

This skill is a comprehensive but overly verbose tutorial on technical debt analysis rather than a lean, actionable guide. It explains many concepts Claude already understands and provides generic templates instead of executable, project-specific guidance. The content would benefit significantly from aggressive trimming and splitting into focused reference documents.
Suggestions
Reduce content by 70%+ by removing explanations of basic concepts (what technical debt is, what cyclomatic complexity means) and keeping only project-specific actionable commands
Split detailed templates (metrics dashboard, cost calculations, implementation patterns) into separate reference files and link to them from a concise overview
Replace illustrative pseudocode with actual executable commands or scripts that can analyze a real codebase (e.g., specific linter commands, actual SonarQube queries)
Add explicit validation checkpoints like 'Verify debt inventory completeness by running X' and 'Confirm impact calculations with stakeholder before proceeding to remediation planning'
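To make the executability suggestion concrete, here is a minimal sketch of what a copy-paste-ready check could look like, using only the Python standard library. The branch-node set and threshold are illustrative assumptions, a crude proxy rather than a real cyclomatic-complexity metric:

```python
import ast

# Crude complexity proxy: 1 + number of branching constructs in the function.
# The node set below is an assumption for illustration, not a standard metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def complexity(fn) -> int:
    """Score one function definition node by counting branch nodes inside it."""
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(fn))

def flag_complex_functions(source: str, threshold: int = 5):
    """Return (name, score) pairs for functions whose score exceeds threshold."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = complexity(node)
            if score > threshold:
                hits.append((node.name, score))
    return hits
```

In a real skill, this sketch would be replaced by an invocation of a maintained tool (e.g., `radon cc -s -a src/` for Python, or a SonarQube query) so the numbers match an industry-standard metric.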
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400 lines, with extensive explanations of concepts Claude already knows (what technical debt is, basic refactoring patterns, standard metrics). The skill explains concepts like cyclomatic complexity, code duplication, and testing debt in tutorial fashion rather than providing actionable guidance. | 1 / 3 |
| Actionability | Contains some concrete examples (Python code snippets, YAML configs, cost calculations), but much is pseudocode or illustrative rather than executable. The examples are generic templates rather than copy-paste-ready commands for actual codebase analysis. | 2 / 3 |
| Workflow Clarity | Steps are numbered and sequenced (sections 1-8), but there are no explicit validation checkpoints and no feedback loops for verifying debt analysis accuracy or confirming remediation success before proceeding. The workflow is more of a checklist than a guided process with error recovery. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline, including detailed examples, metrics templates, and implementation strategies that could be split into separate reference documents (e.g., METRICS.md, REFACTORING_PATTERNS.md). | 1 / 3 |
| Total | | 6 / 12 Passed |
Validation
81%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| Total | | 13 / 16 Passed |
Reviewed