`tessl i github:sickn33/antigravity-awesome-skills --skill codebase-cleanup-refactor-clean`

> You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and refactor the provided code to improve its quality, maintainability, and performance.
Activation
Score: 33%

The description identifies the refactoring domain but relies on abstract concepts rather than concrete actions. It critically lacks explicit trigger guidance ('Use when...'), which makes it difficult for Claude to know when to select this skill. The use of second-person voice ('You are') violates the rubric's third-person requirement.
Suggestions
Add an explicit 'Use when...' clause with trigger terms like 'refactor', 'clean up code', 'code smell', 'technical debt', 'simplify', 'restructure'.
Replace vague actions with specific concrete capabilities like 'extract methods, apply design patterns, reduce code duplication, simplify conditional logic, improve naming conventions'.
Rewrite in third person voice (e.g., 'Analyzes and refactors code to improve...' instead of 'You are a code refactoring expert').
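Applied together, these three suggestions yield a description along the following lines. This is an illustrative sketch of possible frontmatter wording, not the skill's actual metadata:

```yaml
description: >
  Analyzes and refactors code to improve maintainability and performance:
  extracts methods, applies design patterns, reduces duplication,
  simplifies conditional logic, and improves naming conventions.
  Use when asked to refactor, clean up code, remove code smells,
  reduce technical debt, simplify, or restructure existing code.
```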
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (code refactoring) and mentions some concepts (clean code principles, SOLID design patterns, best practices), but actions are vague ('analyze and refactor', 'improve quality') rather than listing specific concrete actions like 'extract methods', 'reduce cyclomatic complexity', or 'apply dependency injection'. | 2 / 3 |
| Completeness | Describes what it does (analyze and refactor code) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, and the 'what' is also weak. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'refactoring', 'clean code', 'SOLID', but misses common natural variations users might say such as 'code smell', 'technical debt', 'simplify code', 'restructure', or 'code cleanup'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to refactoring but could overlap with general code review skills, code optimization skills, or architecture skills. Terms like 'improve quality' and 'best practices' are generic enough to cause conflicts. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
Score: 35%

This skill provides a reasonable structural framework for refactoring tasks but lacks the concrete, actionable content that would make it useful. It reads more like a high-level process description than executable guidance, with all substantive content deferred to an external playbook. The absence of any code examples, specific refactoring patterns, or concrete decision criteria significantly limits its practical value.
Suggestions
Add 2-3 concrete code examples showing before/after refactoring patterns (e.g., extracting a method, applying dependency injection)
Replace vague instructions like 'Identify high-impact refactor candidates' with specific criteria or heuristics (e.g., 'Functions over 50 lines, classes with more than 5 dependencies')
Include a concrete validation checklist with specific commands or test patterns rather than generic 'validate with tests'
Remove the persona framing ('You are a code refactoring expert...') as Claude already knows how to adopt roles
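As a sketch of what the first suggestion asks for, the pair below shows an extract-method refactor. The function names and order data are invented for illustration and do not come from the skill or its playbook:

```python
# Before: one function mixes validation, aggregation, and formatting.
def report_before(orders):
    total = 0
    for o in orders:
        if o.get("qty", 0) > 0 and o.get("price", 0) > 0:
            total += o["qty"] * o["price"]
    return f"Total: {total:.2f}"


# After: each concern is extracted into a small, intention-revealing helper.
def is_valid(order):
    return order.get("qty", 0) > 0 and order.get("price", 0) > 0


def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders if is_valid(o))


def report_after(orders):
    return f"Total: {order_total(orders):.2f}"
```

Both versions produce identical output for the same input, which is exactly what the validation step of the workflow should assert after each refactor.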
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient but includes some unnecessary framing ('You are a code refactoring expert...'), and the 'Use this skill when / Do not use this skill when' sections add moderate overhead without providing actionable guidance. | 2 / 3 |
| Actionability | Provides only vague, abstract guidance ('Identify high-impact refactor candidates', 'Apply changes with a focus on readability') with no concrete code examples, specific commands, or executable patterns. Defers all concrete content to an external resource. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence (identify, break down, apply, validate) but lack explicit validation checkpoints, specific criteria for 'high-impact candidates', or concrete feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | References an external resource (implementation-playbook.md) appropriately, but the main content is too thin; it is essentially a stub pointing elsewhere rather than a useful overview with well-signaled deep-dives. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
Score: 75%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 12 / 16 (Passed) |
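Three of the four warnings above are frontmatter fixes. The sketch below shows the relevant fields; the field names are inferred from the warnings' wording, and the registry's exact schema may differ:

```yaml
metadata:
  version: 1.0.0  # makes 'metadata' a dictionary, clearing metadata_version
license: MIT      # clears license_field; any valid SPDX identifier works
```

The remaining body_steps warning is resolved in the skill body itself, by numbering the workflow steps (identify, break down, apply, validate) as an ordered list.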
Reviewed