You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and refactor the provided code to improve its quality, maintainability, and performance.
Does it follow best practices?
Impact: Pending (no eval scenarios have been run).
Passed: no known issues.
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/code-refactoring-refactor-clean/SKILL.md

Quality
Discovery
14%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description reads more like a system prompt persona definition ('You are a code refactoring expert') than a skill description. It uses first/second person framing, lacks concrete actions, has no 'Use when' clause, and is too generic to be distinguishable from other code-related skills. The buzzword-heavy language ('clean code principles, SOLID design patterns, modern software engineering best practices') adds little discriminative value.
Suggestions
Replace the persona-style opening with third-person action verbs listing specific refactoring operations, e.g., 'Extracts methods, renames variables, reduces cyclomatic complexity, applies SOLID principles, and eliminates code duplication.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to refactor code, fix code smells, reduce technical debt, simplify complex functions, or apply design patterns.'
Remove the 'You are...' framing entirely and focus on what the skill does and when to select it, making it clearly distinguishable from general code review or linting skills.
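Putting the suggestions above together, a rewritten frontmatter description might look like the following sketch (the wording is illustrative only, not taken from the skill itself):

```yaml
# Hypothetical SKILL.md frontmatter rewrite combining the suggestions above.
description: >
  Extracts methods, renames variables, reduces cyclomatic complexity,
  applies SOLID principles, and eliminates code duplication. Use when the
  user asks to refactor code, fix code smells, reduce technical debt,
  simplify complex functions, or apply design patterns.
```

Note the third-person action verbs, the explicit "Use when" clause, and the absence of any persona framing.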
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'improve its quality, maintainability, and performance' and 'clean code principles, SOLID design patterns.' It does not list concrete actions like 'extract methods, rename variables, reduce cyclomatic complexity, split classes.' The actions 'analyze and refactor' are extremely broad. | 1 / 3 |
| Completeness | There is no explicit 'Use when...' clause or equivalent trigger guidance. The 'what' is vaguely stated as 'analyze and refactor code,' and the 'when' is entirely missing. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the weak 'what' brings it down further. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'refactoring', 'SOLID', 'clean code', and 'design patterns' that users might mention. However, it misses common natural terms like 'simplify code', 'reduce complexity', 'code smell', 'technical debt', 'DRY', or 'extract method'. | 2 / 3 |
| Distinctiveness / Conflict Risk | This description is extremely generic and would conflict with any code review, code quality, linting, or general coding assistance skill. 'Analyze and refactor code' and 'best practices' could apply to nearly any programming-related skill. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is structurally organized but lacks actionable, concrete guidance — it reads more like a high-level process description than an executable skill. The instructions are entirely abstract with no code examples, specific refactoring patterns, or concrete techniques. It relies heavily on an external playbook file without providing enough standalone value in the main skill content.
Suggestions
Add at least 2-3 concrete before/after code examples demonstrating specific refactoring patterns (e.g., Extract Method, Replace Conditional with Polymorphism) to make the skill actionable.
Include explicit validation checkpoints in the workflow, such as 'Run existing tests after each refactoring step before proceeding' with specific commands or verification approaches.
Remove the 'Context' section and 'Limitations' boilerplate — these restate things Claude already knows and waste tokens.
Replace the vague '$ARGUMENTS' placeholder in Requirements with specific expected inputs (e.g., 'the code to refactor, the target language, any constraints on the refactoring scope').
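As an illustration of the kind of before/after pair the first suggestion calls for, here is a minimal Extract Method example in Python (the function names and logic are hypothetical, not drawn from the skill's own content):

```python
# Before: one function mixes validation, aggregation, and formatting.
def report(orders):
    total = 0
    for o in orders:
        if o["qty"] > 0 and o["price"] >= 0:
            total += o["qty"] * o["price"]
    return f"Total: {total:.2f}"


# After: each concern is extracted into a small, independently testable function.
def is_valid(order):
    return order["qty"] > 0 and order["price"] >= 0


def order_total(orders):
    return sum(o["qty"] * o["price"] for o in orders if is_valid(o))


def report_refactored(orders):
    return f"Total: {order_total(orders):.2f}"
```

Pairing each such example with a check that both versions produce identical output (for instance, `assert report(orders) == report_refactored(orders)` in the existing test suite) would also satisfy the second suggestion's call for explicit validation checkpoints.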
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary sections like 'Context' which restates the obvious, and 'Use this skill when' / 'Do not use this skill when' sections that are somewhat generic. The 'Limitations' section contains boilerplate that Claude already knows. However, it's not excessively verbose. | 2 / 3 |
| Actionability | The instructions are entirely abstract and vague — 'Assess code smells,' 'Propose a refactor plan,' 'Apply changes in small slices' — with no concrete code examples, specific commands, specific patterns, or executable guidance. It describes what to do rather than showing how to do it. | 1 / 3 |
| Workflow Clarity | There is a rough sequence implied (assess → plan → apply → test), but validation checkpoints are vague ('verify regressions,' 'ensure tests pass') without concrete steps. For a refactoring skill involving potentially destructive changes, the lack of explicit validation/feedback loops is a gap. | 2 / 3 |
| Progressive Disclosure | There is a reference to `resources/implementation-playbook.md` for detailed patterns, which is good progressive disclosure. However, the main content itself is thin and doesn't provide enough substance in the overview — it defers too much to the external file without giving a useful quick-start or concrete examples inline. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
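One way to clear the frontmatter_unknown_keys warning, per its own suggestion, is to move any keys the spec does not recognize under a metadata block. A hypothetical sketch (the `author` key is illustrative; the skill's actual unrecognized keys are not shown in this report):

```yaml
# Before: an unrecognized top-level key triggers the warning.
name: code-refactoring-refactor-clean
author: ...

# After: the unknown key is nested under metadata, which the spec allows.
name: code-refactoring-refactor-clean
metadata:
  author: ...
```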