
jbvc/code-refactoring-refactor-clean

You are a code refactoring expert specializing in clean code principles, SOLID design patterns, and modern software engineering best practices. Analyze and refactor the provided code to improve its quality, maintainability, and performance.

Quality: 46% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)


Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (code refactoring) and mentions relevant methodologies (SOLID, clean code), but it lacks concrete action specifics and has no explicit trigger guidance ('Use when...'). It also uses second-person framing ('You are...') which is inappropriate for a skill description — it reads more like a system prompt than a skill selector. The vague outcome language ('improve quality, maintainability, and performance') doesn't help Claude distinguish this skill from other code-related skills.

Suggestions

Add an explicit 'Use when...' clause with trigger terms like 'refactor', 'clean up code', 'code smell', 'technical debt', 'simplify', 'SOLID principles'.

Replace vague outcomes with specific concrete actions, e.g., 'Extracts methods, renames variables, decomposes large classes, removes code duplication, applies SOLID principles'.

Rewrite in third person voice ('Analyzes and refactors code...') instead of the current second-person system-prompt style ('You are a code refactoring expert').
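
Taken together, the three suggestions might produce frontmatter along these lines. This is a hypothetical sketch; the field names (`name`, `description`) are assumed from common SKILL.md conventions, not taken from this skill's actual files:

```yaml
---
name: code-refactoring
description: >
  Analyzes and refactors code for maintainability: extracts methods,
  renames variables, decomposes large classes, removes duplication, and
  applies SOLID principles. Use when the user asks to refactor, clean up
  code, simplify, restructure, reduce technical debt, or fix code smells.
---
```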

Dimension scores

Specificity: 2 / 3
Names the domain (code refactoring) and some actions ('analyze and refactor'), but the specific capabilities are vague — 'improve quality, maintainability, and performance' are abstract outcomes rather than concrete actions like 'extract methods, rename variables, decompose classes'.

Completeness: 1 / 3
Describes what it does (analyze and refactor code) but has no explicit 'Use when...' clause or equivalent trigger guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' portion is also somewhat weak, placing this at 1.

Trigger Term Quality: 2 / 3
Includes some relevant keywords like 'refactoring', 'clean code', 'SOLID', and 'design patterns' that users might mention, but misses common natural variations like 'code smell', 'technical debt', 'simplify code', 'restructure', or 'DRY'.

Distinctiveness / Conflict Risk: 2 / 3
The focus on refactoring and SOLID principles provides some distinction, but 'improve code quality' and 'best practices' are broad enough to overlap with general code review, linting, or code generation skills.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a high-level, abstract template that lacks the concrete, actionable guidance needed to be truly useful. It reads more like a role description than an executable skill — there are no code examples, no specific refactoring patterns, no concrete commands, and no real validation steps. The reference to an external playbook is a good structural choice but doesn't compensate for the lack of substance in the main file.

Suggestions

Add concrete code examples showing before/after refactoring for at least 2-3 common patterns (e.g., Extract Method, Replace Conditional with Polymorphism) to make the skill actionable.
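
A before/after pair along these lines would make the suggestion concrete. The receipt function here is hypothetical, invented purely to illustrate the Extract Method pattern:

```python
# Before: one long function mixing totaling, discounting, and formatting.
def print_receipt_before(items):
    total = sum(price * qty for _, price, qty in items)
    if total > 100:
        total *= 0.9  # bulk discount
    lines = [f"{name} x{qty}: ${price * qty:.2f}" for name, price, qty in items]
    lines.append(f"TOTAL: ${total:.2f}")
    return "\n".join(lines)


# After: Extract Method pulls each responsibility into a named helper.
def compute_total(items):
    total = sum(price * qty for _, price, qty in items)
    return total * 0.9 if total > 100 else total  # bulk discount


def format_item_lines(items):
    return [f"{name} x{qty}: ${price * qty:.2f}" for name, price, qty in items]


def print_receipt(items):
    return "\n".join(format_item_lines(items) + [f"TOTAL: ${compute_total(items):.2f}"])
```

The key property a skill should teach the agent to preserve is behavioral equivalence: for any input, the refactored version must return exactly what the original did.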

Replace vague instructions like 'Assess code smells' with specific checklists or heuristics (e.g., 'Flag methods >20 lines, classes with >5 dependencies, duplicated blocks >3 lines').
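
One such heuristic can even be made executable. A minimal sketch using Python's standard `ast` module, with the 20-line threshold treated as an assumed, tunable cutoff rather than an established rule:

```python
import ast


def flag_long_functions(source: str, max_lines: int = 20):
    """Return (name, length) for functions longer than max_lines.

    max_lines is a hypothetical threshold; tune it per codebase.
    """
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on AST nodes since Python 3.8
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flagged.append((node.name, length))
    return flagged
```

Similar checks for dependency counts or duplicated blocks would need more machinery, but the point stands: a checklist the agent can mechanically apply beats an instruction to 'assess code smells'.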

Add explicit validation checkpoints with concrete steps: e.g., '1. Run existing tests before changes. 2. Make one refactoring. 3. Run tests again. 4. If tests fail, revert and reassess. 5. Only proceed when green.'
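
The five-step loop above can be sketched as a small test-gated function. The callables are injected placeholders: in a real setup, `run_tests` might shell out to the project's test runner and `revert_change` might run a VCS checkout, but those specifics are assumptions, not part of the evaluated skill:

```python
def apply_refactoring_safely(run_tests, apply_change, revert_change):
    """Apply one refactoring only if the test suite stays green.

    run_tests, apply_change, revert_change are injected callables
    (hypothetical stand-ins for e.g. pytest and `git checkout`).
    """
    if not run_tests():      # 1. baseline must be green before touching code
        return "baseline-red"
    apply_change()           # 2. make exactly one small refactoring
    if run_tests():          # 3. re-run the suite
        return "kept"        # 5. only proceed when green
    revert_change()          # 4. tests failed: revert and reassess
    return "reverted"
```

Returning a status rather than raising keeps the loop easy to drive repeatedly, one refactoring per iteration.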

Remove the redundant opening paragraph that repeats the skill description, and trim the 'Context' section which restates what's already obvious from the title and instructions.

Dimension scores

Conciseness: 2 / 3
The skill includes some unnecessary framing (repeating the description as the opening line, 'Context' section restating the obvious, 'Use this skill when'/'Do not use this skill when' sections that Claude can infer). However, it's not excessively verbose and stays relatively brief overall.

Actionability: 1 / 3
The instructions are entirely abstract and vague — 'Assess code smells,' 'Propose a refactor plan,' 'Apply changes in small slices' — with no concrete code examples, specific commands, specific refactoring patterns, or executable guidance. Everything describes rather than instructs.

Workflow Clarity: 2 / 3
There is a rough sequence implied (assess → plan → apply → test), but validation checkpoints are vague ('verify regressions,' 'ensure tests pass') with no explicit feedback loops or concrete verification steps. For a skill involving potentially destructive code changes, this lacks the rigor needed for a score of 3.

Progressive Disclosure: 2 / 3
The skill references `resources/implementation-playbook.md` for detailed patterns, which is good progressive disclosure. However, the reference is mentioned twice (in Instructions and Resources) and it's unclear what that file contains or how it's structured. The main content itself could benefit from better organization — the Output Format section mixes with instructions.

Total: 7 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.
