Skill under review: agent-analyze-code-quality
Description: "Agent skill for analyze-code-quality - invoke with $agent-analyze-code-quality"

Quality: 3% (does it follow best practices?)
Impact: 94% (1.49x average score across 3 eval scenarios)
Passed: no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agents/skills/agent-analyze-code-quality/SKILL.md`

Quality
Discovery
Score: 0%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially just a skill name restated with invocation syntax, providing no meaningful information about what the skill does, what specific capabilities it offers, or when it should be selected. It fails across all dimensions due to extreme vagueness and lack of any actionable detail or trigger guidance.
Suggestions
Replace the label with specific concrete actions, e.g., 'Analyzes code for complexity metrics, code smells, duplication, and style violations across Python, JavaScript, and TypeScript files.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks for code review, code quality analysis, linting, static analysis, or wants to identify code smells and technical debt.'
Remove the invocation syntax ('invoke with $agent-analyze-code-quality') from the description and focus on capability and trigger information that helps Claude select the right skill.
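Taken together, the suggestions above point toward a rewritten frontmatter description. A sketch of what that could look like (the surrounding keys are assumptions about the SKILL.md format; only the `description` field is the subject of the suggestions):

```markdown
---
name: agent-analyze-code-quality
description: >
  Analyzes code for complexity metrics, code smells, duplication, and style
  violations across Python, JavaScript, and TypeScript files. Use when the
  user asks for code review, code quality analysis, linting, static analysis,
  or wants to identify code smells and technical debt.
---
```

This folds in the concrete actions, the explicit 'Use when...' clause, and the natural trigger terms, and drops the invocation syntax.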
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions. 'Analyze code quality' is vague and does not specify what kind of analysis is performed (e.g., linting, complexity metrics, code smells, test coverage). It reads more like a label than a description. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' (beyond a vague label) and 'when should Claude use it.' There is no 'Use when...' clause or any explicit trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant term is 'code quality,' which is generic. It lacks natural keywords users might say such as 'lint,' 'code review,' 'code smells,' 'static analysis,' 'complexity,' 'refactor,' or 'clean code.' | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Analyze code quality' is extremely generic and could overlap with any code review, linting, static analysis, or refactoring skill. There are no distinct triggers to differentiate it from similar skills. | 1 / 3 |
| **Total** | | 4 / 12 (Passed) |
Implementation
Score: 7%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is overwhelmingly YAML frontmatter configuration with minimal actionable body content. The actual instructions are generic descriptions of code quality concepts that Claude already knows, with no concrete guidance on how to use the allowed tools (Read, Grep, Glob) to perform analysis. The output template is the only useful element, but without a clear workflow for generating the data to fill it, the skill provides little practical value.
Suggestions
Add a concrete, step-by-step workflow showing how to use Glob, Grep, and Read to systematically analyze code (e.g., 'Step 1: Use Glob to find all source files matching src/**/*.ts, Step 2: Use Grep to find functions exceeding 50 lines...')
Remove the extensive YAML frontmatter configuration that isn't processed by Claude (triggers, hooks, optimization, integration sections) to dramatically reduce token usage
Remove explanations of concepts Claude already knows (what SOLID principles are, what code smells are) and replace with specific, executable patterns for detecting each issue using the available tools
Add concrete examples showing actual analysis of a code snippet with the expected output, rather than just a blank report template
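As an illustration of the first suggestion, a step-by-step workflow section in the SKILL.md body could look like the sketch below (the glob patterns, grep expressions, and thresholds are hypothetical examples, not taken from the skill under review):

```markdown
## Analysis workflow

1. **Enumerate source files.** Use Glob with a pattern such as
   `src/**/*.{ts,js,py}`; skip `node_modules`, build output, and vendored code.
2. **Scan for candidate issues.** Use Grep for concrete patterns, e.g.
   `TODO|FIXME|XXX` for deferred work, `console\.log\(` for leftover debug
   output, and functions exceeding 50 lines.
3. **Read flagged files.** Use Read on each flagged file to confirm the finding
   in context, note nesting depth and duplication, and discard false positives.
4. **Aggregate and report.** Fill the report template with one entry per
   confirmed finding: file, line, issue category, severity, and suggested fix.
```

Each step names the tool to use and the concrete pattern to apply, which is the difference between a list of goals and an executable workflow.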
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The vast majority of the file is YAML frontmatter with extensive configuration (triggers, hooks, optimization, integration, constraints, etc.) that is not actionable instruction content. The actual body content is generic and explains concepts Claude already knows (what code smells are, what SOLID principles are, what readability means). The code smell list and analysis criteria are basic knowledge for Claude. | 1 / 3 |
| Actionability | The body provides no executable code, no concrete commands, and no specific step-by-step instructions. It lists abstract categories ('Identify code smells', 'Evaluate code complexity') without telling Claude how to actually perform these analyses using the available tools (Read, Grep, Glob). The output format template is the only semi-concrete element but is still a generic markdown template. | 1 / 3 |
| Workflow Clarity | There is no clear workflow sequence for performing a code quality analysis. The content lists responsibilities and criteria but never describes the actual process: which files to scan first, how to use Grep/Glob to find issues, how to aggregate findings, or any validation checkpoints. The numbered 'key responsibilities' are goals, not steps. | 1 / 3 |
| Progressive Disclosure | The body content is relatively short and not a wall of text, but it doesn't reference any external files for deeper guidance. The YAML frontmatter mentions related agents (analyze-security, analyze-refactoring), but the body doesn't provide navigation to supplementary materials. The structure with headers is reasonable but could be better organized. | 2 / 3 |
| **Total** | | 5 / 12 (Passed) |
Validation
Score: 100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure: no warnings or errors.