Agent skill for analyze-code-quality - invoke with $agent-analyze-code-quality
Install with Tessl CLI
npx tessl i github:ruvnet/claude-flow --skill agent-analyze-code-quality
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially non-functional: it provides only invocation syntax, with no description of capabilities, use cases, or trigger conditions. It fails all dimensions because it contains no substantive content that would help Claude select this skill appropriately from a pool of available skills.
Suggestions
Add concrete actions the skill performs (e.g., 'Analyzes code for complexity, duplication, maintainability issues, and style violations')
Include a 'Use when...' clause with natural trigger terms like 'code review', 'check code quality', 'find code smells', 'technical debt'
Specify what types of code or languages are supported to distinguish from other code analysis tools
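Taken together, these suggestions might yield frontmatter like the following. This is a hypothetical sketch: the field names follow common skill-frontmatter conventions, not a confirmed Tessl schema, and the supported languages are illustrative.

```yaml
---
name: analyze-code-quality
description: >
  Analyzes JavaScript and TypeScript code for complexity, duplication,
  maintainability issues, and style violations, and reports code smells
  such as long methods, deep nesting, and dead code. Use when the user
  asks for a code review, wants to check code quality, find code smells,
  or assess technical debt.
---
```

A description in this shape answers both "what does this do" and "when should Claude use it", and carries natural trigger terms users actually say.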
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions - 'analyze-code-quality' is embedded in a tool name but no actual capabilities are described. It only states how to invoke the agent, not what it does. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. The description only provides invocation syntax with no explanation of functionality or use cases. | 1 / 3 |
| Trigger Term Quality | No natural keywords users would say are present. 'analyze-code-quality' is a hyphenated technical identifier, not natural language. Users would say 'check my code', 'code review', 'find bugs', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'code quality' hints at a domain, the description is so vague it could conflict with any code-related skill (linting, testing, review, refactoring). No distinct triggers are provided. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation — 37%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a decent framework for code quality analysis with clear output formatting and criteria lists, but lacks actionable workflow guidance. The content tells Claude what to look for but not how to systematically find it using the available tools. The extensive YAML frontmatter adds bulk without proportional value to the actual instructions.
Suggestions
Add a concrete step-by-step workflow: 1) Use Glob to find files, 2) Use Grep to detect specific patterns (with actual regex examples), 3) Read files for detailed analysis, 4) Compile findings
Include executable examples showing how to use the allowed tools (Read, Grep, Glob) to detect specific code smells, e.g., 'grep -n "function" file.js | ... to find long methods'
Remove or significantly reduce the YAML frontmatter - most metadata (triggers, hooks, examples) doesn't help Claude execute the skill and wastes tokens
Add validation checkpoints, e.g., 'After scanning, verify file count matches expectations before proceeding to detailed analysis'
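As a minimal sketch of the second suggestion, standard shell tools can flag over-long functions in the spirit of the skill's Grep tool. The file path, the 50-line threshold, and the brace style (declarations and closing braces at column 0) are illustrative assumptions, not part of the skill itself:

```shell
# Build a throwaway sample file with one deliberately long function (illustration only)
{
  echo 'function longOne() {'
  for i in $(seq 1 60); do echo "  work($i);"; done
  echo '}'
  echo 'function shortOne() { return 1; }'
} > /tmp/sample.js

# Flag top-level functions whose span exceeds 50 lines.
awk '/^function /{start=NR; name=$2}
     /^}/{if (start && NR - start > 50) print name, "spans", NR - start + 1, "lines"; start=0}' /tmp/sample.js
# → longOne() spans 62 lines
```

A real workflow would run this per file discovered via Glob, then Read only the flagged files for detailed analysis, keeping token usage proportional to the findings.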
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary elements like the extensive YAML frontmatter that duplicates information, and the code smell list contains items Claude already knows. The actual instruction content is fairly lean. | 2 / 3 |
| Actionability | Provides a clear output format template and lists specific criteria, but lacks executable code examples for actually performing the analysis. The guidance is more descriptive ('identify code smells') than instructive ('use Grep to find methods over 50 lines with: ...'). | 2 / 3 |
| Workflow Clarity | No clear sequence of steps for performing the analysis. Lists responsibilities and criteria but doesn't explain how to systematically analyze code - no workflow for scanning files, no validation checkpoints, no order of operations. | 1 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections (responsibilities, criteria, code smells, output format), but everything is in one file with no references to external documentation. The YAML frontmatter is excessively long and could be separated. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.