Agent skill for code-analyzer - invoke with $agent-code-analyzer
Quality: 17%
Does it follow best practices?
Impact: 80%
2.05x average score across 3 eval scenarios
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/agent-code-analyzer/SKILL.md

Quality
Discovery: 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically deficient across all dimensions. It provides only a tool invocation command without any explanation of capabilities, use cases, or trigger conditions. Claude would have no basis for selecting this skill appropriately from a list of available skills.
Suggestions
Add specific concrete actions the skill performs (e.g., 'Analyzes code for bugs, security vulnerabilities, and style issues' or 'Reviews code structure and suggests refactoring opportunities')
Include a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks to review code, find bugs, check for security issues, or analyze code quality')
Specify what types of code or languages are supported to distinguish from other potential code-related skills
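Put together, the suggestions above might yield frontmatter like the following sketch. The field names and the listed capabilities are illustrative assumptions for a typical SKILL.md, not confirmed features of this skill:

```yaml
# Hypothetical SKILL.md frontmatter; capabilities and supported-language
# claims are assumed for illustration, not taken from the actual skill.
name: agent-code-analyzer
description: >
  Analyzes code for bugs, security vulnerabilities, style issues, and
  refactoring opportunities. Use when the user asks to review code, find
  bugs, check for security issues, or assess code quality. Invoke with
  $agent-code-analyzer.
```

A description in this shape answers both "what does this do" and "when should Claude use it", which directly targets the Completeness and Trigger Term Quality dimensions scored below.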
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for code-analyzer' is completely abstract and does not describe what the skill actually does with code. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides invocation syntax ($agent-code-analyzer) with no functional description or trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'code-analyzer', which is a technical tool name, not natural language users would say. No terms like 'analyze code', 'review', 'lint', 'bugs', or specific languages are mentioned. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'code-analyzer' is extremely generic and could conflict with any code review, linting, static analysis, or debugging skill. No distinguishing features or specific use cases are provided. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation: 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is overly verbose and descriptive rather than actionable. It explains what a code analyzer does conceptually but provides limited executable guidance. The document would benefit significantly from trimming explanatory content Claude already knows and focusing on specific commands, validation steps, and concrete implementation details.
Suggestions
Remove explanatory sections about what code quality, security review, and performance analysis mean - Claude knows these concepts. Focus only on project-specific commands and configurations.
Replace the descriptive 'Core Responsibilities' section with a concise quick-start showing the exact commands to run a complete analysis cycle.
Add explicit validation checkpoints in the workflow, such as 'If security scan returns critical issues, STOP and report before continuing to Phase 3'.
Move the detailed metrics definitions and example output to separate reference files, keeping SKILL.md as a lean operational guide.
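The validation-checkpoint suggestion above can be sketched as a small shell gate between phases. The function names (`run_security_scan`, `security_gate`) and the simulated finding are placeholders, since the skill's real commands are not shown in this review:

```shell
#!/bin/sh
# Simulated scanner standing in for the skill's Phase 2 security scan;
# prints findings and returns non-zero when critical issues are present.
run_security_scan() {
  echo "critical: hardcoded credential in src/config.py"
  return 1
}

# Explicit checkpoint between Phase 2 (deep analysis) and Phase 3 (report
# generation): stop and surface critical findings instead of continuing.
security_gate() {
  if ! run_security_scan; then
    echo "STOP: critical security issues found; report before Phase 3"
    return 1
  fi
  echo "gate passed: continue to Phase 3"
}

security_gate || true  # in a real workflow, a failed gate halts the run
```

Stated in SKILL.md as plain instructions ("If the security scan returns critical issues, STOP and report before continuing to Phase 3"), the same pattern gives the agent an unambiguous feedback loop, addressing the Workflow Clarity gap scored below.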
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanations of concepts Claude already knows (what code quality is, what security reviews involve, basic metrics definitions). The document is padded with unnecessary context like 'Core Responsibilities' sections that describe obvious tasks rather than providing actionable instructions. | 1 / 3 |
| Actionability | Contains some concrete bash commands with npx claude-flow, but most content is descriptive rather than executable. The 'Analysis Workflow' phases describe what to do conceptually but lack complete, copy-paste-ready implementations. The example output is illustrative but not instructive. | 2 / 3 |
| Workflow Clarity | Has a three-phase workflow structure (Initial Scan, Deep Analysis, Report Generation) but lacks explicit validation checkpoints and error recovery steps. The phases describe activities but don't provide clear feedback loops for when analysis fails or produces unexpected results. | 2 / 3 |
| Progressive Disclosure | Content is organized with headers and sections, but everything is inline in one monolithic document. There are no references to external files for detailed API documentation, examples, or advanced configurations. The document could benefit from splitting detailed metrics definitions and example outputs into separate reference files. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed
Validation for skill structure: no warnings or errors.