# agent-code-analyzer

Agent skill for code-analyzer (invoke with `$agent-code-analyzer`).

- Quality score: 6% (does it follow best practices?)
- Impact: 2.05x average score across 3 eval scenarios
- Advisory: suggest reviewing before use

Optimize this skill with Tessl: `npx tessl skill review --optimize ./.agents/skills/agent-code-analyzer/SKILL.md`

## Quality
### Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an extremely weak description that essentially only provides the skill's name and invocation command. It fails on every dimension: no concrete actions, no natural trigger terms, no 'what' or 'when' guidance, and no distinguishing characteristics. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.
Suggestions:

- Add specific concrete actions the skill performs, e.g., 'Analyzes code for complexity metrics, identifies code smells, detects potential bugs, and generates code quality reports.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to analyze code quality, review code structure, find code smells, measure complexity, or assess maintainability.'
- Remove the invocation syntax from the description (it's operational metadata, not descriptive) and replace it with domain-specific keywords users would naturally use, such as 'code review', 'static analysis', 'code quality', 'complexity'.
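Taken together, the suggestions above might yield frontmatter like the following sketch; the capability list and trigger phrasing are illustrative, drawn from the suggestions rather than from the skill itself:

```yaml
---
name: code-analyzer
description: >
  Analyzes code for complexity metrics, identifies code smells, detects
  potential bugs, and generates code quality reports. Use when the user asks
  to analyze code quality, review code structure, find code smells, measure
  complexity, or assess maintainability.
---
```

Domain keywords such as 'code review' and 'static analysis' belong in the description body itself, not as invocation syntax.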
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for code-analyzer' is entirely vague and does not describe what the skill actually does—no specific capabilities like 'analyzes complexity', 'finds bugs', or 'reviews code structure' are mentioned. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no 'Use when...' clause or equivalent trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'code-analyzer', which is a tool name rather than a natural user term. Users would say things like 'analyze my code', 'code review', 'find bugs', etc. The invocation syntax '$agent-code-analyzer' is not a natural trigger term. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic that 'code-analyzer' could overlap with any code-related skill—linting, static analysis, code review, performance profiling, security scanning, etc. There is nothing to distinguish it from other code-related skills. | 1 / 3 |
| Total | | 4 / 12 Passed |
### Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. A good implementation is clear, handles edge cases, and produces reliable results.
This skill reads as a high-level role description or agent specification rather than an actionable skill document. It extensively catalogs what a code analyzer should do without providing concrete, executable instructions for how to do any of it. The content is extremely verbose, explaining concepts Claude already understands (code quality metrics, security review categories, best practices) while failing to provide the specific commands, code patterns, or tool-specific guidance that would make it useful.
Suggestions:

- Replace the extensive lists of responsibilities and metrics with concrete, executable examples—show actual linting commands, specific static analysis tool invocations, or code patterns to detect rather than listing abstract categories.
- Cut the content by at least 60%: remove sections like 'Core Responsibilities', 'Best Practices', and 'Analysis Metrics' that enumerate things Claude already knows, and focus only on project-specific tooling, conventions, and workflows.
- Add explicit validation checkpoints to the workflow (e.g., 'verify linter exits with code 0 before proceeding to security scan') and include error recovery steps.
- Split detailed reference material (memory keys, coordination protocols, example output templates) into separate referenced files, keeping SKILL.md as a concise overview with clear navigation links.
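The validation-checkpoint suggestion could be sketched as a small POSIX-shell wrapper; the phase commands below are stand-ins (`true`), not the skill's actual tooling:

```shell
# Run each analysis phase only if the previous one exited with code 0.
run_phase() {
  name=$1; shift
  if "$@"; then
    echo "checkpoint passed: $name"
  else
    echo "checkpoint failed: $name, stopping" >&2
    return 1
  fi
}

run_phase "lint" true || exit 1           # stand-in for a real linter invocation
run_phase "security scan" true || exit 1  # stand-in for a real security scanner
echo "all checkpoints passed"
```

A skill that inlines something like this gives the agent an unambiguous gate between phases instead of a conceptual category to interpret.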
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive lists of responsibilities, metrics, and best practices that Claude already knows. The content reads like a role description or job posting rather than actionable instructions. Sections like 'Core Responsibilities' enumerate obvious code analysis tasks (e.g., 'Evaluate naming conventions', 'Check for proper error handling') that add no new knowledge. | 1 / 3 |
| Actionability | Despite its length, the skill provides almost no executable guidance. The bash commands reference a specific tool (`claude-flow@alpha`) but are templates with placeholder variables rather than real executable examples. The bulk of the content describes what to do abstractly ('Scan for common vulnerabilities', 'Identify performance bottlenecks') without any concrete code, commands, or specific techniques for actually performing these tasks. | 1 / 3 |
| Workflow Clarity | There is a three-phase workflow (Initial Scan, Deep Analysis, Report Generation) with some sequencing, but validation checkpoints are absent. The phases are more conceptual categories than actionable steps—Phase 2 lists categories of analysis without specifying how to actually execute them or what to do when issues are found. No feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed information. Everything is inlined—metrics definitions, example outputs, coordination protocols, memory keys—resulting in a very long document that mixes overview-level and detail-level content without any navigation structure or file references. | 1 / 3 |
| Total | | 5 / 12 Passed |
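The progressive-disclosure gap called out above could be addressed with a skeleton SKILL.md that keeps only the overview inline and links out to detail files; the file names here are hypothetical:

```markdown
# Code Analyzer

Runs the project's linters and security scan, then summarizes findings.

## Workflow
1. Initial scan: run linters; confirm exit code 0 before continuing.
2. Deep analysis: run security and complexity checks.
3. Report generation: fill in the report template.

## References
- Metric definitions: [references/metrics.md](references/metrics.md)
- Report template: [references/report-template.md](references/report-template.md)
- Coordination protocol and memory keys: [references/coordination.md](references/coordination.md)
```

This keeps SKILL.md navigable while preserving the detail in files the agent can open on demand.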
### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.