
agent-code-analyzer

Agent skill for code-analyzer - invoke with $agent-code-analyzer

Quality: 0%
Does it follow best practices?

Impact: 80% (2.05x)
Average score across 3 eval scenarios

Security by Snyk: Advisory (review suggested before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-code-analyzer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that essentially only names the skill and provides an invocation command. It fails on every dimension: no concrete actions, no natural trigger terms, no 'what' or 'when' guidance, and no distinguishing characteristics. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Analyzes code for complexity metrics, identifies code smells, detects potential bugs, and generates code quality reports.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to analyze code quality, review code complexity, find code smells, or generate static analysis reports.'

Remove the invocation instruction ('invoke with $agent-code-analyzer') from the description and replace it with capability and trigger information that helps Claude decide when to select this skill.
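Applied together, these suggestions might produce frontmatter along the following lines. This is a sketch only; the wording is assembled from the examples above and is illustrative, not the skill's actual metadata:

```yaml
---
name: agent-code-analyzer
description: >
  Analyzes code for complexity metrics, identifies code smells, detects
  potential bugs, and generates code quality reports. Use when the user
  asks to analyze code quality, review code complexity, find code smells,
  or generate static analysis reports.
---
```

A description in this shape answers both 'what does this do' and 'when should Claude use it', and carries natural trigger terms a user would actually say.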

Specificity: 1 / 3
The description contains no concrete actions whatsoever. 'Agent skill for code-analyzer' is entirely vague and does not describe what the skill actually does beyond referencing its own name.

Completeness: 1 / 3
The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides an invocation command ('$agent-code-analyzer') with no explanation of capabilities or usage triggers.

Trigger Term Quality: 1 / 3
The only potentially relevant term is 'code-analyzer', which is a tool name rather than a natural keyword a user would say. There are no natural trigger terms like 'analyze code', 'review code', 'static analysis', etc.

Distinctiveness / Conflict Risk: 1 / 3
The description is so generic that 'code-analyzer' could overlap with any code review, linting, static analysis, or debugging skill. There is nothing to distinguish it from other code-related skills.

Total: 4 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level design document or role description rather than an actionable skill file. It extensively catalogs code analysis concepts Claude already understands while providing almost no concrete, executable instructions for actually performing analysis. The bash commands are incomplete templates, the workflow steps are abstract descriptions, and the entire file could be reduced to roughly 20% of its size while gaining actionability.

Suggestions

Replace the abstract 'Core Responsibilities' lists with concrete, executable commands and code snippets showing exactly how to perform each type of analysis (e.g., specific linter commands, actual security scanning tools with flags).

Add explicit validation checkpoints to the workflow, such as 'verify linter output contains no errors before proceeding to security scan' with concrete commands to check.

Remove the extensive categorization of analysis types (metrics lists, best practices platitudes) that Claude already knows, and replace with project-specific configurations, tool choices, and threshold values.

Extract reference material (metrics definitions, example output templates, memory key documentation) into separate bundle files and link to them from a concise overview in SKILL.md.
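As a sketch of what replacing abstract phases with executable commands could look like, here is a possible SKILL.md excerpt. Tool choices, paths, and flags below are illustrative assumptions, not taken from the reviewed skill:

```markdown
## Phase 1: Initial Scan

Run the linter and type checker, treating warnings as failures:

    npx eslint src/ --max-warnings 0
    npx tsc --noEmit

Checkpoint: continue to Phase 2 only if both commands exit 0;
otherwise report the failing output and stop.
```

Concrete commands plus an explicit checkpoint give the agent something to execute and verify, rather than an abstract instruction like 'run linters and type checkers'.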

Conciseness: 1 / 3
Extremely verbose with extensive lists of concepts Claude already knows (what cyclomatic complexity is, what SQL injection is, what memory leaks are). The bulk of the content is descriptive categorization rather than actionable instruction. The 'Core Responsibilities' section is essentially a taxonomy of code analysis concepts that adds no operational value.

Actionability: 1 / 3
Despite its length, the skill provides almost no executable guidance. The bash commands reference a specific tool (`claude-flow@alpha`) but are incomplete placeholders with template variables like `${results}` and `${summary}`. The 'Analysis Workflow' phases describe what to do abstractly ('Run linters and type checkers') without specifying which tools, commands, or configurations to use. The example analysis output is a template, not an executable procedure.

Workflow Clarity: 1 / 3
The three-phase workflow (Initial Scan, Deep Analysis, Report Generation) is vaguely sequenced but lacks any validation checkpoints, error handling, or feedback loops. Phase 2 is entirely abstract bullet points with no concrete steps. There is no guidance on what to do if analysis fails, how to verify results, or how to handle edge cases.

Progressive Disclosure: 1 / 3
The content is a monolithic wall of text with no references to external files and no bundle files to support it. All content is inline regardless of importance, with no clear hierarchy between essential quick-start information and reference material. Sections like 'Analysis Metrics' and 'Best Practices' could easily be separate reference files but are dumped inline.

Total: 4 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/claude-flow (reviewed)
