
agent-code-analyzer

Agent skill for code-analyzer - invoke with $agent-code-analyzer

Quality: 6% (Does it follow best practices?)
Impact: 80%
Evals: 2.05x average score across 3 eval scenarios

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.agents/skills/agent-code-analyzer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that essentially only provides the skill's name and invocation command. It fails on every dimension: no concrete actions, no natural trigger terms, no 'what' or 'when' guidance, and no distinguishing characteristics. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Analyzes code for complexity metrics, identifies potential bugs, reviews code structure and dependencies.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to analyze code, review code quality, check for code smells, or measure code complexity.'

Specify what types of code or languages are supported and what distinguishes this from other code-related skills (e.g., linting, formatting, refactoring).
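A frontmatter sketch applying these suggestions might look like the following. This is a hypothetical example: the capability list, trigger phrases, and scope line are illustrative, not taken from the actual skill.

```markdown
---
name: agent-code-analyzer
description: >
  Analyzes code for complexity metrics, potential bugs, and structural
  issues. Use when the user asks to analyze code, review code quality,
  check for code smells, or measure code complexity. Covers analysis
  and reporting only; does not lint, format, or refactor code.
---
```

Each sentence maps to a suggestion: concrete actions first, then an explicit 'Use when...' clause with natural trigger terms, and a closing scope line that separates it from neighboring lint/format/refactor skills.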

Dimension scores:

Specificity (1 / 3): The description contains no concrete actions whatsoever. 'Agent skill for code-analyzer' is entirely vague and does not describe what the skill actually does; no specific capabilities like 'analyzes complexity', 'finds bugs', or 'reviews code structure' are mentioned.

Completeness (1 / 3): The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no 'Use when...' clause or equivalent trigger guidance.

Trigger Term Quality (1 / 3): The only potentially relevant keyword is 'code-analyzer', which is a tool name rather than a natural user term. Users would say things like 'analyze my code', 'code review', or 'find bugs'. The invocation syntax '$agent-code-analyzer' is not a natural trigger term.

Distinctiveness / Conflict Risk (1 / 3): The description is so generic that 'code-analyzer' could overlap with any code-related skill: linting, static analysis, code review, performance profiling, and so on. There is nothing to distinguish it from other code-related skills.

Total: 4 / 12 (Passed)

Implementation

12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level role description rather than actionable instructions for Claude. It is extremely verbose, listing dozens of responsibilities and metrics that Claude already understands, while providing almost no concrete, executable guidance. The bash commands shown are template placeholders rather than real commands, and the workflow lacks validation steps and error recovery.

Suggestions

Replace the extensive responsibility lists with specific, executable commands and code examples that Claude can actually run for code analysis (e.g., specific linter commands, concrete static analysis tool invocations).

Remove sections that describe concepts Claude already knows (code quality metrics definitions, generic best practices like 'provide specific recommendations') and focus only on project-specific configuration and tool usage.

Add explicit validation checkpoints and error-handling feedback loops to the workflow (e.g., 'If security scan finds critical issues, stop and report before proceeding to Phase 3').

Extract detailed sections (metrics definitions, example output templates, memory key references) into separate referenced files to keep the main skill concise and navigable.
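To make the first, third, and fourth suggestions concrete, a reworked phase in SKILL.md could look like the sketch below. The specific tools (`eslint`, `snyk`), the report paths, and the referenced `reference/metrics.md` file are assumptions for illustration; the actual skill would substitute its own project tooling.

```markdown
## Phase 2: Deep Analysis

1. Run the linter: `npx eslint . --format json -o reports/lint.json`
2. Run the security scanner: `npx snyk code test --json > reports/security.json`
3. Checkpoint: if the security scan reports any critical issues, stop and
   report them to the user before proceeding to Phase 3.

Metric definitions and report templates are in `reference/metrics.md`;
load them only when generating the final report.
```

This keeps the main file short (progressive disclosure) while giving the agent commands it can actually execute and an explicit failure path.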

Dimension scores:

Conciseness (1 / 3): Extremely verbose, with extensive lists of responsibilities, metrics, and best practices that Claude already knows. The content reads like a role description or job posting rather than actionable instructions. Sections like 'Core Responsibilities' enumerate obvious code analysis tasks (e.g., 'Analyze code structure and organization', 'Evaluate naming conventions') that add no new information.

Actionability (1 / 3): Despite its length, the skill provides almost no executable guidance. The bash commands reference a specific tool (`claude-flow@alpha`) but are template-like, with unexplained variables (e.g., `${results}`, `${summary}`). Instructions like 'Run linters and type checkers' and 'Execute security scanners' are vague directives with no concrete commands, tools, or code to execute.

Workflow Clarity (2 / 3): There is a three-phase workflow (Initial Scan, Deep Analysis, Report Generation) with some sequencing, but validation checkpoints are absent. There is no feedback loop for when analysis finds issues, and no 'if X fails, do Y' pattern. The phases are conceptual categories more than actionable steps.

Progressive Disclosure (1 / 3): The content is a monolithic wall of text with no references to external files for detailed information. Everything is inlined (metrics definitions, example outputs, coordination protocols, memory keys), resulting in a very long document that mixes overview content with details that could be separated.

Total: 5 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/ruflo (reviewed)

