
agent-code-analyzer

Agent skill for code-analyzer - invoke with $agent-code-analyzer

Quality: 6% (Does it follow best practices?)

Impact: 2.05x, 80% (Average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-code-analyzer/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides virtually no useful information for skill selection. It reads as a placeholder or auto-generated stub, containing only the tool name and invocation syntax without any explanation of capabilities, use cases, or trigger conditions. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.

Suggestions

- Add specific concrete actions the skill performs, e.g., 'Analyzes code for complexity metrics, identifies code smells, detects potential bugs, and generates quality reports.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to analyze code quality, review code complexity, find code smells, or generate code metrics.'
- Replace the generic 'Agent skill for code-analyzer' framing with a third-person description of capabilities that distinguishes this skill from other code-related skills (e.g., linting, debugging, refactoring).
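Taken together, the suggestions above could yield frontmatter along these lines (a hypothetical sketch; the capability list and trigger phrases are illustrative, not taken from the actual skill):

```yaml
# Hypothetical SKILL.md frontmatter; capabilities and triggers are illustrative
name: agent-code-analyzer
description: >
  Analyzes code for complexity metrics, identifies code smells, detects
  potential bugs, and generates quality reports. Use when the user asks to
  analyze code quality, review code complexity, find code smells, or
  generate code metrics. Distinct from linting, debugging, or refactoring
  skills: it reports on quality rather than modifying code.
```

A description in this shape answers both 'what does it do' and 'when should it be used', which is what the discovery dimensions score.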

Dimension scores

Specificity: 1 / 3. The description contains no concrete actions whatsoever. 'Agent skill for code-analyzer' is entirely vague and does not describe what the skill actually does: no verbs, no specific capabilities listed.

Completeness: 1 / 3. The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides an invocation command ('$agent-code-analyzer') with no explanation of functionality or usage triggers.

Trigger Term Quality: 1 / 3. The only potentially relevant term is 'code-analyzer', which is a tool name rather than a natural keyword a user would say. There are no natural-language trigger terms like 'analyze code', 'review code', 'static analysis', etc.

Distinctiveness / Conflict Risk: 1 / 3. The description is so generic that 'code-analyzer' could overlap with any code review, linting, static analysis, or debugging skill. There is nothing to distinguish it from other code-related skills.

Total: 4 / 12 (Passed)

Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a high-level role description rather than an actionable skill file. It extensively lists what a code analyzer should do (responsibilities, metrics, best practices) without providing concrete, executable instructions on how to do any of it. The content is heavily padded with information Claude already knows about code quality, security, and performance analysis, wasting significant token budget.

Suggestions

- Replace the extensive responsibility lists with concrete, executable commands and code examples, e.g., specific linter commands, actual security-scanning tool invocations, and real code snippets showing how to detect issues.
- Remove sections that describe concepts Claude already knows (code quality metric definitions, generic best practices like 'provide specific recommendations') and focus only on project-specific conventions and tool configurations.
- Add explicit validation checkpoints and error-handling feedback loops to the workflow phases, e.g., 'If the security scan finds critical issues, stop and report before proceeding to Phase 3.'
- Extract detailed content (metrics definitions, example outputs, memory keys) into separate referenced files and keep SKILL.md as a concise overview with clear navigation links.
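One way to act on the checkpoint suggestion is a gate between the analysis and report phases, sketched below. The scanner invocation is a stand-in (no specific tool is named in the skill); substitute whatever security scanner the project actually uses.

```shell
# Hypothetical Phase 2 -> Phase 3 gate. The echo stands in for a real
# security scanner that prints its count of critical findings.
criticals=$(echo "0")   # e.g. criticals=$(my-scanner --count-critical .)

if [ "$criticals" -gt 0 ]; then
  # Feedback loop: report and stop instead of silently continuing.
  echo "Critical issues found; stopping before Phase 3 (report generation)" >&2
  exit 1
fi

echo "Security gate passed; proceeding to Phase 3"
```

The point is the 'if X fails, do Y' pattern the review says is missing: each phase ends with an explicit check whose failure path is defined.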

Dimension scores

Conciseness: 1 / 3. Extremely verbose, with extensive lists of responsibilities, metrics, and best practices that Claude already knows. The content reads like a role description or job posting rather than actionable instructions. Sections like 'Core Responsibilities' enumerate obvious code analysis tasks (e.g., 'Analyze code structure and organization', 'Evaluate naming conventions') that add no new information.

Actionability: 1 / 3. Despite its length, the skill provides almost no executable guidance. The bash commands reference a specific tool (`claude-flow@alpha`) but are template-like, with unexplained variables (e.g., `${results}`, `${summary}`). Instructions like 'Run linters and type checkers' and 'Execute security scanners' are vague directives with no concrete commands, tools, or code to execute.

Workflow Clarity: 2 / 3. There is a three-phase workflow (Initial Scan, Deep Analysis, Report Generation) with some sequencing, but validation checkpoints are absent. There is no feedback loop for when analysis finds issues, no 'if X fails, do Y' pattern. The phases are conceptual categories more than actionable steps.

Progressive Disclosure: 1 / 3. The content is a monolithic wall of text with no references to external files for detailed information. Everything is inlined (metrics definitions, example outputs, coordination protocols, memory keys), resulting in a very long document that mixes overview content with details that could be separated.

Total: 5 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: ruvnet/ruflo (reviewed)

