
# agent-analyze-code-quality

Agent skill for analyze-code-quality. Invoke with `$agent-analyze-code-quality`.

Quality: 0% (does it follow best practices?)

Impact: 94% (1.49x), the average score across 3 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./.agents/skills/agent-analyze-code-quality/SKILL.md
```

## Quality

### Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a bare invocation instruction with no substantive content. It fails to describe what the skill does, when it should be used, or provide any natural trigger terms. It would be nearly impossible for Claude to reliably select this skill from a pool of alternatives.

Suggestions:

- Add specific concrete actions the skill performs, e.g., "Analyzes code for complexity, duplication, style violations, and potential bugs."
- Add an explicit "Use when..." clause with natural trigger terms, e.g., "Use when the user asks for code review, code quality analysis, linting, static analysis, or wants to improve code maintainability."
- Remove the agent invocation syntax from the description and replace it with a clear, third-person explanation of capabilities and triggers.
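Applying those suggestions, a rewritten description might look like the following SKILL.md frontmatter. The wording is an illustrative sketch, not the skill's actual metadata:

```yaml
---
name: analyze-code-quality
description: >
  Analyzes code for complexity, duplication, style violations, and
  potential bugs, then reports findings with severity and location.
  Use when the user asks for a code review, code quality analysis,
  linting, static analysis, or help improving maintainability.
---
```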

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description contains no concrete actions. 'Analyze-code-quality' is embedded in a tool invocation name, but no specific capabilities like 'detect code smells, measure cyclomatic complexity, check style violations' are listed. | 1 / 3 |
| Completeness | Neither 'what does this do' nor 'when should Claude use it' is answered. There is no 'Use when...' clause and no explanation of capabilities beyond the tool name itself. | 1 / 3 |
| Trigger Term Quality | No natural keywords a user would say are present. 'analyze-code-quality' is a hyphenated tool name, not natural language. Missing terms like 'code review', 'linting', 'code smells', 'static analysis', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The phrase 'analyze-code-quality' is extremely generic and could overlap with any code review, linting, testing, or static analysis skill. There are no distinct triggers to differentiate it. | 1 / 3 |
| **Total** | | **4 / 12** |


### Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overwhelmingly YAML frontmatter with a thin, generic body that reads like a high-level checklist rather than actionable instructions. It explains concepts Claude already knows (code smells, SOLID principles, DRY/KISS), provides no concrete tool usage examples (e.g., specific Grep patterns to find long methods), and lacks any workflow for systematically performing an analysis. The extensive metadata (hooks, optimization, integration) appears to be aspirational configuration rather than functional content.

Suggestions:

- Replace the abstract criteria lists with concrete, executable workflows, e.g., "Step 1: Run `Glob` to find all source files; Step 2: Use `Grep` with pattern X to find methods over 50 lines; Step 3: Read flagged files and assess complexity."
- Remove or drastically reduce the YAML frontmatter; most fields (triggers, hooks, optimization, integration) are not standard SKILL.md features and waste tokens on non-functional configuration.
- Add specific tool usage examples showing how to use Read, Grep, and Glob to detect each category of code smell, rather than just listing smell names.
- Include a clear sequential workflow with validation steps, such as verifying file counts before analysis and cross-checking findings before generating the final report.
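The kind of sequential workflow suggested above could be sketched as follows. This is a minimal illustration of the glob-then-flag-then-read pattern using a plain Python pass over a repository, standing in for the agent's Glob/Grep/Read tools; the 50-line threshold is the example value from the suggestion:

```python
import ast
from pathlib import Path

MAX_FUNCTION_LINES = 50  # illustrative threshold from the suggestion above

def find_long_functions(root: str) -> list[tuple[str, str, int]]:
    """Step 1: glob source files; Step 2: flag functions over the limit.

    Returns (file, function name, length) triples; Step 3 would be to
    read each flagged function in full and assess its complexity.
    """
    findings = []
    for path in sorted(Path(root).rglob("*.py")):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_FUNCTION_LINES:
                    findings.append((str(path), node.name, length))
    return findings
```

A real skill body would spell out each step as an instruction to the agent's own tools rather than shipping a script, but the point stands: the workflow names concrete inputs, a concrete check, and a concrete hand-off to the next step.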

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The vast majority of the file is YAML frontmatter that duplicates or explains concepts Claude already knows (what code smells are, what file patterns to scan, basic analysis criteria like SOLID/DRY/KISS). The actual body content is brief, but the overall token cost is extremely high for what amounts to a generic code review checklist. | 1 / 3 |
| Actionability | The skill provides no executable code, no concrete commands, and no specific steps for performing analysis. It lists abstract categories (readability, maintainability, etc.) and code smell names without actionable instructions on how to detect them using the available tools (Read, Grep, Glob). | 1 / 3 |
| Workflow Clarity | There is no clear multi-step workflow for performing a code quality analysis. The 'key responsibilities' are listed as a numbered list but lack sequencing, validation checkpoints, or any indication of how to proceed through an analysis. The output format template is provided, but there is no process to get there. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall mixing extensive YAML configuration (triggers, hooks, optimization settings, integration metadata) with a thin body. There are no references to supporting files, no layered structure, and no clear navigation. The YAML frontmatter contains content that should either be in the body or in separate reference files. | 1 / 3 |
| **Total** | | **4 / 12** |
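To make the actionability gap concrete: even simple regular expressions can detect some of the smells the skill merely names. The patterns below are illustrative sketches, not the skill's actual rules, and the thresholds (six parameters, three-digit literals) are arbitrary example values:

```python
import re

# Illustrative smell detectors keyed by name; each pattern flags one line.
SMELL_PATTERNS = {
    # A def with six or more comma-separated parameters.
    "long parameter list": re.compile(r"def \w+\([^)]*(?:,[^),]*){5,}\)"),
    # A bare numeric literal of three or more digits assigned directly.
    "magic number": re.compile(r"=\s*\d{3,}\b"),
    # Leftover work markers in comments.
    "todo marker": re.compile(r"#\s*(?:TODO|FIXME|HACK)\b"),
}

def detect_smells(source: str) -> list[tuple[int, str]]:
    """Return (line_number, smell_name) pairs for each flagged line."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SMELL_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits
```

Translating each named smell into a Grep pattern like these, with a note on false positives, would move the skill from a checklist to an executable procedure.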


### Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed. The validation for skill structure reported no warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)
