
agent-performance-analyzer

Agent skill for performance-analyzer - invoke with $agent-performance-analyzer

Quality: 0% (Does it follow best practices?)

Impact: 99% (1.00x; average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-performance-analyzer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is critically deficient across all dimensions. It reads as a bare invocation instruction rather than a functional description, providing no information about what the skill does, what actions it performs, or when it should be selected. It would be nearly impossible for Claude to correctly choose this skill from a pool of available skills.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Profiles application code, identifies performance bottlenecks, measures response times, and generates optimization recommendations.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user mentions slow performance, profiling, benchmarking, latency issues, CPU usage, memory leaks, or optimization.'

Remove the invocation instruction ('invoke with $agent-performance-analyzer') from the description and replace it with capability and context information that helps Claude decide when to select this skill.
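
Taken together, these suggestions might yield frontmatter along the following lines. This is an illustrative sketch only: the capabilities and trigger terms listed are assumptions, since the skill's actual behavior is not documented in this review.

```yaml
# Hypothetical SKILL.md frontmatter. The capability list and trigger
# terms below are examples, not a description of the real skill.
name: agent-performance-analyzer
description: >
  Profiles application code, identifies performance bottlenecks, measures
  response times, and generates optimization recommendations. Use when the
  user mentions slow performance, profiling, benchmarking, latency issues,
  CPU usage, memory leaks, or optimization.
```

Note how the description answers both "what does this do" and "when should Claude use it", and contains natural trigger terms rather than an invocation command.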

Dimension scores

Specificity (1 / 3): The description contains no concrete actions whatsoever. 'Agent skill for performance-analyzer' is entirely vague and does not describe what the skill actually does; no specific capabilities such as profiling, benchmarking, or identifying bottlenecks are mentioned.

Completeness (1 / 3): The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause and no explanation of capabilities. It only provides an invocation command.

Trigger Term Quality (1 / 3): The only potentially relevant term is 'performance-analyzer', which is a tool name rather than a natural keyword a user would say. No natural trigger terms like 'slow', 'bottleneck', 'profiling', 'latency', 'benchmark', or 'optimize' are included.

Distinctiveness / Conflict Risk (1 / 3): 'Performance-analyzer' is extremely generic and could overlap with any skill related to performance testing, code optimization, system monitoring, database tuning, or web performance. There are no distinct triggers to differentiate it.

Total: 4 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads as a conceptual overview or design document about performance analysis rather than an actionable skill for Claude. It contains no executable code, no specific tool references, no concrete commands, and no real validation steps. Nearly all content describes general performance engineering concepts that Claude already knows, making it extremely token-inefficient.

Suggestions

Replace abstract workflow steps with concrete, executable commands or code snippets (e.g., specific profiling tools, actual metric collection scripts, real analysis commands)

Remove general knowledge sections like 'Bottleneck Types' and 'Detection Methods' that describe concepts Claude already understands, and focus on project-specific tooling and conventions

Add validation checkpoints with concrete criteria (e.g., 'Run `time make test` before and after optimization; verify >20% improvement before committing changes')

Extract detailed patterns and examples into a separate reference file, keeping SKILL.md as a concise quick-start guide with clear navigation to supplementary materials
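
As a sketch of the checkpoint idea in the third suggestion, assuming before/after timings have already been captured in milliseconds (the variable values and the 20% threshold here are placeholders, not measurements from this skill):

```shell
#!/bin/sh
# Sketch of a concrete validation checkpoint: fail unless the optimized
# run is at least 20% faster than the baseline. The timings below are
# placeholders; in practice they would come from something like
# `time make test` run before and after the optimization.
baseline_ms=1200
optimized_ms=900

# Integer percentage improvement relative to the baseline.
improvement=$(( (baseline_ms - optimized_ms) * 100 / baseline_ms ))
echo "improvement: ${improvement}%"

if [ "$improvement" -lt 20 ]; then
  echo "checkpoint failed: improvement below 20%" >&2
  exit 1
fi
echo "checkpoint passed"
```

A checkpoint like this gives Claude a pass/fail criterion it can actually execute, rather than an abstract instruction to 'verify improvement'.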

Dimension scores

Conciseness (1 / 3): Extremely verbose, with extensive lists of abstract concepts Claude already knows (bottleneck types, detection methods, optimization strategies). The content reads like a textbook chapter on performance analysis rather than actionable instructions. Most of the 150+ lines describe general knowledge without adding project-specific or tool-specific value.

Actionability (1 / 3): No executable code, no concrete commands, no specific tools or APIs to use. The 'workflow' steps are vague numbered lists like 'Gather execution metrics' and 'Profile resource usage' without specifying how. The optimization examples give hypothetical scenarios with no implementation details. Everything describes rather than instructs.

Workflow Clarity (1 / 3): The three-phase workflow (Data Collection, Analysis, Recommendation) consists entirely of abstract bullet points with no concrete steps, no validation checkpoints, and no feedback loops. There is no way for Claude to actually execute this workflow; it is a conceptual framework, not an operational procedure.

Progressive Disclosure (1 / 3): A monolithic wall of text with no references to external files. All content is inline regardless of depth or relevance. The document mixes high-level overview, detailed patterns, integration points, metrics, examples, and advanced features in one long file with no navigation structure or cross-references.

Total: 4 / 12

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: ruvnet/claude-flow (Reviewed)
