Comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms
Score: 62

- 45% — Does it follow best practices?
- Impact: 97%
- 2.77x — average score across 3 eval scenarios (Passed)
- No known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/performance-analysis/SKILL.md

Quality
Discovery — 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific domain (Claude Flow swarms) and lists relevant capabilities, but suffers from a lack of explicit trigger guidance and somewhat abstract action descriptions. Distinctiveness is strong thanks to the niche focus, but the missing 'Use when...' clause significantly weakens its utility for skill selection.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when analyzing swarm performance, debugging slow agents, or optimizing multi-agent workflows'
Include natural user terms such as 'slow', 'speed', 'profiling', 'metrics', 'latency', or 'agent performance issues'
Make actions more concrete, e.g., 'measure agent latency, identify memory bottlenecks, generate optimization reports'
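A revised frontmatter description that applies all three suggestions might look like the following sketch. The skill name and the exact wording are illustrative, not taken from the skill itself:

```yaml
---
name: performance-analysis
description: >
  Analyze Claude Flow swarm performance: measure agent latency, identify
  memory bottlenecks, and generate optimization reports. Use when analyzing
  swarm performance, debugging slow agents, profiling metrics or throughput,
  or optimizing multi-agent workflows.
---
```

Note how the concrete verbs ('measure', 'identify', 'generate') and the natural user terms ('slow', 'profiling', 'metrics', 'latency') both appear, which is what the discovery rubric rewards.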
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Claude Flow swarms) and lists three actions (performance analysis, bottleneck detection, optimization recommendations), but these are somewhat abstract categories rather than concrete specific actions like 'measure latency', 'identify memory leaks', or 'generate profiling reports'. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, missing explicit trigger guidance caps completeness at 2, and this description has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'performance', 'bottleneck', 'optimization', and 'Claude Flow swarms', but misses common variations users might say, such as 'slow', 'speed up', 'profiling', 'metrics', 'latency', or 'throughput'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific focus on 'Claude Flow swarms' creates a clear niche that is unlikely to conflict with general performance analysis or optimization skills. The domain is highly specific and distinct. | 3 / 3 |
| Total | | 8 / 12 — Passed |
Implementation — 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, actionable documentation with excellent executable examples and command syntax. However, it severely violates conciseness principles by including excessive detail that should be split into reference files, repeating information across sections, and explaining concepts Claude already understands. The workflow guidance lacks explicit validation steps for the auto-fix functionality.
Suggestions
Reduce SKILL.md to a 50-100 line overview with quick start examples, moving detailed API reference, output formats, and troubleshooting to separate files (e.g., REFERENCE.md, TROUBLESHOOTING.md)
Remove redundant content - bottleneck types are explained in at least 3 different sections; consolidate to one authoritative list
Add explicit validation workflow for --fix operations: run detect, apply fix, re-run detect to verify improvement, with expected output changes
Cut explanatory text like 'Comprehensive performance analysis suite for identifying bottlenecks' - the section headers already convey this
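The fix-validation workflow from the third suggestion (detect, apply fix, re-detect, verify) reduces to a before/after comparison of the detector's reports. The sketch below assumes a hypothetical report shape with a `bottlenecks` list; the actual detect/fix commands would be invoked between the two snapshots:

```python
def verify_fix(before: dict, after: dict) -> bool:
    """A --fix operation 'worked' if it removed at least one of the
    previously detected bottlenecks and introduced no new ones."""
    fixed = set(before["bottlenecks"]) - set(after["bottlenecks"])
    introduced = set(after["bottlenecks"]) - set(before["bottlenecks"])
    return bool(fixed) and not introduced

# Hypothetical detector reports captured before and after applying --fix.
before = {"bottlenecks": ["memory-pressure", "agent-queue-backlog"]}
after = {"bottlenecks": ["agent-queue-backlog"]}

print(verify_fix(before, after))  # True: one bottleneck fixed, none added
```

Embedding a sequence like this in the skill gives agents the "concrete validation sequence" the Workflow Clarity dimension found missing.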
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines with significant redundancy. Multiple sections repeat similar information (e.g., bottleneck types listed multiple times), it includes unnecessary explanations Claude would know (what bottlenecks are, basic concepts), and the output examples are overly detailed when one would suffice. | 1 / 3 |
| Actionability | Provides fully executable commands with complete syntax, concrete examples for every feature, and copy-paste-ready code snippets including bash commands, JavaScript integration, and CI/CD YAML configurations. | 3 / 3 |
| Workflow Clarity | Commands are listed but lack explicit validation checkpoints. The --fix flag is mentioned, but there is no clear workflow for validating that fixes worked. The 'Fix Strategy' best practice mentions reviewing before applying but does not provide a concrete validation sequence. | 2 / 3 |
| Progressive Disclosure | Has clear section headers and references to related files at the end, but the main content is a monolithic wall of text that should be split. The 500+ lines of detailed API reference, examples, and troubleshooting could live in separate files, with SKILL.md serving as a concise overview. | 2 / 3 |
| Total | | 8 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure validation — 9 / 11 checks passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (581 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 checks passed | |
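One way to clear the frontmatter_unknown_keys warning, per the check's own suggestion, is to nest nonstandard keys under a metadata block. The key names below are hypothetical placeholders for whatever unknown keys the validator flagged:

```yaml
---
name: performance-analysis
description: Comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms
metadata:
  author: example-author       # hypothetical key moved out of the top level
  tags: [performance, swarms]  # hypothetical key moved out of the top level
---
```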