
performance-analysis

Comprehensive performance analysis, bottleneck detection, and optimization recommendations for Claude Flow swarms

Install with Tessl CLI

npx tessl i github:ruvnet/agentic-flow --skill performance-analysis

Does it follow best practices?

Validation for skill structure


Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a specific domain (Claude Flow swarms) and lists relevant capability areas, providing good distinctiveness. However, it lacks explicit trigger guidance ('Use when...'), which is critical for Claude to know when to select this skill, and the action terms are somewhat abstract rather than concrete. The description would benefit significantly from explicit usage triggers and more natural user-facing keywords.

Suggestions

Add a 'Use when...' clause with explicit triggers like 'Use when analyzing swarm performance, debugging slow agents, or optimizing Claude Flow workflows'

Include natural user terms such as 'slow', 'speed up', 'profiling', 'metrics', 'latency', 'agent performance', or 'workflow efficiency'

Make actions more concrete by specifying outputs like 'generates profiling reports', 'identifies memory bottlenecks', or 'recommends agent configuration changes'
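Taken together, the suggestions above might yield frontmatter along these lines. This is a hypothetical sketch, not the skill's actual metadata; the exact wording and any keys beyond name and description are up to the skill author:

```yaml
---
name: performance-analysis
description: >
  Analyze Claude Flow swarm performance: generate profiling reports,
  identify memory and latency bottlenecks, and recommend agent
  configuration changes. Use when analyzing swarm performance,
  debugging slow agents, speeding up workflows, or interpreting
  profiling metrics.
---
```

Note how the description leads with concrete outputs, then closes with a 'Use when...' clause containing the natural user terms ('slow', 'speeding up', 'profiling', 'metrics') the review calls for.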

Dimension scores

Specificity: 2 / 3
Names the domain (Claude Flow swarms) and lists three actions (performance analysis, bottleneck detection, optimization recommendations), but these are somewhat abstract categories rather than concrete specific actions like 'measure latency', 'identify memory leaks', or 'generate profiling reports'.

Completeness: 1 / 3
Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, missing explicit trigger guidance should cap completeness at 2, and this has no 'when' component at all.

Trigger Term Quality: 2 / 3
Includes relevant terms like 'performance', 'bottleneck', 'optimization', and 'Claude Flow swarms', but misses common variations users might say such as 'slow', 'speed up', 'profiling', 'metrics', 'latency', or 'throughput'.

Distinctiveness / Conflict Risk: 3 / 3
The specific focus on 'Claude Flow swarms' creates a clear niche that is unlikely to conflict with general performance tools or other optimization skills. The domain is highly specific and distinct.

Total: 8 / 12 (Passed)

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides highly actionable, executable guidance with comprehensive command examples and integration patterns. However, it suffers from severe verbosity: the content could be reduced by 60-70% while preserving all essential information. The monolithic structure, with everything inline, makes it difficult to navigate and wastes context-window tokens.

Suggestions

Reduce content by 60%+ by removing redundant examples (e.g., multiple similar bash command variations), obvious explanations, and consolidating output format examples into a single representative sample

Split detailed content into separate files: move API reference, CI/CD integration, and troubleshooting to linked documents, keeping SKILL.md as a concise overview with quick start only

Add explicit validation checkpoints to workflows, especially for the --fix operations (e.g., 'Verify current state → Apply fix → Validate improvement → Rollback if degraded')

Remove explanatory text that describes what things are (e.g., 'Bottleneck Detection: Identify performance bottlenecks...') and keep only the actionable commands and examples
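The validation-checkpoint suggestion ('Verify current state → Apply fix → Validate improvement → Rollback if degraded') can be sketched as a small shell gate around an automated fix. This is a hypothetical outline only: measure, the commented-out apply-fix and rollback-fix commands, and the 120 ms figure are placeholders, not part of the skill's real CLI.

```shell
#!/usr/bin/env sh
# Hypothetical verify -> fix -> validate -> rollback gate.
set -e

measure() {
  # Placeholder: would run the skill's profiling command and
  # print a single latency number in milliseconds.
  echo 120
}

before=$(measure)          # 1. Verify current state
# apply-fix --target swarm-config   # 2. Apply fix (placeholder command)
after=$(measure)           # 3. Validate improvement

if [ "$after" -gt "$before" ]; then
  # 4. Rollback if degraded
  echo "degraded: ${before}ms -> ${after}ms; rolling back" >&2
  # rollback-fix           # placeholder rollback command
  exit 1
fi
echo "ok: ${before}ms -> ${after}ms"
```

The point of the sketch is the ordering: the fix is bracketed by measurements, and the script refuses to proceed silently when the second measurement is worse than the first.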

Dimension scores

Conciseness: 1 / 3
Extremely verbose at 500+ lines with extensive repetition. Includes obvious information Claude already knows (what bottlenecks are, basic concepts), redundant examples showing the same patterns multiple times, and excessive output format examples that could be condensed significantly.

Actionability: 3 / 3
Provides fully executable commands with complete syntax, concrete options, and copy-paste ready examples. The bash commands, JavaScript MCP integration code, and CI/CD YAML are all directly usable without modification.

Workflow Clarity: 2 / 3
Steps are listed but validation checkpoints are implicit rather than explicit. The 'Fix Strategy' best practice mentions reviewing before applying --fix, but the main workflow sections don't embed validation steps directly into the process flow. Missing explicit 'verify before proceeding' gates.

Progressive Disclosure: 2 / 3
Has a 'See Also' section with references to related documentation, but the main content is a monolithic wall of text with everything inline. The 500+ lines of detailed API reference, examples, and troubleshooting could be split into separate files, with SKILL.md serving as an overview.

Total: 8 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed


Criteria results

skill_md_line_count: Warning
SKILL.md is long (564 lines); consider splitting into references/ and linking.

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 9 / 11 (Passed)
