Agent skill for performance-analyzer - invoke with $agent-performance-analyzer
Install with Tessl CLI
npx tessl i github:ruvnet/claude-flow --skill agent-performance-analyzer
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 99%
↑ 1.00x Agent success when using this skill
Discovery
0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically deficient across all dimensions. It functions only as an invocation reference rather than a proper skill description, providing no information about capabilities, use cases, or trigger conditions. Claude would have no basis for selecting this skill appropriately from a skill library.
Suggestions
Add specific concrete actions describing what the skill does (e.g., 'Profiles code execution time, identifies bottlenecks, measures memory usage, generates performance reports')
Include a 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user mentions slow code, performance issues, profiling, benchmarks, or optimization')
Specify the domain/scope to distinguish from other potential performance skills (e.g., 'Python code performance' vs 'web page performance' vs 'database query performance')
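Putting these suggestions together, a stronger description in the skill's frontmatter might read something like the following (hypothetical wording, not the skill's actual metadata):

```yaml
name: agent-performance-analyzer
description: >
  Profiles code execution time, identifies bottlenecks, measures memory
  usage, and generates performance reports for application code. Use when
  the user mentions slow code, performance issues, profiling, benchmarks,
  or optimization.
```

A description in this shape answers both "what does this do" and "when should Claude use it", and carries the natural trigger terms the current description lacks.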
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for performance-analyzer' is completely abstract and doesn't describe what the skill actually does. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides invocation syntax ($agent-performance-analyzer) with no functional description or trigger guidance. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'performance-analyzer', which is a technical tool name, not natural language users would say. Missing terms like 'analyze performance', 'profiling', 'benchmarks', and 'slow code'. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Performance-analyzer' is vague enough to conflict with any performance-related skill. Without specifying what type of performance (code, system, web, database), it provides no clear niche. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation
12%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a conceptual overview document rather than an actionable skill file. It extensively describes what performance analysis is and lists abstract categories without providing any executable code, specific commands, or concrete implementation guidance. The content would be more appropriate as documentation rather than an operational skill.
Suggestions
Replace abstract numbered lists with actual executable code examples (e.g., specific commands to collect metrics, scripts to analyze performance data)
Remove explanatory content about what bottlenecks are and focus on specific actions Claude should take when invoked
Add concrete validation steps with actual commands (e.g., 'Run `time ./script.sh` to measure baseline, then compare after optimization')
Extract detailed reference material to separate files and keep SKILL.md as a concise quick-start with clear navigation to advanced topics
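To illustrate what copy-paste ready guidance could look like inside the skill, here is a minimal profiling sketch using Python's standard-library `cProfile` (the `slow_sum` workload is hypothetical, included purely so the example is runnable):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately naive workload to profile."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Profile the workload while the profiler is enabled
profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Report the top entries sorted by cumulative time
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

A concrete block like this gives the agent an action to take when invoked, rather than an abstract description of what profiling is.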
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanations of concepts Claude already understands (what bottlenecks are, basic analysis phases, common patterns). The content is heavily padded with organizational structure that adds little actionable value. | 1 / 3 |
| Actionability | No executable code or concrete commands anywhere. All 'code blocks' contain abstract numbered lists or markdown templates rather than actual executable guidance. Describes concepts rather than providing copy-paste ready instructions. | 1 / 3 |
| Workflow Clarity | Steps are listed in numbered sequences (Data Collection, Analysis, Recommendation phases), but they are abstract descriptions without validation checkpoints or concrete verification steps. No feedback loops for error recovery. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline despite being lengthy. No clear navigation structure or signposted references to detailed materials. | 1 / 3 |
| Total | | 5 / 12 Passed |
Validation
100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.