Agent skill for performance-monitor - invoke with $agent-performance-monitor
Install with Tessl CLI
npx tessl i github:ruvnet/claude-flow --skill agent-performance-monitor
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 100%
↑ 2.43x agent success when using this skill
Discovery — 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically deficient across all dimensions. It functions more as a label than a description, providing only a name and invocation command without explaining capabilities, use cases, or trigger conditions. Claude would have no basis for selecting this skill appropriately.
Suggestions
Add specific concrete actions the skill performs (e.g., 'Monitors CPU usage, memory consumption, disk I/O, and network throughput')
Include an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks about system performance, slowdowns, resource usage, or wants to diagnose bottlenecks')
Remove the invocation command from the description and focus on functional capabilities that distinguish this from other monitoring or diagnostic skills
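To make the suggestions above concrete, a rewritten description might look like the following SKILL.md frontmatter sketch. The capability list here is hypothetical, since the skill's actual feature set is not documented in this review; it only illustrates the shape of a discoverable description.

```yaml
---
name: agent-performance-monitor
# Hypothetical capabilities for illustration; replace with what the skill actually does.
description: >
  Monitors agent and system performance: CPU usage, memory consumption,
  task latency, and throughput. Detects bottlenecks and statistical
  anomalies and surfaces alerts. Use when the user asks about system
  performance, slowdowns, resource usage, benchmarking, or wants to
  diagnose bottlenecks.
---
```

Note how the description answers both "what does this do" and "when should Claude use it", and carries natural trigger terms ('slowdowns', 'resource usage', 'bottlenecks') rather than the invocation command.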
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for performance-monitor' is completely abstract and doesn't describe what the skill actually does. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. It only provides an invocation command, not functional information. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'performance-monitor', which is technical jargon. No natural user terms like 'slow', 'speed', 'metrics', 'CPU', 'memory', or 'benchmark' are included. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'performance-monitor' is vague and could conflict with many monitoring-related skills. Without specific capabilities listed, there is no clear niche. | 1 / 3 |
| Total | | 4 / 12 Passed |
Implementation — 14%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an extensive code showcase rather than actionable guidance. It presents elaborate class structures and theoretical implementations but never clearly explains how to actually invoke and use the performance monitor agent. The content would benefit from dramatic reduction: keep the actual CLI commands and practical usage patterns, and drop the illustrative pseudocode.
Suggestions
Reduce content by 80%+ - remove illustrative class implementations and keep only the operational commands section with concrete CLI examples
Add a clear workflow section showing: 1) How to start monitoring, 2) How to interpret results, 3) How to respond to alerts - with validation steps
Split detailed API/code references into separate files (e.g., METRICS_API.md, ANOMALY_DETECTION.md) and link from a concise overview
Replace pseudocode classes with actual executable examples or remove them entirely - Claude understands these patterns without illustration
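As an example of the kind of short, directly executable snippet that could replace the illustrative classes, here is a minimal percentile-based anomaly check. This sketch is ours, not taken from the skill (which uses JavaScript); Python is used purely for brevity, and the function names are hypothetical.

```python
# Minimal sketch of percentile-based anomaly detection:
# flag samples that exceed the given percentile of the data set.

def percentile(values, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(values)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

def find_anomalies(samples, pct=90):
    """Return (index, value) pairs exceeding the pct-th percentile."""
    threshold = percentile(samples, pct)
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# Example: one 95 ms spike among ~12 ms latencies.
latencies = [12, 14, 11, 13, 12, 95, 13, 12, 14, 13]
print(find_anomalies(latencies))  # → [(5, 95)]
```

A dozen self-contained lines like this convey the technique at least as well as a several-hundred-line class hierarchy, and they can be validated by running them.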
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with ~600+ lines of code. Much of this is illustrative pseudocode showing class structures Claude already understands. Concepts like statistical anomaly detection, percentile calculations, and WebSocket subscriptions don't need this level of explanation. | 1 / 3 |
| Actionability | Contains an operational-commands section with concrete CLI examples, which is good, but the bulk of the content is illustrative JavaScript classes that aren't directly executable: they reference undefined methods, missing imports, and hypothetical MCP endpoints that may not exist. | 2 / 3 |
| Workflow Clarity | No clear workflow or sequence for how to actually use this agent. The content describes capabilities and shows code structures but never explains when to use what, in what order, or how to validate results. Validation checkpoints are missing entirely. | 1 / 3 |
| Progressive Disclosure | A monolithic wall of code with no references to external files. All content is inline despite being far too long. No clear navigation structure, just sequential code blocks with minimal organization beyond section headers. | 1 / 3 |
| Total | | 5 / 12 Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (677 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.