
agent-performance-monitor

Agent skill for performance-monitor - invoke with $agent-performance-monitor

Quality: 0% (Does it follow best practices?)
Impact: 100%, 2.43x (Average score across 3 eval scenarios)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.agents/skills/agent-performance-monitor/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an extremely weak description that provides essentially no useful information beyond the skill's name. It fails on all dimensions: it describes no concrete actions, includes no natural trigger terms, answers neither 'what' nor 'when', and is indistinguishable from any other monitoring-related skill.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Monitors CPU usage, memory consumption, disk I/O, and network throughput for running processes and system resources.'

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about system performance, resource usage, slow processes, CPU load, memory leaks, or wants to profile application performance.'

Remove the invocation syntax from the description and replace it with capability-focused language that helps Claude distinguish this skill from other tools.
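Taken together, the suggestions point toward a frontmatter description like the following (a hypothetical rewrite; the wording is illustrative, not the maintainer's):

```yaml
name: agent-performance-monitor
description: >
  Monitors CPU usage, memory consumption, disk I/O, and network throughput
  for running agents and system processes. Use when the user asks about
  system performance, resource usage, slow processes, CPU load, memory
  leaks, or wants to profile application performance.
```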

Dimension scores

Specificity (1 / 3): The description contains no concrete actions whatsoever. 'Agent skill for performance-monitor' is entirely vague and does not describe what the skill actually does.

Completeness (1 / 3): Neither 'what does this do' nor 'when should Claude use it' is answered. There is no description of capabilities and no 'Use when...' clause or equivalent trigger guidance.

Trigger Term Quality (1 / 3): The only keyword is 'performance-monitor', which is a tool name, not a natural user term. Users would say things like 'check performance', 'CPU usage', 'memory', 'latency', etc. The invocation syntax '$agent-performance-monitor' is not a natural trigger term.

Distinctiveness / Conflict Risk (1 / 3): The description is so vague that 'performance-monitor' could overlap with any monitoring, profiling, benchmarking, or diagnostics skill. There are no distinct triggers to differentiate it.

Total: 4 / 12

Passed

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is an extensive but non-functional collection of illustrative JavaScript pseudocode and bash commands. None of the code is executable, no clear workflow is defined, and the massive volume of placeholder classes and methods wastes token budget without providing actionable guidance. The content reads more like a design document or architecture proposal than an operational skill.

Suggestions

Replace illustrative pseudocode with actual executable commands or code snippets that Claude can run, focusing on the CLI commands in the 'Operational Commands' section as the primary interface.

Add a clear step-by-step workflow (e.g., 1. Start monitoring, 2. Check metrics, 3. Analyze bottlenecks, 4. Validate SLA compliance) with explicit validation checkpoints at each stage.

Reduce the content by 80%+ by removing all placeholder class definitions and keeping only the concrete commands, expected outputs, and decision criteria.

Move any detailed reference material (analytics formulas, anomaly detection approaches) to separate linked files and keep SKILL.md as a concise overview with navigation.
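The workflow suggestion above can be sketched with plain, copy-paste-ready shell commands (a minimal sketch using standard Linux tooling rather than the skill's own `npx claude-flow` CLI, whose subcommands the review could not verify):

```shell
# 1. Start: snapshot the top CPU consumers (GNU ps, Linux)
ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 6

# 2. Check: take one system-wide CPU/memory sample
vmstat 1 2 | tail -n 1

# 3. Analyze: flag any process above an 80% CPU threshold
ps -eo pid,pcpu,comm --sort=-pcpu | awk 'NR > 1 && $2 > 80 {print "HOT:", $0}'

# 4. Validate: exit non-zero if the hottest process breaches the threshold
ps -eo pcpu --sort=-pcpu | awk 'NR == 2 {exit ($1 > 80 ? 1 : 0)}'
```

Each step produces verifiable output, which is the property the pseudocode in the skill lacks.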

Dimension scores

Conciseness (1 / 3): Extremely verbose at ~500+ lines. Most code is non-executable pseudocode with placeholder methods (e.g., `this.getCPUUsage()`, `this.loadTimeSeriesModel()`) that explain concepts Claude already understands. Classes like StatisticalAnomalyDetector and MLAnomalyDetector are referenced but never defined. The entire file could be reduced to the operational commands section plus a brief architecture overview.

Actionability (1 / 3): Almost none of the code is executable; it is all illustrative pseudocode with undefined methods, unimported dependencies, and fictional MCP calls (e.g., `mcp.agent_list`, `mcp.bottleneck_analyze`). The bash commands at the end reference `npx claude-flow` subcommands that may or may not exist, with no verification steps. Nothing is copy-paste ready.

Workflow Clarity (1 / 3): There is no clear workflow or sequence of steps for performing performance monitoring. The content is organized as a collection of class definitions and code snippets with no ordering, no validation checkpoints, and no guidance on when or how to use each component. A user or agent would not know where to start or what sequence to follow.

Progressive Disclosure (1 / 3): The entire skill is a monolithic wall of code blocks with no references to external files and no layered structure. Hundreds of lines of illustrative code are inlined that could be separated into reference documents. The 'Integration Points' section hints at other agents but provides no links or navigation.

Total: 4 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

Criteria results

skill_md_line_count: SKILL.md is long (677 lines); consider splitting into references/ and linking. Result: Warning

Total: 10 / 11 (Passed)
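The split recommended by the line-count warning could look like this (a hypothetical skeleton; the file names under references/ are illustrative):

```markdown
# agent-performance-monitor

Concise capability overview and the primary operational commands.

## Workflow
1. Start monitoring
2. Check metrics
3. Analyze bottlenecks
4. Validate SLA compliance

## References
- [Analytics formulas](references/analytics.md)
- [Anomaly detection approaches](references/anomaly-detection.md)
```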

Repository: ruvnet/claude-flow (Reviewed)
