Agent skill for performance-analyzer - invoke with $agent-performance-analyzer
Impact: 1.00x average score across 3 eval scenarios. Passed, no known issues.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./.agents/skills/agent-performance-analyzer/SKILL.md`

Quality
Discovery
0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a label and invocation instruction with no substantive content. It fails across all dimensions: it doesn't describe what the skill does, when to use it, or provide any natural trigger terms. It would be nearly impossible for Claude to correctly select this skill from a pool of available skills.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Profiles code execution, identifies performance bottlenecks, measures response times, and generates optimization recommendations.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user mentions slow performance, profiling, benchmarking, latency, optimization, or bottleneck analysis.'
- Specify the domain or technology scope to reduce conflict risk, e.g., whether this analyzes web app performance, database queries, API endpoints, or system resources.
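Combining these suggestions, an improved description might read as follows. This is a hedged sketch: the frontmatter field names and the web-app domain are illustrative assumptions, not taken from the skill itself.

```yaml
# Hypothetical SKILL.md frontmatter; field names and domain are illustrative.
name: agent-performance-analyzer
description: >
  Profiles code execution, identifies performance bottlenecks, measures
  response times, and generates optimization recommendations for web
  application performance. Use when the user mentions slow performance,
  profiling, benchmarking, latency, optimization, or bottleneck analysis.
```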
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description contains no concrete actions whatsoever. 'Agent skill for performance-analyzer' is entirely vague and does not describe what the skill actually does. No specific capabilities like profiling, benchmarking, or identifying bottlenecks are mentioned. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause and no explanation of capabilities. It only provides an invocation command. | 1 / 3 |
| Trigger Term Quality | The only potentially relevant keyword is 'performance-analyzer', which is a tool name rather than a natural user term. Users would say things like 'slow', 'bottleneck', 'profiling', 'optimize performance', none of which appear here. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so generic that 'performance' could overlap with many domains: web performance, database performance, code performance, system performance. There are no distinct triggers to differentiate it from other skills. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a high-level conceptual overview of performance analysis that reads like a generic knowledge base article rather than an actionable skill for Claude. It contains no executable code, no specific tools or commands, no concrete procedures, and extensively explains concepts Claude already understands. The entire document could be replaced with a concise set of specific steps and tools for actual performance analysis.
Suggestions
- Replace abstract workflow steps with concrete, executable commands (e.g., specific profiling tools, shell commands for metric collection, scripts to run)
- Remove sections that describe general concepts Claude already knows (bottleneck types, what KPIs are, best practices for monitoring) and focus on project-specific procedures
- Add actual code examples that can be copy-pasted and executed, such as profiling scripts, metric collection commands, or analysis queries
- Include validation checkpoints in the workflow (e.g., 'Run X command and verify output shows Y before proceeding to optimization')
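The kind of concrete, reproducible step these suggestions call for might look like the following sketch, which profiles a function with Python's standard `cProfile` and `pstats` modules. The target function here is a placeholder standing in for the code under analysis, not part of the skill.

```python
# Hypothetical profiling step of the kind the suggestions describe:
# profile a target function and print the top hotspots by cumulative time.
import cProfile
import io
import pstats

def slow_function():
    # Placeholder workload standing in for the code under analysis.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # show the top 5 entries by cumulative time
print(stream.getvalue())
```

A validation checkpoint of the kind suggested above could then be phrased as: run this script and confirm the hotspot listing names the expected function before moving on to optimization.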
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive lists of abstract concepts Claude already knows (what bottleneck types are, what KPIs are, what continuous monitoring means). The content reads like a textbook chapter rather than actionable instructions. Most sections describe general performance-analysis concepts without adding anything Claude couldn't infer. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific tools or APIs to use. The 'workflow' steps are vague numbered lists like 'Gather execution metrics' and 'Profile resource usage' without specifying how. The code blocks contain plain-text lists, not executable commands. The optimization examples are hypothetical narratives, not reproducible procedures. | 1 / 3 |
| Workflow Clarity | The three-phase workflow (Data Collection, Analysis, Recommendation) consists of abstract bullet points with no concrete steps, no validation checkpoints, and no feedback loops. There is no guidance on what tools to run, what output to check, or how to verify results. Steps like 'Identify hotspots' and 'Correlate metrics' are not actionable instructions. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files. All content is inline despite being over 150 lines. Sections like 'Advanced Features' mention ML-based prediction and automated optimization with zero detail or pointers to implementation. No navigation structure or cross-references to supplementary materials. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.