Execute this skill enables AI assistant to detect and resolve performance bottlenecks in applications. it analyzes cpu, memory, i/o, and database performance to identify areas of concern. use this skill when you need to diagnose slow application performance, op... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
Eval scenarios: Pending — no eval scenarios have been run
Validation: Passed
Known issues: none
Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/performance/bottleneck-detector/skills/detecting-performance-bottlenecks/SKILL.md

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers the basics with both 'what' and 'when' clauses present, and includes some useful trigger terms. However, it suffers from truncation, uses first/second person voice ('you need to'), contains the awkward prefix 'Execute this skill enables AI assistant to', and could be more specific about concrete actions and distinct from other performance-related skills.
Suggestions
Remove the 'Execute this skill enables AI assistant to' prefix and rewrite in third person voice (e.g., 'Detects and resolves performance bottlenecks...').
Expand trigger terms to include common variations like 'latency', 'memory leak', 'profiling', 'slow query', 'high CPU usage', 'response time'.
Fix the truncated description and add more specific concrete actions (e.g., 'profiles code execution paths, identifies slow database queries, detects memory leaks') to improve distinctiveness from other optimization skills.
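A rewritten description applying these suggestions might look like the following frontmatter sketch (illustrative wording only, not the skill's actual metadata):

```yaml
description: >
  Detects and resolves performance bottlenecks by profiling code execution
  paths, identifying slow database queries, and detecting memory leaks across
  CPU, memory, I/O, and database layers. Use when diagnosing slow application
  performance. Trigger with phrases like 'optimize', 'performance', 'speed up',
  'latency', 'memory leak', 'profiling', 'slow query', or 'high CPU usage'.
```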
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (performance bottlenecks) and mentions some specific areas (CPU, memory, I/O, database performance), but the description is truncated ('op...') and doesn't fully list concrete actions. The actions mentioned are mostly 'detect', 'resolve', and 'analyze', which are somewhat generic. | 2 / 3 |
| Completeness | It answers both 'what' (detect and resolve performance bottlenecks, analyze CPU/memory/I/O/database) and 'when' (explicit 'Use when optimizing performance' clause with trigger phrases). Despite the truncation, both components are present and explicit. | 3 / 3 |
| Trigger Term Quality | Includes some natural trigger terms like 'optimize', 'performance', 'speed up', 'slow application performance', but misses common variations like 'latency', 'profiling', 'bottleneck', 'slow query', 'memory leak', 'high CPU'. Coverage is partial. | 2 / 3 |
| Distinctiveness / Conflict Risk | The performance optimization niche is reasonably specific, but terms like 'optimize' and 'performance' are broad enough to potentially conflict with database-specific optimization skills, frontend performance skills, or general code review skills. The description doesn't clearly narrow its scope to a distinct niche. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is almost entirely boilerplate with no actionable, concrete, or executable guidance. It describes what performance bottleneck detection is at a high level without providing any specific commands, profiling tools, code snippets, diagnostic queries, or step-by-step workflows. The document reads like a product marketing page rather than an instruction set for an AI assistant.
Suggestions
Replace abstract descriptions with concrete, executable examples: include actual profiling commands (e.g., `py-spy`, `perf`, `EXPLAIN ANALYZE` for SQL), diagnostic code snippets, and specific tool invocations for each bottleneck category.
Add a clear multi-step diagnostic workflow with validation checkpoints, e.g., '1. Run `top`/`htop` to identify CPU-bound processes → 2. Profile with `py-spy record -o profile.svg --pid <PID>` → 3. Analyze flame graph → 4. Verify fix by re-profiling'.
Remove all generic boilerplate sections (Prerequisites, Output, Error Handling, Integration, Resources) that contain no specific information, and replace with targeted content like specific tool configurations or common bottleneck patterns with remediation code.
Add concrete code examples for each bottleneck type (e.g., a before/after showing an N+1 query fix, a memory leak detection with `tracemalloc`, an I/O optimization with async patterns).
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with no actionable content. The entire document explains concepts Claude already knows (what bottlenecks are, what CPU/memory/I/O means), includes boilerplate sections like 'Prerequisites', 'Output', 'Error Handling' that say nothing specific, and the 'Instructions' section is completely generic filler. | 1 / 3 |
| Actionability | No concrete code, commands, tools, or executable guidance anywhere. Every section describes what the skill 'will do' in abstract terms without providing any actual profiling commands, diagnostic scripts, query analysis techniques, or specific remediation code examples. Statements like 'Provide code examples and recommendations' without actually providing them are purely aspirational. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists three abstract phases with no concrete steps, no specific tools or commands, no validation checkpoints, and no feedback loops. The examples describe what the skill 'will' do rather than showing how to actually perform the analysis. There is no actionable workflow a reader could follow. | 1 / 3 |
| Progressive Disclosure | The document is a monolithic wall of vague text with no references to external files, no links to detailed guides, and no meaningful structure beyond generic section headers. Sections like 'Resources' point to nothing specific ('Project documentation', 'Related skills and commands'). | 1 / 3 |
| Total | | 4 / 12 — Passed |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |