Skill description under review (quoted verbatim; its defects are discussed below):

> Execute this skill enables AI assistant to detect and resolve performance bottlenecks in applications. it analyzes cpu, memory, i/o, and database performance to identify areas of concern. use this skill when you need to diagnose slow application performance, op... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
45 · 33%

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/performance/bottleneck-detector/skills/detecting-performance-bottlenecks/SKILL.md`

## Quality
## Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers the basics of what the skill does and when to use it, including explicit trigger phrases. However, it suffers from truncation ('op...'), uses first/second person voice ('you need to'), and could be more specific about concrete actions. The trigger terms are reasonable but not comprehensive enough to cover the full range of performance-related user queries.
### Suggestions

- Fix the truncated text and remove the 'Execute this skill' preamble and second-person voice ('you need to') — rewrite in third person (e.g., 'Detects and resolves performance bottlenecks...').
- Expand trigger terms to include common variations like 'latency', 'memory leak', 'profiling', 'slow query', 'high CPU usage', 'response time', and 'throughput'.
- Add more specific concrete actions beyond 'detect and resolve' — e.g., 'profiles execution hotspots, identifies slow database queries, analyzes memory allocation patterns, recommends caching strategies'.
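Taken together, the suggestions above imply frontmatter along the following lines. This is a hypothetical sketch of a rewritten description, not canonical wording from the skill:

```yaml
---
name: detecting-performance-bottlenecks
description: >
  Detects and resolves performance bottlenecks in applications. Profiles
  execution hotspots, identifies slow database queries, analyzes memory
  allocation patterns, and recommends caching strategies across the CPU,
  memory, I/O, and database layers. Use when optimizing performance.
  Trigger with phrases like "optimize", "performance", "speed up",
  "latency", "memory leak", "profiling", "slow query", "high CPU usage",
  "response time", or "throughput".
---
```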
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (performance bottlenecks) and mentions some specific areas (CPU, memory, I/O, database performance), but the description is truncated ('op...') and doesn't fully list concrete actions. The actions mentioned are mostly 'detect', 'resolve', and 'analyze', which are somewhat generic. | 2 / 3 |
| Completeness | It explicitly answers both 'what does this do' (detect and resolve performance bottlenecks, analyze CPU/memory/I/O/database) and 'when should Claude use it' with a clear 'Use when optimizing performance' clause and explicit trigger phrases. | 3 / 3 |
| Trigger Term Quality | It includes some natural trigger terms like 'optimize', 'performance', 'speed up', and 'slow application performance', but misses common variations like 'latency', 'profiling', 'bottleneck', 'slow query', 'memory leak', 'high CPU', or 'response time'. | 2 / 3 |
| Distinctiveness / Conflict Risk | While it focuses on application performance optimization, which is a somewhat specific niche, terms like 'optimize' and 'performance' are broad enough to potentially conflict with database-specific optimization skills, frontend performance skills, or general code review skills. | 2 / 3 |
| **Total** | | 9 / 12 (Passed) |
## Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic boilerplate with no actionable, concrete, or skill-specific content. It describes what a bottleneck detector would do in abstract terms but provides zero executable code, specific diagnostic commands, profiling tool usage, or concrete remediation examples. Multiple sections are redundant and padded with information Claude already knows.
### Suggestions

- Replace abstract descriptions with concrete, executable examples: include actual profiling commands (e.g., `py-spy`, `perf`, `EXPLAIN ANALYZE`), specific code patterns for detecting memory leaks, and real remediation code snippets.
- Add a clear diagnostic workflow with explicit steps: e.g., 1) Collect metrics using X tool, 2) Identify bottleneck category, 3) Apply specific fix, 4) Validate improvement by re-measuring — with validation checkpoints at each stage.
- Remove all generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain no skill-specific information and waste token budget.
- Provide concrete examples with input/output: show a slow query and its optimized version, a memory leak pattern and its fix, or a CPU-bound loop and its parallelized alternative.
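By way of illustration, the kind of before/after pair the last suggestion asks for might look like this. The function names and data are hypothetical, not taken from the skill; the example uses Python's stdlib `cProfile` to locate a CPU-bound hotspot, then validates the fix against the original:

```python
import cProfile

def find_duplicates_slow(items):
    """O(n^2): list membership is a linear scan on every iteration."""
    seen, dupes = [], []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.append(item)
    return dupes

def find_duplicates_fast(items):
    """O(n): set membership is O(1) on average."""
    seen, dupes = set(), []
    for item in items:
        if item in seen:
            dupes.append(item)
        else:
            seen.add(item)
    return dupes

if __name__ == "__main__":
    data = list(range(2000)) * 2
    # Step 1: profile to confirm where time is spent (the slow function
    # dominates the cumulative-time column).
    cProfile.run("find_duplicates_slow(data)", sort="cumulative")
    # Step 2: apply the fix and validate it produces identical results.
    assert find_duplicates_fast(data) == find_duplicates_slow(data)
```

The same shape (profile, categorize, fix, re-measure) generalizes to the other suggested pairs, such as a slow query versus its `EXPLAIN ANALYZE`-guided rewrite.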
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive padding. The 'Overview' section restates the title, 'How It Works' describes abstract steps Claude already knows, 'When to Use' repeats the description, and sections like 'Prerequisites', 'Instructions', 'Output', and 'Error Handling' are generic boilerplate with no skill-specific value. Nearly every section explains concepts Claude already understands. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance anywhere. The examples describe what the skill 'will do' in abstract terms rather than providing actual profiling commands, diagnostic scripts, SQL EXPLAIN examples, or specific remediation code. Instructions like 'Invoke this skill when the trigger conditions are met' are completely vague. | 1 / 3 |
| Workflow Clarity | No clear multi-step diagnostic workflow with specific tools, commands, or validation checkpoints. The 'How It Works' section lists three abstract phases without any concrete steps. There are no feedback loops, no validation steps, and no guidance on how to actually measure or verify that a bottleneck has been resolved. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files and no bundle files to support it. Content is poorly organized, with many redundant sections (Overview, How It Works, When to Use, Instructions all overlap). The 'Resources' section lists 'Project documentation' and 'Related skills' without any actual links or file references. | 1 / 3 |
| **Total** | | 4 / 12 (Passed) |
## Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed.

### Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 9 / 11 Passed | |
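Both warnings are typically resolved in the frontmatter itself: restrict `allowed-tools` to standard tool names and move unrecognized top-level keys under `metadata`. A hypothetical sketch (the tool list and the `category` key are assumptions for illustration, not taken from the skill):

```yaml
# After cleanup: only recognized tool names in allowed-tools, and any
# formerly unknown top-level keys relocated under metadata.
---
name: detecting-performance-bottlenecks
allowed-tools: Read, Grep, Bash
metadata:
  category: performance  # hypothetical key, previously at top level
---
```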