Skill description under review (verbatim, as published): "Execute this skill enables AI assistant to profile application performance, analyzing cpu usage, memory consumption, and execution time. it is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. the... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'."
Impact: Pending
Eval scenarios: No eval scenarios have been run
Checks: Passed
Known issues: No known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/performance/application-profiler/skills/profiling-application-performance/SKILL.md`

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers both what the skill does and when to use it, which is good for completeness. However, it suffers from poor formatting ('Execute this skill enables AI assistant', truncation with 'the...'), uses first/third person inconsistently, and the trigger terms are somewhat generic, risking overlap with other optimization-focused skills. The specificity of capabilities is moderate but undermined by the garbled opening and truncation.
Suggestions
Clean up the malformed opening ('Execute this skill enables AI assistant') and the truncation ('the...') — rewrite in proper third person voice like 'Profiles application performance by analyzing CPU usage, memory consumption, and execution time.'
Add more distinctive trigger terms and natural user phrases such as 'slow', 'profiling', 'benchmark', 'latency', 'memory leak', 'CPU usage' to improve trigger term coverage and reduce conflict risk with generic optimization skills.
Narrow the scope to distinguish from other optimization skills — specify that this is for runtime/application profiling rather than general code optimization.
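Putting the three suggestions above together, a possible rewrite of the skill's frontmatter description might look like the following (the field layout follows the usual SKILL.md frontmatter conventions; the exact wording is illustrative):

```yaml
---
name: profiling-application-performance
description: >
  Profiles runtime application performance by analyzing CPU usage, memory
  consumption, and execution time, then identifies bottlenecks and suggests
  targeted optimizations. Use for runtime profiling of Node.js, Python, or
  Java applications, not for general code refactoring or build optimization.
  Trigger phrases: 'profile', 'slow', 'latency', 'benchmark', 'memory leak',
  'CPU usage', 'optimize performance', 'speed up'.
---
```

This version opens in third person, removes the truncation, names the stacks it covers, and folds in the more distinctive trigger terms.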
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (performance profiling) and some actions (analyzing CPU usage, memory consumption, execution time, bottleneck identification, optimization recommendations), but the description is muddled with formatting issues ('Execute this skill enables AI assistant') and truncated ('the...'), reducing clarity of the concrete actions listed. | 2 / 3 |
| Completeness | Explicitly answers both 'what' (profile application performance, analyze CPU/memory/execution time, identify bottlenecks, provide optimization recommendations) and 'when' (Use when optimizing performance, triggered by phrases like 'optimize', 'performance', 'speed up'). | 3 / 3 |
| Trigger Term Quality | Includes some natural trigger terms like 'optimize', 'performance', 'speed up', 'bottleneck identification', and 'performance analysis', but misses common variations like 'slow', 'profiling', 'latency', 'benchmark', 'CPU', 'memory leak', or 'execution time'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The performance profiling niche is somewhat specific, but broad terms like 'optimize' and 'performance' could easily overlap with other optimization-related skills (e.g., database optimization, code refactoring, build optimization). The description doesn't narrow sufficiently to application runtime profiling. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is almost entirely devoid of actionable information. It reads like a marketing description or placeholder template rather than an operational guide for Claude. Every section describes what the skill does in abstract terms without providing any concrete commands, code examples, tool invocations, or specific profiling techniques for any of the mentioned stacks (Node.js, Python, Java).
Suggestions
Replace abstract descriptions with concrete, executable profiling commands for each stack (e.g., `node --prof app.js`, `python -m cProfile script.py`, `jcmd <pid> JFR.start`).
Add actual code examples showing how to interpret profiling output, including sample output snippets and what to look for in each.
Remove filler sections (Overview, When to Use, Integration, Prerequisites, Instructions, Output, Error Handling, Resources) that contain only generic platitudes, and consolidate into a lean quick-start with stack-specific subsections.
Add a concrete workflow with validation steps, e.g., 'Run profiler → parse output → identify top N hotspots → suggest specific optimization patterns → verify improvement with re-profile'.
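As a sketch of the 'run profiler → parse output → identify top N hotspots' loop suggested above, here is what the Python-stack case could look like using the standard library's `cProfile` and `pstats` (the `slow_sum` workload and `top_hotspots` helper are illustrative names, not part of the skill):

```python
import cProfile
import io
import pstats


def slow_sum(n):
    # Deliberately inefficient workload so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += sum(range(i % 100))
    return total


def top_hotspots(func, *args, n=5):
    """Profile func(*args) and return a report of the n most expensive calls."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    # Capture the stats report as a string instead of printing to stdout,
    # so an agent can parse it and pick out the hotspots.
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
    stats.print_stats(n)
    return stream.getvalue()


if __name__ == "__main__":
    print(top_hotspots(slow_sum, 10_000))
```

The same shape applies to the other stacks mentioned in the suggestions (`node --prof`, `jcmd <pid> JFR.start`): run the profiler, capture its output, rank by cumulative cost, then re-profile after each change to verify the improvement.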
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with no actionable density. Explains concepts Claude already knows (what profiling is, what CPU usage means), includes filler sections like 'Overview', 'When to Use', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' that are all vague platitudes adding no real information. Nearly every section could be deleted without losing useful content. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance anywhere. The examples describe what the skill 'will do' in abstract terms rather than showing actual profiling commands, code snippets, or tool invocations. Instructions like 'Invoke this skill when the trigger conditions are met' and 'Provide necessary context and parameters' are completely vacuous. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists abstract steps like 'Identify Application Stack' and 'Analyze Performance Metrics' without any concrete commands, tool calls, or validation checkpoints. There are no feedback loops, no error recovery steps, and no actual workflow that Claude could follow to perform profiling. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with many sections that are all shallow and vague. No references to external files, no clear hierarchy of information. The 'Resources' section just says 'Project documentation' and 'Related skills and commands' without any actual links or file references. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
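Both warnings point at the frontmatter. A minimal sketch of what would clear them, assuming the skill follows the standard SKILL.md spec (the tool list here is illustrative; keep only keys the spec defines and move anything else out of the frontmatter):

```yaml
---
name: profiling-application-performance
description: Profiles application performance by analyzing CPU usage, memory consumption, and execution time.
allowed-tools: Bash, Read, Grep
---
```

The 'unusual tool name(s)' warning usually means a typo or a nonstandard entry in `allowed-tools`; the 'unknown frontmatter key(s)' warning means extra keys should be deleted or relocated to a metadata section, as the check's description suggests.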