Skill description (as reviewed):

> Execute this skill enables AI assistant to profile application performance, analyzing cpu usage, memory consumption, and execution time. it is triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. the... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
Overall score: 33%
Impact: Pending (no eval scenarios have been run)
Known issues: none
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/performance/application-profiler/skills/profiling-application-performance/SKILL.md`

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers both what the skill does and when to use it, which is good for completeness. However, it suffers from poor formatting ('Execute this skill enables AI assistant'), a truncation ('the...'), and uses a mix of first/third person framing that reduces professionalism. The trigger terms are reasonable but could be more comprehensive, and the scope is broad enough to potentially conflict with other optimization-focused skills.
Suggestions
Clean up the malformed opening ('Execute this skill enables AI assistant') and the truncation ('the...') — rewrite in proper third person voice like 'Profiles application performance by analyzing CPU usage, memory consumption, and execution time.'
Add more natural trigger term variations such as 'slow', 'profiling', 'latency', 'benchmark', 'memory leak', 'CPU usage' to improve matching coverage.
Narrow the scope to reduce conflict risk — specify the type of applications or environments (e.g., 'Python applications', 'web services', 'runtime profiling') to distinguish from other optimization skills.
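Combining the three suggestions, the cleaned-up frontmatter might read like the sketch below; the wording and key layout are illustrative assumptions, not prescribed by the Tessl spec:

```yaml
# Illustrative sketch only; the field layout is an assumption, not the actual spec.
description: >
  Profiles application performance at runtime by analyzing CPU usage, memory
  consumption, and execution time, then recommends targeted optimizations.
  Use when the user asks to optimize, profile, benchmark, or speed up an
  application, or mentions slowness, latency, memory leaks, or high CPU usage.
```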
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (performance profiling) and some actions (analyzing CPU usage, memory consumption, execution time, bottleneck identification, optimization recommendations), but the description is muddled with formatting issues ('Execute this skill enables AI assistant') and truncated ('the...'), reducing clarity of the concrete actions listed. | 2 / 3 |
| Completeness | Explicitly answers both 'what' (profile application performance, analyze CPU usage, memory consumption, execution time, identify bottlenecks, provide optimization recommendations) and 'when' (Use when optimizing performance, triggered by phrases like 'optimize', 'performance', 'speed up'). | 3 / 3 |
| Trigger Term Quality | Includes some natural trigger terms like 'optimize', 'performance', 'speed up', 'bottleneck identification', and 'performance analysis', but misses common variations like 'slow', 'profiling', 'latency', 'benchmark', 'CPU', 'memory leak', or 'execution time' in the trigger clause specifically. | 2 / 3 |
| Distinctiveness / Conflict Risk | The performance profiling niche is somewhat specific, but broad terms like 'optimize' and 'performance' could overlap with other optimization-related skills (e.g., database optimization, code refactoring, build optimization). The description doesn't narrow sufficiently to application runtime profiling. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation — 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic filler content with no actionable, concrete guidance. It reads like a template that was never filled in with real information — there are no profiling commands, no code examples, no tool-specific instructions, and no validation steps. Nearly every section could apply to any skill if you swapped out the word 'profiling.'
Suggestions
Replace abstract descriptions with concrete, executable profiling commands for each stack (e.g., `node --prof app.js`, `py-spy record -o profile.svg -- python app.py`, `async-profiler` for Java).
Add actual code examples showing how to interpret profiling output, identify hotspots, and apply specific optimizations with before/after comparisons.
Define a clear multi-step workflow with validation checkpoints, e.g., 'Run profiler → verify output file exists → analyze top N hotspots → recommend specific changes → re-profile to confirm improvement.'
Remove all generic filler sections (Output, Error Handling, Instructions, Resources) that contain no specific information, or replace them with concrete details about the profiling tools and their actual error modes.
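As a hedged illustration of the suggested workflow (run profiler, then analyze the top N hotspots), here is a minimal sketch using Python's built-in cProfile; the function name is invented for the example:

```python
import cProfile
import io
import pstats

def busy_work(n):
    # Deliberately slow loop standing in for an application hotspot.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Step 1: run the profiler around the workload.
profiler = cProfile.Profile()
profiler.enable()
busy_work(200_000)
profiler.disable()

# Step 2: analyze the top 5 hotspots by cumulative time, as the
# suggested workflow describes.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The same shape applies to the other stacks mentioned above: generate a profile, extract the ranked hotspots, then recommend changes against that ranking.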
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive padding. Explains obvious concepts Claude already knows (what profiling is, when to use it), includes generic filler sections like 'Output: The skill produces structured output relevant to the task' and 'Instructions: Invoke this skill when the trigger conditions are met.' The 'Overview' restates the title. Multiple sections add zero actionable information. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance anywhere. Examples describe what the skill 'will do' in abstract terms rather than showing how. No actual profiling commands, no code snippets, no tool invocations. Sections like 'Instructions' and 'Output' are entirely vague placeholders with no specifics. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section lists abstract steps like 'Identifies main application entry points' without any concrete commands, validation checkpoints, or error recovery loops. The 'Instructions' section is four generic bullet points that could apply to literally any skill. No validation steps for a multi-step profiling process. | 1 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files, no bundle files, and no meaningful content organization. Multiple sections contain filler content that could be removed entirely. The 'Resources' section lists 'Project documentation' and 'Related skills and commands' with no actual links or paths. | 1 / 3 |
| Total | | 4 / 12 Passed |
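The 're-profile to confirm improvement' checkpoint can be sketched as a before/after comparison. This toy example uses only the standard library's timeit; both functions are invented for illustration, with the optimization being a closed-form formula replacing a loop:

```python
import timeit

def slow_sum(n):
    # Baseline: O(n) loop summing squares 0..n-1.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum(n):
    # Optimized: closed-form sum of squares 0..n-1.
    return (n - 1) * n * (2 * n - 1) // 6

# Checkpoint: the optimization must not change behavior.
assert slow_sum(10_000) == fast_sum(10_000)

# Checkpoint: re-measure to confirm the improvement.
before = timeit.timeit(lambda: slow_sum(10_000), number=200)
after = timeit.timeit(lambda: fast_sum(10_000), number=200)
print(f"before: {before:.4f}s, after: {after:.4f}s")
```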
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
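The two warnings above come from checks of this general shape. A minimal sketch, assuming a hypothetical set of known frontmatter keys (the key names are illustrative, not the actual Tessl spec):

```python
# Assumed set of spec-defined keys; the real validator's list will differ.
KNOWN_KEYS = {"name", "description", "allowed-tools"}

def check_frontmatter(frontmatter: dict) -> list[str]:
    """Return a warning for each frontmatter key not in the known set."""
    warnings = []
    for key in frontmatter:
        if key not in KNOWN_KEYS:
            warnings.append(f"Unknown frontmatter key: {key!r}")
    return warnings

warnings = check_frontmatter({"name": "profiler", "custom-field": 1})
print(warnings)
```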