# Percentile Analyzer: skill review

Skill description under review: "Percentile Analyzer - Auto-activating skill for Performance Testing. Triggers on: percentile analyzer, percentile analyzer. Part of the Performance Testing skill category."
## Summary

- Quality: 3% (does it follow best practices?)
- Impact: 100%, 1.00x average score across 3 eval scenarios (Passed)
- Known issues: none
To optimize this skill with Tessl, run:

```
npx tessl skill review --optimize ./planned-skills/generated/10-performance-testing/percentile-analyzer/SKILL.md
```

## Quality
### Discovery: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It reads like auto-generated boilerplate with no substantive content about what the skill does, when to use it, or what user queries should trigger it. The duplicated trigger term and absence of concrete actions or natural keywords make it nearly useless for skill selection.
Suggestions:

- Add concrete actions describing what the skill does, e.g., 'Analyzes response time percentiles (p50, p95, p99) from load test results, identifies latency outliers, and generates performance distribution reports.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about percentile analysis, latency distributions, p99/p95 metrics, response time analysis, or performance test results.'
- Remove the duplicated trigger term and replace it with diverse natural keywords users would actually say, such as 'percentile', 'p99', 'latency', 'response time distribution', 'performance metrics', 'load test analysis'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('Performance Testing') and a tool name ('Percentile Analyzer') but provides no concrete actions. There is no indication of what the skill actually does—no verbs describing specific capabilities like 'calculates percentiles', 'analyzes latency distributions', or 'generates performance reports'. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no explanation of capabilities and no explicit 'Use when...' clause—only a vague 'Triggers on' line with a duplicated term. | 1 / 3 |
| Trigger Term Quality | The only trigger terms listed are 'percentile analyzer' repeated twice. There are no natural user keywords like 'latency', 'p99', 'p95', 'response time', 'performance metrics', 'percentile', or 'load test results' that a user would naturally say. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'Percentile Analyzer' is somewhat specific and unlikely to conflict with many other skills, but the broad 'Performance Testing' category label and lack of concrete scope mean it could overlap with other performance-related skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
### Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty placeholder with no substantive content. It contains only generic boilerplate descriptions that could apply to any skill topic, with zero actionable information about percentile analysis, performance testing metrics, or how to actually compute or interpret percentiles. It provides no value beyond what Claude already knows.
Suggestions:

- Add concrete, executable code examples for computing percentiles (e.g., P50, P95, P99) from load test results using tools like k6, JMeter, or Python/numpy.
- Define a clear workflow for percentile analysis: collect data → compute percentiles → interpret results → identify bottlenecks, with specific commands or scripts at each step.
- Include specific guidance on when to use different percentiles (P50 vs P95 vs P99), threshold recommendations, and how to detect latency outliers.
- Remove all generic filler text ('Provides step-by-step guidance', 'Follows industry best practices') and replace it with actual domain-specific instructions and examples.
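As a concrete starting point for the first suggestion, computing p50/p95/p99 with numpy is nearly a one-liner. The latency samples below are invented for illustration; in practice they would be parsed from k6 or JMeter output:

```python
import numpy as np

# Hypothetical per-request latencies in milliseconds from a load test run.
latencies_ms = np.array([12, 15, 14, 13, 18, 22, 250, 16, 17, 19,
                         21, 14, 13, 900, 15, 16, 18, 20, 17, 14])

# np.percentile accepts a list of quantiles and returns them in one call.
p50, p95, p99 = np.percentile(latencies_ms, [50, 95, 99])
print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  p99={p99:.1f} ms")
```

Note how the two slow requests (250 ms and 900 ms) barely move the median but dominate the tail percentiles, which is exactly why p95/p99 matter in load testing.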
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic filler that tells Claude nothing it doesn't already know. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are vacuous. There is zero domain-specific information about percentile analysis. | 1 / 3 |
| Actionability | There are no concrete code examples, commands, formulas, or specific instructions. The entire skill describes what it could do rather than providing any executable guidance on how to actually perform percentile analysis. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. The skill claims to provide 'step-by-step guidance' but contains none. There are no validation checkpoints or sequenced operations. | 1 / 3 |
| Progressive Disclosure | There are no references to supporting files, no bundle files exist, and the content is a monolithic block of generic boilerplate with no meaningful structure or navigation to deeper material. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
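The workflow the suggestions call for (collect → compute → interpret → identify bottlenecks) could be sketched as a small checker. The SLO thresholds and the outlier rule below are illustrative assumptions for the sketch, not values taken from the skill under review:

```python
import numpy as np

# Illustrative SLO budgets in milliseconds (assumptions, not recommendations).
THRESHOLDS_MS = {"p50": 100, "p95": 300, "p99": 500}

def check_latency(samples_ms):
    """Compute each percentile, compare it to its budget, and flag outliers."""
    results = {}
    for name, limit_ms in THRESHOLDS_MS.items():
        value = float(np.percentile(samples_ms, int(name[1:])))
        results[name] = {"value_ms": value, "limit_ms": limit_ms,
                         "passed": value <= limit_ms}
    # Naive outlier rule (another assumption): anything above 10x the median.
    median = results["p50"]["value_ms"]
    results["outliers"] = [s for s in samples_ms if s > 10 * median]
    return results

report = check_latency([40, 55, 62, 48, 51, 70, 66, 45, 58, 1200])
for name in ("p50", "p95", "p99"):
    r = report[name]
    status = "OK" if r["passed"] else "OVER BUDGET"
    print(f"{name}: {r['value_ms']:.1f} ms (limit {r['limit_ms']} ms) {status}")
print("outliers:", report["outliers"])
```

With this sample data the median stays well within budget while a single 1200 ms request blows the p95 and p99 budgets, which is the interpretation step the skill should teach.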
### Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
Revision: `3efb53b`