Response Time Analyzer - Auto-activating skill for Performance Testing. Triggers on: response time analyzer, response time analyzer Part of the Performance Testing skill category.
0% (Does it follow best practices?)
Impact: 100% (1.00x average score across 3 eval scenarios)
Passed: no known issues

To optimize this skill with Tessl, run:

    npx tessl skill review --optimize ./planned-skills/generated/10-performance-testing/response-time-analyzer/SKILL.md

Quality
Discovery
0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a template placeholder with no substantive content. It names the skill and its category but provides zero information about what it actually does, what specific capabilities it offers, or when it should be selected. The trigger terms are just the skill name repeated, offering no useful matching surface.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Analyzes HTTP response times, calculates percentile distributions (p50, p95, p99), identifies slow endpoints, and generates performance reports.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about response times, latency analysis, API performance, slow requests, load test results, or performance bottlenecks.'
- Remove the redundant duplicate trigger term and replace it with diverse natural keywords users would actually say, such as 'latency', 'slow responses', 'performance metrics', 'throughput', 'load testing'.
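Putting those suggestions together, an improved frontmatter description might look like the following sketch (the exact wording is illustrative, not prescriptive):

```yaml
name: response-time-analyzer
description: >
  Analyzes HTTP response times from load tests: calculates percentile
  distributions (p50, p95, p99), identifies slow endpoints, and generates
  performance reports. Use when the user asks about response times, latency,
  slow responses, API performance, throughput, load test results, or
  performance bottlenecks.
```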
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('Performance Testing') and a tool name ('Response Time Analyzer') but provides no concrete actions. There is no indication of what the skill actually does: no verbs describing specific capabilities like 'measures latency', 'generates percentile reports', or 'identifies slow endpoints'. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause is essentially just the skill's own name repeated. There is no explicit 'Use when...' guidance with meaningful triggers. | 1 / 3 |
| Trigger Term Quality | The only trigger terms listed are 'response time analyzer' repeated twice. There are no natural user keywords like 'latency', 'slow response', 'performance metrics', 'p99', 'load testing', or 'API response time' that a user would naturally say. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that it could overlap with any performance-related skill. 'Performance Testing' is a broad category, and without specific actions or distinct triggers, it would be difficult to distinguish from other performance or monitoring skills. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty template/placeholder with no actual instructional content. It repeatedly references 'response time analyzer' without ever defining what it does, how to use it, or providing any code, commands, tool configurations, or concrete guidance. It fails on every dimension of the rubric.
Suggestions
- Add concrete, executable code examples for response time analysis using specific tools (e.g., k6 scripts, JMeter configurations, or Python-based benchmarking with requests/aiohttp).
- Define a clear workflow, e.g., 1) define test scenarios, 2) configure the load tool, 3) run a baseline test, 4) analyze percentiles (p50/p95/p99), 5) validate against SLAs, with explicit validation checkpoints.
- Remove all boilerplate sections ('When to Use', 'Example Triggers', 'Capabilities') that describe skill meta-information rather than teaching how to perform response time analysis.
- Add concrete examples of response time metrics interpretation, including sample output data and how to identify bottlenecks from the results.
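As a concrete illustration of the kind of executable guidance the skill could include, here is a minimal percentile calculation over response-time samples; the sample data and the nearest-rank method are assumptions made for this sketch, not part of the skill under review:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times (milliseconds)."""
    data = sorted(samples)
    # Clamp the rank so p=0 and p=100 stay within bounds.
    k = max(0, min(len(data) - 1, round(p / 100 * (len(data) - 1))))
    return data[k]

# Hypothetical response times (ms) collected from a load test run.
times_ms = [120, 95, 210, 130, 480, 101, 99, 150, 700, 115]

report = {p: percentile(times_ms, p) for p in (50, 95, 99)}
# A p95 or p99 far above the p50, as with the 480 ms and 700 ms samples
# here, points at tail-latency outliers worth investigating per endpoint.
```

Real load tools such as k6 or JMeter report these percentiles directly; the point is that the skill should teach the workflow around them (baseline run, comparison against SLAs), not merely restate definitions.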
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats the phrase 'response time analyzer' excessively, and provides zero substantive information about how to actually analyze response times. | 1 / 3 |
| Actionability | There are no concrete code examples, commands, tool configurations, or executable guidance whatsoever. Every section is vague and abstract: 'Provides step-by-step guidance' without actually providing any steps. | 1 / 3 |
| Workflow Clarity | No workflow, sequence, or process is defined. The skill claims to provide 'step-by-step guidance' but contains zero steps. There are no validation checkpoints or any operational instructions. | 1 / 3 |
| Progressive Disclosure | The content is a flat, shallow placeholder with no meaningful structure. There are no references to detailed files, no quick-start content, and no navigation to deeper resources. The sections that exist contain no real content. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
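Both warnings concern frontmatter hygiene. One way to resolve them, sketched here with illustrative key and tool names, is to restrict `allowed-tools` to canonical tool names and move any custom keys under `metadata` rather than leaving them at the top level:

```yaml
allowed-tools: Read, Bash          # canonical tool names only
metadata:
  category: performance-testing    # custom keys live under metadata,
                                   # not at the top level of the frontmatter
```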