
response-time-analyzer

Response Time Analyzer - Auto-activating skill for Performance Testing. Triggers on: response time analyzer, response time analyzer Part of the Performance Testing skill category.

34 · 1.00x

Quality: 0%
Does it follow best practices?

Impact: 100% (1.00x)
Average score across 3 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./planned-skills/generated/10-performance-testing/response-time-analyzer/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a template placeholder with no substantive content. It names the skill and its category but provides zero information about what it actually does, what inputs it takes, or what outputs it produces. The trigger terms are just the skill name repeated, offering no help for skill selection.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Analyzes HTTP response times, calculates percentile distributions (p50, p95, p99), identifies slow endpoints, and generates performance reports from load test results.'

Add a 'Use when...' clause with natural trigger terms like 'Use when the user asks about response times, latency analysis, slow API endpoints, performance bottlenecks, load test results, or percentile metrics.'

Remove the duplicated trigger term and replace with diverse natural keywords users would actually say, such as 'latency', 'slow responses', 'performance metrics', 'load testing results', 'API timing'.
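Taken together, the suggestions above imply frontmatter along these lines (a hypothetical sketch; the wording is illustrative and not the skill's actual metadata):

```yaml
---
name: response-time-analyzer
description: >
  Analyzes HTTP response times from load test results: calculates percentile
  distributions (p50, p95, p99), identifies slow endpoints, and generates
  performance reports. Use when the user asks about response times, latency
  analysis, slow API endpoints, performance bottlenecks, load test results,
  or percentile metrics.
---
```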

Dimension / Reasoning / Score

Specificity

The description names a domain ('Performance Testing') and a tool name ('Response Time Analyzer') but provides no concrete actions. There is no indication of what the skill actually does—no verbs describing specific capabilities like 'measures latency', 'generates percentile reports', or 'identifies slow endpoints'.

1 / 3

Completeness

The description fails to answer 'what does this do' beyond naming itself, and the 'when' clause is essentially just the skill's own name repeated. There is no explicit 'Use when...' guidance with meaningful triggers.

1 / 3

Trigger Term Quality

The only trigger terms listed are 'response time analyzer' repeated twice. There are no natural user keywords like 'latency', 'slow response', 'performance metrics', 'p99', 'load testing', or 'API response time' that a user would naturally say.

1 / 3

Distinctiveness / Conflict Risk

The description is so vague that it could overlap with any performance-related skill. 'Performance Testing' is a broad category, and without specific actions or distinct triggers, it would be difficult to distinguish from other performance or monitoring skills.

1 / 3

Total: 4 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a hollow placeholder with no substantive content. It contains only meta-descriptions of what it would do without any actual instructions, code examples, tool configurations, or workflows for response time analysis. It provides zero value to Claude beyond trigger-word matching.

Suggestions

Add concrete, executable code examples for response time analysis using specific tools (e.g., k6 scripts, JMeter configurations, or Python-based benchmarking with requests/aiohttp).

Define a clear multi-step workflow for analyzing response times: collecting data, identifying percentiles (p50/p95/p99), detecting anomalies, and reporting results with validation checkpoints.

Include specific examples of input data and expected output formats (e.g., a sample response time distribution analysis with thresholds and pass/fail criteria).

Remove all meta-description sections ('Purpose', 'When to Use', 'Example Triggers') that describe the skill itself rather than teaching how to perform response time analysis.
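As a concrete illustration of the workflow suggestion above, a minimal percentile analysis might look like this (a hypothetical sketch; the function names and the 500 ms p95 threshold are illustrative assumptions, not part of the reviewed skill):

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of response times in ms."""
    ordered = sorted(samples)
    rank = max(1, min(len(ordered), round(p / 100 * len(ordered))))
    return ordered[rank - 1]

def analyze_response_times(samples, p95_threshold_ms=500):
    """Collect p50/p95/p99 and apply a pass/fail threshold at p95."""
    report = {
        "p50": percentile(samples, 50),
        "p95": percentile(samples, 95),
        "p99": percentile(samples, 99),
    }
    report["passed"] = report["p95"] <= p95_threshold_ms
    return report

# Example: 100 samples from 10 ms to 1000 ms
times = [10 * i for i in range(1, 101)]
print(analyze_response_times(times))
# → {'p50': 500, 'p95': 950, 'p99': 990, 'passed': False}
```

A real skill would additionally document where the samples come from (e.g., load-test output files) and what the report format looks like, as the other suggestions note.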

Dimension / Reasoning / Score

Conciseness

The content is almost entirely filler and meta-description. It explains what the skill does in abstract terms without providing any actual technical content. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are empty padding.

1 / 3

Actionability

There is zero concrete, executable guidance — no code, no commands, no specific tools usage, no configuration examples, no actual response time analysis techniques. The entire content describes rather than instructs.

1 / 3

Workflow Clarity

No workflow is defined at all. There are no steps, no sequence, no validation checkpoints. The skill claims to provide 'step-by-step guidance' but contains none.

1 / 3

Progressive Disclosure

No bundle files exist and no references to external resources are provided. The content is a monolithic block of vague descriptions with no structured navigation to deeper content.

1 / 3

Total: 4 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
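Both warnings can typically be resolved by limiting `allowed-tools` to standard tool names and nesting nonstandard frontmatter keys under `metadata`. A hypothetical sketch (the key names are illustrative; the actual offending keys are not shown on this page):

```yaml
---
name: response-time-analyzer
description: ...
allowed-tools: Read, Grep, Bash   # standard tool names only
metadata:
  category: performance-testing   # formerly a top-level unknown key
---
```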

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

