Locust Test Creator - Auto-activating skill for Performance Testing. Triggers on: locust test creator, locust test creator Part of the Performance Testing skill category.
Impact — 100% (1.00x average score across 3 eval scenarios). Passed; no known issues.
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./planned-skills/generated/10-performance-testing/locust-test-creator/SKILL.md`

Quality
Discovery
7% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a title and category label with no substantive content. It fails to describe what the skill actually does (e.g., generating Locust test scripts, configuring virtual users, defining task sets) and gives no guidance on when Claude should select it. The duplicated trigger term adds no value.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Generates Locust load test scripts with configurable user scenarios, task sets, and request patterns for HTTP endpoints.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to create load tests, performance tests, stress tests, or mentions Locust, locustfile, or load testing Python scripts.'
Remove the duplicated trigger term and expand with natural keyword variations users would actually say, such as 'load test', 'stress test', 'performance benchmark', 'locustfile', 'concurrent users'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the tool ('Locust') and domain ('Performance Testing') but does not describe any concrete actions. There are no specific capabilities listed like 'creates load test scripts', 'configures user scenarios', or 'generates locustfiles'. | 1 / 3 |
| Completeness | The 'what' is extremely vague (no concrete actions described beyond the name) and the 'when' is missing entirely — there is no 'Use when...' clause or equivalent explicit trigger guidance. The description only states what it is called and its category. | 1 / 3 |
| Trigger Term Quality | The trigger terms are just 'locust test creator' repeated twice. Missing natural variations users would say like 'load test', 'performance test', 'locustfile', 'stress test', 'load testing script', '.py locust', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Locust' specifically (a Python load testing framework) provides some distinctiveness from generic testing skills, but the lack of specific actions or triggers means it could overlap with other performance testing or load testing skills. | 2 / 3 |
| Total | | 5 / 12 Passed |
Implementation
0% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is entirely a meta-description placeholder with no actual instructional content. It contains no executable code, no Locust-specific guidance, no workflow steps, and no examples. It would provide Claude with zero useful information for creating Locust performance tests.
Suggestions
Add a concrete, executable example of a basic Locust test file (locustfile.py) with a simple user class, task set, and run command.
Define a clear workflow: 1) identify the endpoints to test, 2) write a locustfile.py with user behaviors, 3) run it with a specific CLI command, 4) validate results against thresholds.
Include specific Locust patterns such as sequential tasks, weighted tasks, custom load shapes, and how to configure ramp-up/ramp-down with concrete code examples.
Remove all generic meta-content (trigger descriptions, capability lists) and replace with actionable technical guidance that Claude doesn't already know.
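As a sketch of the first suggestion above, a minimal locustfile might look like the following. The endpoint paths, task weights, and wait times here are illustrative assumptions, not content from the reviewed skill:

```python
# Hypothetical minimal locustfile.py -- illustrates a user class with
# weighted tasks and a wait time, as the review's first suggestion asks for.
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)  # weight 3: runs ~3x as often as browse_item
    def index(self):
        self.client.get("/")

    @task(1)
    def browse_item(self):
        # "/items/42" is a placeholder endpoint for illustration.
        self.client.get("/items/42")
```

A matching run command (step 3 of the suggested workflow) would be something like `locust -f locustfile.py --headless -u 50 -r 5 --run-time 2m --host https://example.com`, where `-u` sets concurrent users and `-r` the spawn rate.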
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic filler that tells Claude nothing useful. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are vacuous. It explains what triggers the skill rather than providing any actual technical content about creating Locust tests. | 1 / 3 |
| Actionability | There is zero concrete guidance — no code, no commands, no examples of Locust test files, no configuration snippets. The entire content describes the skill abstractly rather than instructing Claude how to actually create Locust tests. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or process is defined. There is no sequence for creating a Locust test, no validation checkpoints, and no error handling guidance. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of generic meta-description with no references to supporting files, no structured sections with real content, and no navigation to deeper materials. There are no bundle files to support it either. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation
81% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |