Create and execute load tests for performance validation using k6, JMeter, and Artillery. Use when validating application performance under load conditions or identifying bottlenecks. Trigger with phrases like "run load test", "create stress test", or "validate performance under load".
Install with the Tessl CLI:
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill running-load-tests
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It specifies concrete actions with named tools, provides natural trigger phrases users would actually say, and clearly distinguishes itself from other testing or performance-related skills. The description follows best practices by using third person voice and explicitly stating both capabilities and usage triggers.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions ('Create and execute load tests') and names specific tools (k6, JMeter, Artillery) along with the purpose (performance validation, identifying bottlenecks). | 3 / 3 |
| Completeness | Clearly answers both what (create/execute load tests using k6, JMeter, Artillery) and when (validating performance under load, identifying bottlenecks) with explicit trigger phrases provided. | 3 / 3 |
| Trigger Term Quality | Includes natural trigger phrases users would say: 'run load test', 'create stress test', 'validate performance under load'. Also includes domain terms like 'performance validation' and 'bottlenecks'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on load/stress testing with named tools. Distinct triggers like 'load test', 'stress test', and specific tool names make it unlikely to conflict with general testing or performance monitoring skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 20%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is primarily descriptive rather than instructive, explaining what load testing is and what Claude 'will do' rather than providing concrete, executable guidance. It lacks any actual code examples for k6, JMeter, or Artillery despite claiming to support all three. The content would benefit significantly from replacing explanatory prose with executable script templates and specific commands.
Suggestions
- Replace the abstract 'Examples' section with actual executable k6/JMeter/Artillery script templates that can be copy-pasted and modified.
- Remove the 'Overview', 'How It Works', and 'When to Use This Skill' sections; Claude doesn't need to be told what it will do.
- Add concrete validation steps such as 'Run k6 check output.json to verify results' and explicit thresholds for pass/fail criteria.
- Include specific CLI commands for each tool (e.g., 'k6 run --vus 100 --duration 30s script.js') rather than abstract instructions.
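As a concrete illustration of the suggestions above, here is a minimal k6 script sketch with explicit pass/fail thresholds. It runs under the k6 runtime (not plain Node.js), and the target URL and threshold values are placeholders, not taken from the skill itself:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 100,          // virtual users
  duration: '30s',   // test duration
  thresholds: {
    // explicit pass/fail criteria: the run fails if these are not met
    http_req_duration: ['p(95)<500'], // 95th percentile under 500 ms
    http_req_failed: ['rate<0.01'],   // error rate under 1%
  },
};

export default function () {
  const res = http.get('https://example.com/'); // placeholder target URL
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

Executed with `k6 run script.js`, or with CLI overrides as quoted above (`k6 run --vus 100 --duration 30s script.js`); CLI flags take precedence over the script's `options`.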
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanation of concepts Claude already knows. Sections like 'Overview', 'How It Works', and 'When to Use This Skill' explain what load testing is and describe Claude's own capabilities rather than providing actionable guidance. | 1 / 3 |
| Actionability | No executable code or concrete commands provided. The 'Examples' section describes what the skill 'will do' rather than showing actual k6/JMeter/Artillery scripts. Instructions are abstract ('Generate appropriate load test scripts') rather than copy-paste ready. | 1 / 3 |
| Workflow Clarity | Steps are listed in the Instructions section with a clear sequence, but there are no validation checkpoints or feedback loops. Missing explicit verification steps between script generation and execution, and no guidance on what to do if tests fail validation. | 2 / 3 |
| Progressive Disclosure | Content is organized into sections but is monolithic; everything is in one file with no references to external documentation. The Resources section mentions documentation but provides no actual links. Content that could be split (tool-specific examples) is absent entirely. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
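To address these warnings, the SKILL.md frontmatter would need `metadata` as a dictionary and only recognized keys at the top level. A sketch of the expected shape, with illustrative values (the tool names and version are assumptions, not taken from the skill):

```yaml
---
name: running-load-tests
description: Create and execute load tests for performance validation using k6, JMeter, and Artillery.
allowed-tools: Bash, Read, Write  # use recognized tool names only
metadata:                          # must be a dictionary, not a scalar
  version: "1.0.0"
# move any unknown top-level keys under metadata, or remove them
---
```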
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.