blazemeter-performance-testing

Comprehensive guide for BlazeMeter Performance Testing, including load configuration, reporting, JMeter configuration, Taurus, scenarios, and advanced features. Use when working with Performance tests for (1) Configuring load settings and distribution, (2) Creating and running tests (JMeter, Browser, URL/API, Multi-Test), (3) Analyzing reports and filtering data, (4) Configuring JMeter properties and scenarios, (5) Using Taurus for test configuration, (6) Advanced features (AI Log Analysis, APM Integration, Network Emulation, Mainframe Testing), (7) Troubleshooting test issues, or any other Performance Testing tasks.


Quality: 71% (Does it follow best practices?)

Impact: No eval scenarios have been run.

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./resources/skills/blazemeter-performance-testing/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the BlazeMeter performance testing domain, lists specific concrete capabilities across seven numbered categories, and includes an explicit 'Use when' clause with detailed trigger scenarios. The description uses proper third-person voice and includes numerous natural trigger terms that users would employ when seeking help with BlazeMeter or performance testing tasks.

Dimension scores:

Specificity (3/3): Lists multiple specific concrete actions: configuring load settings, creating/running tests (with specific types), analyzing reports, configuring JMeter properties, using Taurus, AI Log Analysis, APM Integration, Network Emulation, Mainframe Testing, and troubleshooting.

Completeness (3/3): Clearly answers both 'what' (comprehensive guide for BlazeMeter Performance Testing with specific capabilities listed) and 'when' (explicit 'Use when working with Performance tests for...' clause with seven numbered trigger scenarios plus a catch-all).

Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'BlazeMeter', 'Performance Testing', 'load', 'JMeter', 'Taurus', 'reports', 'API', 'Browser', 'Network Emulation', 'Mainframe Testing', 'APM Integration'. These cover a wide range of terms a user working with BlazeMeter would naturally use.

Distinctiveness / Conflict Risk (3/3): Highly distinctive due to the specific focus on BlazeMeter as a product, combined with specific tools like JMeter and Taurus. Unlikely to conflict with generic testing or other performance-tool skills, given the clear product-specific niche.

Total: 12/12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is well-structured as a navigation hub with clear progressive disclosure to reference files, and it provides useful MCP tool integration details. However, it suffers from significant verbosity and redundant sections (e.g., 'When to Use Each Reference' duplicates the reference file descriptions), it lacks concrete executable examples with actual parameter values, and its workflows miss the validation checkpoints needed for reliable test execution.

Suggestions

- Remove the redundant 'When to Use Each Reference' and 'When to Use MCP Tools' sections; this information is already conveyed by the reference file descriptions and the tool listings, respectively.

- Add concrete examples with actual parameter values for MCP tool calls, e.g., show a real `blazemeter_tests` call with action `configure_load` including sample users/duration/ramp-up values (see the sketch after this list).

- Add validation checkpoints to workflows, such as verifying the test configuration after creation (read the test back) and checking execution status before retrieving results.

- Replace the 'Quick Start' section with a genuinely actionable quick-start example showing a minimal end-to-end test creation and execution flow with specific tool calls.
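Taken together, the last three suggestions describe one minimal flow. The sketch below is illustrative only: `call_tool` stands in for whatever invocation mechanism the agent's MCP client provides, and the argument names and values (`test_id`, `users`, `duration`, `ramp_up`) as well as the `read`/`start`/`status`/`results` action names are assumptions; only the `blazemeter_tests` tool name and the `configure_load` action appear in this review, so the real schema should be taken from the bzm-mcp server's tool definitions.

```python
# Hypothetical sketch of the suggested quick-start flow. `call_tool`
# stands in for the agent's MCP client; every argument name below is
# an assumption, not the actual bzm-mcp schema. Only `blazemeter_tests`
# and the `configure_load` action are named in the review itself.
import time


def call_tool(name: str, arguments: dict) -> dict:
    """Placeholder for the MCP client's tool-invocation call."""
    raise NotImplementedError("wire this to your MCP client")


TEST_ID = 123456  # hypothetical existing test

# Configure load with concrete values (suggestion 2).
call_tool("blazemeter_tests", {
    "action": "configure_load",
    "test_id": TEST_ID,
    "users": 50,        # concurrent virtual users
    "duration": 600,    # test duration, seconds
    "ramp_up": 120,     # seconds to reach full load
})

# Validation checkpoint (suggestion 3): read the test back and confirm
# the load settings were applied before starting execution.
test = call_tool("blazemeter_tests", {"action": "read", "test_id": TEST_ID})
assert test["load"]["users"] == 50, "load configuration was not applied"

# Start the run, poll execution status until it ends, then fetch
# results (suggestions 3 and 4). Action names here are assumptions.
run = call_tool("blazemeter_tests", {"action": "start", "test_id": TEST_ID})
while call_tool("blazemeter_tests",
                {"action": "status", "run_id": run["run_id"]})["state"] != "ended":
    time.sleep(30)

report = call_tool("blazemeter_tests",
                   {"action": "results", "run_id": run["run_id"]})
```

The point is less the specific calls than the shape: every mutating step is followed by a read-back check, and results are fetched only after the execution status confirms completion.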

Dimension scores:

Conciseness (1/3): The content is verbose and repetitive. The 'When to Use MCP Tools' section restates what is already obvious from the tool descriptions, the 'When to Use Each Reference' section at the bottom repeats what is already stated in the Reference Files section headers, and the 'Quick Start' section is just a table of contents, not actionable guidance. Multiple sections explain things Claude can infer.

Actionability (2/3): The MCP tool descriptions provide concrete tool names, actions, and required arguments, and the example workflows give step-by-step sequences. However, there are no executable code examples, no actual parameter values shown, and no concrete input/output examples demonstrating real usage patterns.

Workflow Clarity (2/3): The 'Creating and Running a Performance Test' and 'Analyzing Test Results' workflows provide clear sequences, but they lack validation checkpoints. There is no guidance on what to do if a step fails, no verification that the load configuration was applied correctly before starting execution, and no feedback loops for error recovery.

Progressive Disclosure (3/3): The skill has a clear overview structure with well-organized, one-level-deep references to seven separate reference files. Each reference is clearly signaled with descriptive labels indicating what topics it covers, making navigation straightforward.

Total: 8/12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11/11 passed.

Validation for skill structure

No warnings or errors.

Repository: Blazemeter/bzm-mcp (Reviewed)
