Comprehensive guide for BlazeMeter Functional Testing, including GUI Functional Tests, API Tests (deprecated), Action Library, and debugging. Use when working with Functional Testing for (1) Creating GUI Functional Tests (YAML, Java IDE, Python IDE), (2) Managing Functional Tests (duplicate, delete, move, rename), (3) Using test data in Functional Tests, (4) Working with Action Library, (5) Debugging Functional Tests, (6) Understanding browser support, or any other Functional Testing tasks. Note - API Functional Tests are deprecated in favor of API Monitoring.
Does it follow best practices? 65%
Impact: — (no eval scenarios have been run)
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:
npx tessl skill review --optimize ./resources/skills/blazemeter-functional-testing/SKILL.md

Quality
Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly identifies its domain (BlazeMeter Functional Testing), lists specific capabilities with good granularity, and includes an explicit 'Use when' clause with numbered trigger scenarios. The deprecation note for API Functional Tests is a helpful disambiguation detail. The main weakness is that trigger terms lean toward product-specific jargon rather than natural user language.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists multiple specific, concrete actions: creating GUI Functional Tests (with specific formats: YAML, Java IDE, Python IDE), managing tests (duplicate, delete, move, rename), using test data, working with the Action Library, debugging, and understanding browser support. These are detailed and actionable. | 3 / 3 |
| Completeness | Clearly answers both 'what' (a comprehensive guide for BlazeMeter Functional Testing covering GUI tests, API tests, the Action Library, and debugging) and 'when' (an explicit 'Use when working with Functional Testing for...' clause with six numbered trigger scenarios). The deprecation note adds useful context. | 3 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'BlazeMeter', 'Functional Testing', 'GUI Functional Tests', 'API Tests', 'Action Library', 'debugging', 'YAML', 'Java IDE', and 'Python IDE'. However, it is somewhat jargon-heavy and may miss natural user phrasings like 'functional test automation', 'test scripts', 'record and playback', or 'BlazeMeter GUI testing'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific product name 'BlazeMeter' and the focus on 'Functional Testing' as a distinct feature area. The enumerated sub-capabilities (GUI tests, Action Library, debugging) create a clear niche that is unlikely to conflict with other skills like performance testing or API monitoring. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 44%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill serves primarily as a navigation hub to reference files, which it does reasonably well with clear progressive disclosure. However, the body content itself lacks actionable, executable guidance—the MCP tools section describes tools abstractly without concrete examples, and the workflow lacks validation steps or error handling. The content could be significantly tightened by removing redundant sections and adding concrete examples.
Suggestions
- Add a concrete, executable MCP tool example with actual parameters and expected response structure (e.g., a `blazemeter_tests` call with a specific `project_id` and a sample JSON response); a hypothetical sketch of such an example follows this list.
- Add validation and error-handling steps to the Example Workflow, such as checking execution status codes, handling failed tests, and retry logic.
- Remove the 'When to Use Each Reference' section, as it duplicates information already conveyed by the Reference Files section headings and descriptions.
- Replace the 'When to Use MCP Tools' bullet list with a brief decision matrix, or remove it entirely; the current content is too vague to be useful.
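One way to act on the first suggestion is sketched below. This is an illustrative, hedged example only: the tool name `blazemeter_tests` and the `project_id` parameter come from the suggestion's own wording, while the response fields (`tests`, `id`, `name`, `type`, `total`) are hypothetical placeholders rather than documented BlazeMeter output.

```python
# Hypothetical sketch of the kind of concrete MCP tool example the suggestion asks for.
# Tool name and parameter follow the suggestion's wording; the response shape is assumed.

request = {
    "tool": "blazemeter_tests",            # MCP tool named in the suggestion
    "arguments": {"project_id": 123456},   # placeholder project id
}

# An assumed response structure the skill could tell an agent to expect:
sample_response = {
    "tests": [
        {"id": 987654, "name": "checkout-gui-test", "type": "functionalGui"},
    ],
    "total": 1,
}

# ...and what to read out of it:
for test in sample_response["tests"]:
    print(f"{test['id']}: {test['name']} ({test['type']})")
```

Pairing the call with an expected response shape like this gives an agent something concrete to validate against, which is the gap the Actionability score points to.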
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content has some unnecessary repetition (e.g., 'When to Use Each Reference' largely duplicates the Reference Files section headings) and the 'When to Use MCP Tools' section is vague filler. However, it's not egregiously verbose and avoids explaining basic concepts Claude already knows. | 2 / 3 |
| Actionability | The MCP tools section provides specific tool names and parameter details, which is somewhat actionable. However, there are no executable code examples, no concrete YAML snippets, no actual API call examples with expected responses, and the 'Example Workflow' is just a list of abstract steps rather than copy-paste ready commands. | 2 / 3 |
| Workflow Clarity | The 'Example Workflow' section is a shallow sequence with no validation checkpoints, no error handling, and no feedback loops. For a testing tool where test execution can fail, there's no guidance on what to do when tests fail, how to interpret results, or how to retry (see the sketch after this table). The workflow is essentially 'list, read, monitor, review' with no substance. | 1 / 3 |
| Progressive Disclosure | The skill is well-structured as an overview with clear one-level-deep references to specific topic files (gui-tests.md, api-tests.md, action-library.md, debugging.md, browsers.md). Each reference is clearly labeled with its contents, and navigation is straightforward. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
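To make the Workflow Clarity critique concrete, the skill's Example Workflow could add checkpoints along the lines of the sketch below. This is a minimal illustration under stated assumptions: the function names, status strings, and polling approach are hypothetical placeholders, not actual BlazeMeter or MCP calls.

```python
import time

# Illustrative workflow skeleton with the validation checkpoints, failure
# handling, and retry logic the review says are missing. `start_test` and
# `get_status` are hypothetical callables standing in for real tool calls.

def run_functional_test(start_test, get_status, max_retries=2, poll_seconds=10):
    attempts = max_retries + 1
    for attempt in range(1, attempts + 1):
        execution_id = start_test()            # checkpoint 1: did the run start?
        if execution_id is None:
            raise RuntimeError("Test execution could not be started")

        status = get_status(execution_id)      # checkpoint 2: poll to a terminal state
        while status in ("pending", "running"):
            time.sleep(poll_seconds)
            status = get_status(execution_id)

        if status == "passed":                 # checkpoint 3: interpret the result
            return execution_id
        if attempt < attempts:
            print(f"Attempt {attempt} ended with status {status!r}; retrying")

    raise RuntimeError(f"Test still failing after {attempts} attempts")
```

Even a skeleton like this gives an agent something to check against when a run does not come back as passed, which is the substance the 'list, read, monitor, review' workflow currently lacks.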
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 validation checks passed.
Validation for skill structure: no warnings or errors.