Test Retry Config - Auto-activating skill for Test Automation. Triggers on: test retry config, test retry config. Part of the Test Automation skill category.
3% (Does it follow best practices?)
Impact: 99%
1.00x average score across 3 eval scenarios
Passed; no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./planned-skills/generated/09-test-automation/test-retry-config/SKILL.md

Quality
Discovery
7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is extremely weak across all dimensions. It reads as an auto-generated stub with no concrete actions, no meaningful trigger terms beyond the skill name repeated, and no explanation of when or why Claude should select it. It provides virtually no information for Claude to make an informed skill selection decision.
Suggestions
Add concrete actions describing what the skill does, e.g., 'Configures test retry policies, sets retry counts, defines backoff strategies, and handles flaky test mitigation in test automation frameworks.'
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about retrying failed tests, configuring retry logic, handling flaky tests, setting retry counts, or adjusting test rerun behavior.'
Remove the duplicate trigger term ('test retry config' is listed twice) and expand with natural language variations users might actually say, such as 'flaky tests', 'rerun failed tests', 'retry policy', 'test retries'.
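Taken together, the suggestions above might yield a frontmatter description along these lines (illustrative wording only, not the skill's actual metadata):

```yaml
description: >
  Configures test retry policies: sets retry counts, defines backoff
  strategies, and mitigates flaky tests in test automation frameworks.
  Use when the user asks about retrying failed tests, configuring retry
  logic, handling flaky tests, setting retry counts, rerunning failed
  tests, or adjusting a retry policy.
```

A description like this names concrete actions, includes a 'Use when...' clause, and covers the natural trigger phrases ('flaky tests', 'rerun failed tests', 'retry policy') that the current stub omits.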
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description provides no concrete actions. It only names itself ('Test Retry Config') and states it's part of 'Test Automation' but never describes what it actually does: no verbs like 'configures', 'sets up', 'modifies', etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and the 'when' clause is essentially just the skill name repeated as a trigger. There is no meaningful explanation of either what or when. | 1 / 3 |
| Trigger Term Quality | The only trigger term listed is 'test retry config' repeated twice. This is a narrow, technical phrase unlikely to match natural user language. Missing common variations like 'flaky tests', 'retry logic', 'test rerun', 'retry policy', 'test failure retry', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The phrase 'test retry config' is fairly niche and unlikely to conflict with many other skills, but the lack of specificity about what it actually does means it could overlap with broader test automation or test configuration skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is essentially a placeholder with no substantive content. It describes the concept of 'test retry config' at a meta level without providing any actual instructions, code examples, or configuration patterns for any test framework (Jest, pytest, etc.). It fails on every dimension because it contains no actionable information whatsoever.
Suggestions
Add concrete, executable code examples for configuring test retries in specific frameworks (e.g., Jest's `jest.retryTimes()`, pytest-rerunfailures `--reruns` flag, Cypress retry configuration).
Include a clear workflow: identify flaky test → configure retry mechanism → validate retry behavior → review retry logs to fix root cause.
Remove all generic 'meta' sections (Purpose, When to Use, Capabilities, Example Triggers) and replace with actual technical content—configuration snippets, CLI commands, and best practices for retry thresholds.
Add framework-specific sections or link to separate files for each framework (Jest, pytest, Cypress, etc.) with copy-paste-ready configuration examples.
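The workflow suggested above (bounded retries plus backoff between attempts) can be sketched framework-agnostically. This is a minimal illustration of the retry behavior the skill should document, not any framework's API; `retry`, `flaky_test`, and the parameter names are all hypothetical:

```python
import time

def retry(fn, retries=2, backoff=0.01):
    """Re-run fn up to `retries` extra times, doubling the delay each attempt."""
    delay = backoff
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the failure to fix the root cause
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts

# A simulated flaky test: fails twice, then passes on the third call.
calls = {"n": 0}
def flaky_test():
    calls["n"] += 1
    if calls["n"] < 3:
        raise AssertionError("transient failure")
    return "passed"

result = retry(flaky_test, retries=3)
```

In a real skill, this logic would be replaced by the framework's own mechanism (e.g. Jest's `jest.retryTimes()` or pytest-rerunfailures' `--reruns` flag), with the retry log reviewed afterwards so flakiness is fixed rather than masked.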
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is padded with generic filler text that provides no actionable information. Phrases like 'Provides step-by-step guidance' and 'Follows industry best practices' are vague platitudes that waste tokens without teaching Claude anything it doesn't already know. | 1 / 3 |
| Actionability | There is zero concrete guidance: no code examples, no specific commands, no configuration snippets, no framework-specific retry patterns. The skill describes what it could do rather than actually instructing Claude how to do it. | 1 / 3 |
| Workflow Clarity | No workflow, steps, or sequence of any kind is provided. There are no validation checkpoints, no error handling guidance, and no actual process for configuring test retries. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of generic text with no references to detailed materials, no links to framework-specific guides, and no structured navigation to deeper content. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
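Both warnings concern frontmatter hygiene. A hedged sketch of what a cleaned-up frontmatter block might look like; the key names and tool list here are illustrative assumptions, not copied from the skill or its spec:

```yaml
---
name: test-retry-config
description: Configures test retry policies for automation frameworks.
allowed-tools: Read, Edit, Bash   # keep only recognized tool names
metadata:                         # unrecognized top-level keys moved here
  category: test-automation
---
```

Restricting `allowed-tools` to recognized names and relocating unknown top-level keys would address the two warnings above.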