Accessibility Audit Runner - Auto-activating skill for Frontend Development. Triggers on: accessibility audit runner, accessibility audit runner. Part of the Frontend Development skill category.
Does it follow best practices?
Impact: 95% (1.02x average score across 3 eval scenarios)
Passed: no known issues

Optimize this skill with Tessl:
npx tessl skill review --optimize ./planned-skills/generated/05-frontend-dev/accessibility-audit-runner/SKILL.md

Quality
Discovery
7%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder that restates the skill name without providing any meaningful detail about what the skill does or when it should be used. It lacks concrete actions, natural trigger terms, and explicit usage guidance, making it nearly useless for skill selection among a large set of available skills.
Suggestions
Add specific concrete actions the skill performs, e.g., 'Runs automated accessibility audits using axe-core, checks WCAG 2.1 compliance, identifies missing ARIA attributes, and generates remediation reports.'
Add a 'Use when...' clause with natural trigger terms like 'accessibility check', 'a11y audit', 'WCAG compliance', 'screen reader compatibility', 'aria labels', or 'accessibility issues'.
Remove the duplicate trigger term ('accessibility audit runner' is listed twice) and replace with diverse, natural phrases users would actually say.
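As a sketch, the suggestions above could be combined into frontmatter along these lines. The description text and trigger phrases here are illustrative, assembled from the suggestions, not taken from the actual skill:

```yaml
---
name: accessibility-audit-runner
# Illustrative description combining concrete actions with a 'Use when...' clause:
description: >
  Runs automated accessibility audits using axe-core, checks WCAG 2.1
  compliance, identifies missing ARIA attributes, and generates
  remediation reports. Use when the user asks for an accessibility
  check, a11y audit, WCAG compliance review, screen reader
  compatibility check, or help with aria labels or accessibility issues.
---
```

Note how the trigger vocabulary is spread across phrases users would actually say ('a11y audit', 'WCAG compliance') rather than repeating the skill name.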
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a domain ('accessibility audit runner') but describes no concrete actions. There are no specific capabilities listed such as 'runs axe-core scans', 'checks WCAG compliance', 'generates accessibility reports', etc. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name itself, and the 'when' clause is essentially just the skill name repeated as a trigger. There is no explicit 'Use when...' guidance with meaningful triggers. | 1 / 3 |
| Trigger Term Quality | The trigger terms are just 'accessibility audit runner' repeated twice. It misses natural user phrases like 'a11y', 'WCAG', 'screen reader', 'accessibility check', 'aria attributes', or 'accessibility testing'. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'accessibility audit' is somewhat specific to a niche, which reduces conflict risk with unrelated skills. However, the lack of detail means it could overlap with other accessibility-related skills or general frontend testing skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is an empty placeholder with no actionable content. It repeats the skill name without providing any concrete guidance on running accessibility audits—no tools, no code, no workflows, no examples. It would provide zero value to Claude beyond what it already knows.
Suggestions
Add concrete, executable examples using specific accessibility audit tools (e.g., axe-core, pa11y, Lighthouse CLI) with actual commands and code snippets.
Define a clear multi-step workflow: e.g., 1) run audit with specific tool, 2) parse results, 3) prioritize violations by severity, 4) generate report—with validation at each step.
Remove all boilerplate sections (Purpose, When to Use, Example Triggers, Capabilities) that describe the skill meta-information rather than teaching how to perform the task.
Include a concrete example showing sample audit output and how to interpret/act on specific WCAG violations (e.g., missing alt text, insufficient color contrast).
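A minimal sketch of steps 2-4 of the suggested workflow (parse results, prioritize by severity, generate a report), assuming input shaped like axe-core's `results.violations` JSON. The sample data below is illustrative, not real audit output:

```javascript
// axe-core impact levels, most severe first.
const SEVERITY_ORDER = { critical: 0, serious: 1, moderate: 2, minor: 3 };

function prioritizeViolations(violations) {
  // Sort by impact (most severe first), then by number of affected nodes.
  return [...violations].sort((a, b) => {
    const bySeverity = SEVERITY_ORDER[a.impact] - SEVERITY_ORDER[b.impact];
    return bySeverity !== 0 ? bySeverity : b.nodes.length - a.nodes.length;
  });
}

// One report line per violation (step 4 of the workflow).
function formatReport(violations) {
  return prioritizeViolations(violations).map(
    (v) => `[${v.impact}] ${v.id}: ${v.help} (${v.nodes.length} node(s))`
  );
}

// Illustrative sample shaped like axe-core's `results.violations`.
const sample = [
  { id: "color-contrast", impact: "serious", help: "Elements must have sufficient color contrast", nodes: [{}, {}] },
  { id: "image-alt", impact: "critical", help: "Images must have alternate text", nodes: [{}] },
  { id: "region", impact: "moderate", help: "All page content should be contained by landmarks", nodes: [{}, {}, {}] },
];

console.log(formatReport(sample).join("\n"));
// [critical] image-alt: Images must have alternate text (1 node(s))
// [serious] color-contrast: Elements must have sufficient color contrast (2 node(s))
// [moderate] region: All page content should be contained by landmarks (3 node(s))
```

Embedding a concrete parse-and-prioritize step like this (with validation that every `impact` value is a known severity) is the kind of executable content the skill currently lacks.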
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is entirely filler and boilerplate. It explains nothing Claude doesn't already know, repeats the phrase 'accessibility audit runner' excessively, and provides zero substantive information about how to actually run an accessibility audit. | 1 / 3 |
| Actionability | There are no concrete steps, no executable code, no specific commands, no tool references (e.g., axe-core, Lighthouse, pa11y), and no examples of actual audit output or configuration. The content is entirely abstract and descriptive. | 1 / 3 |
| Workflow Clarity | There is no workflow whatsoever: no steps, no sequence, no validation checkpoints. The bullet 'Validates outputs against common standards' is a vague claim with no supporting detail. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of generic text with no references to external files, no structured sections with real content, and no navigation to deeper material. There are no bundle files to support it either. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
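A hedged sketch of what clearing the two warnings might look like; the key and tool names below are illustrative, so check the skill spec for the exact allowed frontmatter:

```yaml
---
name: accessibility-audit-runner
description: Runs automated accessibility audits and reports WCAG violations.
# Keep 'allowed-tools' to tool names the agent actually recognizes
# rather than invented names:
allowed-tools: Bash, Read, Write
# Move nonstandard top-level keys under metadata instead of leaving
# them as unknown frontmatter keys:
metadata:
  category: frontend-development
---
```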