Uses Chrome DevTools MCP for accessibility (a11y) debugging and auditing based on web.dev guidelines. Use when testing semantic HTML, ARIA labels, focus states, keyboard navigation, tap targets, and color contrast.
- Quality: 85% (Does it follow best practices?)
- Impact: 55% (1.01× average score across 3 eval scenarios)
- Advisory: Suggest reviewing before use

## Quality

### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates its purpose, tools, and trigger conditions. It uses third person voice, lists specific concrete capabilities, includes a well-formed 'Use when...' clause with natural trigger terms, and occupies a distinct niche that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: accessibility debugging and auditing, testing semantic HTML, ARIA labels, focus states, keyboard navigation, tap targets, and color contrast. | 3 / 3 |
| Completeness | Clearly answers both what ('accessibility debugging and auditing based on web.dev guidelines') and when ('Use when testing semantic HTML, ARIA labels, focus states, keyboard navigation, tap targets, and color contrast') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'accessibility', 'a11y', 'semantic HTML', 'ARIA labels', 'focus states', 'keyboard navigation', 'tap targets', 'color contrast'. These are all terms a developer would naturally use when seeking accessibility help. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche: accessibility auditing via Chrome DevTools MCP against web.dev guidelines. The combination of tool (Chrome DevTools MCP), domain (accessibility), and specific triggers (ARIA, focus states, tap targets) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
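One trigger term scored above, color contrast, maps to a concrete check: WCAG 2.x defines contrast as a ratio of relative luminances. A minimal sketch of that computation (function names are illustrative, not part of the skill itself):

```javascript
// WCAG 2.x relative luminance of an sRGB color given as [r, g, b] in 0-255.
function relativeLuminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255; // normalize, then undo sRGB gamma
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio (from 1:1 up to 21:1); the order of the two colors
// does not matter because the brighter luminance is always on top.
function contrastRatio(fg, bg) {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black on white yields the maximum ratio of about 21:1; WCAG AA requires at least 4.5:1 for normal-size text.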
### Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured accessibility debugging skill with clear workflow patterns and good progressive disclosure to external snippet files. Its main weakness is actionability: most executable code is delegated to external references, only one code example is inlined, and some sections carry explanatory text that Claude would not need. Workflow clarity is strong, with logical sequencing and verification steps throughout.
### Suggestions

- Inline at least one representative snippet from `references/a11y-snippets.md` (e.g., the orphaned-inputs or tap-target snippet) so the skill is more immediately actionable without requiring file lookups.
- Trim explanatory sentences like 'Start by running a Lighthouse accessibility audit to get a comprehensive baseline' and 'This tool provides a high-level score and lists specific failing elements with remediation advice'; these describe rather than instruct.
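As a sketch of what such an inlined snippet could look like, here is a DOM-free version of the orphaned-inputs check. The field and label shapes are hypothetical stand-ins for data the skill would gather from the live page (e.g. via an evaluate-script tool), not the skill's actual snippet:

```javascript
// Hypothetical descriptors a skill might extract per form control:
// { id, ariaLabel, ariaLabelledby, insideLabel }, plus the set of ids
// referenced by <label for="..."> elements on the page.
function findOrphanedInputs(fields, labelForIds) {
  return fields.filter((f) =>
    !f.ariaLabel &&                     // no aria-label attribute
    !f.ariaLabelledby &&                // no aria-labelledby reference
    !f.insideLabel &&                   // not wrapped in a <label>
    !(f.id && labelForIds.has(f.id)));  // no <label for="..."> match
}

// Example: only 'phone' lacks every labeling mechanism.
findOrphanedInputs(
  [{ id: 'email' }, { id: 'q', ariaLabel: 'Search' }, { id: 'phone' }],
  new Set(['email']),
); // -> [{ id: 'phone' }]
```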
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary explanatory text (e.g., explaining what the accessibility tree is, what Lighthouse scores mean). The 'Core Concepts' section about accessibility tree vs DOM is borderline useful but could be tighter. Some workflow descriptions have filler sentences like 'Start by running a Lighthouse accessibility audit to get a comprehensive baseline.' | 2 / 3 |
| Actionability | Provides concrete tool names and parameters (e.g., `press_key` with `Tab`, `list_console_messages` with `types: ["issue"]`), and includes one executable code snippet for parsing Lighthouse reports. However, many steps reference snippets in an external file without showing the actual code, and several instructions are descriptive rather than executable (e.g., 'Ensure interactive elements have an accessible name'). | 2 / 3 |
| Workflow Clarity | The 8 workflow patterns are clearly sequenced with numbered steps. Validation is embedded naturally: the Lighthouse audit provides baseline scores, snapshots verify focus movement, and there are explicit verification steps (e.g., 'Locate the element marked as focused in the snapshot to verify focus moved to the expected interactive element'). The troubleshooting section provides fallback guidance. | 3 / 3 |
| Progressive Disclosure | Excellent structure with clear overview sections and well-signaled one-level-deep references to `references/a11y-snippets.md` for detailed code snippets. The main file stays focused on workflows while delegating implementation details appropriately. References are consistently formatted with descriptive labels. | 3 / 3 |
| Total | | 10 / 12 Passed |
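The Lighthouse-report parsing mentioned above can be sketched as follows, assuming the standard Lighthouse JSON shape (`categories.accessibility.auditRefs` entries pointing into the top-level `audits` map). Treat this as an illustration, not the skill's actual snippet:

```javascript
// Collect failing accessibility audits from a parsed Lighthouse report.
// Audits with score === null are informative/not-applicable and are skipped.
function failingA11yAudits(report) {
  return report.categories.accessibility.auditRefs
    .map(({ id }) => report.audits[id])
    .filter((a) => a && a.score !== null && a.score < 1)
    .map((a) => ({ id: a.id, title: a.title }));
}
```

Feeding this the JSON produced by `lighthouse --output=json` yields a compact list of audit ids and titles suitable for a remediation report.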
## Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 passed. No warnings or errors.