Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
Overall: 71% · Passed · Impact: 100% · 1.07x average score across 3 eval scenarios · No known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/accessibility-compliance/skills/screen-reader-testing/SKILL.md`

Quality: does it follow best practices?
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly identifies its domain (screen reader testing), lists specific tools by name, and provides explicit trigger guidance via a 'Use when' clause. It uses third person voice correctly and covers natural keywords users would employ when seeking this capability. The description is concise yet comprehensive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and mentions validating compatibility, debugging accessibility issues, and ensuring assistive technology support. | 3 / 3 |
| Completeness | Clearly answers both 'what' (test web applications with screen readers including VoiceOver, NVDA, and JAWS) and 'when' (validating screen reader compatibility, debugging accessibility issues, ensuring assistive technology support) with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on screen reader testing with named tools (VoiceOver, NVDA, JAWS). Unlikely to conflict with general accessibility skills or general testing skills due to the specific screen reader focus. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive reference document rather than a focused skill guide. While it excels in actionability with concrete code examples and specific commands, it is far too verbose for a SKILL.md file, containing extensive reference material (keyboard shortcuts, full JS implementations, usage statistics) that should be in separate files. The lack of progressive disclosure and the monolithic structure make it inefficient as context window content.
Suggestions
- Extract keyboard shortcut tables, full testing checklists, and per-screen-reader details into separate reference files (e.g., VOICEOVER.md, NVDA.md, CHECKLISTS.md) and link to them from the main skill.
- Reduce the main SKILL.md to a concise overview covering the testing priority order, the 3-5 most critical things to verify, and common issue/fix patterns, with references to the detailed files.
- Remove information Claude already knows (e.g., what screen readers are, basic ARIA concepts, screen reader market share percentages) and focus on non-obvious testing gotchas and workflows.
- Add an overarching testing workflow with validation steps, e.g.: 1) run automated checks first, 2) test with NVDA + Firefox, 3) verify fixes, 4) cross-check with VoiceOver, 5) document findings.
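The "automated checks first" step suggested above can be sketched as a small static audit that runs before any manual NVDA or VoiceOver pass. This is a hypothetical illustration, not code from the skill: `auditTabOrder` and its simplified element shapes are stand-ins for real DOM inspection (in practice a tool like axe-core would do this).

```javascript
// Minimal static audit sketch: flag common tab-order and naming
// issues before manual screen-reader passes. Elements here are
// simplified stand-ins for DOM nodes; auditTabOrder is hypothetical.
function auditTabOrder(elements) {
  const findings = [];
  for (const el of elements) {
    // Positive tabindex values override natural DOM order and
    // commonly confuse screen-reader users.
    if (el.tabindex > 0) {
      findings.push({ el: el.id, issue: "positive tabindex" });
    }
    // Interactive elements need an accessible name for AT output.
    const name = el.ariaLabel || el.text;
    if (el.interactive && !name) {
      findings.push({ el: el.id, issue: "missing accessible name" });
    }
  }
  return findings;
}

const report = auditTabOrder([
  { id: "search", interactive: true, tabindex: 3, text: "" },
  { id: "nav-home", interactive: true, tabindex: 0, text: "Home" },
]);
console.log(report); // two findings, both on "search"
```

A pass like this narrows the manual testing to elements that already look suspicious, which is the prioritization the review finds missing.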
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~400+ lines. Includes extensive keyboard shortcut references, full testing checklists, complete JavaScript implementations, and screen reader usage statistics that Claude already knows or could look up. Much of this is reference material that belongs in separate files, not inline. | 1 / 3 |
| Actionability | Highly actionable with concrete HTML code examples showing issues and fixes, executable JavaScript for focus management and tab navigation, specific keyboard commands, and step-by-step testing scripts. All code is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The NVDA test script provides a clear sequential workflow, and checklists are well-structured. However, there is no overarching workflow tying the testing process together (e.g., when to use which screen reader, how to prioritize findings), and no validation steps for confirming that fixes actually resolve the issues. | 2 / 3 |
| Progressive Disclosure | A monolithic wall of content with everything inline. The keyboard shortcuts, full code examples, testing checklists, and per-screen-reader details should be split into separate reference files. No use of progressive disclosure: the skill dumps everything at once, with no signposting to external files. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
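The "focus management" code the Actionability row credits typically reduces to wrap-around index math over a modal's list of focusable elements. A minimal framework-free sketch of that core (the `nextTrapIndex` helper is hypothetical, not taken from the skill):

```javascript
// Wrap-around focus movement inside a focus trap (e.g., a modal
// dialog). Given the currently focused index and a direction
// (+1 for Tab, -1 for Shift+Tab), return the next index,
// wrapping at both ends of the focusable list.
function nextTrapIndex(current, direction, count) {
  // Nothing focusable: caller should close the trap or refocus the trigger.
  if (count === 0) return -1;
  return (current + direction + count) % count;
}

// Tab from the last of 3 focusables wraps to the first...
console.log(nextTrapIndex(2, 1, 3)); // 0
// ...and Shift+Tab from the first wraps to the last.
console.log(nextTrapIndex(0, -1, 3)); // 2
```

In a real page this index would select from the dialog's tabbable elements and call `.focus()` on the result inside a `keydown` handler.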
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (546 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |