Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
Quality: 71% — Does it follow best practices?
Impact: 98% — 1.03x average score across 6 eval scenarios — Passed
No known issues
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/accessibility-compliance/skills/screen-reader-testing/SKILL.md`

Quality
Discovery
100% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly identifies its domain (screen reader testing), lists specific tools by name, and provides explicit trigger guidance via a 'Use when' clause. It uses third person voice throughout and covers natural keywords users would employ when seeking this capability. The description is concise yet comprehensive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and mentions validating compatibility, debugging accessibility issues, and ensuring assistive technology support. | 3 / 3 |
| Completeness | Clearly answers both 'what' (test web applications with screen readers including VoiceOver, NVDA, and JAWS) and 'when' (validating screen reader compatibility, debugging accessibility issues, ensuring assistive technology support) with an explicit 'Use when' clause. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on screen reader testing with named tools (VoiceOver, NVDA, JAWS). Unlikely to conflict with general accessibility skills or general testing skills due to the specific screen reader focus. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation
42% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels in actionability with concrete, executable code examples and specific commands, but is severely undermined by its monolithic structure and verbosity. At 400+ lines with inline reference tables, complete JavaScript implementations, and exhaustive keyboard shortcut lists, it consumes far too many tokens for a SKILL.md file. The content would benefit enormously from splitting reference material into separate files and keeping only a concise overview with navigation links.
Suggestions
- Split keyboard shortcut references, full code examples, and per-screen-reader details into separate files (e.g., VOICEOVER.md, NVDA.md, JAWS.md, EXAMPLES.md) and link to them from a concise overview.
- Remove information Claude already knows (e.g., what screen reader modes are, basic ARIA concepts, usage percentage statistics) and focus only on testing-specific guidance.
- Add explicit validation/feedback loops to the testing workflows: 'If X fails, check Y, then retry Z.'
- Reduce the main SKILL.md to under 100 lines with a quick-start testing workflow and clear pointers to detailed reference files.
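A hypothetical skeleton of the slimmed-down SKILL.md these suggestions describe — the `references/` file names are illustrative assumptions, not files that exist in the reviewed skill:

```markdown
# Screen Reader Testing

Quick start:
1. Do a keyboard-only pass (Tab / Shift+Tab) and note the focus order.
2. Test with at least one screen reader per platform; see the per-tool guides:
   - [VoiceOver (macOS/iOS)](references/VOICEOVER.md)
   - [NVDA (Windows)](references/NVDA.md)
   - [JAWS (Windows)](references/JAWS.md)
3. If a check fails, see [EXAMPLES.md](references/EXAMPLES.md) for before/after fixes, then re-run the failing step.
```

Each linked file would hold the keyboard shortcut tables and full code examples currently inlined in the monolithic SKILL.md.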
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~400+ lines, including extensive reference tables, full keyboard shortcut lists, complete JavaScript implementations, and detailed checklists that Claude already knows or could derive. The screen reader mode table, usage percentages, and lengthy testing scripts add significant token cost without proportional value. Much of this is reference material that should be in separate files. | 1 / 3 |
| Actionability | The skill provides fully concrete, executable guidance: specific keyboard commands, complete HTML code examples with before/after fixes, working JavaScript implementations for focus trapping and tab navigation, and step-by-step testing scripts. Everything is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The NVDA test script provides a clear sequential workflow, and the VoiceOver checklist is well-structured. However, there are no explicit validation checkpoints or feedback loops for when tests fail; the testing process lacks 'if this fails, do X' recovery steps, which is important for debugging accessibility issues. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inline: keyboard shortcuts, full code examples, checklists, debugging tips, and best practices for 5+ screen readers all in one file. The keyboard command references, code examples, and per-screen-reader details should be split into separate reference files with clear navigation links from the main skill. | 1 / 3 |
| Total | | 7 / 12 — Passed |
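The kind of focus-trapping logic the Actionability row credits can be sketched as a pure function. This is a hypothetical distillation for illustration, not code from the reviewed skill: it computes only the wrap-around index that a modal's Tab/Shift+Tab handler would move focus to.

```typescript
// Sketch: the wrap-around rule at the heart of a modal focus trap.
// `current` is the index of the focused element among the trap's
// focusable elements; returns the index Tab (or Shift+Tab) moves to.
function nextTrapIndex(current: number, count: number, shift: boolean): number {
  if (count === 0) return -1; // nothing focusable inside the trap
  const step = shift ? -1 : 1;
  return (current + step + count) % count; // wraps at either end
}

// In a browser, a keydown handler would use it roughly like:
//   if (e.key === "Tab") {
//     e.preventDefault();
//     els[nextTrapIndex(i, els.length, e.shiftKey)].focus();
//   }

console.log(nextTrapIndex(2, 3, false)); // Tab from last wraps to first: 0
console.log(nextTrapIndex(0, 3, true));  // Shift+Tab from first wraps to last: 2
```

Keeping the index math separate from DOM access is what makes such a helper unit-testable, which is also one way to add the validation checkpoints the Workflow Clarity row asks for.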
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (539 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |