Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
Install with Tessl CLI
`npx tessl i github:wshobson/agents --skill screen-reader-testing`
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
`npx tessl skill review --optimize ./path/to/skill`

Agent success when using this skill
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It uses third person voice, lists specific screen reader tools by name, provides concrete use cases, and includes an explicit 'Use when...' clause with natural trigger terms. The description clearly carves out a distinct niche for screen reader testing that wouldn't overlap with general accessibility or testing skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and describes purposes like 'validating screen reader compatibility, debugging accessibility issues, ensuring assistive technology support'. | 3 / 3 |
| Completeness | Clearly answers both what ('Test web applications with screen readers including VoiceOver, NVDA, and JAWS') and when ('Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology'. These are the exact terms developers and testers use when working on accessibility. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on screen reader testing with named tools. Unlikely to conflict with general accessibility skills or web testing skills due to the specific focus on screen readers and assistive technology. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable and well-structured skill for screen reader testing, with excellent executable code examples and clear testing workflows. The main weakness is its monolithic structure: at 400+ lines, it would benefit from splitting detailed command references and test scripts into separate files. Some introductory content about screen reader basics could also be trimmed, since Claude already understands these concepts.
Suggestions
- Split detailed command references (VoiceOver, NVDA, JAWS, TalkBack) into separate reference files and link from the main skill
- Move the extensive testing checklists and scripts to a separate TEST-SCRIPTS.md file
- Remove or condense the 'Core Concepts' section — Claude doesn't need explanations of what screen readers are or their market share percentages
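As a sketch of the restructuring suggested above, the main SKILL.md could shrink to an overview that links out to reference files. The file names here are illustrative, not taken from the skill itself:

```markdown
# Screen Reader Testing

Overview, core workflow, and when to use each tool.

## References
- [VoiceOver commands](references/VOICEOVER-COMMANDS.md)
- [NVDA commands](references/NVDA-COMMANDS.md)
- [JAWS commands](references/JAWS-COMMANDS.md)
- [Test scripts and checklists](references/TEST-SCRIPTS.md)
```

This keeps the entry point short while preserving the detailed material for agents that need it.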
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is comprehensive but includes some redundancy and could be tightened. The tables and command references are efficient, but sections like 'Core Concepts' explain things Claude likely knows (what screen readers are, basic usage percentages). The extensive checklists and test scripts add value but contribute to overall length. | 2 / 3 |
| Actionability | Excellent actionability with specific keyboard commands, executable JavaScript code examples, complete HTML snippets, and step-by-step test scripts. Commands are copy-paste ready with clear formatting (e.g., 'VO + Right Arrow' for navigation). The debugging function and focus management code are fully executable. | 3 / 3 |
| Workflow Clarity | Clear multi-step workflows with explicit testing scripts that include numbered steps and checkboxes. The NVDA Test Script section provides a complete sequence with validation checkpoints ('Check: Label read?', 'Check: Errors announced?'). The modal dialog example includes proper focus management with error recovery. | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections and headers, but it is monolithic — all content is inline rather than split into separate reference files. The extensive command references, test scripts, and code examples could be moved to separate files (e.g., VOICEOVER-COMMANDS.md, TEST-SCRIPTS.md) with the main skill providing an overview. | 2 / 3 |
| Total | | 10 / 12 Passed |
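The review above credits the skill's modal dialog example for proper focus management. As an illustration of the kind of logic involved — not the skill's actual code — a minimal sketch of Tab-key focus wrapping might look like this; the helper name and selector list are illustrative:

```javascript
// Pure helper: given the index of the currently focused element among a
// modal dialog's focusable elements, return where focus should move after
// Tab (forwards) or Shift+Tab (backwards), wrapping at either end.
function nextFocusIndex(current, count, shiftKey) {
  if (count === 0) return -1;                       // nothing focusable
  if (shiftKey) {
    return current <= 0 ? count - 1 : current - 1;  // wrap to the end
  }
  return current >= count - 1 ? 0 : current + 1;    // wrap to the start
}

// In a browser, a keydown handler would apply it roughly like this
// (commented out so the sketch stays runnable outside the DOM):
//
// dialog.addEventListener("keydown", (e) => {
//   if (e.key !== "Tab") return;
//   const focusables = dialog.querySelectorAll(
//     'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
//   );
//   const i = Array.prototype.indexOf.call(focusables, document.activeElement);
//   e.preventDefault();
//   focusables[nextFocusIndex(i, focusables.length, e.shiftKey)].focus();
// });
```

Keeping the wrap calculation as a pure function is what makes this pattern easy to cover in a test script, which is likely why the reviewed skill's version scores well on actionability.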
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (546 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.