Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
Install with Tessl CLI
npx tessl i github:Dicklesworthstone/pi_agent_rust --skill screen-reader-testing
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It uses third person voice, lists specific screen reader tools by name, provides concrete actions, and includes an explicit 'Use when...' clause with natural trigger terms. The description is concise yet comprehensive, making it easy for Claude to select this skill when users mention screen readers, accessibility testing, or specific tools like VoiceOver/NVDA/JAWS.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and describes concrete use cases (validating compatibility, debugging issues, ensuring support). | 3 / 3 |
| Completeness | Clearly answers both what ('Test web applications with screen readers including VoiceOver, NVDA, and JAWS') and when ('Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support'). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology'. These are exactly the terms developers would use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on screen reader testing with named tools. Unlikely to conflict with general accessibility skills or other testing skills, given the specific focus on assistive technology and named screen readers. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, highly actionable skill with excellent workflow clarity through detailed testing checklists and scripts. The main weakness is length: the comprehensive command references and code examples make it verbose, some content could be split into reference files, and a few explanatory sections could be trimmed.
Suggestions
Split detailed command references (VoiceOver, NVDA, JAWS commands) into a separate COMMANDS.md reference file
Move the extensive code examples (modal dialog, tab interface, live regions) to an EXAMPLES.md file with brief inline summaries
Remove obvious best practices that Claude already knows (e.g., 'Test with actual screen readers', 'Use semantic HTML first')
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes some content Claude likely knows (basic explanations of what screen readers are, general best practices). The tables and command references are valuable, but the document could be tightened by removing obvious guidance like 'Test with actual screen readers - Not just simulators'. | 2 / 3 |
| Actionability | Excellent actionability with specific keyboard commands, executable JavaScript code examples, complete HTML snippets, and step-by-step testing scripts. The modal dialog example includes both HTML structure and working JavaScript for focus management. | 3 / 3 |
| Workflow Clarity | Clear testing workflows with explicit checklists (VoiceOver Testing Checklist, NVDA Test Script) that include validation checkpoints. The step-by-step scripts guide through initial load, navigation, forms, and dynamic content testing with specific verification points. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections, but the document is quite long (~400 lines) and could benefit from splitting detailed command references and code examples into separate files. External resources are linked at the end, but inline content could be better distributed. | 2 / 3 |
| Total | | 10 / 12 Passed |
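The focus-management JavaScript the review praises is not reproduced on this page. As a hedged illustration only (the function and variable names below are assumptions, not the skill's actual code), the core of a modal focus trap is the wrap-around logic for Tab and Shift+Tab:

```javascript
// Sketch of the wrap-around logic behind a modal focus trap.
// Given the index of the currently focused element among the modal's
// focusable elements, return the index Tab (or Shift+Tab) should move to.
function nextFocusIndex(currentIndex, focusableCount, shiftKey) {
  if (shiftKey) {
    // Shift+Tab on the first element wraps focus to the last.
    return currentIndex <= 0 ? focusableCount - 1 : currentIndex - 1;
  }
  // Tab on the last element wraps focus back to the first.
  return currentIndex >= focusableCount - 1 ? 0 : currentIndex + 1;
}

// In a browser, a keydown handler on the dialog would apply this helper
// (`dialog` is a hypothetical element reference, not part of the skill):
//
// dialog.addEventListener('keydown', (e) => {
//   if (e.key !== 'Tab') return;
//   const focusable = dialog.querySelectorAll(
//     'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])'
//   );
//   const i = Array.prototype.indexOf.call(focusable, document.activeElement);
//   e.preventDefault();
//   focusable[nextFocusIndex(i, focusable.length, e.shiftKey)].focus();
// });
```

Keeping the wrap-around arithmetic in a pure function like this makes the trap testable without a DOM, which is one reason the skill's inclusion of working JavaScript (rather than prose alone) scores well on actionability.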
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (546 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.