
screen-reader-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.

Overall: 87 (1.03x)

Quality: 71% (Does it follow best practices?)

Impact: 98% (1.03x)

Average score across 6 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

```shell
npx tessl skill review --optimize ./plugins/accessibility-compliance/skills/screen-reader-testing/SKILL.md
```

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its scope around screen reader testing for web applications. It names specific tools (VoiceOver, NVDA, JAWS), uses natural trigger terms users would employ, and includes an explicit 'Use when' clause with clear trigger scenarios. The description is concise yet comprehensive, making it easy for Claude to select appropriately from a large skill set.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and describes concrete use cases such as 'validating screen reader compatibility', 'debugging accessibility issues', and 'ensuring assistive technology support'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (test web applications with screen readers including VoiceOver, NVDA, and JAWS) and 'when' (explicit 'Use when' clause covering validating compatibility, debugging accessibility issues, or ensuring assistive technology support). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major screen reader products and common accessibility terminology. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused specifically on screen reader testing, naming three specific tools. Unlikely to conflict with general accessibility or general testing skills, given the specific focus on screen readers and named assistive technologies. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels in actionability with concrete, executable code examples and specific commands, but is severely undermined by its monolithic structure and verbosity. At 400+ lines covering five screen readers with full keyboard shortcut references, complete JavaScript implementations, and detailed checklists all inline, it consumes excessive tokens and would benefit enormously from splitting into separate reference files with a concise overview in SKILL.md.

Suggestions

- Split screen reader-specific content (VoiceOver, NVDA, JAWS, TalkBack) into separate reference files and link to them from a concise overview in SKILL.md
- Remove information Claude already knows, such as basic ARIA concepts, what screen reader modes are, and general best practices like 'use semantic HTML first'
- Move the complete JavaScript implementations (focus trapping, tab navigation) and HTML pattern examples into a separate PATTERNS.md reference file
- Add explicit validation/verification steps to the testing workflows, e.g., 'After fixing an ARIA issue, re-test with the screen reader to confirm the fix before moving to the next item'
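The focus-trapping implementation the suggestions propose moving into PATTERNS.md would be a short, self-contained helper. A minimal sketch of that pattern, assuming a dialog container and an illustrative focusable-element selector (neither is taken from the skill itself):

```javascript
// Minimal focus-trap sketch: keeps Tab / Shift+Tab cycling inside a dialog.
// The selector list below is illustrative, not exhaustive.
const FOCUSABLE =
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])';

// Pure helper: given the index of the active element, compute where focus
// should move next, wrapping at both ends. Kept separate so it is easy to
// unit-test without a DOM.
function nextFocusIndex(count, current, shiftKey) {
  if (count === 0) return -1;
  if (shiftKey) return current <= 0 ? count - 1 : current - 1;
  return current >= count - 1 ? 0 : current + 1;
}

// Attach the trap to a dialog element; intercepts Tab keydowns and
// redirects focus so it never escapes the dialog.
function trapFocus(dialog) {
  dialog.addEventListener('keydown', (event) => {
    if (event.key !== 'Tab') return;
    const focusable = Array.from(dialog.querySelectorAll(FOCUSABLE));
    const current = focusable.indexOf(document.activeElement);
    const next = nextFocusIndex(focusable.length, current, event.shiftKey);
    if (next === -1) return;
    event.preventDefault();
    focusable[next].focus();
  });
}
```

Extracting the wrap logic into `nextFocusIndex` keeps the DOM-dependent part thin, which also makes the reference file easier to test.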

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at over 400 lines, including extensive reference tables, full keyboard shortcut lists, complete JavaScript implementations, and detailed checklists that Claude already knows or could derive. The screen reader mode table, usage percentages, and lengthy testing scripts add significant token cost without proportional value. Much of this is reference material that belongs in separate files. | 1 / 3 |
| Actionability | The skill provides fully concrete, executable guidance: specific keyboard commands, complete HTML code examples with before/after fixes, working JavaScript implementations for focus trapping and tab navigation, and step-by-step testing scripts. Everything is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The NVDA test script and VoiceOver checklist provide clear sequences, but there are no explicit validation checkpoints or feedback loops for error recovery. The testing workflows list steps but don't specify what to do when issues are found beyond 'fix it.' For accessibility testing involving potentially destructive ARIA changes, verification steps are implicit rather than explicit. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with all content inline: keyboard shortcuts, code examples, checklists, debugging tips, and best practices for five different screen readers in one file. The extensive reference material (shortcut tables, complete JS implementations, testing checklists) should be split into separate files with clear navigation links from a concise overview. | 1 / 3 |
| Total | | 7 / 12 |

Passed
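The actionability assessment above credits the skill's working tab-navigation code as one of its strengths. As a hedged illustration of what such a pattern looks like when extracted into a reference file, here is a minimal roving-tabindex sketch for a tablist; the `role="tab"` markup and function names are assumptions for the example, not taken from the skill:

```javascript
// Pure helper: compute the next tab index for ArrowRight/ArrowLeft,
// wrapping at both ends; any other key leaves focus where it is.
function nextTabIndex(count, current, key) {
  if (key === 'ArrowRight') return (current + 1) % count;
  if (key === 'ArrowLeft') return (current - 1 + count) % count;
  return current;
}

// Roving tabindex: exactly one tab is in the Tab order at a time;
// arrow keys move both focus and the tabindex="0" marker.
function initTablist(tablist) {
  const tabs = Array.from(tablist.querySelectorAll('[role="tab"]'));
  tabs.forEach((tab, i) => tab.setAttribute('tabindex', i === 0 ? '0' : '-1'));
  tablist.addEventListener('keydown', (event) => {
    const current = tabs.indexOf(document.activeElement);
    if (current === -1) return; // focus is not on a tab
    const next = nextTabIndex(tabs.length, current, event.key);
    if (next === current) return;
    tabs[current].setAttribute('tabindex', '-1');
    tabs[next].setAttribute('tabindex', '0');
    tabs[next].focus();
  });
}
```

As with the focus-trap pattern, keeping the index arithmetic in a pure function separates the testable logic from the DOM wiring.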

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (539 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 |

Passed

Repository: wshobson/agents (Reviewed)
