
screen-reader-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.

Score: 87 (1.03x)

Quality: 71% (Does it follow best practices?)

Impact: 98% (1.03x), average score across 6 eval scenarios

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/accessibility-compliance/skills/screen-reader-testing/SKILL.md`

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its scope (screen reader testing for web apps), names specific tools, and provides explicit 'Use when' triggers. It follows the recommended pattern closely, uses third person voice, and includes natural keywords that users would employ when seeking this functionality.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and mentions validating compatibility, debugging accessibility issues, and ensuring assistive technology support. | 3 / 3 |
| Completeness | Clearly answers both what ('Test web applications with screen readers including VoiceOver, NVDA, and JAWS') and when ('Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major terms a user would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche focused specifically on screen reader testing for web applications, naming specific tools (VoiceOver, NVDA, JAWS). This is distinct from general accessibility skills or general web testing skills, making conflicts unlikely. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent concrete examples, working code, and specific commands for each screen reader. However, it is severely over-long and monolithic—it reads more like a comprehensive reference manual than a focused skill file. The lack of progressive disclosure and excessive verbosity significantly undermine its effectiveness as a SKILL.md that competes for context window space.

Suggestions

- Split screen reader-specific sections (VoiceOver, NVDA, JAWS, TalkBack) into separate reference files and link to them from a concise overview in SKILL.md
- Move the complete code examples (modal dialog, tab interface, live regions) into a separate EXAMPLES.md or PATTERNS.md file
- Remove information Claude already knows (what screen readers are, basic ARIA concepts, usage percentages) and focus on the non-obvious testing workflows and common pitfalls
- Add explicit validation/feedback loops to the testing workflows: 'If X is not announced, check Y, then verify Z'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~400+ lines, including extensive reference tables, full keyboard shortcut lists, complete JavaScript implementations, and detailed checklists that Claude already knows or could derive. The screen reader mode table, usage percentages, and lengthy testing scripts add significant token cost without proportional value. Much of this is reference material that should be in separate files. | 1 / 3 |
| Actionability | The skill provides fully concrete, executable guidance: specific keyboard commands, complete HTML code examples with before/after fixes, working JavaScript implementations for focus trapping and tab navigation, and step-by-step testing scripts. Everything is copy-paste ready. | 3 / 3 |
| Workflow Clarity | The NVDA test script section provides a clear sequential workflow, and the VoiceOver checklist is well-structured. However, there are no explicit validation checkpoints or feedback loops for when tests fail; the testing process lacks 'if this fails, do X' recovery steps, which is important for debugging accessibility issues. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The extensive keyboard shortcut lists, full code examples for modals/tabs/live regions, and per-screen-reader sections should be split into separate reference files. Everything is inlined, making the skill extremely long and hard to navigate. | 1 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (539 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 |

Passed
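The line-count warning's fix is the same one the implementation review asks for: keep SKILL.md as a short overview and push per-tool detail into linked reference files. A sketch of what that could look like (all filenames here are hypothetical):

```markdown
# Screen Reader Testing

Test web applications with screen readers (VoiceOver, NVDA, JAWS).
Start with the workflow for the tester's platform, then apply the shared patterns.

## Per-tool workflows

- [VoiceOver (macOS)](references/voiceover.md)
- [NVDA (Windows)](references/nvda.md)
- [JAWS (Windows)](references/jaws.md)

## Code patterns

- [Modal dialogs, tab interfaces, live regions](references/patterns.md)
```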

Repository: wshobson/agents (Reviewed)

