
screen-reader-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.

Score: 87 (1.03x)

Quality: 71% (Does it follow best practices?)

Impact: 98% (1.03x), average score across 6 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/accessibility-compliance/skills/screen-reader-testing/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly identifies its domain (screen reader testing), lists specific tools by name, and provides explicit trigger guidance via a 'Use when' clause. It uses third-person voice throughout and covers the natural keywords users would employ when seeking this capability. The description is concise yet comprehensive, making it easy to distinguish from related but different skills.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and mentions validating compatibility, debugging accessibility issues, and ensuring assistive technology support.

3 / 3

Completeness

Clearly answers both 'what' (test web applications with screen readers including VoiceOver, NVDA, and JAWS) and 'when' (validating screen reader compatibility, debugging accessibility issues, ensuring assistive technology support) with an explicit 'Use when' clause.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major terms a user would naturally use when needing this skill.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive niche focused specifically on screen reader testing with named tools (VoiceOver, NVDA, JAWS). Unlikely to conflict with general accessibility skills or general web testing skills due to the specific screen reader focus.

3 / 3

Total: 12 / 12

Passed

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive screen reader testing reference with excellent actionability—concrete commands, executable code, and detailed checklists. However, it is far too verbose for a SKILL.md, reading more like a full documentation site than a concise skill. The content desperately needs progressive disclosure: keyboard shortcuts, full code examples, and per-screen-reader details should be split into referenced files.

Suggestions

Split per-screen-reader details (VoiceOver, NVDA, JAWS, TalkBack) into separate reference files and link to them from a concise overview in SKILL.md

Move the full code examples (modal dialog, tab interface, live regions) into a separate EXAMPLES.md or PATTERNS.md file

Remove explanatory content Claude already knows (what screen readers are, market share percentages, what browse/focus modes are) and focus on the testing workflow

Add an overarching testing workflow at the top that sequences the full process: setup → keyboard-only test → screen reader test → fix → verify fix → retest
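Taken together, these suggestions imply a much slimmer SKILL.md that leads with the workflow and defers detail to bundled files. A hypothetical sketch of that structure (the file names under references/ are illustrative, not part of the reviewed skill):

```markdown
# Screen Reader Testing

## Workflow
1. Set up the target screen reader: [VoiceOver](references/voiceover.md),
   [NVDA](references/nvda.md), [JAWS](references/jaws.md), [TalkBack](references/talkback.md)
2. Run a keyboard-only pass (tab order, focus visibility, no traps)
3. Run the screen reader test script for the chosen tool
4. Fix issues using the [common patterns](references/patterns.md)
5. Verify each fix with the same screen reader, then retest the full flow
```

Each linked file would hold the per-tool keyboard shortcuts and full code examples that currently inflate the main document.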

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~400+ lines. Includes extensive reference material (keyboard shortcuts for 5 screen readers, full code examples for modals/tabs/live regions) that could be in separate files. The 'Core Concepts' section explains screen reader modes and market share—context Claude already knows. Much of this reads like a comprehensive reference manual rather than a lean skill.

1 / 3

Actionability

Highly actionable with specific keyboard commands, executable HTML/JS code examples, concrete testing checklists, and step-by-step test scripts. The common issues & fixes section provides copy-paste ready before/after HTML patterns.
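The skill's actual before/after patterns are not reproduced on this page; as a generic illustration of that style of fix (a hypothetical example, not taken from the skill itself), a common one replaces a click-handling div with a native button:

```html
<!-- Before: a div exposes no role, no accessible name semantics,
     and no keyboard activation to screen readers -->
<div class="btn" onclick="save()">Save</div>

<!-- After: a native button provides role, name, focusability,
     and Enter/Space activation for free -->
<button type="button" class="btn" onclick="save()">Save</button>
```

Patterns in this shape let an agent apply the fix by direct substitution rather than re-deriving the ARIA requirements.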

3 / 3

Workflow Clarity

The NVDA test script provides a clear sequential workflow, and checklists are well-structured. However, there's no overarching workflow tying the testing process together (e.g., when to use which screen reader, how to prioritize findings), and no validation/verification steps for confirming fixes actually resolve the issues detected.

2 / 3

Progressive Disclosure

Monolithic wall of content with no references to external files despite being an ideal candidate for splitting (e.g., per-screen-reader command references, code examples, checklists). Everything is inlined in a single massive document with no bundle files to support it.

1 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (539 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11

Passed

Repository: wshobson/agents (Reviewed)
