
screen-reader-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.

Overall score: 80 (1.07x)

Quality: 71%
Does it follow best practices?

Impact: 100% (1.07x)
Average score across 3 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/accessibility-compliance/skills/screen-reader-testing/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its scope around screen reader testing for web applications. It names specific tools, provides concrete actions, and includes an explicit 'Use when' clause with natural trigger terms. The description is concise, uses third person voice, and would be easily distinguishable from other skills in a large skill library.

Dimension scores:

Specificity (3/3): Lists multiple specific concrete actions: 'Test web applications with screen readers', names specific tools (VoiceOver, NVDA, JAWS), and mentions validating compatibility, debugging accessibility issues, and ensuring assistive technology support.

Completeness (3/3): Clearly answers both 'what' (test web applications with screen readers including VoiceOver, NVDA, and JAWS) and 'when' (explicit 'Use when' clause covering validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support).

Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology', 'screen reader compatibility'. These cover the major terms a user would naturally use when needing this skill.

Distinctiveness / Conflict Risk (3/3): Highly distinctive niche focused specifically on screen reader testing with named tools (VoiceOver, NVDA, JAWS). This is clearly distinguishable from general accessibility skills or general web testing skills, with specific trigger terms that reduce conflict risk.

Total: 12 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent concrete examples, keyboard shortcuts, and code patterns, but it suffers severely from being a monolithic reference document rather than a focused skill. At 400+ lines, it dumps extensive reference material inline that Claude largely already knows (standard ARIA patterns, semantic HTML), and fails to use progressive disclosure to split screen-reader-specific details and code patterns into separate files.

Suggestions

Split screen reader-specific sections (VoiceOver, NVDA, JAWS, TalkBack) into separate referenced files (e.g., VOICEOVER.md, NVDA.md) and keep only a brief overview with links in the main skill.

Move the common test scenarios (modal, live regions, tabs) and their full code examples into a separate PATTERNS.md file, referencing it from the main skill.

Remove explanations of concepts Claude already knows well—ARIA roles, semantic HTML principles, what screen reader modes are—and focus on project-specific or non-obvious testing guidance.

Add an overarching testing workflow with explicit validation checkpoints: e.g., 1) Run automated checks first, 2) Test with NVDA, 3) Log issues, 4) Fix and re-test, 5) Verify with VoiceOver.
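The suggested workflow could be sketched as a small checkpoint tracker. This is a hypothetical illustration, not code from the skill itself: the step names simply mirror the suggestion above, and `nextStep` is an invented helper.

```javascript
// Hypothetical sketch: the suggested testing workflow as an ordered list of
// checkpoints an agent could walk through, marking each step done in turn.
const workflow = [
  { step: "Run automated checks first", done: false },
  { step: "Test with NVDA", done: false },
  { step: "Log issues", done: false },
  { step: "Fix and re-test", done: false },
  { step: "Verify with VoiceOver", done: false },
];

// Returns the next pending checkpoint, or a completion message.
function nextStep(list) {
  const pending = list.find((s) => !s.done);
  return pending ? pending.step : "All checkpoints passed";
}

console.log(nextStep(workflow)); // → "Run automated checks first"
```

Encoding the sequence as data makes the validation checkpoints explicit and easy to re-check after each fix.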

Dimension scores:

Conciseness (1/3): This is extremely verbose at over 400 lines. It includes extensive reference material (keyboard shortcuts, full checklists, complete code examples for modals/tabs/live regions) that could easily be split into separate files. Much of this is general accessibility knowledge Claude already possesses: ARIA patterns, semantic HTML principles, and standard screen reader commands are well-documented knowledge.

Actionability (3/3): The skill provides fully concrete, executable guidance: specific keyboard commands, complete HTML/JS code examples for common patterns (modals, tabs, live regions), step-by-step test scripts, and specific fix examples for common issues. Everything is copy-paste ready.
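As an illustration of the kind of copy-paste-ready pattern described here, a live-region announcer might look like the following. This is a hedged sketch, not code taken from the skill; the `createAnnouncer` helper is invented for this example.

```javascript
// Hypothetical sketch of a live-region pattern: a status container with
// aria-live="polite" whose text is swapped to trigger a screen reader
// announcement without moving focus.
function createAnnouncer() {
  let message = "";
  return {
    // HTML for the (typically visually hidden) live region container.
    html() {
      return `<div role="status" aria-live="polite" class="sr-only">${message}</div>`;
    },
    // Queue a new announcement; screen readers read the updated text.
    announce(text) {
      message = text;
    },
  };
}

const announcer = createAnnouncer();
announcer.announce("3 results found");
console.log(announcer.html());
```

In a real page the container should exist in the DOM from page load and only its text content should change; inserting or replacing the element itself can cause announcements to be missed.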

Workflow Clarity (2/3): The NVDA test script section provides a clear sequential workflow, and the VoiceOver checklist is well-structured. However, there's no overarching testing workflow that sequences across screen readers, no validation checkpoints for confirming fixes work, and no feedback loop for iterating on discovered issues.

Progressive Disclosure (1/3): This is a monolithic wall of text with everything inline. The detailed keyboard shortcuts for each screen reader, complete code examples for modals/tabs/live regions, and full testing checklists should all be in separate referenced files. The skill would benefit enormously from a concise overview pointing to VOICEOVER.md, NVDA.md, JAWS.md, PATTERNS.md, etc.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed

skill_md_line_count (Warning): SKILL.md is long (546 lines); consider splitting into references/ and linking.

Total: 10 / 11 (Passed)

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)
