
screen-reader-testing

tessl i github:wshobson/agents --skill screen-reader-testing

Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
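A quick automated pass can surface the most obvious problems before manual testing with VoiceOver, NVDA, or JAWS. The sketch below is illustrative only and is not taken from the skill itself; the selector list and naming heuristic are simplified assumptions, not a full accessible-name computation.

```ts
// Minimal pre-flight sketch (not from the skill): flag interactive elements
// with no obvious accessible name before a manual screen reader pass.
const interactive = document.querySelectorAll<HTMLElement>(
  'a[href], button, input, select, textarea, [role="button"], [role="link"]'
);

function hasAccessibleName(el: HTMLElement): boolean {
  if (el.getAttribute('aria-label')?.trim()) return true;
  if (el.getAttribute('aria-labelledby')) return true;
  // Form controls may be named by an associated <label>.
  if ((el as HTMLInputElement).labels?.length) return true;
  return Boolean(el.textContent?.trim());
}

const unnamed = Array.from(interactive).filter((el) => !hasAccessibleName(el));
console.table(
  unnamed.map((el) => ({ tag: el.tagName.toLowerCase(), id: el.id || '(no id)' }))
);
```

Anything flagged by a check like this is a natural starting point for the manual screen reader testing the skill covers.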

Overall: 87%


Validation: 81%
Criteria

skill_md_line_count (Warning): SKILL.md is long (546 lines); consider splitting into references/ and linking
metadata_version (Warning): 'metadata' field is not a dictionary
license_field (Warning): 'license' field is missing

Total: 13 / 16 (Passed)

Implementation: 77%

This is a highly actionable, well-structured skill for screen reader testing, with excellent concrete examples, keyboard commands, and testing checklists. The main weakness is verbosity: the document could be tightened by removing explanatory content Claude already knows and by splitting detailed references into separate files for better progressive disclosure.

Suggestions

- Move detailed keyboard command tables for each screen reader into separate reference files (e.g., VOICEOVER_COMMANDS.md, NVDA_COMMANDS.md) and keep only essential commands inline.
- Remove or condense the 'Core Concepts' section; Claude knows what screen readers are, and their market share data is not essential for testing.
- Consider splitting the 'Common Test Scenarios' code examples into a separate EXAMPLES.md file, keeping only brief references in the main skill (a possible layout is sketched below).
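Taken together, these suggestions point at a layout roughly like the one below. All file names except SKILL.md are the reviewer's examples or assumptions about how the split might look, not files that exist today.

```
screen-reader-testing/
├── SKILL.md                   # workflows, checklists, essential commands only
├── EXAMPLES.md                # modal dialog, live region, and tab interface snippets
└── references/
    ├── VOICEOVER_COMMANDS.md  # full VoiceOver keyboard command table
    ├── NVDA_COMMANDS.md       # full NVDA keyboard command table
    └── JAWS_COMMANDS.md       # assumed by analogy; not named in the suggestions above
```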

Dimension scores

Conciseness (2 / 3): The content is comprehensive but includes some redundancy and explanatory content that Claude would already know (e.g., explaining what screen readers are, basic concepts). The tables and command references are efficient, but the overall length (~500 lines) could be trimmed by removing obvious information and consolidating similar sections.

Actionability (3 / 3): Excellent actionability with concrete keyboard commands, executable JavaScript code examples, copy-paste-ready HTML snippets, and specific testing checklists. The modal dialog, live region, and tab interface examples are fully executable and demonstrate real implementation patterns (see the illustrative live-region sketch after the score total below).

Workflow Clarity (3 / 3): Clear testing workflows with explicit checklists and step-by-step test scripts for NVDA. The testing priority section provides clear sequencing (minimum vs. comprehensive coverage), and the checklists include validation checkpoints for each testing phase.

Progressive Disclosure (2 / 3): Content is well organized with clear sections and headers, but it is a monolithic document that could benefit from splitting detailed command references and code examples into separate files. The external resource links at the end are helpful, but the inline content is quite dense.

Total: 10 / 12 (Passed)
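For readers unfamiliar with the 'live regions' example called out under Actionability, here is a minimal sketch of the pattern. It is illustrative only, not the skill's actual code; the element id and message text are assumptions.

```ts
// Illustrative live-region sketch (not the skill's actual example).
// A polite live region lets screen readers announce status updates
// without moving keyboard focus.
const status = document.createElement('div');
status.id = 'status';                        // assumed id, for illustration
status.setAttribute('role', 'status');       // implies aria-live="polite"
status.setAttribute('aria-live', 'polite');  // explicit, for older assistive tech
document.body.appendChild(status);

function announce(message: string): void {
  // Clearing first helps some screen readers re-announce identical text.
  status.textContent = '';
  window.setTimeout(() => {
    status.textContent = message;
  }, 50);
}

announce('3 results found'); // e.g. after a filter or search completes
```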

Activation: 100%

This is an excellent skill description that follows best practices. It uses third-person voice, lists specific screen reader tools by name, clearly states what the skill does, and provides explicit 'Use when' triggers with natural keywords users would actually say when needing accessibility testing help.

Dimension scores

Specificity (3 / 3): Lists multiple specific concrete actions ('Test web applications with screen readers'), names specific tools (VoiceOver, NVDA, JAWS), and describes concrete use cases (validating compatibility, debugging issues, ensuring support).

Completeness (3 / 3): Clearly answers both what ('Test web applications with screen readers including VoiceOver, NVDA, and JAWS') and when ('Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support') with explicit trigger guidance.

Trigger Term Quality (3 / 3): Excellent coverage of natural terms users would say: 'screen readers', 'VoiceOver', 'NVDA', 'JAWS', 'accessibility issues', 'assistive technology'. These are exactly the terms someone needing this skill would use.

Distinctiveness / Conflict Risk (3 / 3): A clear niche focused specifically on screen reader testing and assistive technology. The specific tool names (VoiceOver, NVDA, JAWS) and the accessibility focus make it highly distinguishable from general testing or accessibility skills.

Total: 12 / 12 (Passed)

Reviewed

