You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance.
Impact: Pending (no eval scenarios have been run). Advisory: suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/accessibility-compliance-accessibility-audit/SKILL.md`

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear domain (accessibility and WCAG compliance) and lists some high-level actions, but it lacks a 'Use when...' clause and misses many natural trigger terms users would employ. The role-playing preamble ('You are an accessibility expert') frames the skill in the second person and wastes space that could carry concrete capability descriptions and trigger guidance.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks about accessibility, WCAG compliance, screen reader compatibility, ARIA attributes, or ADA requirements.'
Replace the role-playing opener ('You are an accessibility expert') with third-person capability statements, e.g., 'Conducts WCAG accessibility audits, identifies barriers for users with disabilities, and provides remediation guidance for HTML, CSS, and ARIA issues.'
Include more natural trigger terms users would say, such as 'screen reader', 'a11y', 'alt text', 'keyboard navigation', 'color contrast', 'ADA', and 'ARIA'.
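Putting these suggestions together, the revised frontmatter might read along these lines (a sketch only; the exact wording and the `name` value are assumptions, not taken from the skill):

```yaml
name: accessibility-audit
description: >
  Conducts WCAG accessibility audits, identifies barriers for users with
  disabilities, and provides remediation guidance for HTML, CSS, and ARIA
  issues. Use when the user asks about accessibility, a11y, WCAG or ADA
  compliance, screen readers, ARIA attributes, alt text, keyboard
  navigation, or color contrast.
```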
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (accessibility, WCAG compliance) and some actions (conduct audits, identify barriers, provide remediation guidance), but these are somewhat general and not highly concrete; e.g., it doesn't specify what types of audits, what kinds of barriers, or what remediation looks like in practice. | 2 / 3 |
| Completeness | Describes what the skill does (conduct audits, identify barriers, provide remediation) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'when' is entirely absent, warranting a score of 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'WCAG', 'accessibility', 'inclusive design', 'assistive technology', and 'audits', but misses common user-facing variations such as 'screen reader', 'a11y', 'ADA compliance', 'alt text', 'aria labels', or 'keyboard navigation' that users would naturally mention. | 2 / 3 |
| Distinctiveness / Conflict Risk | The accessibility/WCAG focus provides some distinctiveness, but the broad framing ('inclusive design', 'identify barriers', 'remediation guidance') could overlap with general web development, UX design, or compliance-related skills without clearer scoping. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a high-level process outline than actionable guidance. It lacks concrete tools, commands, code examples, or specific WCAG criteria references that would make it immediately useful. The workflow sequence is reasonable but needs validation checkpoints and concrete details to be effective for real audits.
Suggestions
Add concrete tool names and executable commands (e.g., `npx axe-core`, `lighthouse --accessibility`, specific browser DevTools steps) instead of abstract instructions like 'Run automated scans'.
Include at least one concrete example: e.g., a sample finding mapped to a WCAG criterion with severity, user impact, and a specific remediation code snippet.
Add explicit validation checkpoints in the workflow, such as 'Verify zero critical violations in axe scan before proceeding to manual checks' and a re-test gate with pass/fail criteria.
Remove the redundant 'Context' section which restates the skill description, and trim the 'Do not use' section to save tokens.
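For instance, the 'Run automated scans' step could be grounded in commands along these lines (the target URL is a placeholder, and the flags assume current releases of the axe-core CLI and Lighthouse):

```shell
# Automated WCAG scan with the axe-core CLI; save violations as JSON
npx @axe-core/cli https://example.com --save axe-results.json

# Lighthouse audit restricted to the accessibility category
npx lighthouse https://example.com --only-categories=accessibility \
  --output=json --output-path=lighthouse-a11y.json
```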
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has some unnecessary padding—the 'Context' section restates the description, and the 'Use this skill when' / 'Do not use this skill when' sections add moderate value but are somewhat verbose. The instructions themselves are reasonably lean. | 2 / 3 |
| Actionability | The instructions are entirely abstract and descriptive ('Run automated scans', 'Perform manual checks') with no concrete commands, tool names, code snippets, or specific examples. There is nothing copy-paste ready or executable. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence (scope → scan → manual check → map → remediate → re-test), but there are no explicit validation checkpoints, no feedback loops for error recovery, and the re-test step is vague rather than a concrete verification gate. | 2 / 3 |
| Progressive Disclosure | There is a reference to `resources/implementation-playbook.md` for detailed content, which is good one-level-deep disclosure. However, the main skill body is thin enough that it's unclear what value the split provides, and the reference is mentioned twice (in Instructions and Resources) without clear signaling of what's inside. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
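As a concrete instance of the sample finding and re-test gate the suggestions above call for, a pass/fail check for WCAG success criterion 1.4.3 (contrast minimum) can be scripted in a few lines. This is a sketch with our own function names, not part of the skill:

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.x, from an (r, g, b) tuple of 0-255 ints."""
    def channel(c):
        c = c / 255
        # Linearize the sRGB channel value
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L_lighter + 0.05) / (L_darker + 0.05); ranges 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text=False):
    """AA threshold: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

A re-test gate could then assert that `passes_aa` returns True for every text style in the audit scope before a contrast finding is marked resolved.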
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored. 10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |