You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance.
Install with Tessl CLI
npx tessl i github:sickn33/antigravity-awesome-skills --skill accessibility-compliance-accessibility-audit56
Quality
37%
Does it follow best practices?
Impact
89%
1.34x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./skills/accessibility-compliance-accessibility-audit/SKILL.md
Quality
Discovery
32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description establishes a clear accessibility domain focus with relevant terminology like WCAG and assistive technology, but it suffers from two key weaknesses: it uses second-person voice ('You are'), which violates the third-person requirement, and it lacks any explicit trigger guidance for when Claude should select this skill. The actions listed are somewhat generic rather than concrete, accessibility-specific tasks.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when the user asks about accessibility, WCAG compliance, screen reader support, color contrast, keyboard navigation, or ADA requirements'
Rewrite in third person voice: 'Conducts accessibility audits, identifies WCAG violations, tests assistive technology compatibility' instead of 'You are an expert'
Add more specific concrete actions and natural trigger terms users would say: 'a11y', 'alt text', 'aria labels', 'color contrast checker', 'screen reader testing'
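Applied together, the suggestions above might produce frontmatter along these lines (a sketch only; the exact keys Tessl expects may differ, and `name`/`description` are assumptions here):

```yaml
# Hypothetical SKILL.md frontmatter; key names are assumptions.
name: accessibility-audit
description: >
  Conducts accessibility audits, identifies WCAG violations, and tests
  assistive technology compatibility. Use when the user asks about
  accessibility, a11y, WCAG compliance, screen reader support, alt text,
  aria labels, color contrast, keyboard navigation, or ADA requirements.
```

Note the third-person verbs, the explicit 'Use when...' clause, and the inclusion of natural trigger terms users actually type.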
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (accessibility/WCAG) and some actions ('conduct audits, identify barriers, provide remediation guidance'), but these are somewhat general rather than listing multiple concrete specific actions like 'test screen reader compatibility, validate color contrast ratios, check keyboard navigation'. | 2 / 3 |
| Completeness | Describes what it does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'WCAG', 'accessibility', 'inclusive design', 'assistive technology', but misses common user variations like 'a11y', 'screen reader', 'ADA compliance', 'color contrast', 'keyboard navigation', or 'alt text'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The accessibility/WCAG focus provides some distinctiveness, but 'conduct audits' and 'provide guidance' are generic enough to potentially overlap with other audit or compliance-focused skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
42%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a reasonable high-level framework for accessibility auditing, with good progressive disclosure to a detailed playbook. However, it suffers from a lack of actionability: the instructions read as abstract guidance rather than concrete, executable steps. The skill would benefit significantly from specific tool recommendations, example commands, and concrete validation criteria.
Suggestions
Add specific tool names and commands for automated scanning (e.g., 'Run axe: `npx @axe-core/cli https://example.com`' or 'Use Lighthouse: `lighthouse --only-categories=accessibility https://example.com`')
Include concrete examples of manual checks with specific criteria (e.g., 'Tab through all interactive elements - verify visible focus indicator with minimum 3:1 contrast ratio')
Add validation checkpoints with explicit pass/fail criteria between steps (e.g., 'Proceed to manual testing only when automated scan shows <X critical violations')
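The 3:1 focus-indicator threshold mentioned above comes from the WCAG relative-luminance formula, which is small enough to check programmatically rather than by eye. A minimal sketch in Python (the function names are ours; the constants follow the WCAG 2.x definition):

```python
def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Weighted sum of linearized R, G, B channels."""
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, always >= 1 (lighter luminance over darker)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Pure white on pure black is the maximum possible ratio:
print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 2))  # 21.0
```

A check like `contrast_ratio(focus_color, background) >= 3.0` could serve as an explicit pass/fail criterion in the manual-testing step.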
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some redundancy: the description in the header repeats the system prompt context, and the 'Context' section restates what is already clear from the skill's purpose. | 2 / 3 |
| Actionability | The instructions are vague and abstract ('Run automated scans', 'Perform manual checks') without specifying which tools, commands, or concrete steps. No executable code, specific tool names, or copy-paste-ready commands are provided. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence (scope → scan → manual → map → remediate → re-test), but there are no validation checkpoints, no specific criteria for when to proceed between steps, and no feedback loops for handling failed re-tests. | 2 / 3 |
| Progressive Disclosure | Good structure with a clear overview and a single well-signaled reference to the implementation playbook for detailed procedures. The 'Use/Do not use' sections help with scoping, and the reference is one level deep. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |