`tessl i github:sickn33/antigravity-awesome-skills --skill accessibility-compliance-accessibility-audit`

You are an accessibility expert specializing in WCAG compliance, inclusive design, and assistive technology compatibility. Conduct audits, identify barriers, and provide remediation guidance.
Validation
Score: 69%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| body_output_format | No obvious output/return/format terms detected; consider specifying expected outputs | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | 11 / 16 Passed | |
Implementation
Score: 57%

This skill provides a reasonable high-level framework for accessibility auditing, with good structural organization and appropriate delegation to a detailed playbook. However, it lacks the concrete, executable guidance that would make it immediately actionable: no specific tool commands, example outputs, or validation checkpoints are provided in the main skill file.
Suggestions
- Add specific tool examples for automated scanning (e.g., 'Run axe-core: `npx axe-cli https://example.com`') to improve actionability; see the sketch after this list.
- Include a sample finding format or severity mapping table so Claude knows the expected output structure.
- Add explicit validation checkpoints between steps, such as 'Verify the scan completed with fewer than X violations before proceeding to manual checks'.
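To make the first and third suggestions more concrete, a minimal sketch of what the playbook's automated-scan step could look like is shown below. It assumes npm-based tooling: the scanners shown (`@axe-core/cli`, `pa11y`) are common accessibility CLIs rather than anything the skill currently specifies, and `https://example.com` is a placeholder target.

```sh
# Hypothetical automated-scan step for the audit workflow; package names and
# the target URL are assumptions to adapt to the project's actual tooling.

# axe-core scan (the suggestion above uses the older `axe-cli` name; the
# maintained package is published as @axe-core/cli)
npx @axe-core/cli https://example.com

# pa11y as a second automated pass over the same page
npx pa11y https://example.com

# Checkpoint: proceed to manual checks only after both scans complete and the
# reported violation count is below the agreed threshold.
```

The impact levels axe-core reports (critical, serious, moderate, minor) could also double as the severity mapping requested in the second suggestion.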
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some redundancy: the persona description appears in both the header and Context section, and phrases like 'ensure digital products are accessible to all users' add little value for Claude. | 2 / 3 |
| Actionability | Instructions provide a clear checklist of steps but lack concrete examples: no specific tool commands, code snippets, or sample outputs. 'Run automated scans' and 'Perform manual checks' are directional but not executable without the referenced playbook. | 2 / 3 |
| Workflow Clarity | The six-step workflow provides a logical sequence from scoping to re-testing, but lacks explicit validation checkpoints or feedback loops. There's no guidance on what to do if automated scans fail or how to verify remediation success before proceeding. | 2 / 3 |
| Progressive Disclosure | Appropriately structured with a concise overview and a clear one-level-deep reference to the implementation playbook. The 'Use/Do not use' sections help with scoping, and the resource link is well-signaled. | 3 / 3 |
| Total | 9 / 12 Passed | |
Activation
Score: 33%

The description establishes a clear accessibility domain and mentions relevant technical standards (WCAG), but suffers from two key weaknesses: it uses second-person voice ('You are'), which violates the third-person requirement, and it completely lacks explicit trigger guidance for when Claude should select this skill. The actions listed are somewhat generic rather than concrete, accessibility-specific tasks.
Suggestions
- Add a 'Use when...' clause with explicit triggers, such as 'Use when the user asks about accessibility, WCAG compliance, screen reader compatibility, color contrast, keyboard navigation, or ADA requirements'.
- Rewrite the description in third-person voice: 'Conducts accessibility audits, identifies WCAG compliance issues, tests assistive technology compatibility, and provides remediation guidance'; see the frontmatter sketch after this list.
- Include more natural trigger terms users would say: 'a11y', 'screen reader', 'alt text', 'color contrast', 'keyboard accessible', 'ADA'.
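Putting these three suggestions together, the skill's frontmatter could read roughly as sketched below. This is an assumption-laden sketch rather than the skill's actual metadata: the field names (`name`, `description`, `license`, `metadata`) are inferred from the validation warnings above, and the placeholder values are marked as such.

```yaml
# Hypothetical revised frontmatter; field names and values are placeholders
# inferred from the validation warnings, not the skill's current file.
name: accessibility-compliance-accessibility-audit
description: >
  Conducts accessibility audits, identifies WCAG compliance issues, tests
  assistive technology compatibility, and provides remediation guidance.
  Use when the user asks about accessibility, a11y, WCAG or ADA compliance,
  screen readers, alt text, color contrast, or keyboard navigation.
license:        # add an SPDX identifier; the validator flags this field as missing
metadata:       # the validator expects this field to be a dictionary
  version: "1.0.0"   # placeholder value
```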
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (accessibility/WCAG) and some actions ('conduct audits, identify barriers, provide remediation guidance'), but these are somewhat general rather than listing multiple concrete, specific actions such as 'test screen reader compatibility, validate color contrast ratios, check keyboard navigation'. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance caps completeness at 2, and this description has no 'when' component at all. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'WCAG', 'accessibility', 'inclusive design', and 'assistive technology', but misses common user variations like 'a11y', 'screen reader', 'ADA compliance', 'color contrast', 'keyboard navigation', or 'alt text'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The accessibility/WCAG focus provides some distinctiveness, but 'conduct audits' and 'provide guidance' are generic enough to potentially overlap with other audit- or compliance-related skills. The domain is clear, but the triggers aren't sharply defined. | 2 / 3 |
| Total | 7 / 12 Passed | |
Reviewed