`tessl i github:vercel-labs/agent-skills --skill web-design-guidelines`

Review UI code for Web Interface Guidelines compliance. Use when asked to "review my UI", "check accessibility", "audit design", "review UX", or "check my site against best practices".
Validation: 94%

| Criteria | Description | Result |
|---|---|---|
| license_field | 'license' field is missing | Warning |
| Total | 15 / 16 Passed | |
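The single warning above could be cleared by declaring a license. Where the field belongs is an assumption here (the validator output does not say whether it reads SKILL.md frontmatter or a separate manifest), and the value is a placeholder:

```markdown
---
name: web-design-guidelines
license: MIT  # placeholder; use the repository's actual license
---
```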
Implementation: 73%

This skill is concise and well-structured for its simple purpose, appropriately delegating detailed rules to an external source. However, it lacks concrete examples of WebFetch usage and the expected output format, and it provides no error-handling guidance for fetch failures or edge cases.
Suggestions

- Add a concrete example showing WebFetch syntax: `WebFetch("https://raw.githubusercontent.com/...")`
- Include a brief example of the expected output format (e.g., `src/Button.tsx:42 - Missing aria-label`) rather than fully delegating to fetched content
- Add error-handling guidance: what to do if the fetch fails or returns unexpected content (all three suggestions are sketched together after this list)
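Taken together, the three suggestions could look roughly like the following SKILL.md excerpt. This is a sketch, not the skill's actual content: the section heading and step wording are assumptions, the fetch URL is left truncated exactly as in the suggestion above, and the fallback behaviour is one possible choice rather than anything the review mandates.

```markdown
## Workflow

1. Fetch the guidelines with `WebFetch("https://raw.githubusercontent.com/...")`.
   If the fetch fails or the response does not look like a guidelines document,
   report that and stop instead of reviewing against guessed rules.
2. Read the UI files under review.
3. Report one finding per line as `path:line - issue`, for example:
   `src/Button.tsx:42 - Missing aria-label`
```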
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, avoiding unnecessary explanations. It assumes Claude knows how to fetch URLs and read files, providing only the essential workflow and source URL. | 3 / 3 |
| Actionability | The skill provides a clear workflow but lacks concrete executable examples. It references 'WebFetch' without showing exact usage syntax, and the output format is delegated entirely to the fetched guidelines rather than shown inline. | 2 / 3 |
| Workflow Clarity | Steps are listed in sequence but lack validation checkpoints. There's no guidance on what to do if the fetch fails, if guidelines are malformed, or how to handle partial file matches. | 2 / 3 |
| Progressive Disclosure | For a simple skill under 50 lines, the structure is appropriate. It clearly signals that detailed rules come from an external source (the fetched URL) and keeps the SKILL.md as a concise overview. | 3 / 3 |
| Total | | 10 / 12 |
Activation: 82%

This is a solid description with excellent trigger-term coverage and a proper completeness structure. The main weakness is the lack of specific concrete actions in the capability statement: it says the skill reviews for compliance but not which checks it runs or what output it produces. Distinctiveness could also be improved by clarifying what makes this different from general code review.
Suggestions

- Add specific concrete actions like 'checks color contrast ratios, validates ARIA labels, audits keyboard navigation, reviews semantic HTML structure'
- Clarify what 'Web Interface Guidelines' refers to (WCAG, internal standards, etc.) to improve distinctiveness from general UI review skills (a possible rewording is sketched after this list)
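Folding both suggestions into the description might look like the frontmatter below. The layout assumes a standard SKILL.md frontmatter, and the bracketed placeholder is deliberate: the review does not say which guideline source the skill targets, so that is left for the maintainer to fill in rather than guessed here.

```markdown
---
name: web-design-guidelines
description: >
  Review UI code against [the specific guideline source, e.g. WCAG or an
  internal standard]: checks color contrast ratios, validates ARIA labels,
  audits keyboard navigation, and reviews semantic HTML structure. Use when
  asked to "review my UI", "check accessibility", "audit design", "review UX",
  or "check my site against best practices".
---
```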
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (UI code, Web Interface Guidelines) and the general action (review for compliance), but doesn't list specific concrete actions like 'check color contrast', 'validate ARIA labels', or 'audit navigation patterns'. | 2 / 3 |
| Completeness | Clearly answers both what (review UI code for Web Interface Guidelines compliance) and when (explicit 'Use when' clause with multiple trigger scenarios). The structure follows the recommended pattern. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural user phrases: 'review my UI', 'check accessibility', 'audit design', 'review UX', 'check my site against best practices' - these are terms users would naturally say when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'Web Interface Guidelines compliance' is somewhat specific, terms like 'review UX' and 'audit design' could overlap with general code review or design feedback skills. The accessibility focus helps but isn't exclusive enough. | 2 / 3 |
| Total | | 10 / 12 |