Automated UX review rules optimized for AI-driven design evaluations, addressing gaps in usability and user empowerment. Complementary to laws-of-ux skill, focusing on efficiency, control, cognitive workload, learnability, and personalization.
Quality: 33% (Does it follow best practices?)
Impact: — (No eval scenarios have been run)
Passed: No known issues

Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/ai-ux-enhancements/SKILL.md

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies its domain and lists focus areas, but it does not describe the concrete actions the skill performs, and it entirely lacks explicit trigger guidance ('Use when...'). The relationship to the 'laws-of-ux' skill is mentioned, but the boundary between the two is not clearly delineated, creating potential overlap. The language leans toward abstract categorization rather than actionable specificity.
Suggestions
- Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks for a UX review, usability audit, design critique, or wants to evaluate interface efficiency, cognitive load, or learnability.'
- Replace abstract phrasing with concrete actions, e.g., 'Evaluates UI designs for efficiency bottlenecks, assesses cognitive workload, checks learnability of interactions, and recommends personalization improvements.'
- Clarify the boundary with laws-of-ux more explicitly, e.g., 'Unlike laws-of-ux, which covers established UX principles and heuristics, this skill focuses on actionable review criteria for efficiency, control, and cognitive load.'
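Taken together, the first two suggestions could be applied as a revised SKILL.md description. The sketch below is illustrative only: the frontmatter field names follow common skill-file conventions, and the wording is one possible rewrite, not the skill's actual metadata.

```yaml
# Hypothetical revised frontmatter for SKILL.md (field layout is an assumption)
name: ai-ux-enhancements
description: >
  Evaluates UI designs for efficiency bottlenecks, cognitive workload,
  learnability gaps, and personalization opportunities, producing
  structured review findings with fail messages. Use when the user asks
  for a UX review, usability audit, design critique, or wants to assess
  interface efficiency, cognitive load, or learnability. Unlike
  laws-of-ux, which covers established UX principles and heuristics,
  this skill applies actionable review criteria.
```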
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (UX review) and lists some focus areas (efficiency, control, cognitive workload, learnability, personalization), but doesn't describe concrete actions like 'evaluate', 'generate reports', or 'audit interfaces'. The language is more about what the skill covers than what it does. | 2 / 3 |
| Completeness | Describes what the skill covers at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. The description only implies context through its relationship to the 'laws-of-ux' skill, which is insufficient. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'UX review', 'usability', 'cognitive workload', 'learnability', and 'personalization', but misses common user phrases like 'user experience audit', 'UI review', 'design feedback', 'heuristic evaluation', or 'accessibility'. The term 'AI-driven design evaluations' is somewhat jargon-heavy. | 2 / 3 |
| Distinctiveness / Conflict Risk | Attempts to differentiate itself from the 'laws-of-ux' skill by stating it's complementary and focusing on specific areas, but the boundary between the two is not clearly drawn. Terms like 'usability' and 'user empowerment' could easily overlap with a general UX skill. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides a well-structured taxonomy of 12 UX review rules with clear fail messages and automation suggestions, which is genuinely useful domain knowledge. However, it is significantly too verbose, explaining scope, background, and framing that Claude doesn't need, and it lacks executable code examples that would make it truly actionable. The workflow for actually conducting a review is underspecified, with no validation steps or feedback loops.
Suggestions
- Cut the Background & Scope section to 1-2 lines and remove metadata like the version/date/purpose paragraph to improve conciseness significantly.
- Replace prose 'Automation ideas' with executable code snippets (e.g., actual DOM queries, axe-core CLI commands, or Python/JS scripts) for at least the top-priority rules (9, 12, 1).
- Add a concrete end-to-end workflow with validation checkpoints: e.g., 1) Run accessibility scan → 2) Parse results → 3) Validate findings against context → 4) Generate structured report with severity scores.
- Extract detailed per-rule automation guidance into a separate RULES_DETAIL.md reference file, keeping SKILL.md as a concise overview with rule names, one-line checks, and fail messages only.
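The scan-parse-report workflow above could be sketched as a small script. This is a sketch only: it assumes axe-core's documented JSON output shape (a `violations` array whose entries carry `impact` and `nodes`), and the severity weights in `SEVERITY_SCORES` are invented for illustration, not part of axe-core.

```python
import json
from collections import Counter

# Hypothetical severity weights for aggregating findings (not from axe-core).
SEVERITY_SCORES = {"critical": 4, "serious": 3, "moderate": 2, "minor": 1}

def summarize_axe_results(results: dict) -> dict:
    """Aggregate axe-core violations into a severity-scored report."""
    counts = Counter()
    total_score = 0
    findings = []
    for violation in results.get("violations", []):
        impact = violation.get("impact") or "minor"
        affected = len(violation.get("nodes", []))  # offending elements
        counts[impact] += affected
        total_score += SEVERITY_SCORES.get(impact, 1) * affected
        findings.append({
            "rule": violation.get("id"),
            "impact": impact,
            "affected_nodes": affected,
        })
    return {"findings": findings, "counts": dict(counts), "score": total_score}

# Example with a minimal, hand-made results payload:
sample = {
    "violations": [
        {"id": "color-contrast", "impact": "serious", "nodes": [{}, {}]},
        {"id": "label", "impact": "critical", "nodes": [{}]},
    ]
}
report = summarize_axe_results(sample)
print(json.dumps(report, indent=2))
```

In a real pipeline, the input would come from something like `npx @axe-core/cli <url> --save results.json`, and the "validate findings against context" step would filter `findings` before the report is emitted.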
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is excessively verbose with unnecessary metadata (version, date, purpose paragraph), background explanations Claude doesn't need, and repeated framing about what the rules complement. The 'Background & Scope' section explains context that could be a single sentence. Each rule includes 'Automation ideas' that are helpful but padded with excessive detail. | 1 / 3 |
| Actionability | The rules provide concrete checks and specific fail messages, which is good. However, there's no executable code, only conceptual 'automation ideas' described in prose. The guidance is specific enough to act on but lacks copy-paste-ready implementations (e.g., actual axe-core commands, DOM query selectors, or script snippets). | 2 / 3 |
| Workflow Clarity | The implementation guidance section provides a priority order and tool mapping, which helps sequence work. However, there are no validation checkpoints, no feedback loops for when checks fail, and no clear end-to-end workflow for running a complete review (e.g., how to aggregate results, handle conflicts between rules, or produce the final report). | 2 / 3 |
| Progressive Disclosure | The content is organized into clear thematic sections (Efficiency, Empowerment, Cognitive, Learnability, Personalization), which aids navigation. However, the entire skill is monolithic: all 12 rules with full detail are inline. The automation ideas and tool integrations could be split into reference files, and there are no bundle files to support progressive disclosure. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 Passed |
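The single warning could be cleared by declaring a version in the skill's metadata. A minimal sketch, assuming the skill file uses YAML frontmatter with a nested `metadata` block (the version number itself is a placeholder):

```yaml
metadata:
  version: "0.1.0"  # hypothetical version; pick the skill's actual version
```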