
ai-ux-enhancements

Automated UX review rules optimized for AI-driven design evaluations, addressing gaps in usability and user empowerment. Complementary to laws-of-ux skill, focusing on efficiency, control, cognitive workload, learnability, and personalization.

Quality

41%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/ai-ux-enhancements/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies its domain (UX review) and attempts to carve out a niche relative to a sibling skill, but it lacks concrete actions and has no explicit trigger guidance ('Use when...'). The language is more abstract and categorical than actionable, making it difficult for Claude to confidently select this skill from a large pool.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks for a UX audit, usability review, design critique, or feedback on interface efficiency and learnability.'

Replace abstract focus areas with concrete actions, e.g., 'Evaluates UI designs for efficiency bottlenecks, checks cognitive load issues, assesses user control and personalization options, and generates actionable UX improvement recommendations.'

Clarify the boundary with the laws-of-ux skill more explicitly, e.g., 'Unlike laws-of-ux which applies established UX principles, this skill performs structured audits focused on measurable usability gaps.'
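Folding the three suggestions together, the frontmatter description could be rewritten along these lines. This is an illustrative sketch only, assuming the standard SKILL.md YAML frontmatter with `name` and `description` fields; the exact wording is hypothetical:

```yaml
---
name: ai-ux-enhancements
description: >
  Evaluates UI designs for efficiency bottlenecks, cognitive load issues,
  user control, learnability, and personalization gaps, and generates
  actionable UX improvement recommendations. Use when the user asks for a
  UX audit, usability review, design critique, or feedback on interface
  efficiency and learnability. Unlike laws-of-ux, which applies established
  UX principles, this skill performs structured audits focused on
  measurable usability gaps.
---
```

Note how the rewrite leads with concrete verbs ('Evaluates', 'generates'), adds an explicit 'Use when...' clause, and draws the boundary with laws-of-ux in a single sentence.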

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (UX review) and lists some focus areas (efficiency, control, cognitive workload, learnability, personalization), but does not describe concrete actions like 'evaluate', 'generate reports', or 'audit interfaces'. The language remains at the category level rather than specifying what the skill actually does. | 2 / 3 |
| Completeness | Describes what the skill covers at a high level but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak (no concrete actions), so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'UX review', 'usability', 'cognitive workload', 'learnability', and 'personalization', but misses common natural user phrases like 'user experience audit', 'UI feedback', 'design critique', 'heuristic evaluation', or 'accessibility review'. The term 'AI-driven design evaluations' is somewhat jargon-heavy. | 2 / 3 |
| Distinctiveness / Conflict Risk | Attempts to differentiate from a 'laws-of-ux' skill by calling itself 'complementary' and listing specific focus areas, which helps somewhat. However, the overlap with any general UX or design review skill remains significant, and the boundary between this and the laws-of-ux skill is not crisply defined. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-organized and comprehensive set of UX review heuristics with useful automation hints and fail messages. Its main weaknesses are the lack of executable code/commands (automation ideas remain abstract), absence of a clear step-by-step review workflow with validation checkpoints, and moderate verbosity in framing sections that don't add actionable value for Claude.

Suggestions

Add executable code snippets for at least the highest-priority rules (e.g., actual axe-core CLI commands, DOM query selectors for detecting missing aria-labels, Lighthouse programmatic API usage).

Define a clear step-by-step review workflow with explicit validation checkpoints, such as: 1) Run accessibility scan → 2) Parse results → 3) Cross-check against Nielsen/Laws of UX findings to avoid duplicates → 4) Generate structured report.

Remove or drastically shorten the Background & Scope and Purpose sections — Claude doesn't need to be told what UX heuristics are or why automation matters.

Consider splitting the 12 detailed rule definitions into a separate RULES_REFERENCE.md file, keeping SKILL.md as a concise overview with quick-start workflow and links.
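As a concrete illustration of the first suggestion, here is a minimal sketch of one rule made executable: flagging interactive elements that lack an accessible name. The element descriptors are hypothetical stand-ins for real DOM query results (e.g. from `document.querySelectorAll('button, a, input, [role="button"]')`); this is not the skill's actual implementation:

```javascript
// Rule check sketch: flag interactive elements with no accessible name.
// Elements are plain descriptor objects here; in a live review they would
// be extracted from the page's DOM.
function findMissingAccessibleNames(elements) {
  return elements.filter((el) => {
    const name =
      el.ariaLabel ||            // aria-label attribute value
      el.ariaLabelledby ||       // aria-labelledby reference
      (el.text || "").trim();    // visible text content
    return !name;                // no accessible name found -> rule fails
  });
}

// Example run: the icon-only, unlabeled button is the single finding.
const findings = findMissingAccessibleNames([
  { tag: "button", ariaLabel: "Close dialog" },
  { tag: "button", text: "" },            // icon-only, unlabeled
  { tag: "a", text: "View report" },
]);
console.log(findings.length); // 1
```

A snippet at this level of concreteness turns an 'Automation idea' into something Claude can run directly, then feed into the report-generation step.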

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes some unnecessary preamble (Background & Scope, version metadata, date) and explanatory framing that Claude doesn't need. The rules themselves are reasonably efficient, but the overall document could be tightened significantly — e.g., the 'Purpose' paragraph and 'Background & Scope' section add little actionable value. | 2 / 3 |
| Actionability | Each rule provides concrete check criteria, automation ideas, and fail message examples, which is good. However, there is no executable code — no actual axe-core commands, no DOM query snippets, no concrete script examples. The 'Automation ideas' are suggestions rather than copy-paste-ready implementations, making this more of a checklist than a fully actionable skill. | 2 / 3 |
| Workflow Clarity | The Implementation Guidance section provides a priority order and tool suggestions, which is helpful. However, there's no clear step-by-step workflow for conducting a review, no validation checkpoints (e.g., 'verify no duplicates with Nielsen heuristics before reporting'), and no feedback loop for handling ambiguous findings or iterating on results. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headings and numbered rules organized into thematic categories. However, at ~150+ lines it's a monolithic document that could benefit from splitting detailed rule definitions into a separate reference file, keeping SKILL.md as a concise overview with links. No external file references are provided. | 2 / 3 |
| **Total** | | **8 / 12** |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| **Total** | | **9 / 11** |

Passed
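Both warnings point at the frontmatter's `metadata` block. A sketch of a block that would satisfy them, assuming the schema implied by the warnings — the field names and version value are illustrative, so check the skill spec for the exact keys:

```yaml
---
name: ai-ux-enhancements
description: Automated UX review rules ...
metadata:
  version: "1.0.0"        # present, and quoted so it parses as a string
  category: "ux-review"   # string keys mapping to string values only
---
```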

Repository
RoleModel/rolemodel-skills
Reviewed

