Automated UX review rules optimized for AI-driven design evaluations, addressing gaps in usability and user empowerment. Complementary to laws-of-ux skill, focusing on efficiency, control, cognitive workload, learnability, and personalization.
Skill Name: AI-Automated UX Heuristic Checks – Non-Duplicative Complement
Version: 1.0
Date Created: February 25, 2026
Purpose: Provide a concise, highly automatable set of UX design rules that complement (without duplicating) Nielsen's 10 Usability Heuristics and the Laws of UX (lawsofux.com). Optimized for AI-driven / programmatic design reviews, linting tools, accessibility scanners, computer-vision UI analyzers, and automated prototyping checks.
This skill is used after, or in parallel with, checks based on Nielsen's 10 Usability Heuristics and the Laws of UX.
The 12 rules below target gaps those frameworks leave open, especially in user efficiency, control, cognitive workload, learnability, and personalization.
All rules are selected because they are measurable / detectable via automation (code inspection, DOM analysis, accessibility APIs, performance metrics, pattern matching, NLP, CV layout analysis, etc.).
Enable Shortcuts for Frequent Users
Check: Look for keyboard shortcuts, gesture support, or command palette / quick actions for ≥ 80% of primary / frequent operations.
Automation ideas: Scan for keydown, keyup, shortcut attributes; check tooltips / help menus for accelerator labels; verify aria-keyshortcuts where applicable.
Fail message example: "Missing accelerators for power users (e.g., no keyboard shortcut for 'Save', 'Undo', 'Search')."
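One way to sketch such a scan is a naive regex pass over raw HTML; a production check would walk a parsed DOM instead, and the function name and markup here are illustrative only.

```python
import re

# Illustrative sketch: flag <button> elements that carry no keyboard
# accelerator hint (neither aria-keyshortcuts nor accesskey).
def find_missing_accelerators(html: str) -> list[str]:
    missing = []
    for tag in re.findall(r"<button\b[^>]*>", html, re.IGNORECASE):
        if "aria-keyshortcuts" not in tag and "accesskey" not in tag:
            missing.append(tag)
    return missing

html = '<button aria-keyshortcuts="Control+S">Save</button><button>Undo</button>'
print(find_missing_accelerators(html))  # → ['<button>']
```

The same loop extends naturally to links, menu items, and any element exposing a click handler.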
Optimize User Efficiency
Check: Minimize interaction cost — aim for shallow navigation depth (≤ 3 clicks/taps for 90% of core tasks) and fast perceived performance.
Automation ideas: Static path analysis, Lighthouse Performance/SEO scores, simulated task completion time, element count per view.
Fail message example: "Task requires >3 steps / excessive scrolling; consider progressive disclosure or smart defaults."
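Static path analysis can be approximated with a breadth-first search over a navigation graph. The graph shape below is an assumption of this sketch; a real tool would derive it from route definitions or crawled links.

```python
from collections import deque

# Sketch: BFS over a hypothetical navigation graph to get each screen's
# click depth from the entry screen, then apply the "<= 3 clicks for
# 90% of core tasks" rule from the check above.
def click_depths(nav: dict[str, list[str]], start: str = "home") -> dict[str, int]:
    depths, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in nav.get(node, []):
            if nxt not in depths:
                depths[nxt] = depths[node] + 1
                queue.append(nxt)
    return depths

nav = {"home": ["search", "settings"], "settings": ["profile"], "profile": ["billing"]}
depths = click_depths(nav)
core = ["search", "settings", "profile", "billing"]
within_budget = sum(depths[s] <= 3 for s in core) / len(core)
print(depths, within_budget)  # billing sits at depth 3, so all core tasks pass
```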
Support Internal Locus of Control
Check: Users should feel they initiate and direct actions; avoid unexpected system interruptions or forced flows.
Automation ideas: Detect modal pop-ups without user trigger, auto-play/auto-advance carousels/videos, forced redirects, high % of system-initiated events in interaction logs.
Fail message example: "System-initiated modal / auto-advance interrupts user flow → reduces sense of control."
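A minimal detector for the static cases can pattern-match known system-initiated behaviours in markup. The pattern list is an illustrative assumption, not exhaustive; interaction-log analysis would cover the dynamic cases.

```python
import re

# Sketch: match common system-initiated behaviours in raw HTML.
INTERRUPTION_PATTERNS = {
    "autoplay media": r"<(video|audio)\b[^>]*\bautoplay\b",
    "meta refresh redirect": r'http-equiv=["\']refresh["\']',
    "auto-advancing carousel": r"data-auto-?advance",
}

def control_violations(html: str) -> list[str]:
    return [name for name, pat in INTERRUPTION_PATTERNS.items()
            if re.search(pat, html, re.IGNORECASE)]

print(control_violations('<video autoplay src="promo.mp4"></video>'))
# → ['autoplay media']
```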
Encourage Explorable Interfaces
Check: Allow safe trial-and-error (non-destructive previews, undo at multiple levels, draft / preview modes).
Automation ideas: Check for preview buttons, non-permanent form states, undo/redo presence, destructive action confirmations with cancel option.
Fail message example: "No preview or safe experimentation mechanism detected for high-stakes actions."
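A crude keyword heuristic can already catch the worst case: destructive vocabulary with no safety mechanism anywhere in the view. The word lists are illustrative assumptions and would need tuning per product.

```python
import re

# Sketch: look for destructive actions ("delete", "remove", "erase") and
# check whether any safety mechanism (undo, preview, draft, cancel)
# coexists in the same view.
def unsafe_destructive_actions(html: str) -> bool:
    lower = html.lower()
    destructive = bool(re.search(r"\b(delete|remove|erase)\b", lower))
    safety = any(word in lower for word in ("undo", "preview", "draft", "cancel"))
    return destructive and not safety

print(unsafe_destructive_actions('<button>Delete account</button>'))     # → True
print(unsafe_destructive_actions('<button>Delete</button><a>Undo</a>'))  # → False
```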
Automate Unwanted Workload
Check: Eliminate manual calculations, copying, and repetitive entry; provide auto-complete, smart defaults, and automatic calculations.
Automation ideas: Scan for input fields lacking auto-suggest / calculator integrations; detect manual date/math entry vs. picker / formula support.
Fail message example: "Users must manually calculate totals / convert units — automation opportunity missed."
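One detectable signal is a free-text input whose name hints at a derived value that the system should compute itself. The field-name keywords below are illustrative heuristics, not a fixed standard.

```python
import re

# Sketch: flag editable inputs whose name suggests a derived value
# (total, sum, subtotal, tax); these should normally be computed,
# not typed by the user.
DERIVED_NAMES = re.compile(r'name=["\'][^"\']*(total|sum|subtotal|tax)', re.IGNORECASE)

def manual_calculation_fields(html: str) -> list[str]:
    inputs = re.findall(r"<input\b[^>]*>", html, re.IGNORECASE)
    return [tag for tag in inputs
            if DERIVED_NAMES.search(tag) and "readonly" not in tag.lower()]

html = '<input name="order_total" type="text"><input name="qty" type="number">'
print(manual_calculation_fields(html))  # flags only the order_total field
```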
Fuse and Summarize Data
Check: Aggregate raw data into meaningful summaries, charts, cards, or KPIs instead of showing long raw tables/lists.
Automation ideas: Detect tables > 20 rows without summary view; check for presence of aggregated visualizations / totals.
Fail message example: "Raw data table shown without summary, chart, or key metrics → high cognitive load."
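The ">20 rows without a summary view" check sketches out directly: count table rows and look for any aggregation element nearby. The summary markers (tfoot, chart elements, a "summary"/"kpi" class) are illustrative assumptions.

```python
import re

# Sketch: a table with many rows but no aggregation element suggests
# raw data dumped on the user.
def needs_summary_view(html: str, max_rows: int = 20) -> bool:
    rows = len(re.findall(r"<tr\b", html, re.IGNORECASE))
    has_summary = bool(re.search(
        r"<tfoot\b|<canvas\b|<svg\b|class=[\"'][^\"']*(summary|kpi)",
        html, re.IGNORECASE))
    return rows > max_rows and not has_summary

big_table = "<table>" + "<tr><td>1</td></tr>" * 25 + "</table>"
with_total = "<table>" + "<tr><td>1</td></tr>" * 25 + "<tfoot><tr><td>Total</td></tr></tfoot></table>"
print(needs_summary_view(big_table))   # → True
print(needs_summary_view(with_total))  # → False
```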
Use Judicious Redundancy
Check: Repeat only mission-critical information in 1–2 strategic locations (e.g., total in header + footer). Avoid useless or excessive repetition.
Automation ideas: NLP similarity analysis across labels / text nodes; flag strings with > 90% similarity when they appear 3+ times within the same view, excluding clearly mission-critical information intentionally repeated in ≤2 locations.
Fail message example: "Excessive repeated text detected (e.g., same CTA copy 5× on screen)."
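The similarity pass can be approximated without any NLP stack using Python's difflib; the 90% threshold and 3+ repetition count mirror the rule above, while the exclusion of intentionally repeated mission-critical strings would need an allowlist in practice.

```python
from difflib import SequenceMatcher

# Sketch: flag strings that are >90% similar to 3 or more strings in
# the same view (a string always matches itself, so three near-twins
# means three occurrences).
def redundant_strings(texts: list[str], threshold: float = 0.9,
                      max_repeats: int = 2) -> list[str]:
    flagged = []
    for a in texts:
        near = sum(1 for b in texts
                   if SequenceMatcher(None, a.lower(), b.lower()).ratio() > threshold)
        if near > max_repeats and a not in flagged:
            flagged.append(a)
    return flagged

labels = ["Buy now", "Buy now!", "Buy now", "Checkout"]
print(redundant_strings(labels))  # → ['Buy now', 'Buy now!']
```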
Provide Multiple Data Codings
Check: Critical status / priority items use ≥2 visual channels (e.g., color plus icon, size, or position).
Automation ideas: CSS / style analysis for combined encodings; accessibility tools flag color-only meaning.
Fail message example: "Error / success status communicated by color alone — violates redundancy best practice."
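Once a CSS analyzer has mapped each status style to the visual channels it uses, the rule reduces to a one-line count. The input shape below is an assumption about what such an analyzer would emit.

```python
# Sketch: given a map of status names to the visual channels encoding
# them, flag any status carried by a single channel only.
def single_channel_statuses(encodings: dict[str, set[str]]) -> list[str]:
    return sorted(name for name, channels in encodings.items()
                  if len(channels) < 2)

encodings = {
    "error":   {"color", "icon"},  # red + warning icon: OK
    "success": {"color"},          # green only: flagged
}
print(single_channel_statuses(encodings))  # → ['success']
```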
Ensure Self-Descriptiveness
Check: Every interactive element explains itself (clear labels, tooltips, aria-label, visible help text, contextual instructions).
Automation ideas: Run axe-core / WAVE → flag missing alt text, aria-label, title, visible labels.
Fail message example: "Icon-only button lacks visible label or tooltip → not self-descriptive."
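Alongside the axe-core/WAVE runs, the icon-only-button case can be sketched as a standalone pass: a button whose content is markup only and that carries no accessible name. The regex approach is illustrative; a real check would use the accessibility tree.

```python
import re

# Sketch: find <button> elements whose content is markup only (e.g. an
# inline SVG icon) and that carry neither aria-label nor title.
def icon_only_unlabeled(html: str) -> list[str]:
    issues = []
    for m in re.finditer(r"<button\b([^>]*)>(.*?)</button>", html,
                         re.IGNORECASE | re.DOTALL):
        attrs, inner = m.group(1), m.group(2)
        visible_text = re.sub(r"<[^>]+>", "", inner).strip()
        labeled = "aria-label" in attrs or "title=" in attrs
        if not visible_text and not labeled:
            issues.append(m.group(0))
    return issues

html = ('<button><svg></svg></button>'
        '<button aria-label="Search"><svg></svg></button>')
print(icon_only_unlabeled(html))  # flags only the first button
```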
Promote Suitability for Learning
Check: Interface supports quick mastery (progressive disclosure, onboarding hints, low initial complexity).
Automation ideas: Count visible elements on first screen (< 50–60 recommended); detect tour / tooltip / helper presence on first load.
Fail message example: "First-view complexity too high (X elements); consider progressive disclosure."
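Both signals of this check, element count and onboarding helpers, can be approximated in one pass. The 60-element ceiling follows the recommendation above; the helper keywords are illustrative assumptions.

```python
import re

# Sketch: approximate first-view complexity by counting opening tags,
# and look for onboarding helpers (tour, tooltip, hint) in the markup.
def first_view_report(html: str, max_elements: int = 60) -> dict:
    elements = len(re.findall(r"<(?!/|!)[a-z]", html, re.IGNORECASE))
    has_onboarding = bool(re.search(r"(tour|tooltip|hint|onboard)", html, re.IGNORECASE))
    return {"elements": elements,
            "too_complex": elements > max_elements,
            "has_onboarding": has_onboarding}

html = "<div>" + "<button>x</button>" * 10 + '<div class="tooltip">Tip</div></div>'
print(first_view_report(html))
# → {'elements': 12, 'too_complex': False, 'has_onboarding': True}
```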
Support Individualization
Check: Offer meaningful customization (theme, layout density, content filters, default views, font size).
Automation ideas: Scan for settings / preferences menu; check localStorage / user profile API calls for saved prefs.
Fail message example: "No personalization options detected (theme, density, saved filters, etc.)."
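A source-level scan can look for both halves of this check: a settings surface in the UI and actual persistence of user choices. The keyword and API patterns below are illustrative assumptions about a web codebase.

```python
import re

# Sketch: scan app source for personalization signals.
SIGNALS = {
    "settings surface": r"\b(settings|preferences)\b",
    "theme choice":     r"\b(theme|dark[-_ ]?mode)\b",
    "saved prefs":      r"localStorage\.(set|get)Item",
}

def personalization_signals(source: str) -> list[str]:
    return [name for name, pat in SIGNALS.items()
            if re.search(pat, source, re.IGNORECASE)]

src = 'localStorage.setItem("theme", "dark"); // open the settings panel'
print(personalization_signals(src))
# → ['settings surface', 'theme choice', 'saved prefs']
```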
Accommodate Diversity
Check: Meet modern accessibility + cultural / locale sensitivity standards (WCAG 2.2 AA minimum, RTL support, date/number formatting).
Automation ideas: Lighthouse Accessibility score ≥ 90–95; locale-aware formatting checks; cultural marker detection (icons, colors).
Fail message example: "Accessibility violations detected (contrast, keyboard nav, screen reader issues)."
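Beyond the Lighthouse score, two locale-sensitivity risks are cheap to lint for: hard-coded culture-specific date formats and the complete absence of RTL direction handling. The patterns are illustrative and do not replace a full WCAG audit.

```python
import re

# Sketch: flag locale-sensitivity risks in templates or source, e.g. a
# hard-coded US-style date, or no RTL direction handling anywhere.
def locale_risks(source: str) -> list[str]:
    risks = []
    if re.search(r"\d{2}/\d{2}/\d{4}", source) and "toLocaleDateString" not in source:
        risks.append("hard-coded date format")
    if 'dir="rtl"' not in source and 'dir="auto"' not in source:
        risks.append("no RTL direction handling")
    return risks

print(locale_risks('<span>Due: 04/15/2026</span>'))
# → ['hard-coded date format', 'no RTL direction handling']
```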
Use this skill to generate structured, actionable feedback reports that extend — but never repeat — Nielsen + Laws of UX findings.
Last updated: February 25, 2026