Conduct design interviews, generate five distinct UI variations in a temporary design lab, collect feedback, and produce implementation plans. Use when the user wants to explore UI design options, redesign existing components, or create new UI with multiple approaches to compare.
75
63% — Does it follow best practices?
Impact
97%
1.64x — average score across 3 eval scenarios
Passed — no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./data/skills-md/0xdesign/design-plugin/design-lab/SKILL.md

Quality
Discovery
100% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates a specific multi-step design exploration workflow. It uses third person voice correctly, provides concrete actions, and includes an explicit 'Use when...' clause with natural trigger terms. The description is concise yet comprehensive, making it easy for Claude to distinguish this skill from general UI or coding skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'conduct design interviews', 'generate five distinct UI variations in a temporary design lab', 'collect feedback', and 'produce implementation plans'. These are clear, actionable steps. | 3 / 3 |
| Completeness | Clearly answers both 'what' (conduct design interviews, generate UI variations, collect feedback, produce implementation plans) and 'when' with an explicit 'Use when...' clause covering three trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'UI design options', 'redesign existing components', 'create new UI', 'multiple approaches to compare', 'design interviews', 'feedback'. These cover a good range of how users would phrase design exploration requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | The combination of design interviews, five distinct variations in a 'design lab', feedback collection, and implementation plans creates a very specific niche. The multi-variation comparison workflow is distinctive and unlikely to conflict with general UI or coding skills. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation
27% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in scope but severely undermined by its verbosity: it is a monolithic document that tries to contain every detail inline rather than using progressive disclosure. While the multi-phase workflow is well conceived, with good abort handling and error recovery, it suffers from internal contradictions and missing validation checkpoints. The content explains many things Claude already knows, and repeats critical points excessively rather than stating them once clearly.
Suggestions
Split into SKILL.md (overview + phase summaries) with separate files for interview templates, variant generation guidelines, feedback system setup, and cleanup procedures—each referenced once from the main file.
Remove explanations of concepts Claude knows (what Tailwind config contains, what CSS variables are, what accessibility means) and replace with terse directives like 'Read tailwind.config.js → extract theme tokens for use across variants'.
Add explicit validation checkpoints: verify variants render (check for build errors), verify FeedbackOverlay integration (check import exists), verify cleanup completeness (list remaining files) before proceeding to next phase.
Fix the internal contradiction: the example session flow (step 8) says 'Plugin starts: pnpm dev' but Phase 4 explicitly says never to start the dev server. Remove the incorrect example step.
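The validation checkpoints proposed above can be expressed as small, mechanical checks rather than prose. A minimal sketch in TypeScript, assuming variants are TypeScript source files and that cleanup works from a file manifest; the function names and manifest shape here are hypothetical illustrations, not part of the skill:

```typescript
// Checkpoint: verify a FeedbackOverlay import exists in a variant's source.
// Matches `import ... from '<path>/FeedbackOverlay'` with single or double quotes.
function hasFeedbackOverlayImport(source: string): boolean {
  return /import\s+.*from\s+['"][^'"]*FeedbackOverlay['"]/.test(source);
}

// Checkpoint: after cleanup, list any manifest files still present on disk
// (an empty result means cleanup completed).
function leftoverFiles(manifest: string[], present: string[]): string[] {
  const remaining = new Set(present);
  return manifest.filter((f) => remaining.has(f));
}

const variant =
  "import { FeedbackOverlay } from '../feedback/FeedbackOverlay';\n" +
  "export default function VariantA() { return null; }";

console.log(hasFeedbackOverlayImport(variant)); // true
console.log(leftoverFiles(["design-lab/variant-a.tsx"], [])); // []
```

Checks like these let each phase gate on a pass/fail signal instead of trusting that the previous phase completed correctly.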
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Extensively explains concepts Claude already knows (what Tailwind is, how CSS variables work, what accessibility means, basic framework detection). The interview questions are spelled out in exhaustive detail with every option listed, which could be condensed to a brief template. The entire feedback overlay section repeats the same point ('NEVER SKIP', 'CRITICAL', 'MOST IMPORTANT REQUIREMENT') multiple times. | 1 / 3 |
| Actionability | Provides concrete directory structures, JSON schemas, and code snippets for route integration and feedback overlay. However, much of the guidance is still at the level of description rather than executable code: variant generation is described conceptually rather than with templates, and the interview questions are structured descriptions rather than directly usable tool invocations. The example session flow at the end incorrectly says 'Plugin starts: pnpm dev', contradicting Phase 4's explicit instruction not to do this. | 2 / 3 |
| Workflow Clarity | The 8-phase workflow is clearly sequenced with numbered phases and sub-steps, and includes abort handling and error recovery. However, there is an internal contradiction (the example session says to start the dev server; Phase 4 says never to), and validation checkpoints are weak: there is no step to verify the generated variants actually render correctly, no validation that the FeedbackOverlay was properly integrated, and no check that cleanup succeeded completely before generating the plan. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inline. It references external files like 'DESIGN_PRINCIPLES.md' and 'design-and-refine/templates/feedback/FeedbackOverlay.tsx' but dumps hundreds of lines of interview questions, variant guidelines, feedback parsing instructions, and template content that should be in separate referenced files. The skill would benefit enormously from splitting into an overview plus detailed phase documents. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation
90% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (921 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 — Passed |
f772de4