Unified team skill for UI design team. Research -> design tokens -> audit -> implementation. Uses team-worker agent architecture with roles/ for domain logic. Coordinator orchestrates dual-track pipeline with GC loops and sync points. Triggers on "team ui design", "ui design team".
Optimize this skill with Tessl: `npx tessl skill review --optimize ./.claude/skills/team-uidesign/SKILL.md`

Quality
Discovery — 35%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is overly focused on internal architecture (agent roles, GC loops, sync points, dual-track pipeline) rather than describing user-facing capabilities. The trigger terms are unnatural command-like phrases rather than terms users would naturally use. While it identifies a clear domain (UI design), it fails to articulate concrete benefits or natural use cases that would help Claude select this skill appropriately.
Suggestions
- Replace the architectural jargon ('GC loops', 'sync points', 'team-worker agent architecture', 'coordinator') with concrete user-facing actions such as 'generates design tokens', 'audits UI components for consistency', and 'creates implementation specs from designs'.
- Add natural trigger terms users would actually say, such as 'design system', 'component library', 'UI audit', 'design tokens', 'style guide', 'visual consistency', '.figma'.
- Expand the 'Triggers on' clause into a proper 'Use when...' statement covering realistic scenarios, e.g., 'Use when the user needs to create or audit a design system, generate design tokens, or bridge design-to-implementation workflows.'
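Taken together, the suggestions point toward frontmatter along these lines. This is a sketch only; apart from the skill path seen in the review command, every value below is assumed and should be adapted to what the skill actually does:

```yaml
---
name: team-uidesign
description: >
  Runs a UI design team pipeline: researches patterns, generates design
  tokens, audits components for visual consistency, and produces
  implementation specs. Use when the user needs to create or audit a
  design system, component library, or style guide, wants design tokens
  generated, or needs to bridge design-to-implementation workflows.
---
```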
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names a domain (UI design) and mentions several actions (research, design tokens, audit, implementation), but these are listed as a pipeline rather than concrete user-facing capabilities. Terms like 'GC loops', 'sync points', and 'team-worker agent architecture' are internal implementation details rather than specific actions the skill performs for the user. | 2 / 3 |
| Completeness | The 'what' is partially addressed (research, design tokens, audit, implementation pipeline), and there is a 'Triggers on' clause that serves as a 'when' equivalent. However, the trigger terms are so narrow and unnatural that the 'when' guidance is effectively weak, and the description focuses more on internal architecture than on what it does for the user. | 2 / 3 |
| Trigger Term Quality | The explicit triggers are 'team ui design' and 'ui design team', which are unnatural phrases a user would rarely say. Natural terms like 'design system', 'component audit', 'design tokens', 'UI implementation', or 'style guide' are missing as trigger terms. The listed triggers feel like command phrases rather than natural language. | 1 / 3 |
| Distinctiveness / Conflict Risk | The very specific trigger phrases ('team ui design') reduce conflict risk, but the broad mention of 'UI design' and 'implementation' could overlap with other design or frontend skills. The architectural details (agent architecture, coordinator) add some distinctiveness but are implementation-focused rather than task-focused. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured team orchestration skill that excels at progressive disclosure and actionability, with concrete spawn templates and clear role routing. Its main weakness is workflow clarity: while the architecture is well diagrammed, the actual pipeline execution lacks explicit validation checkpoints between stages and feedback loops for rejection/retry scenarios. The content is mostly concise but would benefit from moving the session directory tree and some structural details to reference files.
Suggestions
- Add explicit validation/sync points between pipeline stages (e.g., 'Reviewer must pass audit before implementer spawns'), with feedback loops for rejection scenarios.
- Move the session directory tree to a reference file (e.g., specs/session-structure.md) and keep only a one-line summary in SKILL.md to improve conciseness.
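The first suggestion amounts to coordinator logic: a stage produces output, a reviewer audits it, and a rejection feeds back into a bounded retry loop. The sketch below is a minimal illustration only; the function names and round limit are hypothetical, not taken from the skill's role files:

```python
# Hypothetical sketch of a review gate between pipeline stages (e.g. designer
# output audited before the implementer is spawned). All names are illustrative.
MAX_GC_ROUNDS = 3  # bound rejection/retry cycles, as the skill bounds its GC loops


def run_stage_with_review(produce, review, max_rounds=MAX_GC_ROUNDS):
    """Re-run `produce` with reviewer feedback until `review` passes it.

    `produce(feedback)` returns stage output; `review(output)` returns a
    (passed, feedback) pair. Raises if the reviewer never passes the output.
    """
    feedback = None
    for _ in range(max_rounds):
        output = produce(feedback)
        passed, feedback = review(output)
        if passed:
            return output
    raise RuntimeError(f"reviewer still rejecting after {max_rounds} rounds")
```

A coordinator would call this with the designer role as `produce` and the reviewer role as `review`, spawning the implementer only once a passed result is returned.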
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient and avoids explaining basic concepts, but includes some structural detail (like the full ASCII architecture diagram and session directory tree) that could be trimmed or moved to reference files. The worker spawn template and role registry are useful but borderline verbose for a SKILL.md overview. | 2 / 3 |
| Actionability | The skill provides concrete, executable guidance: the role router logic is explicit, the worker spawn template includes a copy-paste-ready Agent() call with all parameters, CLI tools are named with specific flags, and the message bus function signature is given. The role registry links directly to role files. | 3 / 3 |
| Workflow Clarity | The architecture diagram and role router show the dispatch flow clearly, and GC loop limits are specified. However, the multi-step pipeline (research -> design tokens -> review -> implementation) lacks explicit validation checkpoints between stages, and there's no clear feedback loop for when a reviewer rejects designer output or when GC rounds complete. Error handling is listed but not integrated into the workflow sequence. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure: SKILL.md serves as a clear router/overview, with all domain logic delegated to one-level-deep references (roles/<role>/role.md and specs/*.md). References are well-signaled with relative links and organized in clear tables. Content is appropriately split between overview and detail files. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | 10 / 11 Passed | |
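The one warning flags unusual names in the `allowed-tools` frontmatter field. A typical remedy is to restrict the field to canonical tool names; the values below are illustrative, not the skill's actual requirements:

```yaml
allowed-tools: Read, Write, Edit, Glob, Grep, Bash, Task
```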