team-ux-improve

Unified team skill for UX improvement. Systematically discovers and fixes UI/UX interaction issues including unresponsive buttons, missing feedback, and state refresh problems. Uses team-worker agent architecture with roles/ for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team ux improve".

79

Quality: 75% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./.claude/skills/team-ux-improve/SKILL.md
Quality

Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description does a good job of specifying concrete actions and providing an explicit trigger phrase. Its main weaknesses are limited natural trigger term coverage (relying heavily on the specific command 'team ux improve' rather than natural language variations) and moderate overlap risk with other frontend/UI debugging skills. The architectural details (team-worker agent, coordinator, pipeline) consume space that could be better used for additional trigger terms.

Suggestions

Add more natural trigger term variations users might say, such as 'UI bugs', 'broken buttons', 'user experience issues', 'interface not responding', 'usability problems'

Remove or minimize internal architecture details (team-worker agent, coordinator, pipeline) which don't help with skill selection and replace with user-facing trigger scenarios
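Applying both suggestions, a revised frontmatter description might look like the sketch below. The field names follow common Claude Code skill conventions, and the exact wording is illustrative rather than taken from the skill itself:

```yaml
---
name: team-ux-improve
description: >
  Systematically discovers and fixes UI/UX interaction issues: unresponsive or
  broken buttons, missing feedback, stale or unrefreshed state, interfaces not
  responding, and general usability problems. Use for "UI bugs", "user
  experience issues", "interface not responding", or the explicit trigger
  "team ux improve".
---
```

Note how the architecture details (coordinator, pipeline, team-worker agents) are dropped from the description entirely; they still belong in the skill body, just not in the text the agent matches against.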

Dimension scores

Specificity: 3 / 3. Lists multiple specific concrete actions: 'discovers and fixes UI/UX interaction issues including unresponsive buttons, missing feedback, and state refresh problems.' Also describes the architecture: 'team-worker agent architecture with roles/ for domain logic.'

Completeness: 3 / 3. Clearly answers both what ('discovers and fixes UI/UX interaction issues including unresponsive buttons, missing feedback, and state refresh problems') and when ('Triggers on "team ux improve"'), providing an explicit trigger clause.

Trigger Term Quality: 2 / 3. Includes some relevant terms like 'UX improvement', 'unresponsive buttons', 'missing feedback', 'state refresh problems', and the explicit trigger 'team ux improve'. However, it misses common natural user phrases like 'UI bugs', 'broken buttons', 'user experience', 'interface issues', or 'usability problems' that users would naturally say.

Distinctiveness / Conflict Risk: 2 / 3. The UX improvement focus is somewhat specific, but 'UI/UX interaction issues' could overlap with general frontend debugging or CSS styling skills. The explicit trigger phrase 'team ux improve' helps reduce conflict, but the broader description of fixing UI issues could still cause overlap with other frontend-related skills.

Total: 10 / 12 (Passed)

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured orchestration skill that excels at progressive disclosure and actionability, providing concrete spawn templates, CLI commands, and clear role routing. Its main weakness is the lack of explicit validation checkpoints and feedback loops between pipeline stages, which is important for a multi-step process involving code modifications. Some minor verbosity could be trimmed but overall token efficiency is reasonable for the complexity of the system.

Suggestions

Add explicit validation/gate criteria between pipeline stages (e.g., 'Coordinator verifies scan-report.md exists and contains at least one finding before spawning diagnoser')

Include a feedback loop for implementation failures: what happens when the tester finds regressions? Document the retry/rollback path explicitly
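The first suggestion can be made concrete with a small coordinator-side gate. The sketch below is hypothetical (the function name and the finding-detection heuristic are assumptions, and the real report format may differ); only the `scan-report.md` filename comes from the suggestion itself:

```python
from pathlib import Path


def gate_scan_stage(session_dir: str) -> bool:
    """Hypothetical gate: only spawn the diagnoser if the scanner
    produced a report containing at least one finding."""
    report = Path(session_dir) / "scan-report.md"
    if not report.exists():
        return False
    # Treat any markdown list item as a finding; adjust this heuristic
    # to whatever structure the real scan report uses.
    findings = [
        line
        for line in report.read_text().splitlines()
        if line.lstrip().startswith("- ")
    ]
    return len(findings) > 0
```

The same pattern generalizes to the other stage boundaries (diagnose, design, implement): each stage names its output artifact, and the coordinator checks for it before spawning the next worker.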

Dimension scores

Conciseness: 2 / 3. The content is reasonably efficient for a complex multi-agent orchestration skill, but includes some sections that could be tightened (e.g., the ASCII architecture diagram is somewhat redundant given the role registry table, and the error handling table includes obvious behaviors like 'Error with available command list').

Actionability: 3 / 3. Provides concrete, copy-paste-ready Agent() spawn templates, specific CLI commands (ccw cli --mode analysis/write), exact file paths, session directory structures, and message bus function signatures. The role router logic is explicit and unambiguous.

Workflow Clarity: 2 / 3. The pipeline stages (scan -> diagnose -> design -> implement -> test) are named and the coordinator/worker dispatch is clear, but there are no explicit validation checkpoints between stages, no feedback loops for error recovery between pipeline steps, and no clear criteria for when a stage passes to the next. For a multi-step orchestration involving code modifications, this is a gap.

Progressive Disclosure: 3 / 3. Excellent progressive disclosure: SKILL.md serves as a clear router/overview with well-signaled one-level-deep references to role specs (roles/<name>/role.md), pipeline specs, design standards, anti-patterns, and heuristics. Content is appropriately split between the overview and referenced files.

Total: 10 / 12 (Passed)
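The missing feedback loop flagged under Workflow Clarity could be documented as a simple retry-with-rollback wrapper around each stage. This is a sketch of one possible shape, not the skill's actual mechanism; the function and its callbacks are hypothetical:

```python
def run_stage_with_retry(run, verify, rollback, max_retries=2):
    """Hypothetical wrapper for one pipeline stage: run the stage,
    verify its output, and roll back then retry on failure.

    run      -- executes the stage (e.g. spawns the implementer worker)
    verify   -- returns True if the stage's output passes its gate
                (e.g. the tester found no regressions)
    rollback -- undoes the stage's changes (e.g. reverts modified files)
    """
    for _ in range(max_retries + 1):
        run()
        if verify():
            return True
        rollback()
    return False
```

With a wrapper like this, "tester finds regressions" has a defined outcome: the implementation is rolled back and retried a bounded number of times, and a final False signals the coordinator to escalate rather than silently proceed.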

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

allowed_tools_field: Warning. 'allowed-tools' contains unusual tool name(s).

Total: 10 / 11 (Passed)

Repository: catlog22/Claude-Code-Workflow (Reviewed)
