Mandatory review of any UI which looks for any blatant signs of being 'vibe-coded'
52%
Does it follow best practices?

Impact: Pending (no eval scenarios have been run)
Quality: Passed (no known issues)
Discovery
7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague and relies on informal jargon ('vibe-coded') without defining what it means or what specific actions the skill performs. It lacks explicit trigger guidance, concrete capabilities, and natural keywords that users would actually use when needing this functionality.
Suggestions
- Define what 'vibe-coded' means by listing specific anti-patterns to detect (e.g., 'inconsistent spacing, hardcoded values, missing accessibility attributes, poor component structure')
- Add a 'Use when...' clause with natural trigger terms like 'review UI', 'check frontend code', 'UI quality check', 'code review for React/Vue/etc.'
- Specify concrete actions the skill performs (e.g., 'Analyzes component structure, checks for accessibility issues, identifies inconsistent styling patterns')
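Taken together, the suggestions above might yield a description along these lines. This is a hypothetical rewrite for illustration (the skill name and exact frontmatter fields are assumed, not taken from the reviewed skill):

```yaml
name: ui-vibe-check  # hypothetical name
description: >
  Reviews UI code for common signs of low-effort, 'vibe-coded' frontend work:
  inconsistent spacing, hardcoded values, missing accessibility attributes,
  and poor component structure. Use when the user asks to 'review UI',
  'check frontend code', or run a 'UI quality check' on React, Vue, or
  similar component files.
```

A description in this shape names the anti-patterns, states the concrete actions, and carries the natural trigger terms the review found missing.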
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'mandatory review' and 'blatant signs' without specifying concrete actions. It doesn't explain what specific checks, validations, or corrections are performed. | 1 / 3 |
| Completeness | The description vaguely addresses 'what' (review UI for vibe-coded signs) but provides no explicit 'when' clause or trigger guidance. It's unclear what constitutes 'vibe-coded' or when this skill should activate. | 1 / 3 |
| Trigger Term Quality | The term 'vibe-coded' is informal jargon that users are unlikely to naturally say. There are no common trigger terms like 'UI review', 'code quality', 'frontend check', or specific file types mentioned. | 1 / 3 |
| Distinctiveness / Conflict Risk | While 'vibe-coded' is a somewhat unique term, the phrase 'review of any UI' is broad and could overlap with other UI testing, code review, or quality assurance skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill effectively identifies specific UI anti-patterns in a concise format, but lacks actionable guidance on what to do after identifying issues. The checklist approach is good, but the skill would benefit from clearer output expectations and concrete examples of both problematic and improved UI patterns.
Suggestions
- Add specific output format expectations (e.g., 'List each issue found with severity and suggested fix')
- Include before/after examples or concrete remediation guidance for each anti-pattern
- Define what 'expensive' UI looks like with positive examples, not just negatives to avoid
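As a concrete sketch of the first suggestion, the skill could instruct the agent to report findings in a fixed shape. The layout below is illustrative only; the filenames and severity labels are assumptions, not part of the reviewed skill:

```markdown
## UI Review Findings
1. **[High]** Hardcoded color values in `Button.tsx` (hypothetical file)
   - Fix: move colors into the shared theme or design-token layer
2. **[Medium]** Icon-only buttons missing accessible labels
   - Fix: add `aria-label` attributes describing each action
```

Pinning the output to a shape like this makes the review's results comparable across runs and gives the final assessment step something concrete to summarize.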
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, listing specific anti-patterns without unnecessary explanation. Every line serves a purpose and assumes Claude understands UI concepts. | 3 / 3 |
| Actionability | Provides a clear checklist of what to look for, but lacks concrete examples of good vs bad implementations, specific remediation steps, or visual/code examples of the anti-patterns. | 2 / 3 |
| Workflow Clarity | The review criteria are listed but there's no clear process for how to conduct the review, what to output, or how to prioritize findings. The final assessment step is vague ('assess whether the UI could benefit'). | 2 / 3 |
| Progressive Disclosure | For a simple, single-purpose skill under 50 lines, the content is appropriately structured as a single file with clear sections. No external references are needed. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed
Validation for skill structure: no warnings or errors.