Get structured design feedback on usability, hierarchy, and consistency. Trigger with "review this design", "critique this mockup", "what do you think of this screen?", or when sharing a Figma link or screenshot for feedback at any stage from exploration to final polish.
Does it follow best practices?

Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./design/skills/design-critique/SKILL.md
```

Quality
Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description with excellent trigger terms and completeness. It clearly communicates when to use the skill with natural language triggers and contextual cues. The main weakness is that the 'what' could be more specific about the concrete actions or outputs the skill produces beyond 'structured design feedback.'
Suggestions
Expand the capability description with more concrete actions, e.g., 'Identifies usability issues, evaluates visual hierarchy, checks design consistency, and provides actionable improvement suggestions' rather than just 'get structured design feedback.'
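As a sketch of that suggestion, the skill's frontmatter description might read as follows. This is hypothetical wording, not the skill's actual file; the field names follow common SKILL.md conventions:

```yaml
---
name: design-critique
description: >
  Identifies usability issues, evaluates visual hierarchy, checks design
  consistency, and provides actionable improvement suggestions. Trigger with
  "review this design", "critique this mockup", "what do you think of this
  screen?", or when a Figma link or screenshot is shared for feedback.
---
```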
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (design feedback) and mentions specific aspects like 'usability, hierarchy, and consistency,' but doesn't list multiple concrete actions beyond 'get structured design feedback.' It could be more specific about what the feedback entails (e.g., annotating issues, suggesting improvements, scoring accessibility). | 2 / 3 |
| Completeness | Clearly answers both 'what' (structured design feedback on usability, hierarchy, and consistency) and 'when' (explicit trigger phrases and contexts like sharing a Figma link or screenshot at any stage from exploration to final polish). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger phrases: 'review this design', 'critique this mockup', 'what do you think of this screen?', 'Figma link', 'screenshot for feedback.' These are phrases users would naturally say when seeking design feedback. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche around design critique and feedback, with distinct triggers like 'mockup', 'Figma link', 'screen', and specific design aspects. This is unlikely to conflict with other skills such as general code review or document analysis. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured design critique framework with a useful output template, but it over-explains concepts Claude already knows (design feedback best practices, basic usability heuristics). It would benefit from a concrete example showing a completed critique and tighter focus on what's truly novel—the specific output format and connector integrations. The workflow could be more explicit about the decision tree for different input types.
Suggestions
Add a concrete before/after example showing a real design input and the completed critique output to improve actionability.
Remove or significantly trim the 'How to Give Feedback' section and critique framework questions—Claude already knows design principles; focus on the specific output format and severity classification system instead.
Add explicit workflow steps: 1) Check for connectors, 2) Obtain design (with branching for Figma URL/file/description), 3) Determine stage, 4) Apply framework, 5) Generate output.
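One way the suggested workflow section could appear in the skill file. This is a hypothetical sketch of the five steps named above, not the skill's actual content:

```markdown
## Workflow

1. Check whether a Figma connector is available.
2. Obtain the design: pull it via the connector for a Figma URL, read an
   attached file, or work from the written description.
3. Determine the design stage (exploration, iteration, or final polish).
4. Apply the critique framework to the design.
5. Generate output using the critique template.
```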
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably structured but includes some content Claude already knows well, such as how to give good design feedback (be specific, explain why, suggest alternatives) and basic design critique heuristics. The critique framework and 'How to Give Feedback' sections explain concepts Claude inherently understands. | 2 / 3 |
| Actionability | The output template is concrete and copy-paste ready, which is good. However, the critique framework itself is more a checklist of questions than executable guidance: there are no concrete examples of actual design critiques, and no sample input/output pairs showing what a completed critique looks like for a real design. | 2 / 3 |
| Workflow Clarity | The workflow is implicit: receive design, apply framework, produce output. The numbered critique dimensions provide structure, but there is no explicit sequencing of steps (e.g., first check whether the Figma connector is available, then pull the design, then apply the framework). The input-handling logic (Figma URL vs. file vs. description) could be more explicitly sequenced. | 2 / 3 |
| Progressive Disclosure | The reference to CONNECTORS.md is good, but the skill itself is somewhat monolithic: the critique framework, feedback guidelines, output template, and tips are all inline. The framework dimensions and output template could be split out, though the file isn't excessively long. The structure is decent, with clear headers, but everything lives in one file. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
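To clear the frontmatter_unknown_keys warning, unrecognized top-level keys can be nested under a metadata block. A hypothetical sketch (the offending key name here is invented for illustration):

```yaml
---
name: design-critique
description: Get structured design feedback on usability, hierarchy, and consistency.
# A top-level key the validator doesn't recognize (e.g. a hypothetical
# 'author' field) moves under 'metadata' instead:
metadata:
  author: example-team
---
```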