# Knowledge and patterns for effective code review visualization

Overall score: 36

- **Quality: 22%** — Does it follow best practices?
- **Impact: Pending** — No eval scenarios have been run.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./claude-plugins/miro-review/skills/code-review/SKILL.md`

## Discovery
**Score: 22%.** Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague to be effective for skill selection. It fails to specify concrete actions the skill performs and provides no guidance on when Claude should select it. The phrase 'knowledge and patterns' is abstract fluff that doesn't help distinguish this skill from others.
### Suggestions

- Replace vague language with specific actions (e.g., 'Generates visual diagrams of code review feedback, creates summary charts of review comments, visualizes diff coverage').
- Add an explicit 'Use when...' clause with trigger terms such as 'visualize review', 'code review diagram', 'PR feedback chart', and 'review summary'.
- Specify the output formats or visualization types to distinguish this skill from other code review or visualization skills.
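To make the first two suggestions concrete, a sharpened description could look like the following SKILL.md frontmatter. This is a hypothetical sketch: the skill name and trigger phrasing are illustrative, not taken from the skill under review.

```yaml
---
name: code-review
description: >
  Generates visual code review artifacts on a Miro board: summary documents,
  file-change tables, and architecture diagrams for a pull request. Use when
  the user asks to visualize a review, create a code review diagram, chart
  PR feedback, or build a review summary board.
---
```

Note how the rewrite names concrete outputs (documents, tables, diagrams) and embeds the trigger terms an agent would match against.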
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'knowledge and patterns' and 'effective' without listing any concrete actions. It doesn't specify what the skill actually does (e.g., generate diagrams, create reports, highlight issues). | 1 / 3 |
| Completeness | The description only vaguely hints at 'what' (something about code review visualization) and completely lacks any 'when' guidance or explicit triggers for when Claude should use this skill. | 1 / 3 |
| Trigger Term Quality | Contains 'code review' and 'visualization', which are relevant keywords users might say, but lacks common variations like 'PR review', 'diff', 'review comments', 'code changes', or specific visualization types. | 2 / 3 |
| Distinctiveness / Conflict Risk | While 'code review visualization' is somewhat specific, the vague phrasing could overlap with general code review skills, documentation skills, or other visualization tools. The lack of specific triggers increases conflict risk. | 2 / 3 |
| **Total** | | **6 / 12 (Passed)** |
## Implementation

**Score: 22%.** Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a conceptual overview than actionable guidance. It explains what code reviews are and why visual artifacts help, but fails to provide concrete instructions, API calls, or executable examples for actually creating Miro board visualizations. The workflow is absent, leaving Claude without clear steps to follow.
### Suggestions

- Add concrete, executable code examples showing how to create Miro items (tables, documents, diagrams) using the Miro API, with specific coordinates from the layout diagram.
- Define a clear step-by-step workflow (1. analyze the PR; 2. create the summary doc at position X; 3. create the file table; 4. add relevant diagrams), with validation checkpoints.
- Remove the 'Review Philosophy' and 'Visual Review Benefits' sections; Claude already understands these concepts.
- Include a complete example showing the input (a sample PR or code change) and the expected output (the Miro API calls or resulting board structure).
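As a minimal sketch of what such an executable example could contain, the snippet below builds request specs for Miro's REST API v2 sticky-note endpoint, placing one note per review comment at explicit coordinates. The board ID is a placeholder, the helper names are our own, and a real skill would still need an authenticated HTTP client to send these requests.

```python
# Sketch: build Miro REST API v2 request specs for review artifacts.
# Assumes the v2 endpoint POST /v2/boards/{board_id}/sticky_notes, which
# takes a JSON body with "data" (content) and "position" (x/y) fields.
# BOARD_ID and the 220px spacing are illustrative placeholders.
import json

MIRO_API = "https://api.miro.com/v2"

def sticky_note_request(board_id: str, content: str, x: float, y: float) -> dict:
    """Request spec for one sticky note (e.g., one review comment) at (x, y)."""
    return {
        "method": "POST",
        "url": f"{MIRO_API}/boards/{board_id}/sticky_notes",
        "body": {
            "data": {"content": content},
            "position": {"x": x, "y": y},
        },
    }

def review_summary_requests(board_id: str, comments: list[str]) -> list[dict]:
    """Lay review comments out in a vertical column, 220px apart."""
    return [
        sticky_note_request(board_id, text, x=0, y=i * 220)
        for i, text in enumerate(comments)
    ]

requests_out = review_summary_requests(
    "BOARD_ID", ["Missing null check in parser", "Rename helper for clarity"]
)
print(json.dumps(requests_out[0]["body"], indent=2))
```

A validation checkpoint would then confirm each response contains a created item ID before moving to the next workflow step.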
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The 'Review Philosophy' section explains concepts Claude already knows (what correctness, security, maintainability mean). The 'Visual Review Benefits' section is also somewhat obvious. However, the tables and layout diagram add value without excessive padding. | 2 / 3 |
| Actionability | The skill describes concepts and benefits but provides no executable code, API calls, or concrete commands for creating Miro artifacts. The layout diagram shows positioning but doesn't show how to actually create items at those coordinates. | 1 / 3 |
| Workflow Clarity | There is no clear workflow or sequence of steps for conducting a code review visualization. The content lists concepts and artifact types but doesn't explain when to do what, in what order, or how to validate the output. | 1 / 3 |
| Progressive Disclosure | References to external files (risk-assessment.md, review-patterns.md) are mentioned, which is good. However, the main content includes conceptual material that could be trimmed, and the references aren't clearly signaled with navigation context. | 2 / 3 |
| **Total** | | **6 / 12 (Passed)** |
## Validation

**Score: 81%.** Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Skill structure validation: 13 / 16 checks passed. The remaining three checks raised warnings:
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| **Total** | | **13 / 16 (Passed)** |