Knowledge and patterns for effective code review visualization
Impact: 97%
Average score across 3 eval scenarios: 1.38x
Status: Passed (no known issues)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./skills/miro-code-review/SKILL.md`

Quality
Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is critically underspecified. It reads more like a category label than a functional skill description, providing no concrete actions, no trigger terms, and no guidance on when Claude should select it. It would be nearly impossible for Claude to reliably choose this skill from a pool of alternatives.
Suggestions

- Replace the abstract phrasing with specific concrete actions, e.g., 'Generates visual diff summaries, annotated code review diagrams, and review coverage charts from pull request data.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to visualize code reviews, PR diffs, review comments, or wants a graphical summary of code changes.'
- Include specific file types, tools, or formats (e.g., 'GitHub PRs', 'GitLab merge requests', '.diff files') to improve distinctiveness and reduce conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language ('knowledge and patterns', 'effective') without listing any concrete actions. It does not specify what the skill actually does: no verbs like 'generate', 'create', 'display', or 'analyze' are present. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' (no concrete actions) and 'when should Claude use it' (no 'Use when...' clause or equivalent trigger guidance). Both dimensions are very weak. | 1 / 3 |
| Trigger Term Quality | The phrase 'code review visualization' is somewhat relevant but overly abstract. It lacks natural user trigger terms like 'diff view', 'PR review', 'code comments', 'review dashboard', or specific file types/tools users would mention. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is extremely generic: 'code review visualization' could overlap with code review skills, visualization/charting skills, or dashboard skills. There are no distinct triggers to differentiate it from other skills. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is mostly conceptual explanation of code review principles and visual artifact types, with very little actionable guidance. It explains things Claude already knows (what correctness, security, and maintainability mean in code review) while failing to provide concrete instructions, API calls, or executable examples for actually creating Miro board visualizations. The layout diagram is a useful reference but insufficient on its own.
Suggestions

- Replace the 'Review Philosophy' and 'Visual Review Benefits' sections with concrete, executable examples showing how to create Miro board items (API calls, tool invocations, or specific commands).
- Add a step-by-step workflow with validation at each step, e.g.: 1. Analyze the diff; 2. Score risk; 3. Create the summary document via the Miro API; 4. Create the file table; 5. Add diagrams.
- Include at least one complete, copy-paste-ready example of creating a review visualization (e.g., a Miro API call to create a table or document with actual parameters).
- Either inline the key content from the referenced files (risk-assessment.md, review-patterns.md) as concise summaries, or describe more clearly when and how to use them.
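As a sketch of the kind of copy-paste-ready example the skill could include, the snippet below builds a Miro REST API v2 request to place a review note on a board. The endpoint path and payload shape follow Miro's public v2 sticky-notes API, but the board ID, access token, note text, and positions are hypothetical placeholders, and error handling is omitted:

```python
import json
import urllib.request

MIRO_API = "https://api.miro.com/v2"

def sticky_note_payload(content: str, x: float = 0.0, y: float = 0.0) -> dict:
    """Minimal body for POST /v2/boards/{board_id}/sticky_notes."""
    return {
        "data": {"content": content},
        "position": {"x": x, "y": y},
    }

def build_create_request(board_id: str, token: str, content: str) -> urllib.request.Request:
    """Prepare (but do not send) the request; urlopen(req) would send it."""
    body = json.dumps(sticky_note_payload(content)).encode("utf-8")
    return urllib.request.Request(
        f"{MIRO_API}/boards/{board_id}/sticky_notes",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical usage: board ID and token are placeholders.
req = build_create_request("BOARD_ID", "ACCESS_TOKEN", "Risk: high (touches auth module)")
```

An example at this level of concreteness, with real parameters filled in, is what would move the Actionability score: it tells the agent exactly which endpoint to call and what the payload must contain.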
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Heavily padded with concepts Claude already knows (what code review focuses on, benefits of visual reviews). The 'Core Concepts' and 'Review Philosophy' sections explain basic software engineering knowledge that adds no actionable value. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no API calls, no specific tool usage examples. The content describes concepts and shows an ASCII layout diagram but never instructs Claude on how to actually create anything on a Miro board. The table of artifacts is descriptive rather than instructive. | 1 / 3 |
| Workflow Clarity | There is no sequenced workflow for conducting a code review visualization. No steps are defined for how to go from code changes to a visual board. No validation checkpoints or feedback loops exist. | 1 / 3 |
| Progressive Disclosure | References to `references/risk-assessment.md` and `references/review-patterns.md` are present and one level deep, which is good. However, the main file itself contains too much conceptual padding that should either be removed or replaced with actionable content, and the references are vaguely described without clear signals of what they contain. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed.
Validation for skill structure: no warnings or errors.
b1d33ab
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.