miro-code-review

Knowledge and patterns for effective code review visualization

Overall score: 37

Quality: 3% (does it follow best practices?)
Impact: 97% (1.38x), average score across 3 eval scenarios
Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/miro-code-review/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is extremely vague and provides almost no actionable information for skill selection. It lacks concrete actions, natural trigger terms, explicit 'when to use' guidance, and any distinguishing details that would help Claude choose it over other skills. It reads more like a category label than a functional skill description.

Suggestions:

- Replace the abstract phrasing with specific, concrete actions, e.g., 'Generates visual diff summaries, annotated code review diagrams, and review coverage charts from pull request data.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to visualize code reviews, display PR diffs graphically, create review dashboards, or summarize code changes visually.'
- Include specific file types, tools, or formats to increase distinctiveness, e.g., mention GitHub PRs, GitLab merge requests, diff files, or specific output formats such as HTML reports or SVG diagrams.
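Taken together, those suggestions might produce frontmatter along these lines. This is an illustrative sketch only; the wording and field names are assumptions, not the maintainer's actual description:

```yaml
# Hypothetical SKILL.md frontmatter -- illustrative wording only
name: miro-code-review
description: >
  Creates code review visualizations on Miro boards: visual diff summaries,
  annotated review diagrams, and per-file risk tables built from pull
  request data. Use when the user asks to visualize code reviews, display
  PR diffs on a Miro board, create review dashboards, or summarize
  GitHub/GitLab changes visually.
```

A description in this shape answers both 'what does this do' and 'when should Claude use it', which is exactly what the dimension table below penalizes.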

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description uses vague, abstract language ('knowledge and patterns', 'effective') without listing any concrete actions. It does not specify what the skill actually does: no verbs like 'generate', 'create', 'display', or 'analyze' are present. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause or any explicit trigger guidance, and the 'what' is extremely vague. | 1 / 3 |
| Trigger Term Quality | The phrase 'code review visualization' is somewhat relevant but overly abstract. It lacks natural user trigger terms like 'diff view', 'PR review', 'code comments', 'review dashboard', or specific file types/tools users would mention. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is generic enough to overlap with code review skills, visualization/charting skills, or general coding assistance skills. 'Knowledge and patterns' is especially non-distinctive and could apply to almost anything. | 1 / 3 |
| Total | | 4 / 12 |

Passed

Implementation: 7%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is largely conceptual and descriptive rather than actionable. It explains code review philosophy and visual review benefits that Claude already knows, while failing to provide any concrete commands, API calls, or executable steps for actually creating review visualizations on Miro boards. The content reads more like a blog post about code review visualization than an operational skill.

Suggestions:

- Replace the 'Core Concepts' and 'Visual Review Benefits' sections with concrete, executable examples showing how to create Miro board artifacts (e.g., specific API calls or tool invocations for creating tables, documents, and diagrams).
- Add a clear step-by-step workflow, e.g.: 1. Analyze the diff, 2. Assess risk per file, 3. Create a summary document via [specific command], 4. Create a file table via [specific command], 5. Validate the board layout.
- Include at least one complete, copy-paste-ready example showing the full process from code diff input to Miro board output, with specific tool calls and expected responses.
- Move the artifact selection table and layout guidelines into a reference file, and use the main skill body for the operational workflow with validation checkpoints.
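As a sketch of what 'concrete, executable examples' could look like in the skill, the snippet below builds sticky-note payloads from a per-file diff summary. The payload shape follows the Miro REST API v2 `sticky_notes` endpoint, but the risk thresholds, colors, and function names are illustrative assumptions, not anything the skill currently defines:

```python
# Hypothetical helper: turn a per-file diff summary into Miro sticky-note
# payloads. The {"data": ..., "style": ..., "position": ...} shape follows
# the Miro REST API v2 sticky_notes endpoint; thresholds, colors, and
# names are illustrative assumptions.

def risk_level(lines_changed: int) -> str:
    """Crude, illustrative risk bucket based purely on change size."""
    if lines_changed > 200:
        return "high"
    if lines_changed > 50:
        return "medium"
    return "low"

RISK_COLORS = {"high": "red", "medium": "yellow", "low": "green"}

def sticky_payloads(diff_summary: dict) -> list:
    """Build one sticky-note payload per changed file, colored by risk."""
    payloads = []
    for i, (path, lines) in enumerate(sorted(diff_summary.items())):
        level = risk_level(lines)
        payloads.append({
            "data": {"content": f"{path} ({lines} lines changed, {level} risk)"},
            "style": {"fillColor": RISK_COLORS[level]},
            # Simple single-column layout, 120px vertical spacing.
            "position": {"x": 0, "y": i * 120},
        })
    return payloads

notes = sticky_payloads({"src/app.py": 260, "README.md": 12})
```

Each payload could then be sent as an authenticated POST to the board's `sticky_notes` endpoint; keeping payload construction separate from the HTTP call is what makes this kind of workflow testable, which is the property the review is asking for.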

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Heavily padded with concepts Claude already knows (what code review focuses on, benefits of visual reviews). The 'Core Concepts' and 'Review Philosophy' sections explain basic software engineering knowledge that adds no actionable value. Much of the content describes rather than instructs. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific API calls or tool usage examples. The skill describes what artifacts exist and when to use them but never shows how to create them. The layout diagram is illustrative but not actionable without corresponding tool commands. | 1 / 3 |
| Workflow Clarity | There is no clear workflow or sequence of steps for conducting a code review visualization. No process is defined for going from code changes to visual output. No validation checkpoints or feedback loops are present. | 1 / 3 |
| Progressive Disclosure | References to external files (references/risk-assessment.md, references/review-patterns.md) are present and one level deep, which is good. However, the main file itself contains too much conceptual padding that should either be removed or moved to references, and the references aren't clearly signaled with descriptive context about what they contain. | 2 / 3 |
| Total | | 5 / 12 |

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: miroapp/miro-ai (reviewed)

