# Skill review: miro-platform

> Comprehensive guide to Miro as a visual collaboration platform. Covers canvas features, content types, AI capabilities, and enterprise use cases. For MCP tool documentation, see the miro plugin's skill (miro-mcp).
**Quality:** 23% — does it follow best practices?
**Impact:** 2.76x average score across 3 eval scenarios
**Advisory:** Suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/miro-platform/SKILL.md`

## Quality
### Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies the domain (Miro) and lists broad topic categories but lacks concrete actions, natural trigger terms, and an explicit 'Use when...' clause. The cross-reference to miro-mcp is helpful for disambiguation but the description reads more like a documentation summary than a skill selection guide.
Suggestions:

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Miro features, whiteboard collaboration, creating boards, sticky notes, or visual brainstorming workflows.'
- Replace category labels with specific concrete actions, e.g., 'Explains how to create and organize boards, use sticky notes and shapes, set up templates, leverage Miro AI for content generation, and configure enterprise settings.'
- Include natural trigger terms users would say, such as 'whiteboard', 'sticky notes', 'diagrams', 'brainstorming', 'board templates', and 'flowcharts'.
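Applied together, these suggestions might produce a description like the following sketch. The frontmatter shape is an assumption based on the common SKILL.md convention; field names may differ in your skill spec:

```markdown
---
name: miro-platform
description: >
  Explains how to create and organize Miro boards, use sticky notes,
  shapes, and frames, set up board templates, leverage Miro AI for
  content generation, and configure enterprise settings. Use when the
  user asks about Miro features, whiteboards, sticky notes, diagrams,
  flowcharts, brainstorming, or board templates. For MCP tool
  documentation, see the miro plugin's skill (miro-mcp).
---
```

Note how the rewrite leads with concrete actions, embeds natural trigger terms, and keeps the miro-mcp cross-reference for disambiguation.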
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Miro visual collaboration platform) and some broad areas (canvas features, content types, AI capabilities, enterprise use cases), but these are category labels rather than concrete actions like 'create boards', 'add sticky notes', or 'manage templates'. | 2 / 3 |
| Completeness | Describes what the skill covers at a high level but has no explicit 'Use when...' clause or equivalent trigger guidance. The description reads more like a table of contents than actionable selection criteria. Per rubric guidelines, a missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Miro', 'visual collaboration', 'canvas', and 'AI capabilities', which are somewhat relevant, but misses natural user terms like 'whiteboard', 'sticky notes', 'diagrams', 'brainstorming', 'board', or 'flowchart' that users would commonly say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Miro' specifically and the cross-reference to 'miro-mcp' for MCP tools help distinguish it somewhat, but 'visual collaboration platform' and 'AI capabilities' are broad enough to potentially overlap with other collaboration or diagramming skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
### Implementation — 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads as a product marketing overview rather than an actionable guide for Claude. While the progressive disclosure structure is excellent with well-organized references to deeper content, the body itself contains no executable guidance, concrete steps, or information that Claude wouldn't already know. The content would benefit greatly from being rewritten to focus on what Claude should actually do with Miro rather than describing what Miro is.
Suggestions:

- Replace the descriptive/marketing content with actionable guidance: what should Claude actually do when a user asks about Miro? Include specific decision trees or response patterns.
- Add concrete examples of how Claude should handle common Miro-related requests, such as recommending templates for specific use cases or explaining how to set up integrations.
- Remove general product knowledge (user counts, taglines, feature lists) that Claude already knows or that doesn't help Claude perform a specific task.
- If this skill is meant to be purely informational context, add explicit instructions for when and how Claude should use this information in conversations.
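As a sketch of what 'actionable guidance' in the skill body could look like, the request categories and file references below are illustrative (the reference filenames come from this review; the routing rules themselves are hypothetical):

```markdown
## Handling Miro requests

1. User wants to brainstorm → suggest a sticky-note board and
   grouping related ideas into frames; see content-types.md.
2. User wants a process or system diagram → suggest flowchart
   shapes and connectors; see content-types.md.
3. User asks about AI-generated content → see ai-capabilities.md
   before answering; do not guess at feature availability.
4. User asks about SSO, provisioning, or governance → treat as an
   enterprise-admin question; see enterprise-use-cases.md.
```

A structure like this gives the agent a decision path per request type instead of a product description.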
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is largely informational/marketing material that Claude already knows or can infer. Phrases like 'Miro is the visual collaboration platform for every team' and '100M+ users across 250K+ organizations' are promotional filler. The tables of content types and enterprise use cases describe general knowledge rather than providing actionable instructions Claude wouldn't already have. | 1 / 3 |
| Actionability | There are no concrete commands, executable code, specific steps, or copy-paste-ready guidance anywhere in the content. Everything is descriptive ('Miro boards contain diverse content types') rather than instructive. The skill reads like a product overview page, not an actionable guide. | 1 / 3 |
| Workflow Clarity | There are no multi-step workflows, sequences, or validation checkpoints. The enterprise use cases section lists patterns but provides no steps for executing any of them. The design-to-code section mentions a workflow but only lists bullet points without sequencing or validation. | 1 / 3 |
| Progressive Disclosure | The content is well-structured as an overview with clear, one-level-deep references to separate files (content-types.md, ai-capabilities.md, design-to-code.md, enterprise-use-cases.md). Each section provides a brief summary and points to the appropriate reference document with clear signaling. | 3 / 3 |
| Total | | 6 / 12 — Passed |
### Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 checks passed. Validation for skill structure: no warnings or errors.
`b1d33ab`
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.