Generate a visual anatomy annotation in Figma showing numbered markers on a component instance with an attribute table. Use when the user mentions "anatomy", "anatomy annotation", "component anatomy", "create anatomy", or wants to annotate a component's structural elements.
Overall score: 84

Quality: 81% — Does it follow best practices?
Impact: Pending — No eval scenarios have been run. Advisory: suggest reviewing before use.

Quality

Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates what the skill does (generate visual anatomy annotations with numbered markers and attribute tables in Figma) and when to use it (with explicit trigger terms). The description is concise, uses third person voice, and occupies a distinct niche that minimizes conflict risk with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Generate a visual anatomy annotation', 'showing numbered markers on a component instance', 'with an attribute table'. These are concrete, specific capabilities. | 3 / 3 |
| Completeness | Clearly answers both what ('Generate a visual anatomy annotation in Figma showing numbered markers on a component instance with an attribute table') and when ('Use when the user mentions "anatomy", "anatomy annotation"...') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'anatomy', 'anatomy annotation', 'component anatomy', 'create anatomy', 'annotate a component's structural elements'. Good coverage of variations a user might naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — the combination of Figma, anatomy annotation, numbered markers, and component instances creates a very clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | Passed | 12 / 12 |
Implementation — 62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable with excellent workflow clarity — every step has executable code, clear sequencing, and validation checkpoints. However, it is severely bloated: the inline JavaScript blocks are enormous and heavily duplicated between steps, making the skill consume far more tokens than necessary. The content would benefit greatly from extracting the code into referenced files and deduplicating shared logic (marker placement, table filling, font loading).
Suggestions
- Extract the large JavaScript code blocks (Step 3 extraction, Step 8 composition artwork, Step 8b child sections) into separate referenced files (e.g., anatomy/extract.js, anatomy/composition.js, anatomy/child-section.js) and keep only the orchestration logic and variable substitution instructions in SKILL.md.
- Deduplicate the marker placement, collision avoidance, table filling, and font loading code that is repeated nearly identically between Step 8 and Step 8b into a shared utility referenced once.
- Move the MCP adapter table to a shared reference file (e.g., mcp-adapter.md), since it appears to be reusable across multiple skills, and reference it with a one-line link.
- Trim the detailed field documentation for the extraction output (the long bullet list after Step 3) — Claude can infer field semantics from the extraction code itself; keep only non-obvious fields or behavioral notes.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This skill is extremely verbose at ~1000+ lines, with massive inline JavaScript code blocks that could be referenced as external files. Much of the code is duplicated between Step 8 and Step 8b (marker placement, collision avoidance, table filling). The extraction script alone is hundreds of lines inline. The MCP adapter table, while useful, adds significant length that could live in a separate reference file. | 1 / 3 |
| Actionability | The skill provides fully executable JavaScript code blocks with specific variable replacements clearly marked (e.g., __NODE_ID__, __FRAME_ID__). Every step has concrete, copy-paste-ready code with explicit MCP calls, and the workflow leaves no ambiguity about what to execute. | 3 / 3 |
| Workflow Clarity | The workflow has a clear numbered checklist with explicit validation steps (Step 10 visual validation with up to 3 fix iterations), error recovery paths (e.g., MCP connection failures, missing template key), and conditional branching (cross-file vs same-file destination, figma-console vs figma-mcp). The progress checklist at the top is a strong addition. | 3 / 3 |
| Progressive Disclosure | The skill references external files appropriately (anatomy/agent-anatomy-instruction.md, uspecs.config.json) but fails to extract the massive inline code blocks into separate files. The 500+ line extraction script, the 200+ line Step 8 script, and the 200+ line Step 8b script should be in referenced files, with SKILL.md providing only an overview and orchestration logic. | 2 / 3 |
| Total | Passed | 9 / 12 |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Result: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1683 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |