Generate interactive anatomy quizzes for medical education with multiple.
33% — Does it follow best practices?
Impact: Pending — no eval scenarios have been run
Issues: Passed — no known issues
Optimize this skill with Tessl:
npx tessl skill review --optimize "./scientific-skills/Academic Writing/anatomy-quiz-master/SKILL.md"

Quality
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is clearly truncated mid-sentence ('with multiple.'), making it incomplete and grammatically broken. While it identifies a reasonably specific domain (anatomy quizzes for medical education), it fails to provide a 'Use when...' clause and lacks sufficient detail about its capabilities. The truncation severely undermines its usefulness for skill selection.
Suggestions
- Complete the truncated sentence and fully describe capabilities (e.g., 'Generate interactive anatomy quizzes with multiple-choice questions, labeled diagrams, and drag-and-drop exercises for medical education').
- Add an explicit 'Use when...' clause with trigger terms like 'anatomy quiz', 'medical exam prep', 'study anatomy', 'body systems', 'anatomical structures'.
- Include specific output formats or quiz types to improve distinctiveness (e.g., 'multiple-choice, fill-in-the-blank, diagram labeling').
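Taken together, the suggestions above might yield frontmatter like the following. This is a sketch only — the exact wording, quiz types, and trigger phrases are illustrative, not the skill's actual content:

```yaml
---
name: anatomy-quiz-master
description: >
  Generate interactive anatomy quizzes for medical education, including
  multiple-choice, fill-in-the-blank, and diagram-labeling questions.
  Use when the user asks for an anatomy quiz, medical exam prep, help
  studying body systems, or practice identifying anatomical structures.
---
```

A folded block scalar (`>`) keeps a long description readable in the source while collapsing to a single string for discovery matching.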
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (anatomy quizzes, medical education) and one action (generate interactive anatomy quizzes), but the description appears truncated ('with multiple.' ends abruptly) and lacks comprehensive detail about specific capabilities. | 2 / 3 |
| Completeness | The description partially addresses 'what' (generate anatomy quizzes) but is clearly truncated and incomplete. There is no 'Use when...' clause or any explicit trigger guidance, and the sentence itself is grammatically incomplete ('with multiple.' trails off). | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'anatomy', 'quizzes', 'medical education', and 'interactive', but is missing common variations users might say such as 'flashcards', 'study', 'exam prep', 'anatomy test', 'body parts', or specific anatomy terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'anatomy' and 'quizzes' provides some distinctiveness, but 'interactive quizzes' and 'medical education' could overlap with other quiz-generation or medical education skills. The truncated description weakens its ability to clearly define its niche. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is excessively verbose with significant boilerplate that Claude doesn't need (error handling policies, response templates, input validation instructions). While it contains domain-specific content about anatomy quiz generation that is genuinely useful, the signal-to-noise ratio is poor. The code examples appear concrete but may not be truly executable, and the workflow lacks integrated validation checkpoints for quiz content accuracy.
Suggestions
- Cut the generic boilerplate sections (Output Requirements, Response Template, Input Validation, Error Handling) entirely — these are standard Claude behaviors that waste tokens.
- Remove redundant cross-references ('See ## Usage above for related details') and consolidate the workflow into a single clear sequence with validation checkpoints after quiz generation.
- Move the detailed region tables, clinical scenario examples, and quality checklists into separate referenced files to reduce SKILL.md to an actionable overview.
- Remove stdlib modules (json, random, argparse) from Dependencies — Claude knows these are built-in and listing them adds no value.
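The validation checkpoint suggested above could be sketched as a small post-generation check. This is a minimal sketch only — the item field names (`question`, `options`, `answer`) are assumed for illustration, since the skill's actual quiz JSON schema isn't shown in this report:

```python
import json

# Illustrative schema: each quiz item is expected to carry these fields.
# These names are hypothetical, not the skill's documented schema.
REQUIRED_FIELDS = {"question", "options", "answer"}


def validate_quiz(quiz_json: str) -> list[str]:
    """Return a list of problems found; an empty list means the quiz passed."""
    try:
        items = json.loads(quiz_json)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for i, item in enumerate(items):
        missing = REQUIRED_FIELDS - item.keys()
        if missing:
            problems.append(f"item {i}: missing {sorted(missing)}")
        elif item["answer"] not in item["options"]:
            problems.append(f"item {i}: answer not among options")
    return problems


sample = json.dumps([
    {"question": "Which bone is the longest in the human body?",
     "options": ["Femur", "Tibia", "Humerus"], "answer": "Femur"},
])
print(validate_quiz(sample))  # prints []
```

Running a check like this as an explicit workflow step (rather than leaving accuracy to a trailing checklist) is what "validation checkpoints after quiz generation" means in practice.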
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~300+ lines. Contains massive amounts of redundant sections (e.g., 'See ## Usage above' / 'See ## Workflow above' cross-references to sections that follow), boilerplate scaffolding (Output Requirements, Response Template, Input Validation, Error Handling) that Claude already knows how to do, and repeated information. Dependencies list includes Python stdlib modules (json, random, argparse) which is unnecessary padding. | 1 / 3 |
| Actionability | Provides concrete Python code examples with specific API calls (QuizGenerator, AdaptiveEngine) and CLI commands with parameters, but these are likely not executable since they reference modules (scripts/quiz_generator.py, scripts/adaptive.py) whose actual existence and API are unverified. The code reads more like aspirational pseudocode dressed as real imports. The CLI examples and parameter table are concrete and useful. | 2 / 3 |
| Workflow Clarity | There is a numbered workflow (steps 1-5) but it is generic and abstract ('Confirm the user objective', 'Validate that the request matches the documented scope'). The Example Usage section has a more concrete 4-step run plan, but lacks validation checkpoints for the generated quiz content (e.g., verifying anatomical accuracy of output, checking JSON schema validity). The Quality Checklist is helpful but is a review checklist, not an integrated workflow validation step. | 2 / 3 |
| Progressive Disclosure | References to external files (references/ directory, scripts/ directory) are well-signaled with clear listings. However, the SKILL.md itself is monolithic with enormous inline content that should be split into separate files — the detailed region tables, clinical scenario examples, adaptive learning code, and quality checklists could all be in referenced documents. The document structure has many sections but poor organization with forward/backward references ('See ## Usage above' when Usage is below). | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
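One way to clear the `frontmatter_unknown_keys` warning, following its own suggestion, is to move the offending keys under `metadata`. The key names below (`author`, `version`) are hypothetical, since the report doesn't identify which keys triggered the warning:

```yaml
# Before: unknown top-level keys trigger frontmatter_unknown_keys
---
name: anatomy-quiz-master
description: ...
author: jane-doe     # hypothetical unknown key
version: 1.2.0       # hypothetical unknown key
---

# After: nested under metadata, the warning no longer applies
---
name: anatomy-quiz-master
description: ...
metadata:
  author: jane-doe
  version: 1.2.0
---
```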