Skill description under review: "Generate USMLE Step 1/2 style clinical cases with patient history, physical."
Optimize this skill with Tessl:

```
npx tessl skill review --optimize "./scientific-skills/Academic Writing/usmle-case-generator/SKILL.md"
```

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific medical-education niche (USMLE clinical cases) but appears truncated and is critically incomplete. It includes no explicit trigger guidance ('Use when...') and doesn't fully enumerate the capabilities or components of generated cases. The description would benefit from completion and explicit usage triggers.
Suggestions

- Add a 'Use when...' clause with trigger terms like 'USMLE prep', 'board exam practice', 'clinical vignette', 'medical case study', or 'Step 1/Step 2 questions'
- Complete the truncated description to list all case components (e.g., 'labs, imaging, differential diagnosis, answer choices with explanations')
- Include common user phrasings like 'practice questions', 'medical board prep', 'NBME-style', or 'exam review' to improve trigger term coverage
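Taken together, the suggestions above amount to a rewritten frontmatter description. A sketch of what that could look like, assuming the standard SKILL.md frontmatter layout (the `name` value and exact field names are illustrative, not taken from the skill under review):

```yaml
---
name: usmle-case-generator
description: >
  Generate USMLE Step 1/2 style clinical cases with patient history,
  physical exam, labs, imaging, a board-style question stem, and answer
  choices with explanations. Use when the user asks for USMLE prep,
  board exam practice, clinical vignettes, NBME-style practice
  questions, medical case studies, or exam review.
---
```

The 'Use when...' sentence carries the trigger terms the reviewer flagged as missing, while the first sentence enumerates every case component instead of trailing off after "physical."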
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (USMLE Step 1/2 clinical cases) and mentions some components (patient history, physical), but the description appears truncated and doesn't list comprehensive actions like creating differential diagnoses, lab values, or answer explanations. | 2 / 3 |
| Completeness | Describes what it does (generate clinical cases) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the truncated description and weak 'what' push it to 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'USMLE', 'Step 1/2', 'clinical cases', 'patient history', and 'physical', but misses common variations users might say, like 'board exam', 'practice questions', 'vignettes', 'medical exam prep', or 'NBME'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The USMLE Step 1/2 focus provides some distinctiveness from general medical or educational skills, but could overlap with other medical education, case generation, or exam prep skills without clearer boundaries. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe verbosity and redundancy, with circular references between sections and extensive generic boilerplate that obscures the actual USMLE case generation guidance. While it provides some concrete CLI examples and a useful case structure outline, the core medical education workflow is buried under template content. The skill would benefit from aggressive trimming to focus on the unique aspects of generating clinically accurate USMLE-style cases.
Suggestions

- Remove circular section references ('See ## Features above') and consolidate duplicate content: the usage examples, workflow, and implementation details appear multiple times with slight variations
- Move boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) to a separate operational document or remove them entirely
- Add specific guidance on ensuring clinical accuracy in generated cases: what validation steps to take, what sources to cross-reference, and how to structure medically plausible distractors
- Show a complete, executable example of case generation, including the actual output structure rather than truncated placeholders
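The last two suggestions imply the skill should define a complete case schema and check it before emitting a case. A minimal Python sketch of what that could look like; the class, field names, and validation rules are hypothetical illustrations, not the skill's actual schema or the internals of `scripts/main.py`:

```python
from dataclasses import dataclass

@dataclass
class USMLECase:
    """Hypothetical full output structure for one generated case."""
    vignette: str                 # patient history + physical exam narrative
    labs: dict[str, str]          # lab name -> value with units
    question: str                 # board-style question stem
    choices: dict[str, str]       # answer letter -> choice text
    correct: str                  # letter of the correct choice
    explanations: dict[str, str]  # letter -> why it is right or wrong

    def validate(self) -> list[str]:
        """Structural checks a generator could run before emitting a case."""
        problems = []
        if self.correct not in self.choices:
            problems.append("correct answer letter is not among the choices")
        if len(self.choices) < 4:
            problems.append("USMLE items typically offer at least 4 choices")
        missing = set(self.choices) - set(self.explanations)
        if missing:
            problems.append(f"choices missing explanations: {sorted(missing)}")
        return problems

case = USMLECase(
    vignette="A 58-year-old man presents with crushing substernal chest pain...",
    labs={"Troponin I": "2.3 ng/mL (elevated)"},
    question="Which of the following is the most likely diagnosis?",
    choices={"A": "Acute MI", "B": "Pericarditis", "C": "PE", "D": "GERD"},
    correct="A",
    explanations={
        "A": "ST elevations plus elevated troponin support acute MI",
        "B": "No friction rub or positional pain",
        "C": "No hypoxia or thromboembolic risk factors",
        "D": "Pain is exertional, not postprandial",
    },
)
print(case.validate())  # → [] when the case is structurally complete
```

Structural checks like these do not verify clinical accuracy, but they do guarantee that every distractor ships with an explanation and that no case is emitted with a dangling answer key.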
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with massive redundancy: sections reference each other circularly ('See ## Features above', 'See ## Usage above'), duplicate information appears multiple times (example usage shown twice, workflow explained multiple times), and extensive boilerplate (risk assessment tables, security checklists, lifecycle status) adds little value for the actual task of generating USMLE cases. | 1 / 3 |
| Actionability | Provides concrete CLI commands and parameter tables, which are helpful, but the actual case generation logic is not shown: we only see how to invoke scripts/main.py without understanding what it does internally. The example output is useful but incomplete (truncated with '[History, physical, labs, ECG findings...]'). | 2 / 3 |
| Workflow Clarity | Multiple workflow sections exist, but they're generic boilerplate rather than specific to USMLE case generation. The actual workflow for creating a medical case (selecting a condition, building the vignette, crafting the question) is not explained. The 'Example run plan' is generic and doesn't address validation of medical accuracy. | 2 / 3 |
| Progressive Disclosure | References external files appropriately (references/topics.json, references/case_templates.json, etc.), but the main document is bloated with redundant sections and boilerplate. Content that should be in separate files (security checklist, risk assessment, evaluation criteria) clutters the main skill file. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure (10 / 11 passed):
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
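The one warning can typically be cleared by nesting unrecognized keys under `metadata`, as the message suggests. A sketch, assuming the validator accepts arbitrary keys there (`author` is a hypothetical example of an unknown key, not the key actually flagged):

```yaml
# Before: unrecognized top-level key triggers frontmatter_unknown_keys
author: Jane Doe

# After: unrecognized keys nested under metadata
metadata:
  author: Jane Doe
```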