
radiology-image-quiz

Use when creating radiology educational quizzes, preparing board exam questions, or studying medical imaging cases. Generates interactive quizzes with X-ray, CT, MRI, and ultrasound images for medical education.

61

Quality

52%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/radiology-image-quiz/SKILL.md"

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger terms and completeness, clearly specifying both when to use it and what it does. The main weakness is specificity of capabilities: the description covers primarily one action (generating quizzes) rather than listing multiple concrete actions such as creating case presentations, providing diagnostic explanations, or tracking learning progress. Overall, it should perform well in skill selection thanks to its distinct niche and natural trigger terms.

Suggestions

Add more specific concrete actions beyond 'generates interactive quizzes', such as 'presents diagnostic findings, provides answer explanations with anatomical annotations, supports case-based learning scenarios'.
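As a sketch, an improved SKILL.md description along those lines might look like the following frontmatter. The wording is illustrative only, not the skill's actual metadata:

```yaml
# Hypothetical revision of the skill's frontmatter description.
name: radiology-image-quiz
description: >
  Use when creating radiology educational quizzes, preparing board exam
  questions, or studying medical imaging cases. Generates interactive quizzes
  with X-ray, CT, MRI, and ultrasound images; presents diagnostic findings,
  provides answer explanations with anatomical annotations, and supports
  case-based learning scenarios.
```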

Dimension | Reasoning | Score

Specificity

The description names the domain (radiology education) and mentions generating interactive quizzes with specific imaging modalities (X-ray, CT, MRI, ultrasound), but doesn't list multiple concrete actions beyond 'generates interactive quizzes'. It lacks detail on what the quizzes contain (e.g., answer explanations, difficulty levels, case presentations).

Score: 2 / 3

Completeness

Clearly answers both 'what' (generates interactive quizzes with medical imaging modalities) and 'when' (creating radiology educational quizzes, preparing board exam questions, studying medical imaging cases) with an explicit 'Use when' clause.

Score: 3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'radiology', 'quizzes', 'board exam questions', 'medical imaging', 'X-ray', 'CT', 'MRI', 'ultrasound', 'medical education', 'studying'. Good coverage of terms a medical student or educator would naturally use.

Score: 3 / 3

Distinctiveness / Conflict Risk

Highly distinctive niche combining radiology, medical imaging modalities, and educational quiz generation. Very unlikely to conflict with other skills given the specific domain of radiology education and board exam preparation.

Score: 3 / 3

Total: 11 / 12

Passed

Implementation

14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate that applies to any skill, not specifically to radiology quiz generation. The domain-specific content (Quick Start, Core Capabilities, CLI Usage) is buried among repetitive process management sections. There are conflicting entry points (main.py vs radiology_quiz.py), and the workflow provides no radiology-specific guidance, validation of medical accuracy, or meaningful checkpoints.

Suggestions

Remove all generic boilerplate sections (Output Requirements, Response Template, Input Validation, Error Handling, Implementation Details) that don't contain radiology-quiz-specific information, and consolidate the remaining content to eliminate duplication.

Resolve the conflicting entry points (scripts/main.py vs scripts/radiology_quiz.py) and provide a single, verified executable example with expected output.

Replace the generic workflow with radiology-quiz-specific steps: e.g., 1) Select modality and cases, 2) Generate questions, 3) Validate medical accuracy of findings/diagnoses, 4) Format output, with explicit validation checkpoints for clinical correctness.

Add a concrete end-to-end example showing input parameters and expected quiz output format (e.g., a sample generated question with answer choices, correct answer, and explanation).
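One way to make the last suggestion concrete is sketched below: a minimal quiz-item record with answer choices, correct answer, and explanation, plus a validation checkpoint. Field names, the sample case, and the validation rule are illustrative assumptions, not the skill's actual schema or API:

```python
# Sketch of one generated quiz question in the format the suggestion describes.
# Field names and clinical content are illustrative, not the skill's real output.

def make_sample_question() -> dict:
    """Return an example quiz item with choices, correct answer, and explanation."""
    return {
        "modality": "chest x-ray",
        "stem": "A 62-year-old presents with dyspnea. What is the most likely finding?",
        "choices": {
            "A": "Pneumothorax",
            "B": "Pleural effusion",
            "C": "Normal study",
            "D": "Cardiomegaly",
        },
        "correct_answer": "B",
        "explanation": (
            "Blunting of the costophrenic angle with a meniscus sign "
            "is classic for pleural effusion."
        ),
    }

def validate_question(q: dict) -> bool:
    """Checkpoint: the correct answer must be a listed choice, with an explanation."""
    return q["correct_answer"] in q["choices"] and bool(q["explanation"])

question = make_sample_question()
assert validate_question(question)
```

A checkpoint like `validate_question` is the kind of explicit validation step the workflow suggestion asks for, though real medical-accuracy review would still need a human or reference source.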

Dimension | Reasoning | Score

Conciseness

Extremely verbose and repetitive. The description is repeated verbatim in multiple sections ('When to Use', 'Key Features'). There are large boilerplate sections (Output Requirements, Response Template, Input Validation, Error Handling) that are generic process instructions Claude already knows. The 'Implementation Details' section says 'See Workflow above' then repeats vague platitudes. Multiple redundant validation commands appear in different sections.

Score: 1 / 3

Actionability

There are some concrete code examples (Quick Start, Core Capabilities, CLI Usage) that show specific API calls and parameters, but they appear to be non-executable pseudocode referencing modules (scripts/radiology_quiz.py) that may not exist alongside the referenced scripts/main.py. The conflicting entry points (main.py vs radiology_quiz.py) create confusion about what actually runs.

Score: 2 / 3

Workflow Clarity

The numbered workflow in the 'Workflow' section is entirely generic process management advice ('Confirm the user objective', 'Validate that the request matches documented scope') with no radiology-quiz-specific steps. There are no validation checkpoints for quiz content accuracy, no feedback loops for verifying generated questions are medically correct, and no clear sequence for the actual quiz generation process.

Score: 1 / 3

Progressive Disclosure

The document is a monolithic wall of text with heavily duplicated content across sections. There's only one reference link (references/audit-reference.md) which is generic. Content is poorly organized with redundant sections (Example Usage, Quick Start, CLI Usage all partially overlap; Audit-Ready Commands and Quick Check are nearly identical). No clear hierarchy or navigation structure.

Score: 1 / 3

Total: 5 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
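A typical fix for this warning, sketched below, moves non-spec keys under a metadata block so only spec-defined keys remain at the top level. The key names here are assumed for illustration, not taken from the actual file:

```yaml
# Before: hypothetical unknown top-level keys that would trigger the warning
#   author: aipoch
#   version: "1.0"
# After: custom fields nested under metadata
name: radiology-image-quiz
description: Use when creating radiology educational quizzes...
metadata:
  author: aipoch
  version: "1.0"
```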

Total: 10 / 11

Passed

Repository: aipoch/medical-research-skills (Reviewed)
