Generates detailed text descriptions of medical images and charts for.
16% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Passed (No known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/visual-content-desc/SKILL.md"

Quality
Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is truncated (ending with 'for.'), which severely undermines its effectiveness. While it identifies a reasonably specific domain (medical image descriptions), it lacks a 'Use when...' clause, comprehensive trigger terms, and appears incomplete. The skill would benefit significantly from being completed and expanded with explicit trigger guidance.
Suggestions
Complete the truncated sentence and add a 'Use when...' clause, e.g., 'Use when the user needs text descriptions of medical images, radiology scans, pathology slides, or clinical charts.'
Add natural trigger terms users would say, such as 'X-ray', 'MRI', 'CT scan', 'radiology', 'pathology', 'medical imaging', 'alt text', 'image accessibility'.
List additional specific actions beyond generating descriptions, such as identifying anatomical structures, summarizing chart findings, or flagging notable features.
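Taken together, a revised frontmatter description might look like the following sketch. The wording, skill name, and trigger list are illustrative, not the skill's actual frontmatter:

```yaml
# Hypothetical SKILL.md frontmatter -- illustrative wording only
name: visual-content-desc
description: >
  Generates detailed text descriptions of medical images and charts,
  including identifying anatomical structures, summarizing chart
  findings, and flagging notable features. Use when the user needs
  text descriptions or alt text for medical images, X-rays, MRI or
  CT scans, radiology or pathology slides, microscopy images, or
  clinical charts.
```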
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (medical images and charts) and a specific action (generates detailed text descriptions), but the description appears truncated ('for.' ends abruptly) and doesn't list multiple concrete actions. | 2 / 3 |
| Completeness | The description partially addresses 'what' (generates text descriptions of medical images), but the missing 'Use when...' clause or equivalent trigger guidance would alone cap completeness at 2, and the truncation weakens it further. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'medical images', 'charts', and 'text descriptions', but misses common variations users might say such as 'radiology', 'X-ray', 'scan', 'alt text', 'accessibility', or 'image description'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The medical image focus provides some distinctiveness, but 'charts' is vague and could overlap with data visualization or charting skills. The truncated description weakens its ability to clearly define its niche. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic boilerplate with minimal domain-specific content about medical image description. The actual useful information (input parameters, output format, feature list) comprises perhaps 15% of the document, while the rest is repetitive scaffolding, circular references, and abstract workflow steps. It fails to provide any concrete examples, executable code, or specific guidance for generating medical image descriptions.
Suggestions
Replace the generic boilerplate with concrete examples of medical image descriptions—show a sample microscopy image description, a chart interpretation, and alt text output so Claude knows exactly what quality looks like.
Remove circular self-references ('See ## Features above') and consolidate redundant sections (Dependencies/Prerequisites, Workflow/Implementation Details/Example Usage) into a single clear workflow.
Add domain-specific guidance: medical terminology conventions, what features to highlight for different image types (microscopy vs. scan vs. chart), and common pitfalls in medical image description.
Cut the Risk Assessment, Security Checklist, Lifecycle Status, and Evaluation Criteria sections entirely—they consume tokens without adding actionable guidance for the task.
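As an illustration of the first suggestion, a concrete example section in the SKILL.md might look like this. The sample image, findings, and alt text are invented purely for demonstration:

```markdown
## Example output

**Input:** chest X-ray, PA view (hypothetical sample).

**Detailed description:** "Frontal chest radiograph. The lungs are clear
with no focal consolidation. Heart size is within normal limits. No
pleural effusion or pneumothorax is seen."

**Alt text:** "Chest X-ray showing clear lungs and a normal-sized heart."
```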
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Features above', 'See ## Prerequisites above', 'See ## Workflow above'). Contains extensive boilerplate (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that adds no actionable value for Claude. The actual domain-specific content (medical image description) is buried under generic scaffolding. | 1 / 3 |
| Actionability | Despite the length, there is almost no concrete, executable guidance for actually generating medical image descriptions. The 'scripts/main.py' is referenced repeatedly but no actual code or logic is shown. The workflow steps are entirely abstract ('Confirm the user objective', 'Validate that the request matches'). No examples of actual medical image descriptions, alt text patterns, or chart interpretation techniques are provided. | 1 / 3 |
| Workflow Clarity | The workflow section contains only generic, abstract steps ('Confirm the user objective', 'Validate that the request matches the documented scope') with no specifics about how to describe medical images. There are no validation checkpoints specific to the domain (e.g., verifying medical accuracy, checking terminology). The 'Example run plan' is equally vague and provides no real sequencing for the actual task. | 1 / 3 |
| Progressive Disclosure | Circular self-references ('See ## Features above', 'See ## Prerequisites above') create confusion rather than progressive disclosure. References to 'references/' directory are vague with no indication of what's actually there. The document is a monolithic wall of boilerplate sections that are poorly organized: Features appears after Implementation Details references it, and Prerequisites appears near the bottom after Dependencies references it. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
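To clear the frontmatter_unknown_keys warning, any non-standard top-level keys can be removed or moved under a metadata block, as the check itself suggests. A sketch, assuming a hypothetical offending key (the actual key names in this skill are not shown in the report):

```yaml
# Before: unknown top-level key (hypothetical) triggers the warning
name: visual-content-desc
author: example-author

# After: unknown keys moved under metadata
name: visual-content-desc
metadata:
  author: example-author
```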