Use medical translation for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
Quality: 30% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Passed: no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize "./scientific-skills/Academic Writing/medical-translation/SKILL.md"`

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails to explain what the skill actually does: 'medical translation' is mentioned but never defined with concrete actions. Abstract phrases like 'structured execution' and 'clear output boundaries' are buzzwords that don't help Claude understand when to select this skill. While it has a 'Use when' structure, the trigger conditions are too vague to be useful.
Suggestions
- Replace abstract language with concrete actions (e.g., 'Translates medical terminology between languages, converts clinical documents for lay audiences, adapts research papers for different medical specialties').
- Add specific trigger terms users would naturally say (e.g., 'medical documents', 'clinical translation', 'patient materials', 'research abstracts', 'medical jargon').
- Clarify the 'when' clause with explicit scenarios (e.g., 'Use when translating medical content, converting clinical language for patients, or adapting healthcare documents for academic publication').
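Putting those suggestions together, a rewritten frontmatter description might look like the following sketch (the field names assume standard SKILL.md frontmatter; the wording is illustrative, not the skill's actual content):

```yaml
name: medical-translation
description: >
  Translates medical terminology between languages and registers: converts
  clinical documents into plain language for patients, adapts research
  abstracts across specialties, and maps medical jargon to lay equivalents.
  Use when translating medical content, rewriting clinical language for
  patients, or preparing healthcare documents for academic publication.
```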
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'structured execution', 'explicit assumptions', and 'clear output boundaries' without describing any concrete actions. It doesn't specify what 'medical translation' actually does. | 1 / 3 |
| Completeness | Has a 'Use when' clause addressing when to use it (academic writing workflows), but the 'what' is extremely weak: it doesn't explain what the skill actually does beyond the vague term 'medical translation'. | 2 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'medical translation' and 'academic writing' that users might say, but lacks common variations and specific terms users would naturally use (e.g., 'translate medical terms', 'clinical documents', 'research papers'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'medical translation' and 'academic writing' provides some specificity, but the vague descriptors could overlap with general writing, translation, or academic skills. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation: 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is heavily template-driven with excessive boilerplate that obscures the actual medical translation functionality. The core translation guidance (parameters, example) is minimal and buried under generic workflow documentation, risk assessments, and security checklists that add little value. The skill lacks concrete, executable code for performing medical translation despite referencing multiple scripts.
Suggestions
- Remove or drastically reduce boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that don't provide actionable translation guidance.
- Add actual executable code showing how to perform medical translation, either inline or with real script content rather than just compilation checks.
- Consolidate the 'When to Use', 'Description', and 'Usage' sections, which currently repeat similar information.
- Provide concrete medical translation examples with actual terminology mappings, edge cases (ambiguous terms, context-dependent translations), and validation steps specific to medical accuracy.
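As a sketch of what inline executable guidance could look like, here is a minimal, hypothetical glossary-based mapper for lay-audience rewrites. The glossary entries and the function name are illustrative, not part of the skill or a vetted medical resource:

```python
import re

# Illustrative glossary only; a real skill would ship a vetted terminology file.
LAY_GLOSSARY = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "cerebrovascular accident": "stroke",
}

def translate_for_lay_audience(text: str, glossary: dict[str, str] = LAY_GLOSSARY) -> str:
    """Replace clinical terms with lay equivalents, longest match first,
    case-insensitively, leaving the rest of the text unchanged."""
    for term in sorted(glossary, key=len, reverse=True):
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(glossary[term], text)
    return text

print(translate_for_lay_audience(
    "Patient history: hypertension and a prior myocardial infarction."
))
# -> Patient history: high blood pressure and a prior heart attack.
```

Even a small example like this gives the agent something concrete to extend: the edge cases the review mentions (ambiguous or context-dependent terms) would become additional glossary entries or pre-checks rather than abstract workflow steps.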
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with excessive boilerplate, redundant sections (e.g., 'See ## Prerequisites above' repeated multiple times), and template-like content that doesn't add value. The actual medical translation guidance is buried under layers of generic workflow documentation that Claude doesn't need. | 1 / 3 |
| Actionability | Despite referencing scripts like `scripts/main.py` and `scripts/smoke_test.py`, no actual executable code for medical translation is provided. The example shows input/output but no concrete implementation. The 'Example Usage' section just shows how to check that a script compiles, not how to actually perform translation. | 1 / 3 |
| Workflow Clarity | There is a numbered workflow with steps, but it's generic and abstract rather than specific to medical translation. The validation steps exist but are boilerplate (e.g., 'confirm the user objective') rather than domain-specific checkpoints for translation quality or terminology accuracy. | 2 / 3 |
| Progressive Disclosure | References to external files like `references/audit-reference.md` and `scripts/` exist, but the main document is bloated with content that should either be in those referenced files or removed entirely. The structure exists but is poorly utilized. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Result: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to `metadata` | Warning |
| Total | | 10 / 11 Passed |
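One way to clear the warning, following the check's own suggestion of moving unrecognized keys under `metadata` (the key names below are hypothetical, since the offending keys aren't listed in the report):

```yaml
# Before: unknown top-level keys trigger frontmatter_unknown_keys
# owner: scientific-skills
# review-cycle: quarterly

# After: nest custom fields under `metadata`
metadata:
  owner: scientific-skills
  review-cycle: quarterly
```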