Skill description: "Use medical cv resume builder for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries."
Quality: 17% (Does it follow best practices?)
Impact: Pending (No eval scenarios have been run)
Passed (No known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/Academic Writing/medical-cv-resume-builder/SKILL.md

Quality
Discovery
22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is weak overall. It names a domain ('medical cv resume builder' and 'academic writing') but fails to specify concrete actions the skill performs and lacks explicit trigger guidance. The latter half of the description reads as abstract process language rather than actionable capability descriptions.
Suggestions
Replace abstract language like 'structured execution, explicit assumptions, and clear output boundaries' with concrete actions such as 'formats education history, lists publications, organizes clinical experience sections'.
Add an explicit 'Use when...' clause with natural trigger terms like 'Use when the user asks to create or update a medical CV, academic resume, curriculum vitae, or publication list'.
Include file format mentions (e.g., '.docx', '.pdf') and common synonyms ('curriculum vitae', 'academic CV', 'faculty resume') to improve trigger term coverage.
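Putting these three suggestions together, a possible rewritten frontmatter description might look like the sketch below. The wording is illustrative only; the skill name is taken from the command path above, but the description text is not the skill's actual frontmatter:

```yaml
---
name: medical-cv-resume-builder
description: >
  Builds and updates medical and academic CVs: formats education and
  training history, organizes clinical experience sections, and
  generates publication lists. Use when the user asks to create or
  update a medical CV, academic CV, curriculum vitae, faculty resume,
  or publication list, including .docx or .pdf output.
---
```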
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description mentions 'academic writing workflows' and 'structured execution, explicit assumptions, and clear output boundaries' but these are abstract concepts, not concrete actions. No specific capabilities like 'formats CV sections', 'generates publication lists', or 'creates education histories' are listed. | 1 / 3 |
| Completeness | The 'what' is extremely vague — it doesn't clearly describe what the skill actually does beyond 'academic writing workflows.' The 'when' clause ('that need structured execution, explicit assumptions, and clear output boundaries') is abstract and not tied to concrete user triggers. There is no explicit 'Use when...' clause. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'medical cv', 'resume', 'builder', and 'academic writing' that users might naturally say. However, it misses common variations like 'curriculum vitae', 'publications list', 'academic CV', '.docx', or specific medical specialties. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'medical cv resume builder' and 'academic writing' provides some niche specificity, but 'academic writing workflows' is broad enough to overlap with general writing, resume building, or academic document skills. The vague qualifiers about 'structured execution' and 'output boundaries' don't help distinguish it. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation
12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate (security checklists, lifecycle status, evaluation criteria, risk assessments) that adds no medical-CV-specific value and consumes significant token budget. The actual domain expertise—how to format a medical CV, what sections to include, US standards for academic medical CVs, how to handle publications/grants/clinical experience—is almost entirely absent. The circular cross-references between sections suggest auto-generated content rather than thoughtfully structured guidance.
Suggestions
Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) and replace with actual medical CV domain knowledge: required sections (Education, Residency, Fellowships, Board Certifications, Publications, Grants, etc.), formatting standards, and ordering conventions.
Add concrete examples of medical CV sections with properly formatted content, showing how experiences and education inputs map to specific CV output sections.
Eliminate circular references ('See ## Features above') and consolidate overlapping sections (Workflow appears twice, Output Format vs Output Requirements, Prerequisites vs Dependencies).
Provide actual executable code or at minimum concrete formatting rules and templates rather than abstract workflow descriptions like 'Confirm the user objective' and 'Validate that the request matches the documented scope'.
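To make the second and fourth suggestions concrete, here is a minimal sketch of how structured experience inputs could map to a formatted CV section. The section title, entry schema, and reverse-chronological ordering rule are hypothetical illustrations, not taken from the skill under review:

```python
# Hypothetical sketch: render one medical-CV section from structured
# entries, most recent first. The entry fields (start_year, end_year,
# role, institution) are an assumed input shape, not the skill's actual
# 'experiences: list' schema.

def format_section(title, entries):
    """Render a CV section as plain text, newest entry first."""
    lines = [title.upper(), "-" * len(title)]
    for e in sorted(entries, key=lambda e: e["end_year"], reverse=True):
        lines.append(f"{e['start_year']}–{e['end_year']}  {e['role']}, {e['institution']}")
    return "\n".join(lines)

education = [
    {"start_year": 2010, "end_year": 2014, "role": "MD",
     "institution": "Example Medical School"},
    {"start_year": 2014, "end_year": 2017, "role": "Residency, Internal Medicine",
     "institution": "Example Hospital"},
]

print(format_section("Education and Training", education))
```

Even a rule-level template like this (required fields, ordering convention, section heading style) would give an agent far more to execute against than abstract steps like 'Confirm the user objective'.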
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Features above', 'See ## Prerequisites above', 'See ## Workflow above'). The skill explains generic concepts Claude already knows (error handling philosophy, security checklists, lifecycle status, evaluation criteria). Much of the content is boilerplate that adds no medical-CV-specific value. The actual domain-specific content (medical CV formatting) is buried under layers of generic process documentation. | 1 / 3 |
| Actionability | Despite its length, the skill provides almost no concrete, executable guidance for actually building a medical CV. The 'scripts/main.py' is referenced repeatedly but no actual code or specific formatting rules are provided. The input parameters table is skeletal ('experiences: list'), and the workflow steps are abstract ('Confirm the user objective'). There are no examples of medical CV content, formatting rules, or section-specific guidance. | 1 / 3 |
| Workflow Clarity | There is a numbered workflow with steps and some validation concepts (stop early if out of scope, fallback paths). However, the workflow is generic and abstract rather than specific to medical CV creation. The 'Example run plan' provides a basic sequence but lacks concrete validation checkpoints for the actual CV output quality. Error handling is mentioned but not tied to specific failure modes in CV generation. | 2 / 3 |
| Progressive Disclosure | The document is a monolithic wall of text with many sections that repeat or circularly reference each other. References to 'references/' directory and 'scripts/main.py' are vague with no description of what those files contain. Multiple sections cover overlapping concerns (Workflow appears twice, Prerequisites and Dependencies overlap, Output Format and Output Requirements are separate sections covering related topics). The circular 'See above' references are particularly problematic. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |