Use market access value for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
Quality: 13%. Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize "./scientific-skills/Academic Writing/market-access-value/SKILL.md"`

Quality
Discovery
Score: 7%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is poorly constructed, relying on abstract buzzwords ('market access value', 'structured execution', 'clear output boundaries') without explaining what the skill concretely does. It fails to provide natural trigger terms users would say and lacks explicit 'Use when...' guidance. The domain itself is unclear—it's ambiguous whether this is about health economics market access, academic writing, or something else entirely.
Suggestions
- Replace abstract language with concrete actions the skill performs (e.g., 'Drafts market access dossiers, writes value propositions for health technology assessments, structures AMCP-format submissions').
- Add an explicit 'Use when...' clause with natural trigger terms users would say (e.g., 'Use when the user asks about HTA submissions, market access documents, HEOR writing, or payer value stories').
- Clarify the domain—specify whether this is health economics market access, financial market access, or another field to reduce ambiguity and improve distinctiveness.
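Put together, the suggested rewrite could look something like the following frontmatter sketch. This is illustrative only: it assumes the skill targets health-economics market access, and the `name`/`description` keys follow common SKILL.md frontmatter conventions rather than anything quoted in this review.

```yaml
name: market-access-value
description: >
  Drafts market access and HEOR documents: payer value dossiers,
  AMCP-format submissions, ICER narratives, and budget impact summaries.
  Use when the user asks about HTA submissions, payer value stories,
  market access documents, or health economics writing.
```

Note how the rewrite leads with concrete actions and ends with an explicit 'Use when...' clause built from terms a user would actually say.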
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague, abstract language like 'market access value', 'structured execution', 'explicit assumptions', and 'clear output boundaries' without listing any concrete actions the skill performs. | 1 / 3 |
| Completeness | The 'what' is extremely vague—it's unclear what the skill actually does. While there's a partial 'when' implied ('academic writing workflows'), it lacks explicit trigger guidance and the 'what' is too weak to be meaningful. | 1 / 3 |
| Trigger Term Quality | 'Market access value' is jargon that users are unlikely to naturally say. 'Academic writing' is somewhat relevant but the other terms ('structured execution', 'output boundaries') are not natural user trigger terms. | 1 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'market access value' and 'academic writing' is unusual enough to reduce conflict with most other skills, but the vagueness of the description could still cause confusion about when to select it versus other academic writing or market analysis skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |
Implementation
Score: 20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate that could apply to any skill, while lacking the domain-specific substance needed for market access value work. The actual market access content (ICER calculations, budget impact modeling, value dossier creation) is mentioned only superficially with no executable examples or concrete methodology. The document is highly repetitive, with the same commands and cross-references appearing multiple times.
Suggestions
- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) and focus on domain-specific market access value content—concrete ICER calculation methods, budget impact model structures, and value dossier templates.
- Replace vague parameter descriptions ('drug_profile: Efficacy/safety data') with concrete input schemas, example data structures, and at least one fully worked example showing input data → ICER narrative output.
- Eliminate redundant content: the py_compile command appears 3 times, the skill description is copy-pasted into multiple sections, and 'See ## X above' cross-references point to sections that don't add information.
- Add concrete, executable workflow steps specific to market access analysis—e.g., how to structure comparator data, what ICER thresholds to use for different markets (US/EU/Japan), and how to generate payer objection handlers with specific examples.
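The fully worked example the suggestions call for could be as small as the sketch below. All figures and thresholds here are illustrative assumptions for this review, not data from the skill; a real analysis would use modeled costs and QALYs per comparator.

```python
# Minimal ICER (incremental cost-effectiveness ratio) sketch.
# All inputs are illustrative; real analyses need modeled costs and QALYs.

def icer(cost_new, cost_comp, qaly_new, qaly_comp):
    """ICER = incremental cost / incremental QALYs."""
    d_cost = cost_new - cost_comp
    d_qaly = qaly_new - qaly_comp
    if d_qaly == 0:
        raise ValueError("no QALY difference; ICER is undefined")
    return d_cost / d_qaly

# Hypothetical inputs: new drug vs. standard of care.
value = icer(cost_new=120_000, cost_comp=75_000, qaly_new=2.0, qaly_comp=1.0)
print(f"ICER = ${value:,.0f}/QALY")  # ICER = $45,000/QALY

# Willingness-to-pay thresholds are assumed and market-dependent.
thresholds = {"US": 100_000, "UK": 30_000}
for market, wtp in thresholds.items():
    verdict = "below" if value <= wtp else "above"
    print(f"{market}: {verdict} the ${wtp:,}/QALY threshold")
```

An input → narrative pairing like this (numbers in, a threshold-aware verdict out) is what would make the skill's 'ICER = $45,000/QALY with uncertainty analysis' example actionable.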
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections restate the same information (e.g., 'See ## Prerequisites above', 'See ## Workflow above', repeated py_compile commands in three places). The description is copy-pasted into 'When to Use' and 'Key Features'. Boilerplate sections like Risk Assessment, Security Checklist, Evaluation Criteria, and Lifecycle Status add significant token cost with minimal actionable value for Claude. | 1 / 3 |
| Actionability | Despite having code blocks, the actual domain-specific guidance is extremely vague. The 'Example' section is just 'ICER = $45,000/QALY with uncertainty analysis' with no executable code or concrete methodology. Parameters like 'drug_profile: Efficacy/safety data' are undefined placeholders. The workflow steps are generic process descriptions ('Confirm the user objective') rather than concrete, executable instructions for market access value assessment. | 1 / 3 |
| Workflow Clarity | There is a numbered workflow with steps and some error handling/fallback guidance, but the steps are abstract and generic rather than specific to market access value work. Validation is mentioned ('Validate that the request matches the documented scope') but there are no concrete validation checkpoints for the actual ICER calculations or budget impact modeling. The workflow could apply to virtually any skill. | 2 / 3 |
| Progressive Disclosure | There is a reference to 'references/audit-reference.md' and the content is organized into sections, but the document itself is a monolithic wall of boilerplate. Many sections that add no value (Risk Assessment table, Security Checklist, Lifecycle Status) are inline rather than separated. The structure exists but is poorly curated—important domain content is buried among generic scaffolding. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |
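The one warning above (unknown frontmatter keys) is easy to reproduce locally. A minimal sketch, assuming a hypothetical allowed-key set — the spec's actual key list is not shown in this review, so `ALLOWED` here is illustrative:

```python
# Flag frontmatter keys that fall outside an allowed set.
# ALLOWED is a hypothetical set for illustration; consult the actual spec.
ALLOWED = {"name", "description", "metadata"}

def unknown_keys(frontmatter: dict) -> set:
    """Return frontmatter keys the spec does not recognize."""
    return set(frontmatter) - ALLOWED

fm = {"name": "market-access-value", "description": "...", "version": "1.0"}
print(unknown_keys(fm))  # {'version'}
```

Moving any flagged keys under a `metadata` block, as the warning suggests, would clear the remaining check.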