Use market access value for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
- Quality: 17% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Passed (No known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/market-access-value/SKILL.md"

Quality
Discovery: 0%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description fails on all dimensions. 'Market access value' appears to be a typo or nonsensical phrase, making the description confusing. It lacks concrete actions, natural trigger terms, and clear guidance on when to use the skill. The description would be nearly impossible for Claude to use effectively when selecting among multiple skills.
Suggestions
Replace 'market access value' with a clear verb phrase describing what the skill does (e.g., 'Structures academic papers with explicit methodology sections and defined scope')
Add specific concrete actions like 'create outlines, format citations, structure arguments, define research boundaries'
Include a proper 'Use when...' clause with natural trigger terms like 'academic paper', 'thesis', 'research writing', 'dissertation', 'scholarly article'
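Taken together, the suggestions above might yield frontmatter along these lines (a hypothetical sketch assembled from the suggested phrases, not the skill's actual metadata):

```yaml
name: market-access-value
description: >
  Structures academic papers with explicit methodology sections and defined
  scope. Creates outlines, formats citations, structures arguments, and
  defines research boundaries. Use when writing an academic paper, thesis,
  dissertation, or scholarly article.
```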
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'market access value' (unclear meaning), 'structured execution', and 'clear output boundaries' without describing any concrete actions. No specific capabilities are listed. | 1 / 3 |
| Completeness | The 'what' is extremely unclear: we don't know what this skill actually does. The 'when' clause exists ('workflows that need...') but is too vague to be actionable. Neither component is adequately addressed. | 1 / 3 |
| Trigger Term Quality | 'Market access value' is confusing jargon that users would never naturally say. 'Academic writing' is a relevant term, but 'structured execution' and 'output boundaries' are abstract technical phrases, not natural user language. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Academic writing workflows' is broad and could overlap with many writing, research, or documentation skills. The vague qualifiers don't help distinguish this skill from others. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation: 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from severe verbosity and redundancy, with boilerplate sections that don't add domain-specific value. While it establishes a reasonable workflow structure and references external files appropriately, the core market access functionality (ICER calculations, budget impact models, payer negotiations) lacks concrete, executable guidance. The skill reads more like a generic template than a specialized tool for health economics work.
Suggestions
Remove redundant 'When to Use' bullets and circular cross-references; consolidate the skill description to a single clear statement
Add executable Python code examples showing actual ICER calculation, budget impact modeling, or value dossier generation rather than just showing expected outputs
Move boilerplate sections (Security Checklist, Lifecycle Status, Evaluation Criteria) to a separate reference file or remove entirely if not skill-specific
Provide concrete validation steps for domain outputs (e.g., how to verify ICER calculations are within acceptable ranges, how to validate budget impact assumptions)
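As an illustration of the second suggestion, an ICER calculation can be sketched in a few lines of Python (a minimal sketch; the cost and QALY inputs and the willingness-to-pay threshold below are hypothetical values chosen to reproduce the skill's example figure, not the skill's own code):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: incremental cost per incremental QALY."""
    delta_cost = cost_new - cost_old
    delta_qaly = qaly_new - qaly_old
    if delta_qaly == 0:
        raise ValueError("no incremental effectiveness; ICER is undefined")
    return delta_cost / delta_qaly

# Hypothetical inputs: $120,000 vs $75,000 total cost, 3.5 vs 2.5 QALYs
result = icer(120_000, 75_000, 3.5, 2.5)
print(f"ICER = ${result:,.0f}/QALY")  # ICER = $45,000/QALY

# Simple range check of the kind the fourth suggestion asks for
WTP_THRESHOLD = 100_000  # hypothetical willingness-to-pay threshold ($/QALY)
assert 0 < result <= WTP_THRESHOLD
```

A validation step like the final assertion gives the agent a concrete checkpoint instead of a bare expected output.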
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with significant redundancy: 'When to Use' repeats the description three times with slight variations, multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Workflow above'), and the skill includes extensive boilerplate (security checklists, lifecycle status, evaluation criteria) that adds little actionable value for this specific skill. | 1 / 3 |
| Actionability | Provides some concrete commands (py_compile, --help), but the core domain functionality (ICER calculations, budget impact models, value dossiers) lacks any executable code or specific implementation details. The 'Example' section shows only a result ('ICER = $45,000/QALY') without showing how to achieve it. | 2 / 3 |
| Workflow Clarity | The workflow section provides a reasonable 5-step sequence with validation and fallback paths mentioned, but lacks explicit validation checkpoints for the domain-specific operations (ICER calculations, budget impact modeling). No concrete validation commands for the actual outputs are provided. | 2 / 3 |
| Progressive Disclosure | References external files (references/audit-reference.md, scripts/main.py) appropriately, but the main document is bloated with sections that could be consolidated or moved to reference files. The circular cross-references ('See ## Prerequisites above') create confusion rather than clarity. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
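The budget impact modeling called out above can be sketched in the same spirit (a minimal illustration with hypothetical population, uptake, and cost figures, not the skill's implementation):

```python
def budget_impact(eligible_patients, uptake_by_year, cost_new, cost_current):
    """Per-year incremental budget impact of switching patients to a new therapy."""
    incremental_cost = cost_new - cost_current
    # Round to whole patients before costing each year's switched cohort
    return [round(eligible_patients * uptake) * incremental_cost
            for uptake in uptake_by_year]

# Hypothetical: 10,000 eligible patients, uptake ramping 10% -> 30%,
# $12,000 vs $9,000 annual treatment cost
impact = budget_impact(10_000, [0.10, 0.20, 0.30], 12_000, 9_000)
print(impact)  # [3000000, 6000000, 9000000]
```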
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
10 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
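The single warning above concerns unknown frontmatter keys; a check of that kind can be sketched as follows (a stdlib-only sketch; the allowed key set and the line-based parsing are assumptions, not the validator's actual logic):

```python
ALLOWED_KEYS = {"name", "description", "metadata"}  # hypothetical allowed set

def unknown_frontmatter_keys(text):
    """Return top-level frontmatter keys outside the allowed set.

    Handles only simple `key: value` lines between the opening and
    closing `---` markers; nested YAML is ignored.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return []
    found = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line and not line.startswith((" ", "\t", "#")):
            found.add(line.split(":", 1)[0].strip())
    return sorted(found - ALLOWED_KEYS)

doc = "---\nname: market-access-value\nauthor: someone\n---\n# Skill\n"
print(unknown_frontmatter_keys(doc))  # ['author']
```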