Generates academic reviews for molecules in diseases using PubMed research. Invoke when user needs biomedical literature review with Vancouver citation format.
Does it follow best practices?
Impact: Pending (no eval scenarios have been run).
Advisory: suggest reviewing before use.
Optimize this skill with Tessl:
`npx tessl skill review --optimize "./scientific-skills/Academic Writing/molecular-review-workflow/SKILL.md"`
(Note: the path contains a space, so it must be quoted in a shell.)

Quality
Discovery
75%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is functional with a clear niche and explicit trigger guidance, making it strong on completeness and distinctiveness. However, it could benefit from listing more concrete actions beyond 'generates academic reviews' and including additional natural trigger terms that users might use when requesting this type of work.
Suggestions
Add more specific actions such as 'searches PubMed for relevant studies, summarizes findings, and formats references in Vancouver style'.
Include additional trigger terms users might naturally say, such as 'literature search', 'drug-disease associations', 'systematic review', 'medical research summary', or 'PubMed search'.
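Taken together, the two suggestions above might produce a frontmatter description along these lines (an illustrative sketch, not the skill's actual metadata):

```yaml
---
name: molecular-review-workflow
description: >
  Generates academic literature reviews on the role of specific molecules in
  diseases. Searches PubMed for relevant studies, summarizes findings, and
  formats references in Vancouver style. Invoke for biomedical literature
  reviews, PubMed searches, drug-disease associations, systematic review
  drafts, or medical research summaries requiring Vancouver citations.
---
```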
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (academic reviews, molecules in diseases, PubMed) and a key action (generates academic reviews), but doesn't list multiple concrete actions beyond the single generation task. It mentions Vancouver citation format which adds specificity. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generates academic reviews for molecules in diseases using PubMed research) and 'when' (invoke when user needs biomedical literature review with Vancouver citation format), with an explicit trigger clause. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'PubMed', 'biomedical literature review', 'Vancouver citation', 'molecules', and 'diseases', but misses common user variations like 'drug targets', 'therapeutic compounds', 'systematic review', 'citations', or 'medical research'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly specific niche combining molecules-in-diseases focus, PubMed as a data source, and Vancouver citation format. This is unlikely to conflict with other skills due to its narrow biomedical academic review scope. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation
12%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate that adds no value — validation rules, failure handling, output contracts, and completion checklists that Claude already knows how to do. The actual domain-specific content (how to generate a molecular review from PubMed) is thin and lacks concrete examples of invocation with real parameters, expected outputs, or executable code. The result is a long document that tells Claude very little about how to actually perform the task.
Suggestions
Remove all generic boilerplate sections (Failure Handling, Validation and Safety Rules, Deterministic Output Rules, Completion Checklist, Output Contract, When Not to Use, Required Inputs, Recommended Workflow) and keep only domain-specific instructions.
Add a concrete, executable example showing the full invocation with real disease/molecule parameters, e.g., `python scripts/pubmed_api.py --disease 'breast cancer' --molecule 'tamoxifen'` with expected output.
Show a concrete example of the generated review output (even abbreviated) so Claude knows the exact format, section structure, and Vancouver citation style expected.
Consolidate the duplicated workflow descriptions into a single clear sequence with explicit validation checkpoints between steps (e.g., verify PubMed returned results before proceeding to review generation).
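As a sketch of what such a checkpointed search step might look like (function names and query structure are illustrative assumptions, not the skill's actual code), using NCBI's E-utilities `esearch` endpoint:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def build_query(disease: str, molecule: str) -> str:
    """Combine disease and molecule into a PubMed boolean query."""
    return f'("{molecule}"[Title/Abstract]) AND ("{disease}"[Title/Abstract])'


def esearch_url(query: str, retmax: int = 20) -> str:
    """Build the E-utilities esearch URL (JSON output)."""
    params = {"db": "pubmed", "term": query, "retmode": "json", "retmax": retmax}
    return f"{EUTILS}?{urlencode(params)}"


def check_results(esearch_json: dict) -> list:
    """Checkpoint: fail loudly if the search returned no PMIDs."""
    ids = esearch_json.get("esearchresult", {}).get("idlist", [])
    if not ids:
        raise ValueError(
            "PubMed returned no results; refine the search terms "
            "before generating the review."
        )
    return ids
```

The point of `check_results` is the explicit gate between the PubMed search step and review generation, which is the validation checkpoint the suggestion above asks for.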
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The skill restates generic boilerplate multiple times (validation rules, failure handling, when to use/not use, required inputs, recommended workflow) that Claude already knows. The actual domain-specific content (molecular review workflow) is buried under layers of generic scaffolding. Multiple sections say essentially the same thing (e.g., 'Validation Shortcut', 'Quick Validation', 'Validation and Safety Rules', 'Completion Checklist' all overlap heavily). | 1 / 3 |
| Actionability | Despite the length, there is almost no concrete, executable guidance. The actual invocation of the workflow is never shown with real parameters. 'Then invoke the skill with disease and molecule parameters' is vague. No example of actual script invocation with arguments, no example output, no concrete code showing how the scripts are called with disease/molecule inputs. The 'Example Usage' section references a non-existent '## Usage above' and only shows py_compile and --help commands. | 1 / 3 |
| Workflow Clarity | The 5-step workflow process (Input Translation → Search Term Generation → PubMed Search → Result Processing → Review Generation) provides a reasonable sequence, and quality rules are listed. However, there are no validation checkpoints between steps, no error recovery within the workflow itself, and the actual commands to execute each step are missing. The generic 'Recommended Workflow' section is too abstract to be useful. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with heavily duplicated sections. There are no references to separate files for detailed content. The skill has two overlapping structures — a generic template section and the actual 'Molecular Review Workflow' section — creating confusion about which is authoritative. No clear navigation or hierarchy between sections. | 1 / 3 |
| Total | | 5 / 12 Passed |
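To make the expected citation format concrete, a Vancouver-style reference entry could be rendered by a small helper like this (a hypothetical illustration with placeholder data; real Vancouver rules add details such as 'et al.' after six authors):

```python
def vancouver_entry(n, authors, title, journal, year, volume, pages):
    """Render one numbered Vancouver-style reference entry."""
    # Vancouver lists authors as 'Surname Initials', comma-separated.
    author_str = ", ".join(authors)
    return f"{n}. {author_str}. {title}. {journal}. {year};{volume}:{pages}."


ref = vancouver_entry(
    1, ["Doe J", "Roe A"], "Example review title", "Example J",
    2020, "5(2)", "101-7",
)
# -> "1. Doe J, Roe A. Example review title. Example J. 2020;5(2):101-7."
```

Embedding even one abbreviated example like this in the skill would pin down the exact section structure and citation style the suggestion above calls for.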
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |