Conduct comprehensive, systematic literature reviews using multiple academic databases (PubMed, arXiv, bioRxiv, Semantic Scholar, etc.). This skill should be used when conducting systematic literature reviews, meta-analyses, research synthesis, or comprehensive literature searches across biomedical, scientific, and technical domains. It creates professionally formatted markdown documents and PDFs with verified citations in multiple citation styles (APA, Nature, Vancouver, etc.).
Score: 72 · Does it follow best practices? 67%

Impact: Pending (no eval scenarios have been run)

Advisory: suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/literature-review/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates specific capabilities (database searching, citation formatting), includes an explicit 'use when' clause with relevant trigger scenarios, and names concrete tools and formats. The description is well-structured, uses third person voice appropriately, and provides enough detail to distinguish it from general research or writing skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: conducting systematic literature reviews, searching multiple named academic databases (PubMed, arXiv, bioRxiv, Semantic Scholar), creating formatted markdown documents and PDFs, and generating verified citations in multiple named styles (APA, Nature, Vancouver). | 3 / 3 |
| Completeness | Clearly answers both 'what' (conduct systematic literature reviews using multiple databases, create formatted documents with verified citations) and 'when' (explicit 'This skill should be used when' clause listing systematic literature reviews, meta-analyses, research synthesis, comprehensive literature searches). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'literature review', 'systematic literature review', 'meta-analyses', 'research synthesis', 'literature searches', 'PubMed', 'arXiv', 'citations', 'APA', and domain terms like 'biomedical', 'scientific'. Good coverage of terms a researcher would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche focused on systematic literature reviews across academic databases with specific citation formatting. The combination of named databases, citation styles, and academic research focus makes it highly distinct and unlikely to conflict with general document or writing skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in scope but severely undermined by verbosity—it reads more like a textbook chapter on systematic reviews than a concise skill file for Claude. There is significant redundancy (duplicate sections, restated concepts), excessive explanation of things Claude already knows (Boolean operators, what preprints are, journal impact factors), and large sections that should be in referenced files rather than inline. The actionability is moderate: CLI commands are concrete but many critical steps depend on undocumented scripts.
Suggestions

- Cut content by 60-70%: Remove the duplicate 'Screening and Selection' best practices section, the journal tier/author reputation tables (Claude knows this), explanations of basic concepts (PICO, Boolean operators, what preprints are), and the Summary section. Move database-specific guidance and citation style examples to their referenced files.
- Split the monolithic content: Move 'Database-Specific Search Guidance', 'Citation Style Guide', 'Prioritizing High-Impact Papers', and 'Common Pitfalls' into separate referenced markdown files, keeping only 1-2 line pointers in the main SKILL.md.
- Add explicit validation checkpoints between phases: e.g., 'Before proceeding to Phase 3, verify you have results from at least 3 databases and all search parameters are documented in the review file.'
- Remove the mandatory scientific-schematics section or reduce it to a single line reference: it is a dependency on another skill and takes up significant space with instructions that belong in that skill's documentation.
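The validation-checkpoint suggestion above can be made concrete. A minimal sketch of a pre-phase gate, assuming a hypothetical layout where each database's results are saved as `results/<db>.json` and search parameters are logged in the review file; the file names, threshold, and marker string are all illustrative, not part of the reviewed skill:

```python
# Hypothetical checkpoint before Phase 3: require populated results from at
# least three databases and a documented search-parameters section.
# Assumed layout: results/<db>.json per database, plus a review markdown file.
import json
from pathlib import Path

REQUIRED_DATABASES = 3  # illustrative threshold from the suggestion above

def ready_for_phase_3(results_dir: str, review_file: str) -> bool:
    """Return True only if enough databases returned records and parameters are logged."""
    result_files = Path(results_dir).glob("*.json")
    # Count only result files that actually contain at least one record.
    populated = [f for f in result_files if json.loads(f.read_text())]
    params_documented = "search parameters" in Path(review_file).read_text().lower()
    return len(populated) >= REQUIRED_DATABASES and params_documented
```

A checkpoint like this turns the abstract instruction into a pass/fail condition the agent can evaluate before moving on, which is exactly the feedback loop the Workflow Clarity row below notes is missing.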
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Contains significant redundancy (e.g., 'Best Practices' repeats the 'Screening and Selection' section twice verbatim, the 'Summary' restates what's already clear). Explains concepts Claude already knows (what PICO is, what Boolean operators are, what preprints are). The journal tier lists, author reputation assessment, and citation count thresholds are general academic knowledge that doesn't need this level of detail. | 1 / 3 |
| Actionability | Provides concrete CLI commands for parallel-cli and some Python scripts, which is good. However, many steps rely on scripts (verify_citations.py, search_databases.py, generate_pdf.py) whose actual behavior is not shown: they're referenced but not executable without the actual files. Several steps are described abstractly ('Manually screen titles, abstracts, full texts') rather than with concrete guidance. | 2 / 3 |
| Workflow Clarity | The 7-phase workflow is well-sequenced with clear phases, and the citation verification step serves as a validation checkpoint. However, there are no explicit feedback loops for the screening phases (what to do if too many/few results), no validation between phases, and the example workflow at the end is mostly comments rather than actionable validation steps. The PRISMA flow diagram is mentioned but only as an ASCII art placeholder. | 2 / 3 |
| Progressive Disclosure | References external files appropriately (references/citation_styles.md, references/database_strategies.md, assets/review_template.md, scripts/), but the main SKILL.md itself is monolithic, with enormous inline content that should be split out: the database-specific search guidance, citation style guide, and journal tier tables could all be in separate reference files. The inline content is far too long for an overview document. | 2 / 3 |
| Total | | 7 / 12 Passed |
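The Actionability row notes that verify_citations.py is referenced without its behavior being shown. A minimal sketch of what such a verification checkpoint could check, assuming citations are held as dicts; the field names and DOI pattern here are illustrative, not taken from the actual script:

```python
# Hypothetical citation checkpoint: flag entries that are missing required
# fields or have a malformed DOI. The schema below is an assumption, not the
# reviewed skill's actual verify_citations.py.
import re

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")
REQUIRED_FIELDS = ("title", "authors", "year", "doi")

def find_invalid_citations(citations: list[dict]) -> list[str]:
    """Return the titles of citations that fail the checks (empty list = all pass)."""
    problems = []
    for c in citations:
        missing = [f for f in REQUIRED_FIELDS if not c.get(f)]
        if missing or not DOI_PATTERN.match(c.get("doi", "")):
            problems.append(c.get("title", "<untitled>"))
    return problems
```

Inlining even a brief contract like this (inputs, required fields, failure behavior) in the SKILL.md would let an agent reason about the checkpoint without opening the script itself.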
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (700 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |