
literature-review

Conduct comprehensive, systematic literature reviews using multiple academic databases (PubMed, arXiv, bioRxiv, Semantic Scholar, etc.). This skill should be used when conducting systematic literature reviews, meta-analyses, research synthesis, or comprehensive literature searches across biomedical, scientific, and technical domains. Creates professionally formatted markdown documents and PDFs with verified citations in multiple citation styles (APA, Nature, Vancouver, etc.).

Overall score: 75 (1.28x)

Quality: 67% (Does it follow best practices?)

Impact: 91%, 1.28x (average score across 3 eval scenarios)

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/literature-review/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted description that clearly communicates the skill's capabilities, includes explicit trigger guidance with a 'should be used when' clause, and provides rich natural keywords that researchers would use. The specificity of named databases and citation styles makes it highly distinctive and easy for Claude to select appropriately.

Dimension scores:

- Specificity (3/3): Lists multiple specific, concrete actions: conducting systematic literature reviews, searching multiple named academic databases (PubMed, arXiv, bioRxiv, Semantic Scholar), creating formatted markdown documents and PDFs, and handling verified citations in multiple named styles (APA, Nature, Vancouver).

- Completeness (3/3): Clearly answers both 'what' (conduct systematic literature reviews using multiple databases, create formatted documents with verified citations) and 'when' ('should be used when conducting systematic literature reviews, meta-analyses, research synthesis, or comprehensive literature searches across biomedical, scientific, and technical domains').

- Trigger Term Quality (3/3): Includes strong natural keywords users would say: 'literature review', 'systematic literature review', 'meta-analyses', 'research synthesis', 'literature searches', 'PubMed', 'arXiv', 'citations', 'APA', and domain terms like 'biomedical' and 'scientific'. Good coverage of terms a researcher would naturally use.

- Distinctiveness / Conflict Risk (3/3): Occupies a clear niche focused on systematic literature reviews across academic databases with specific citation formatting. The combination of named databases, citation styles, and academic research context makes it highly distinctive and unlikely to conflict with general document or research skills.

Total: 12 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in scope but severely over-verbose, explaining many concepts Claude already understands (systematic review methodology, Boolean operators, what preprints are, journal impact factors). The workflow structure is solid but would benefit from being a concise overview with details pushed to referenced files. Actionability is moderate — some concrete commands exist but much content is descriptive rather than executable.

Suggestions:

- Reduce content by 60-70%: remove explanations of concepts Claude already knows (PICO framework basics, what MeSH terms are, how Boolean operators work, what preprints are, journal tier rankings, author h-index thresholds). Focus only on project-specific tooling and conventions.

- Move database-specific search guidance, citation-style details, best practices, and common pitfalls into separate referenced files (e.g., references/database_strategies.md, references/best_practices.md) to keep SKILL.md a lean overview.

- Make the arXiv search example executable rather than a partial Python snippet with comments, and provide complete working examples for Semantic Scholar API access.

- Fix the duplicated bullet points in the 'Best Practices' section, where 'Search Strategy' and 'Screening and Selection' share identical first four items.
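As an illustration of what "executable" could look like for the arXiv and Semantic Scholar suggestions above, here is a minimal stdlib-only sketch. It targets the public arXiv Atom API and the Semantic Scholar Graph API; the function names are illustrative, not the skill's actual scripts.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET


def arxiv_search_url(query: str, max_results: int = 10) -> str:
    """Build a query URL for the public arXiv Atom API."""
    params = {"search_query": f"all:{query}", "start": 0, "max_results": max_results}
    return "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(params)


def arxiv_titles(query: str, max_results: int = 10) -> list[str]:
    """Fetch titles of matching papers from the arXiv Atom feed."""
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    with urllib.request.urlopen(arxiv_search_url(query, max_results), timeout=30) as resp:
        root = ET.parse(resp).getroot()
    return [entry.findtext("atom:title", namespaces=ns).strip()
            for entry in root.findall("atom:entry", ns)]


def semantic_scholar_search_url(query: str, limit: int = 10) -> str:
    """Build a paper-search URL for the Semantic Scholar Graph API."""
    params = {"query": query, "limit": limit, "fields": "title,year,externalIds"}
    return ("https://api.semanticscholar.org/graph/v1/paper/search?"
            + urllib.parse.urlencode(params))
```

A complete example of this shape (build URL, fetch, parse) is what the review asks for in place of the skill's partial snippet; heavier clients (rate limiting, retries, API keys) would belong in `scripts/`.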

Dimension scores:

- Conciseness (1/3): Extremely verbose at over 500 lines. Contains extensive explanations of concepts Claude already knows (what PICO is, what MeSH terms are, how Boolean operators work, what preprints are, what systematic reviews entail). The journal tier lists, citation count thresholds, author reputation guidelines, and best practices sections are largely common knowledge for Claude. The 'Best Practices' section even duplicates its first four bullet points between 'Search Strategy' and 'Screening and Selection', and the summary section restates what was already covered.

- Actionability (2/3): Provides some useful concrete commands (e.g., `python scripts/verify_citations.py`, `python scripts/generate_pdf.py`, `gget search pubmed`), but much of the guidance is procedural description rather than executable code. The screening phases are entirely manual and conceptual, with no concrete tooling; many code blocks are comments or pseudocode, and the arXiv example is incomplete Python that cannot be run.

- Workflow Clarity (2/3): The 7-phase workflow is clearly sequenced and well structured with numbered steps, but validation checkpoints are weak: citation verification is mentioned, yet there is no explicit feedback loop for the screening phases or data extraction. The 'fix and re-verify' step in Phase 6 is the only real feedback loop. For a process involving batch operations on search results and document generation, more explicit validation gates between phases would be expected.

- Progressive Disclosure (2/3): References external files appropriately (references/citation_styles.md, references/database_strategies.md, assets/review_template.md, scripts/), but the main document is a monolithic wall of text that inlines enormous amounts of detail. The database-specific search guidance, citation style guide, journal tier tables, and best practices could all live in referenced documents, keeping SKILL.md a concise overview.

Total: 7 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure:

- skill_md_line_count: SKILL.md is long (638 lines); consider splitting into references/ and linking. Result: Warning

- metadata_version: 'metadata.version' is missing. Result: Warning

Total: 9 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

