
biorxiv-database

Efficient database search tool for bioRxiv preprint server. Use this skill when searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews.
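For orientation, the kind of lookup this skill wraps can be sketched against bioRxiv's public details endpoint (api.biorxiv.org), which returns paginated JSON for a date range. The keyword filter below is a hypothetical post-processing step for illustration, not the skill's actual code:

```python
# Sketch of a bioRxiv date-range query plus local keyword filtering.
# The details endpoint shape follows api.biorxiv.org; filter_by_keyword
# is an illustrative assumption about how a search layer might work.
BASE = "https://api.biorxiv.org/details/biorxiv"

def details_url(start_date: str, end_date: str, cursor: int = 0) -> str:
    """Build the paginated details URL for a date range (YYYY-MM-DD)."""
    return f"{BASE}/{start_date}/{end_date}/{cursor}/json"

def filter_by_keyword(collection: list[dict], keyword: str) -> list[dict]:
    """Case-insensitive keyword match over title and abstract."""
    kw = keyword.lower()
    return [
        paper for paper in collection
        if kw in paper.get("title", "").lower()
        or kw in paper.get("abstract", "").lower()
    ]

# Toy records standing in for the API's "collection" field.
sample = [
    {"title": "CRISPR screening in organoids", "abstract": "..."},
    {"title": "Protein folding dynamics", "abstract": "CRISPR-adjacent methods"},
]
print(details_url("2024-01-01", "2024-01-31"))
print(len(filter_by_keyword(sample, "crispr")))  # 2 (both records match)
```

A real client would fetch each cursor page until the collection comes back empty.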

Quality: 88% (Does it follow best practices?)

Impact: 80%, 1.37x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that excels across all dimensions. It clearly identifies the specific tool (bioRxiv database search), lists concrete actions (searching, retrieving metadata, downloading PDFs, literature reviews), includes natural trigger terms users would use, and has an explicit 'Use this skill when...' clause. The description is distinctive enough to avoid conflicts with other document or research skills.

Dimension scores:

Specificity: 3 / 3. Lists multiple specific concrete actions: 'searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews.'

Completeness: 3 / 3. Clearly answers both what ('Efficient database search tool for bioRxiv preprint server') and when ('Use this skill when searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews').

Trigger Term Quality: 3 / 3. Includes natural keywords users would say: 'bioRxiv', 'preprint', 'life sciences', 'keywords', 'authors', 'date ranges', 'categories', 'paper metadata', 'PDFs', 'literature reviews'. Good coverage of domain-specific and general research terms.

Distinctiveness / Conflict Risk: 3 / 3. Clear niche targeting bioRxiv specifically for life sciences preprints. The combination of 'bioRxiv', 'preprint', and 'life sciences' creates distinct triggers unlikely to conflict with general document or other database skills.

Total: 12 / 12. Passed.

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with excellent executable examples and clear workflows. The main weaknesses are verbosity (explaining when to use the skill, overview sections) and a monolithic structure that could benefit from splitting detailed reference content into separate files. The testing section, while useful, adds significant length that could be externalized.

Suggestions

Remove or significantly condense the 'Overview' and 'When to Use This Skill' sections - Claude can infer these from the skill description and examples

Move the 'Valid Categories' list and 'Testing the Skill' section to separate reference files to reduce main file length

Consider moving 'Programmatic Integration' and 'Advanced Features' to a separate advanced.md file, keeping SKILL.md focused on core usage patterns

Dimension scores:

Conciseness: 2 / 3. The skill is moderately verbose, with some unnecessary sections such as the detailed 'Overview' and 'When to Use This Skill' that explain obvious use cases. The content could be tightened significantly while preserving all actionable information.

Actionability: 3 / 3. Provides fully executable command-line examples and Python code throughout. All examples are copy-paste ready, with concrete parameters, output formats, and real-world usage patterns.

Workflow Clarity: 3 / 3. The 'Literature Review Workflow' section provides clear numbered steps with validation (checking result_count, reviewing results before downloading). The workflow progresses logically from search to review to download, with explicit checkpoints.

Progressive Disclosure: 2 / 3. While the skill references external documentation (references/api_reference.md), the main file is quite long (~350 lines), with content that could be split out (e.g., the full category list, testing section, and programmatic integration examples). The structure is good, but the file is monolithic.

Total: 10 / 12. Passed.
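The checkpointed workflow credited above (search, validate result_count, review results, then download) can be sketched as follows. search_preprints and download_pdf are hypothetical stand-ins for the skill's own scripts, used only to show the shape of the control flow:

```python
# Illustrative sketch of the literature-review workflow the review describes:
# search -> validate result_count -> review -> download.
# These helpers are assumptions, not the skill's actual API.

def search_preprints(query: str, limit: int = 25) -> dict:
    """Stand-in search returning the result shape the review mentions."""
    papers = [
        {"doi": "10.1101/2024.01.01.000001", "title": f"{query} study A"},
        {"doi": "10.1101/2024.01.02.000002", "title": f"{query} study B"},
    ][:limit]
    return {"result_count": len(papers), "papers": papers}

def download_pdf(doi: str) -> str:
    """Stand-in download; a real tool would fetch and save the PDF."""
    return f"downloads/{doi.replace('/', '_')}.pdf"

def literature_review(query: str) -> list[str]:
    results = search_preprints(query)
    # Checkpoint 1: stop early if the search came back empty.
    if results["result_count"] == 0:
        return []
    # Checkpoint 2: review titles before committing to downloads.
    selected = [p for p in results["papers"]
                if query.lower() in p["title"].lower()]
    return [download_pdf(p["doi"]) for p in selected]

paths = literature_review("CRISPR")
print(paths)  # two downloads/... paths
```

The explicit checkpoints are what earn the workflow its score: each step validates the previous one before spending effort on downloads.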

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure:

metadata_version: Warning. 'metadata.version' is missing.

Total: 10 / 11. Passed.

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

