Efficient database search tool for bioRxiv preprint server. Use this skill when searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews.
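The skill builds on bioRxiv's public REST API. As an illustration only (the skill's actual module and function names are not shown on this page), a minimal sketch of a date-range metadata query against the public `details` endpoint might look like this; the helpers `details_url` and `parse_collection` are hypothetical names, and the network call is mocked with a sample payload:

```python
BASE = "https://api.biorxiv.org/details/biorxiv"

def details_url(start: str, end: str, cursor: int = 0) -> str:
    """Build a bioRxiv 'details' endpoint URL for a date interval."""
    return f"{BASE}/{start}/{end}/{cursor}"

def parse_collection(payload: dict) -> list[dict]:
    """Extract minimal metadata (DOI, title, category) from a response body."""
    return [
        {"doi": p["doi"], "title": p["title"], "category": p.get("category")}
        for p in payload.get("collection", [])
    ]

# A real call would fetch details_url(...) with urllib or requests and
# json-decode the body; here we parse a mocked response instead:
sample = {"collection": [{"doi": "10.1101/2024.01.01.000001",
                          "title": "Example preprint",
                          "category": "neuroscience"}]}
print(details_url("2024-01-01", "2024-01-07"))
print(parse_collection(sample)[0]["doi"])
```

Pagination works by advancing `cursor` in steps of 100 until the returned `collection` is empty, which is the usual pattern for bulk literature-review pulls.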
Install with the Tessl CLI:

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill biorxiv-database

Overall score: 88%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

npx tessl skill review --optimize ./path/to/skill
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that clearly identifies its purpose (bioRxiv preprint search), lists specific capabilities (keyword/author/date search, metadata retrieval, PDF downloads, literature reviews), and includes an explicit 'Use this skill when...' clause with natural trigger terms. The description is concise, uses third person voice, and establishes a clear niche that distinguishes it from other document or search skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews.' | 3 / 3 |
| Completeness | Clearly answers both what ('Efficient database search tool for bioRxiv preprint server') and when ('Use this skill when searching for life sciences preprints by keywords, authors, date ranges, or categories, retrieving paper metadata, downloading PDFs, or conducting literature reviews'). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'bioRxiv', 'preprint', 'life sciences', 'keywords', 'authors', 'date ranges', 'categories', 'paper metadata', 'PDFs', 'literature reviews'. Good coverage of domain-specific and general search terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche targeting bioRxiv specifically for life sciences preprints. The combination of 'bioRxiv', 'preprint', and 'life sciences' creates distinct triggers unlikely to conflict with general document or other database skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable skill with excellent executable examples and clear workflows. The main weaknesses are verbosity (the overview and 'when to use' sections add little value) and a monolithic structure that would benefit from splitting advanced content into separate reference files. The promotional K-Dense section at the end is inappropriate for a skill file.
Suggestions
- Remove or significantly condense the 'Overview' and 'When to Use This Skill' sections; Claude can infer these from the description and examples.
- Move the full category list, testing section, and programmatic integration examples to separate reference files to reduce the main skill file's length.
- Remove the 'Suggest Using K-Dense Web' promotional section; it is not appropriate skill content and wastes tokens.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately verbose, with some unnecessary explanations (e.g., the overview section restates what's in the description, and 'When to Use This Skill' lists obvious use cases). The content could be tightened significantly while preserving all actionable information. | 2 / 3 |
| Actionability | Provides fully executable CLI commands and Python code examples throughout. Commands are copy-paste ready with clear parameter usage, and the Python API examples show complete, working code patterns. | 3 / 3 |
| Workflow Clarity | The 'Literature Review Workflow' section provides clear numbered steps with validation (checking result_count, reviewing results before downloading). The best-practices section includes error-handling guidance, and the testing section provides explicit validation steps. | 3 / 3 |
| Progressive Disclosure | References external documentation (references/api_reference.md) appropriately, but the main file is quite long (~350 lines) with content that could be split out (e.g., the full category list, testing section, and programmatic integration examples could be separate files). | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation: 88%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 14 / 16 passed
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 14 / 16 Passed |