Direct REST API access to PubMed. Advanced Boolean/MeSH queries, E-utilities API, batch processing, citation management. For Python workflows, prefer biopython (Bio.Entrez). Use this for direct HTTP/REST work or custom API implementations.
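As a minimal sketch of what "direct HTTP/REST work" means here, the snippet below builds an esearch URL against the documented NCBI E-utilities endpoint. The helper name and the query term are illustrative, not taken from the skill itself:

```python
from urllib.parse import urlencode

# Base URL for NCBI E-utilities (eutils.ncbi.nlm.nih.gov)
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(term, retmax=20):
    """Build an esearch URL for a Boolean/MeSH PubMed query."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": retmax}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

# Illustrative Boolean/MeSH query
url = build_esearch_url('asthma[MeSH Terms] AND "2023"[PDAT]')
print(url)
```

Fetching the URL (e.g. with `urllib.request` or `requests`) returns a JSON payload whose `esearchresult.idlist` holds the matching PMIDs.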
Install with the Tessl CLI:

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill pubmed-database

Overall score: 81%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

npx tessl skill review --optimize ./path/to/skill
Discovery — 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its technical scope and differentiates itself from related tools (biopython). The explicit guidance on when to use this skill versus alternatives is excellent. The main weakness is that trigger terms are heavily technical, which may miss users who describe their needs in more natural language like 'search medical papers' or 'find research articles'.
Suggestions
- Add natural language trigger terms that users might say, such as 'medical literature search', 'research papers', 'NCBI database', or 'scientific articles'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Advanced Boolean/MeSH queries, E-utilities API, batch processing, citation management' and specifies 'Direct REST API access to PubMed' — these are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both what ('Direct REST API access to PubMed' with specific capabilities) and when ('For Python workflows, prefer biopython. Use this for direct HTTP/REST work or custom API implementations') with explicit guidance on when to choose this skill over alternatives. | 3 / 3 |
| Trigger Term Quality | Includes relevant technical terms like 'PubMed', 'REST API', 'MeSH queries', 'E-utilities', 'HTTP/REST', but these are more technical jargon than natural user language. Missing common variations users might say like 'search medical literature', 'find research papers', or 'NCBI'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: explicitly differentiates from biopython/Bio.Entrez for Python workflows, and specifies this is for 'direct HTTP/REST work or custom API implementations'. The PubMed + REST API combination creates a clear, non-conflicting scope. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 73%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive and well-structured skill with excellent actionability through concrete code examples and query syntax. The progressive disclosure is exemplary with clear navigation to reference files. However, it suffers from some verbosity in introductory sections and lacks explicit validation checkpoints in workflows involving API batch operations.
Suggestions
- Remove or significantly condense the 'Overview' and 'When to Use This Skill' sections — Claude already knows what PubMed is and can infer appropriate use cases
- Add explicit validation steps to the API workflows, e.g., 'Verify response status code is 200 before parsing' and 'Check esearchresult.count matches expected results'
- Remove the promotional 'Suggest Using K-Dense Web' section, which is not relevant to the skill's technical purpose
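The validation suggestion above can be sketched as a small checkpoint helper. The function name and the sample payload are illustrative, but the `esearchresult`/`count` keys match the documented E-utilities JSON response shape:

```python
import json

def validate_esearch_response(status_code, body, expected_min=1):
    """Validate an E-utilities esearch response before parsing further."""
    if status_code != 200:
        raise RuntimeError(f"esearch failed with HTTP {status_code}")
    data = json.loads(body)
    result = data.get("esearchresult")
    if result is None:
        raise ValueError("response has no 'esearchresult' key")
    count = int(result.get("count", 0))
    if count < expected_min:
        raise ValueError(f"expected at least {expected_min} results, got {count}")
    return result

# Sample payload mirroring the documented esearch JSON shape (hypothetical data)
sample = json.dumps({"esearchresult": {"count": "2", "idlist": ["123", "456"]}})
result = validate_esearch_response(200, sample)
print(result["idlist"])  # → ['123', '456']
```

Running a checkpoint like this between batch operations gives the error-recovery feedback loop the review notes is missing.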
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary explanations (e.g., 'PubMed is the U.S. National Library of Medicine's comprehensive database...') and verbose 'When to Use This Skill' sections that Claude doesn't need. However, the core technical content is reasonably efficient with good code examples. | 2 / 3 |
| Actionability | Provides fully executable Python code for API access, specific query syntax examples that are copy-paste ready, and concrete field tags with real usage patterns. The code examples are complete and functional. | 3 / 3 |
| Workflow Clarity | Workflows are listed with clear steps but lack explicit validation checkpoints. For API workflows involving batch operations, there is no feedback loop for error recovery or validation between operations. The 'Programmatic Data Extraction' workflow mentions error handling but doesn't show how to validate results. | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview in SKILL.md and well-signaled one-level-deep references to api_reference.md, search_syntax.md, and common_queries.md. Each reference file's purpose is clearly explained with specific 'When to consult' guidance and grep patterns for discovery. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation — 88%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 14 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | `metadata.version` is missing | Warning |
| Total | | 14 / 16 Passed |