Query ClinicalTrials.gov via API v2. Search trials by condition, drug, location, status, or phase. Retrieve trial details by NCT ID and export data for clinical research and patient matching.
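The capabilities in that description can be sketched as a minimal request builder. This is a hedged sketch, not the skill's actual code: the base URL and the `query.cond` / `filter.overallStatus` / `pageSize` parameter names follow the public API v2 conventions, but verify them against the official API reference before relying on them.

```python
import urllib.parse

# Public ClinicalTrials.gov API v2 studies endpoint (assumed stable).
BASE = "https://clinicaltrials.gov/api/v2/studies"

def build_search_url(condition, status=None, page_size=20):
    """Build a search URL for trials matching a condition,
    optionally filtered by recruitment status."""
    params = {"query.cond": condition, "pageSize": page_size}
    if status:
        params["filter.overallStatus"] = status
    return BASE + "?" + urllib.parse.urlencode(params)

def build_detail_url(nct_id):
    """Build the URL for a single trial's full record by NCT ID."""
    return f"{BASE}/{nct_id}"

search_url = build_search_url("melanoma", status="RECRUITING")
detail_url = build_detail_url("NCT04267848")
```

Fetching those URLs with any HTTP client returns JSON study records; the sketch only constructs requests so the parameter mapping stays visible.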
Install with Tessl CLI
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill clinicaltrials-database

Overall score: 79%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
Score: 83%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description with excellent specificity and distinctive domain terminology. It clearly communicates concrete capabilities and includes natural trigger terms users would use. The main weakness is the lack of an explicit 'Use when...' clause, which would make the trigger conditions more explicit for skill selection.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about clinical trials, wants to search ClinicalTrials.gov, mentions NCT numbers, or needs trial data for research or patient matching.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Query ClinicalTrials.gov via API v2', 'Search trials by condition, drug, location, status, or phase', 'Retrieve trial details by NCT ID', 'export data'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause. The use cases 'for clinical research and patient matching' are mentioned but not framed as explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'ClinicalTrials.gov', 'trials', 'condition', 'drug', 'location', 'status', 'phase', 'NCT ID', 'clinical research', 'patient matching'. Good coverage of domain-specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: ClinicalTrials.gov API, NCT ID, clinical trials. Very unlikely to conflict with other skills due to specific domain terminology and data source. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
Score: 73%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, actionable guidance for querying ClinicalTrials.gov with excellent code examples and good progressive disclosure. However, it suffers from verbosity in introductory sections, includes an inappropriate promotional plug, and lacks explicit validation steps for bulk operations that could fail partially.
Suggestions
Remove the 'When to Use This Skill' section entirely - these use cases are obvious and waste tokens
Delete the 'Suggest Using K-Dense Web' promotional section which is inappropriate for a skill file
Add validation checkpoints to the bulk retrieval workflow (e.g., verify record counts, handle partial failures, log progress)
Trim the Overview section to just the technical essentials (API URL, rate limits, formats) without explaining what ClinicalTrials.gov is
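The validation-checkpoint suggestion above can be illustrated with a small pagination loop. This is a sketch under assumptions: `fetch_page` is a hypothetical callable returning `(records, next_page_token)`, standing in for whatever wrapper the skill provides around the v2 `/studies` endpoint's `pageToken` pagination.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bulk_retrieval")

def retrieve_all(fetch_page, expected_total=None, max_pages=1000):
    """Paginate through all results with the suggested checkpoints:
    log progress per page, stop cleanly on a partial failure, and
    verify the final record count against an expected total."""
    records, token, failures = [], None, 0
    for page in range(max_pages):
        try:
            batch, token = fetch_page(token)
        except Exception as exc:
            # Partial failure: record it instead of silently losing data.
            failures += 1
            log.warning("page %d failed: %s", page, exc)
            break
        records.extend(batch)
        log.info("page %d: %d records (running total %d)",
                 page, len(batch), len(records))
        if token is None:  # no next page token -> done
            break
    if expected_total is not None and len(records) != expected_total:
        log.warning("count mismatch: got %d, expected %d",
                    len(records), expected_total)
    return records, failures
```

Returning the failure count alongside the records lets the caller decide whether a partial result is acceptable or the run should be retried.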
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the 'When to Use This Skill' section, which lists obvious use cases Claude could infer. The overview explains what ClinicalTrials.gov is (which Claude knows), and the promotional section at the end is entirely unnecessary filler. | 2 / 3 |
| Actionability | Provides fully executable Python code examples throughout, with specific API endpoints, parameter names, and response parsing. Code is copy-paste ready with proper imports and realistic examples for each capability. | 3 / 3 |
| Workflow Clarity | Individual operations are clear, but multi-step workflows like bulk data retrieval lack explicit validation checkpoints. The pagination example doesn't verify data integrity, and there's no feedback loop for handling partial failures in batch operations. | 2 / 3 |
| Progressive Disclosure | Well-structured with clear sections progressing from Quick Start to Core Capabilities to Best Practices. References to external files (scripts/query_clinicaltrials.py, references/api_reference.md) are clearly signaled and one level deep. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation
Score: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (507 lines); consider splitting into references/ and linking | Warning |
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 13 / 16 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.