Query ClinicalTrials.gov via API v2. Search trials by condition, drug, location, status, or phase. Retrieve trial details by NCT ID, export data, for clinical research and patient matching.
Score: 77% (0.97× average score across 3 eval scenarios)
Advisory: suggest reviewing before use.

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/clinicaltrials-database/SKILL.md`

Quality
Discovery — 82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and distinctive domain focus on ClinicalTrials.gov. It lists concrete actions and includes good trigger terms that users would naturally use. The main weakness is the lack of an explicit 'Use when...' clause to guide skill selection.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about clinical trials, wants to search ClinicalTrials.gov, mentions NCT numbers, or needs trial data for research or patient matching.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Query ClinicalTrials.gov via API v2', 'Search trials by condition, drug, location, status, or phase', 'Retrieve trial details by NCT ID', 'export data'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' with specific capabilities, but lacks an explicit 'Use when...' clause. The use cases 'for clinical research and patient matching' are mentioned but not framed as trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'ClinicalTrials.gov', 'trials', 'condition', 'drug', 'location', 'status', 'phase', 'NCT ID', 'clinical research', 'patient matching'. Good coverage of domain-specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a specific domain (ClinicalTrials.gov), API version (v2), and unique identifiers (NCT ID). Unlikely to conflict with other skills due to the specialized clinical trials focus. | 3 / 3 |
| Total | | 11 / 12 Passed |
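To make the first capability in the table concrete, here is a minimal sketch of the 'Retrieve trial details by NCT ID' action against the API v2 per-study endpoint. The endpoint path and `format=json` parameter reflect the public v2 API; verify them against the skill's own `scripts/query_clinicaltrials.py` before relying on them.

```python
import json
import urllib.request

# Base URL for ClinicalTrials.gov API v2 (assumed from the public docs).
BASE = "https://clinicaltrials.gov/api/v2"

def study_url(nct_id: str) -> str:
    """Build the per-study endpoint URL for a single NCT ID."""
    return f"{BASE}/studies/{nct_id}?format=json"

def fetch_trial(nct_id: str) -> dict:
    """Retrieve the full study record for one NCT ID as parsed JSON."""
    with urllib.request.urlopen(study_url(nct_id)) as resp:
        return json.load(resp)
```

The NCT ID doubles as a natural trigger term: when a user pastes an identifier like `NCT04000165`, an agent can route straight to this endpoint without a search step.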
Implementation — 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with excellent code examples and good progressive disclosure. The main weaknesses are verbosity in introductory sections (explaining what ClinicalTrials.gov is, listing obvious use cases) and missing explicit validation workflows for bulk data operations. The technical content is strong but could be tightened.
Suggestions
- Remove the 'When to Use This Skill' section entirely; these use cases are self-evident from the skill description and capabilities.
- Trim the Overview to just the technical essentials (API URL, rate limits, formats) without explaining what ClinicalTrials.gov is.
- Add an explicit validation workflow for bulk data retrieval: verify record counts, spot-check data integrity, handle partial failures.
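The last suggestion could be sketched as a small post-retrieval check. This is a hedged illustration, not the skill's implementation: the field names (`studies`, `protocolSection.identificationModule.nctId`) follow the API v2 JSON shape but should be treated as assumptions.

```python
def validate_bulk(pages: list[dict], expected_total: int) -> dict:
    """Aggregate paged results; report count mismatches, failed pages,
    and records missing an NCT ID (a simple integrity spot-check)."""
    studies, failed_pages = [], []
    for i, page in enumerate(pages):
        if "studies" not in page:  # partial failure: malformed or error page
            failed_pages.append(i)
            continue
        studies.extend(page["studies"])
    return {
        "retrieved": len(studies),
        "expected": expected_total,
        "count_ok": len(studies) == expected_total,
        "failed_pages": failed_pages,
        # spot-check: every record should carry an NCT ID
        "missing_nct": [
            s for s in studies
            if not s.get("protocolSection", {})
                    .get("identificationModule", {})
                    .get("nctId")
        ],
    }
```

Running this after pagination turns "did the export work?" into three explicit checks: the count matches, no pages silently failed, and no record lacks its identifier.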
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the 'When to Use This Skill' section, which lists obvious use cases Claude could infer. The overview explains what ClinicalTrials.gov is, which Claude already knows. However, the code examples are reasonably efficient. | 2 / 3 |
| Actionability | Excellent executable code examples throughout, with copy-paste-ready Python snippets. Specific API endpoints, parameters, and response paths are clearly documented with working examples for each capability. | 3 / 3 |
| Workflow Clarity | While individual operations are clear, the skill lacks explicit validation checkpoints. The rate-limit handling example shows retry logic, but there is no systematic workflow for bulk operations with verification steps. Error handling is shown but not integrated into a clear workflow. | 2 / 3 |
| Progressive Disclosure | Well structured, with clear sections progressing from Quick Start to Core Capabilities to Best Practices. References to external files (scripts/query_clinicaltrials.py, references/api_reference.md) are clearly signaled and one level deep. | 3 / 3 |
| Total | | 10 / 12 Passed |
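The Workflow Clarity note about retry logic not being integrated into a workflow could be addressed with a reusable wrapper like the sketch below. This is an illustrative pattern, not the skill's actual code; `with_retries` and its parameters are hypothetical names.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on failure (e.g. an HTTP 429 from rate limiting),
    back off exponentially and retry up to `attempts` times."""
    last_err = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_err
```

Wrapping each page fetch in `with_retries` and then feeding the results into a validation step gives the systematic bulk workflow the review asks for: fetch with backoff, then verify, rather than ad-hoc retry code inside one example.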
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (506 lines); consider splitting content into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 9 / 11 Passed | |