
clinicaltrials-gov-parser

Monitor and summarize competitor clinical trial status changes from ClinicalTrials.gov. Trigger: When user asks to track clinical trials, monitor trial status changes, get updates on specific trials, or analyze competitor trial activities. Use cases: Pharma competitive intelligence, trial monitoring, status tracking, recruitment updates, completion alerts.

Overall: 76 (2.06x)

Quality: 67%
Does it follow best practices?

Impact: 91% (2.06x)
Average score across 3 eval scenarios

Security (by Snyk): Advisory
Suggest reviewing before use

Optimize this skill with Tessl (the path contains a space, so it must be quoted):

```shell
npx tessl skill review --optimize "./scientific-skills/Evidence insights/clinicaltrials-gov-parser/SKILL.md"
```

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with explicit trigger guidance and clear use cases. The main weakness is that the description of specific capabilities could be more detailed: it says 'monitor and summarize' but doesn't specify which data points are tracked or what the summaries contain. The domain specificity (ClinicalTrials.gov, pharma) makes it highly distinctive.

Suggestions

- Expand the capability description to include specific actions like 'extract enrollment numbers, track phase transitions, compare timelines across competitors, generate status change reports'

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (clinical trials, ClinicalTrials.gov) and some actions (monitor, summarize, track status changes), but lacks comprehensive specific actions like what data is extracted, what summaries include, or how alerts work. | 2 / 3 |
| Completeness | Clearly answers both what (monitor and summarize competitor clinical trial status changes from ClinicalTrials.gov) and when (explicit 'Trigger:' clause with multiple scenarios plus 'Use cases:' section with specific applications). | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'track clinical trials', 'monitor trial status', 'competitor trial activities', 'recruitment updates', 'completion alerts', plus domain-specific terms like 'pharma competitive intelligence'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche with distinct triggers: ClinicalTrials.gov, clinical trials, pharma competitive intelligence, and trial status changes are highly specific and unlikely to conflict with other skills. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 44%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a reasonable API reference structure but lacks the actionable depth needed for Claude to effectively monitor clinical trials. The excessive boilerplate (security checklists, lifecycle status, evaluation criteria) consumes tokens without adding value, while the core workflow for actually monitoring and detecting status changes is underspecified.

Suggestions

- Remove boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that don't provide actionable guidance for Claude
- Add a concrete workflow showing the complete monitoring loop: initial search → store baseline → periodic check → detect changes → report differences
- Include actual error handling patterns and validation steps for API responses, especially for rate limiting and network failures
- Show a concrete example of what a 'status change' looks like in the output, so Claude knows what to report to users
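The suggested monitoring loop could be sketched roughly as follows. This is a minimal, hypothetical Python example, not code from the skill itself: the ClinicalTrials.gov API v2 endpoint, query parameters, and JSON field paths shown are assumptions to be verified against the live API, and the change-record format is just one possible shape for the 'status change' output.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed ClinicalTrials.gov API v2 endpoint (verify against current docs).
API = "https://clinicaltrials.gov/api/v2/studies"

def fetch_statuses(sponsor: str) -> dict:
    """Fetch {nctId: overallStatus} for a sponsor's trials (field paths assumed)."""
    params = urlencode({"query.spons": sponsor, "pageSize": 100})
    with urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    return {
        s["protocolSection"]["identificationModule"]["nctId"]:
            s["protocolSection"]["statusModule"]["overallStatus"]
        for s in data.get("studies", [])
    }

def detect_changes(baseline: dict, current: dict) -> list:
    """Diff a stored baseline {nctId: status} against a fresh fetch."""
    changes = []
    for nct_id, status in current.items():
        old = baseline.get(nct_id)
        if old is None:
            changes.append({"nctId": nct_id, "change": "new trial", "to": status})
        elif old != status:
            changes.append({"nctId": nct_id, "change": "status",
                            "from": old, "to": status})
    for nct_id in baseline:
        if nct_id not in current:
            changes.append({"nctId": nct_id, "change": "removed"})
    return changes
```

On each periodic run, the skill would fetch, diff against the stored baseline, report the resulting change records, then overwrite the baseline.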

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes substantial boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that add little actionable value and consume tokens. The core functionality is reasonably concise but surrounded by unnecessary scaffolding. | 2 / 3 |
| Actionability | Provides concrete Python and CLI examples that appear executable, but the code references a non-existent module structure (scripts/main.py, ClinicalTrialsMonitor class) without showing the actual implementation. It is more of an API specification than executable guidance. | 2 / 3 |
| Workflow Clarity | No clear workflow sequence for monitoring trials over time. Missing validation steps for API responses, error handling patterns, and feedback loops for when API calls fail or rate limits are hit. The 'monitor' use case lacks explicit steps for ongoing monitoring. | 1 / 3 |
| Progressive Disclosure | Good structure with clear sections and well-signaled references to external files (api-docs.md, status-codes.md, examples.md). Content is appropriately organized, with tables for parameters and methods. | 3 / 3 |

Total: 8 / 12 (Passed)
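The error handling gap flagged under Workflow Clarity could be closed with a small retry wrapper. The sketch below is illustrative only: the retry counts, backoff schedule, and treatment of HTTP 429 as the rate-limit signal are assumptions, not behavior documented by the skill or the ClinicalTrials.gov API.

```python
import time
import urllib.error
from urllib.request import urlopen

def get_with_retry(url: str, retries: int = 3, backoff: float = 1.0) -> bytes:
    """GET a URL, retrying on rate limits (HTTP 429) and transient network errors."""
    for attempt in range(retries):
        try:
            with urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as e:
            if e.code == 429 and attempt < retries - 1:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
                continue
            raise  # non-retryable status, or retries exhausted
        except urllib.error.URLError:
            if attempt < retries - 1:
                time.sleep(backoff * (2 ** attempt))
                continue
            raise
    raise RuntimeError("unreachable")
```

Wrapping every API call in a helper like this gives the skill one place to validate responses and surface a clear failure message when retries are exhausted.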

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

