
pytdc

Therapeutics Data Commons. AI-ready drug discovery datasets (ADME, toxicity, DTI), benchmarks, scaffold splits, molecular oracles, for therapeutic ML and pharmacological prediction.

Quality: 52% (Does it follow best practices?)

Impact: 91%, 1.16x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/pytdc/SKILL.md

Quality

Discovery: 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description effectively identifies a clear, specialized niche (Therapeutics Data Commons for drug discovery) with strong domain-specific trigger terms that would help Claude distinguish it from other skills. However, it lacks concrete action verbs describing what the skill does and entirely omits a 'Use when...' clause, making it unclear when Claude should select this skill. It reads more like a tagline than an actionable skill description.

Suggestions

Add a 'Use when...' clause specifying trigger conditions, e.g., 'Use when the user asks about TDC datasets, drug discovery benchmarks, ADME/toxicity predictions, or needs to load therapeutic ML data.'

Replace the noun-heavy feature list with concrete action verbs, e.g., 'Loads and preprocesses AI-ready drug discovery datasets, runs benchmark evaluations with scaffold splits, and queries molecular oracles for pharmacological prediction.'

Specify the tool/library context (e.g., 'using the TDC Python library') to further clarify what this skill enables Claude to do.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (drug discovery/therapeutics) and lists several specific data types (ADME, toxicity, DTI, scaffold splits, molecular oracles), but these read more like a feature list than concrete actions the skill performs. No verbs describing what the skill actually does (e.g., 'loads datasets', 'runs benchmarks', 'generates splits'). | 2 / 3 |
| Completeness | There is no explicit 'Use when...' clause or equivalent trigger guidance. The description partially addresses 'what' (datasets, benchmarks, oracles) but lacks any explicit 'when should Claude use this' guidance. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak (no clear actions), so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes strong natural keywords a user in this domain would use: 'drug discovery', 'ADME', 'toxicity', 'DTI', 'benchmarks', 'scaffold splits', 'molecular oracles', 'therapeutic ML', 'pharmacological prediction'. These are highly relevant terms that cover common variations in this specialized domain. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description targets a very specific niche (Therapeutics Data Commons for drug discovery ML) with domain-specific terms like ADME, DTI, scaffold splits, and molecular oracles. This is highly unlikely to conflict with other skills. | 3 / 3 |
| Total | | 9 / 12 (Passed) |

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent actionable code examples covering the full PyTDC API surface, but suffers significantly from verbosity — it reads more like comprehensive library documentation than a focused skill. Dataset catalogs, task category explanations, and column format descriptions should be moved to reference files, leaving the main skill lean. Workflows are present but incomplete inline, deferring to external scripts without providing sufficient inline guidance or validation steps.

Suggestions

Drastically reduce the main file by moving dataset catalogs, task category listings, and data format descriptions into references/datasets.md, keeping only the Quick Start pattern and 2-3 key examples inline.

Remove the 'When to Use This Skill' section and explanatory text about what ADME/toxicity/DTI are — Claude already knows these concepts.

Flesh out the Common Workflows section with complete inline steps including validation (e.g., verify split sizes, check for data loading errors) rather than deferring to external scripts.

Consolidate the 'Available Task Categories' sections into a simple table or list in a reference file rather than giving each one a code example in the main skill.
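A validation checkpoint of the kind the third suggestion asks for can stay library-agnostic. A sketch using a hypothetical `check_split` helper (not part of PyTDC) that works on any dict of pandas DataFrames shaped like the one `get_split()` returns:

```python
import pandas as pd

def check_split(split, frac=(0.7, 0.1, 0.2), tol=0.05):
    """Raise if a partition is missing/empty or fractions drift from target."""
    parts = ("train", "valid", "test")
    missing = [p for p in parts if p not in split or split[p].empty]
    if missing:
        raise ValueError(f"empty or missing partitions: {missing}")
    total = sum(len(split[p]) for p in parts)
    for p, f in zip(parts, frac):
        observed = len(split[p]) / total
        if abs(observed - f) > tol:
            raise ValueError(f"{p} fraction {observed:.2f} deviates from {f}")
    return total

# Synthetic stand-in for a real TDC split
demo = {
    "train": pd.DataFrame({"Drug": ["C"] * 70, "Y": [0.1] * 70}),
    "valid": pd.DataFrame({"Drug": ["C"] * 10, "Y": [0.1] * 10}),
    "test": pd.DataFrame({"Drug": ["C"] * 20, "Y": [0.1] * 20}),
}
print(check_split(demo))  # total row count once all checks pass
```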

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~300+ lines, extensively listing dataset names, column descriptions, and task categories that Claude could easily look up or infer. Sections like 'When to Use This Skill' and explanations of what ADME/toxicity/DTI are add little value. Much of this reads like library documentation rather than a focused skill. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready code examples throughout: loading datasets, splitting data, using oracles, evaluating models, and converting molecular formats. The API patterns are concrete and specific. | 3 / 3 |
| Workflow Clarity | Workflows are listed but mostly defer to external scripts ('See scripts/benchmark_evaluation.py') rather than providing inline step-by-step guidance. The benchmark evaluation workflow mentions a 5-seed protocol but the inline code is incomplete with commented-out model training. No validation checkpoints or error recovery steps are included. | 2 / 3 |
| Progressive Disclosure | References to external files (references/oracles.md, scripts/*.py, references/utilities.md) are present and well-signaled, but the main file itself contains far too much inline content that should be in reference files: extensive dataset listings, processing utility catalogs, and task category enumerations bloat the overview. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
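For the incomplete 5-seed protocol flagged under Workflow Clarity, the reporting step can be sketched independently of any model. `train_and_score` below is a hypothetical stand-in: a real workflow would retrain on each seed's split and return the test-set metric, then report the mean and standard deviation across seeds:

```python
import random
import statistics

def train_and_score(seed: int) -> float:
    # Hypothetical stand-in: pretend training yields a seed-dependent MAE.
    rng = random.Random(seed)
    return 0.40 + 0.02 * rng.random()

scores = [train_and_score(seed) for seed in (1, 2, 3, 4, 5)]
mean, std = statistics.mean(scores), statistics.stdev(scores)
print(f"MAE: {mean:.3f} +/- {std:.3f}")
```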

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 (Passed) |
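The single warning is fixable in the skill's frontmatter. A sketch, assuming a YAML frontmatter block at the top of SKILL.md; only the `metadata.version` key is taken from the warning itself, and the version value is illustrative:

```yaml
---
name: pytdc
description: >-
  Therapeutics Data Commons. AI-ready drug discovery datasets (ADME,
  toxicity, DTI), benchmarks, scaffold splits, molecular oracles, for
  therapeutic ML and pharmacological prediction.
metadata:
  version: 1.0.0   # the field the validator reports as missing
---
```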

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

