
pytdc

Therapeutics Data Commons. AI-ready drug discovery datasets (ADME, toxicity, DTI), benchmarks, scaffold splits, molecular oracles, for therapeutic ML and pharmacological prediction.

Quality: 52% (Does it follow best practices?)

Impact: 91% / 1.16x (Average score across 3 eval scenarios)

Security by Snyk: Advisory (Suggest reviewing before use)

Optimize this skill with Tessl: npx tessl skill review --optimize ./scientific-skills/pytdc/SKILL.md

Quality

Discovery

54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description effectively identifies a clear, specialized niche in drug discovery ML with strong domain-specific trigger terms that would help the right users find it. However, it lacks action verbs describing what the skill concretely does and entirely omits a 'Use when...' clause, making it read more like a tagline than an actionable skill description.

Suggestions

Add explicit action verbs describing what the skill does, e.g., 'Loads and preprocesses TDC datasets, runs benchmark evaluations, generates scaffold splits, and queries molecular oracles.'

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user needs to access TDC datasets, benchmark drug discovery models, perform ADME/toxicity predictions, or work with molecular property data.'
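Combining the two suggestions above, a revised description might read like the following sketch. The frontmatter layout follows the common SKILL.md convention, and the wording is illustrative rather than the skill's actual content:

```yaml
---
name: pytdc
description: >
  Loads and preprocesses Therapeutics Data Commons (TDC) datasets, runs
  benchmark evaluations, generates scaffold splits, and queries molecular
  oracles. Use when the user needs to access TDC datasets, benchmark drug
  discovery models, perform ADME/toxicity predictions, or work with
  molecular property or drug-target interaction (DTI) data.
---
```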

Dimension / Reasoning / Score

Specificity

Names the domain (drug discovery/therapeutics) and lists several specific data types (ADME, toxicity, DTI, scaffold splits, molecular oracles), but these read more like a feature list than concrete actions. No verbs describe what the skill actually does (e.g., 'loads datasets', 'runs benchmarks', 'generates splits').

2 / 3

Completeness

While it partially addresses 'what' (AI-ready datasets, benchmarks, etc.), there is no 'when' clause or explicit trigger guidance (no 'Use when...' or equivalent). Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also weak (no actions described), so this scores a 1.

1 / 3

Trigger Term Quality

Includes strong domain-specific trigger terms that users in this field would naturally use: 'ADME', 'toxicity', 'DTI', 'drug discovery', 'scaffold splits', 'molecular oracles', 'pharmacological prediction', 'therapeutic ML'. Good coverage of natural terms for the target audience.

3 / 3

Distinctiveness / Conflict Risk

The description targets a very specific niche — Therapeutics Data Commons for drug discovery ML — with highly domain-specific terminology. It is unlikely to conflict with other skills given the specialized vocabulary (ADME, DTI, scaffold splits, molecular oracles).

3 / 3

Total: 9 / 12 (Passed)

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent actionable code examples covering the full breadth of PyTDC's API, but it is far too verbose — it reads more like library documentation than a concise skill file. Much of the inline content (dataset catalogs, task category descriptions, data format explanations) duplicates what should be in the referenced bundle files, and the workflows lack validation checkpoints and complete implementations.

Suggestions

Move dataset catalogs, task category descriptions, and detailed API listings into the referenced bundle files (references/datasets.md, references/utilities.md) and keep only the Quick Start pattern and 2-3 key examples in SKILL.md.

Remove explanatory prose that Claude already knows (e.g., what ADME stands for, what single-instance prediction means, what DDI is) — just show the code patterns.

Complete the workflow sections with actual executable code rather than commented-out placeholders, and add validation steps (e.g., verify dataset loaded correctly, check split sizes, validate predictions shape before evaluation).

Actually provide the bundle files referenced in the skill, or remove the references to avoid broken navigation.
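The validation steps suggested above can be sketched as a plain-Python checkpoint. The helper below checks a TDC-style split dict (keys follow PyTDC's get_split() output); the expected fractions and tolerance are assumptions for this sketch, not part of the skill:

```python
# Illustrative validation checkpoint for a TDC-style split. The keys mirror
# PyTDC's get_split() output ('train'/'valid'/'test'); expected fractions
# and tolerance are assumptions chosen for this sketch.
def validate_split(split, expected_frac=(0.7, 0.1, 0.2), tol=0.05):
    """Raise if any part is missing/empty or far from its expected fraction."""
    for key in ("train", "valid", "test"):
        if key not in split or len(split[key]) == 0:
            raise ValueError(f"split part missing or empty: {key}")
    total = sum(len(split[k]) for k in ("train", "valid", "test"))
    for key, frac in zip(("train", "valid", "test"), expected_frac):
        observed = len(split[key]) / total
        if abs(observed - frac) > tol:
            raise ValueError(
                f"{key} fraction {observed:.2f} deviates from expected {frac}"
            )
    return total  # total row count, useful for logging
```

Called right after a dataset's scaffold split is generated, a check like this would catch an empty or mis-sized split before any training starts.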

Dimension / Reasoning / Score

Conciseness

The skill is extremely verbose at over 300 lines, listing extensive dataset catalogs, task categories, and descriptions that Claude doesn't need explained. Much of this is reference documentation that belongs in separate files (and the skill claims such files exist). Phrases like 'Single-instance prediction involves forecasting properties of individual biomedical entities' explain concepts Claude already knows.

1 / 3

Actionability

The skill provides fully executable, copy-paste ready code examples throughout — loading datasets, splitting data, using oracles, evaluating models, converting molecule formats. Import paths, method signatures, and parameter values are all concrete and specific.

3 / 3

Workflow Clarity

Workflows are listed (train model, benchmark evaluation, molecular generation) but they are incomplete — key steps are commented out as 'user implements' and the skill defers to script files for 'complete examples.' The benchmark workflow correctly shows the 5-seed protocol, but there are no validation checkpoints or error recovery steps for any workflow.

2 / 3
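The five-seed protocol the reviewer credits can be sketched as below. The training and evaluation steps are placeholders the skill would need to fill in (shown as comments), and the per-seed scores are invented for illustration:

```python
# Sketch of TDC's 5-seed benchmark protocol: rerun the pipeline once per
# seed, collect the metric, then report mean and standard deviation.
from statistics import mean, stdev

def aggregate_seed_scores(scores):
    """Collapse per-seed metric values into the (mean, std) pair TDC reports."""
    return mean(scores), stdev(scores)

# Per-seed loop (user supplies the model; shown here as comments):
# for seed in (1, 2, 3, 4, 5):            # TDC's standard five seeds
#     train, valid = resplit(seed)        # re-split with this seed
#     seed_scores.append(evaluate(fit(train, valid), test))

seed_scores = [0.71, 0.69, 0.73, 0.70, 0.72]  # illustrative metric values
avg, std = aggregate_seed_scores(seed_scores)
print(f"{avg:.3f} +/- {std:.3f}")
```

Completing the workflow means replacing the commented loop body with real training code rather than deferring to "user implements" placeholders.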

Progressive Disclosure

The skill references bundled files (references/oracles.md, references/utilities.md, scripts/*.py) for detailed content, which is good structure. However, no bundle files were actually provided, and the main SKILL.md itself contains enormous amounts of inline reference material (dataset lists, task categories, data format descriptions) that should be in those reference files instead.

2 / 3
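A bundle layout consistent with these observations might look like the tree below. The file names come from the references mentioned in the skill; how the content is divided among them is illustrative:

```
pytdc/
├── SKILL.md              # quick start + 2-3 key examples only
├── references/
│   ├── datasets.md       # dataset catalogs and task categories
│   ├── oracles.md        # molecular oracle details
│   └── utilities.md      # detailed API listings
└── scripts/              # complete, runnable workflow examples
```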

Total: 8 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
