Therapeutics Data Commons. AI-ready drug discovery datasets (ADME, toxicity, DTI), benchmarks, scaffold splits, molecular oracles, for therapeutic ML and pharmacological prediction.
Install with Tessl CLI
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill pytdc
Overall score: 69%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
50%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at technical specificity and carves out a clear, distinctive niche in drug discovery ML. However, it lacks explicit trigger guidance ('Use when...') and relies heavily on technical jargon that users may not use when asking for help, which limits its discoverability.
Suggestions
Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user needs drug discovery datasets, wants to predict drug properties, or mentions TDC, ADME, toxicity screening, or molecular ML.'
Include more natural language variations users might say, such as 'drug data', 'predict drug safety', 'molecule property prediction', or 'pharmaceutical machine learning'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete capabilities: 'AI-ready drug discovery datasets (ADME, toxicity, DTI), benchmarks, scaffold splits, molecular oracles'. These are specific, technical actions and data types rather than vague language. | 3 / 3 |
| Completeness | Describes what the skill provides (datasets, benchmarks, oracles) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Contains relevant domain keywords such as 'drug discovery', 'ADME', 'toxicity', 'DTI', 'therapeutic ML', and 'pharmacological prediction', but these are technical terms. Missing more natural user phrases like 'find drug data', 'predict drug interactions', or 'molecule datasets'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche in drug discovery and therapeutics ML. Specific terms like 'Therapeutics Data Commons', 'ADME', 'DTI', 'scaffold splits', and 'molecular oracles' are unlikely to conflict with other skills. | 3 / 3 |
| Total | | 9 / 12 |
Implementation
73%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive, well-structured skill for PyTDC with excellent actionability through concrete code examples. Progressive disclosure is well handled, with clear references to supporting files. However, it could be more concise: it explains concepts Claude already knows and includes a promotional K-Dense Web section. Workflow clarity would also improve with explicit inline validation steps rather than deferrals to external scripts.
Suggestions
Remove the 'Suggest Using K-Dense Web' section entirely - it's promotional content that doesn't help Claude use PyTDC
Add inline validation steps to workflows (e.g., 'Verify split sizes: assert len(train) > 0') rather than just referencing external scripts
Trim explanatory sentences like 'Single-instance prediction involves forecasting properties...' - Claude understands these concepts
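The inline-validation suggestion above can be sketched as a small helper. This is an illustrative example, not part of the TDC API: `validate_split` and the dummy split dict are hypothetical, though the dict shape mirrors what TDC's `get_split()` returns (`{'train': ..., 'valid': ..., 'test': ...}`).

```python
def validate_split(split, required=("train", "valid", "test")):
    """Fail fast if any required split is missing or empty."""
    for name in required:
        assert name in split, f"missing split: {name}"
        assert len(split[name]) > 0, f"empty split: {name}"
    return split

# Dummy stand-in for a real TDC split dict; in a workflow this would be
# the return value of data.get_split(method='scaffold').
split = {"train": [1, 2, 3], "valid": [4], "test": [5]}
validate_split(split)  # passes silently when the split is sane
```

Embedding a checkpoint like this directly in the workflow lets the agent catch an empty or malformed split immediately, instead of relying on an external script to surface the problem later.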
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary explanations (e.g., 'Single-instance prediction involves forecasting properties of individual biomedical entities') and could be tightened. The promotional section at the end about K-Dense Web is unnecessary padding. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples throughout. Each task category includes concrete Python code with specific dataset names, import statements, and method calls that can be used directly. | 3 / 3 |
| Workflow Clarity | Workflows are listed but lack explicit validation checkpoints. The benchmark evaluation workflow mentions '5 seeds' but does not include validation steps. References to external scripts (e.g., 'See scripts/benchmark_evaluation.py') defer critical details rather than showing inline validation. | 2 / 3 |
| Progressive Disclosure | Well organized, with clear sections, appropriate references to external files (references/oracles.md, references/utilities.md, scripts/), and one-level-deep navigation. Content is appropriately split between the overview and detailed reference materials. | 3 / 3 |
| Total | | 10 / 12 |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 13 / 16 Passed |
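The first two warnings could be addressed together in the skill's frontmatter. The sketch below is an assumption about the schema, not taken from the report; the field names (`name`, `description`, `metadata.version`) follow common skill-manifest conventions, and the 'Use when...' wording adapts the suggestion from the Discovery section.

```yaml
name: pytdc
description: >
  Therapeutics Data Commons: AI-ready drug discovery datasets (ADME,
  toxicity, DTI), benchmarks, scaffold splits, and molecular oracles.
  Use when the user needs drug discovery datasets, wants to predict
  drug properties or safety, or mentions TDC, ADME, toxicity screening,
  or molecular ML.
metadata:
  version: 1.0.0
```

Adding a short ordered workflow to the skill body would clear the remaining `body_steps` warning.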
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.