
deepchem

Molecular ML with diverse featurizers and pre-built datasets. Use for property prediction (ADMET, toxicity) with traditional ML or GNNs when you want extensive featurization options and MoleculeNet benchmarks. Best for quick experiments with pre-trained models, diverse molecular representations. For graph-first PyTorch workflows use torchdrug; for benchmark datasets use pytdc.

Overall score: 79 (1.36x)

Quality: 75% (Does it follow best practices?)

Impact: 83% (1.36x), average score across 3 eval scenarios

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/deepchem/SKILL.md`

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines its niche in molecular machine learning with specific capabilities, strong domain-appropriate trigger terms, and explicit guidance on both when to use it and when to use alternative skills instead. The cross-referencing of related tools (torchdrug, pytdc) is particularly effective for disambiguation. Minor note: it uses second person 'you' in one place ('when you want'), but the overall description is largely capability-focused rather than conversational.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions and capabilities: molecular featurizers, pre-built datasets, property prediction (ADMET, toxicity), traditional ML, GNNs, featurization options, MoleculeNet benchmarks, pre-trained models, diverse molecular representations. | 3 / 3 |
| Completeness | Clearly answers both 'what' (molecular ML with featurizers, property prediction, pre-built datasets) and 'when' ('Use for property prediction...when you want extensive featurization options and MoleculeNet benchmarks', 'Best for quick experiments'). Also includes explicit negative triggers directing to alternative skills. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords a user in this domain would use: 'molecular ML', 'featurizers', 'ADMET', 'toxicity', 'property prediction', 'GNNs', 'MoleculeNet', 'molecular representations', 'pre-trained models'. Also distinguishes from related tools (torchdrug, pytdc). | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with clear niche (molecular ML featurization) and explicitly differentiates from related skills by naming alternatives: 'For graph-first PyTorch workflows use torchdrug; for benchmark datasets use pytdc.' This greatly reduces conflict risk. | 3 / 3 |
| **Total** | | **12 / 12 (Passed)** |

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent executable code examples covering the full DeepChem workflow, but it is far too verbose — repeating key advice multiple times, explaining concepts Claude already knows, and inlining reference-level content that should live in separate files. Workflow clarity is decent but lacks validation checkpoints for catching common silent failures like invalid SMILES or featurization errors.

Suggestions

- Cut the content by 50-60%: remove the 'When to Use' bullet list (redundant with the description), eliminate repeated scaffold splitting advice (mentioned 4+ times), and move the model selection table, featurizer decision tree, and common pitfalls to reference files.
- Remove explanations of concepts Claude already knows: what overfitting is, what random forests are, what SMILES strings are, what imbalanced data means.
- Add validation checkpoints to workflows: verify the dataset loaded correctly (check `dataset.X.shape`, handle featurization failures), check for NaN features, and validate model convergence (e.g., compare train vs. valid loss) before final evaluation.
- Move the detailed featurizer catalog, model selection table, dataset lists, and troubleshooting sections into the referenced `references/api_reference.md` file, keeping only a concise summary with pointers in SKILL.md.
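The validation-checkpoint suggestion above could be sketched as follows. This is a minimal illustration, not DeepChem API: it assumes features arrive as a list of per-molecule vectors (e.g. `dataset.X` converted to lists), and the helper name `validate_featurized` is hypothetical.

```python
import math

def validate_featurized(X, y, n_input_molecules=None):
    """Sanity-check featurized data before training.

    X: list of feature vectors, one per successfully featurized molecule
    y: list of labels aligned with X
    Returns indices of rows that are safe to train on.
    """
    if n_input_molecules is not None and len(X) != n_input_molecules:
        # Featurizers commonly drop molecules with unparseable SMILES
        print(f"warning: {n_input_molecules - len(X)} molecules failed featurization")
    if len(X) != len(y):
        raise ValueError("features and labels are misaligned")
    ok = []
    for i, row in enumerate(X):
        if any(math.isnan(v) for v in row):
            print(f"warning: NaN features in row {i}; dropping")
        else:
            ok.append(i)
    return ok
```

Passing the original molecule count makes silently dropped inputs visible, since DeepChem featurizers typically skip molecules they cannot parse rather than raising.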

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~400+ lines. It explains concepts Claude already knows (what SMILES strings are, what random forests are, what overfitting is), includes extensive 'When to Use' lists that restate the description, provides redundant selection guides (decision tree + table + text), and repeats scaffold splitting advice at least 4 times across different sections. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready code examples throughout: data loading, featurization, splitting, model training, evaluation, and prediction. CLI examples for scripts are concrete with specific flags and arguments. | 3 / 3 |
| Workflow Clarity | Three end-to-end workflows are clearly sequenced with numbered steps and complete code. However, there are no validation checkpoints: no steps to verify data loaded correctly, check for featurization failures (common with invalid SMILES), or validate model convergence before proceeding to evaluation. For ML workflows where bad featurization silently produces garbage, this is a notable gap. | 2 / 3 |
| Progressive Disclosure | The skill references `references/api_reference.md`, `references/workflows.md`, and a `scripts/` directory with clear descriptions of when to use each. However, no bundle files were provided to verify these exist, and the main SKILL.md itself contains far too much inline content (full API tables, extensive featurizer catalogs, common pitfalls) that should be in reference files, undermining the progressive disclosure pattern. | 2 / 3 |
| **Total** | | **8 / 12 (Passed)** |
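The train-versus-valid convergence check flagged in the Workflow Clarity row could look like the sketch below. The function name and thresholds are illustrative assumptions, not part of the reviewed skill or of DeepChem.

```python
def check_convergence(train_losses, valid_losses, overfit_ratio=1.5):
    """Flag common failure modes before trusting a final evaluation.

    train_losses / valid_losses: per-epoch loss histories.
    Returns a list of warning strings (empty means no obvious problem).
    """
    warnings = []
    if train_losses[-1] >= train_losses[0]:
        warnings.append("training loss did not decrease; model may not have converged")
    if valid_losses[-1] > overfit_ratio * train_losses[-1]:
        warnings.append("validation loss far above training loss; likely overfitting")
    if valid_losses[-1] > min(valid_losses):
        warnings.append("validation loss rose after its minimum; consider early stopping")
    return warnings
```

Running such a check between training and final evaluation turns silent failures (non-converging models, heavy overfitting) into explicit warnings the workflow can act on.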

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure:

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (596 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | | **9 / 11 (Passed)** |

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

