PyTorch-native graph neural networks for molecules and proteins. Use when building custom GNN architectures for drug discovery, protein modeling, or knowledge graph reasoning. Best for custom model development, protein property prediction, retrosynthesis. For pre-trained models and diverse featurizers use deepchem; for benchmark datasets use pytdc.
Overall score: 89 (Advisory: suggest reviewing before use)

Quality: 67% (Does it follow best practices?)
Impact: 94% (1.18x average score across 9 eval scenarios)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/torchdrug/SKILL.md`

Quality
Discovery
100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its niche (PyTorch-native GNNs for molecular and protein modeling), provides explicit trigger conditions, and proactively disambiguates from related skills (deepchem, pytdc). The description is concise, uses third-person voice, includes domain-specific trigger terms that practitioners would naturally use, and minimizes conflict risk through explicit boundary-setting.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and domains: 'custom GNN architectures', 'drug discovery', 'protein modeling', 'knowledge graph reasoning', 'protein property prediction', 'retrosynthesis', and 'custom model development'. | 3 / 3 |
| Completeness | Clearly answers both 'what' (PyTorch-native GNNs for molecules and proteins, custom GNN architectures) and 'when' ('Use when building custom GNN architectures for drug discovery, protein modeling, or knowledge graph reasoning'). Also includes explicit disambiguation guidance ('For pre-trained models use deepchem; for benchmark datasets use pytdc'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'graph neural networks', 'GNN', 'molecules', 'proteins', 'drug discovery', 'protein modeling', 'knowledge graph reasoning', 'retrosynthesis', 'PyTorch'. These are terms domain practitioners naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with explicit boundary-setting against related skills (deepchem for pre-trained models, pytdc for benchmark datasets). The niche of PyTorch-native custom GNN development is clearly carved out and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
35%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is structurally well-organized, with clear section headers and good navigation to reference files, but it is far too verbose — repeating the same information across the Overview, Core Capabilities, Common Workflows, Quick Reference, and Summary sections. Actionable code examples are limited to a few sections, while most workflows remain abstract step lists. The referenced bundle files don't exist, undermining the progressive disclosure strategy.
Suggestions
Cut the document by at least 50%: remove the 'When to Use This Skill' section entirely (the description handles this), consolidate Core Capabilities and Common Workflows into a single section, and eliminate the Summary which duplicates the Quick Reference.
Add executable code to at least 2-3 of the Common Workflows instead of abstract step descriptions (e.g., show actual scaffold splitting code, actual KG query code).
Add validation checkpoints to workflows — e.g., after training, show how to check metrics before proceeding; after molecule generation, show inline RDKit validity check.
Provide the referenced bundle files (references/*.md) or remove the references and inline the essential content more concisely.
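As a concrete illustration of suggestion 2, a scaffold split can be shown as executable code rather than an abstract step. The sketch below is deliberately library-free: `toy_scaffold` is a hypothetical stand-in for a real Bemis-Murcko scaffold function (in the actual skill this would come from RDKit), so only the grouping-and-splitting logic is the point.

```python
from collections import defaultdict

def toy_scaffold(smiles):
    # Placeholder scaffold key; real code would use RDKit's MurckoScaffold.
    return smiles.split(".")[0][:4]

def scaffold_split(smiles_list, test_frac=0.2):
    # Group molecules by scaffold so no scaffold spans both splits.
    groups = defaultdict(list)
    for s in smiles_list:
        groups[toy_scaffold(s)].append(s)
    ordered = sorted(groups.values(), key=len, reverse=True)
    n_test = int(len(smiles_list) * test_frac)
    train, test = [], []
    for g in ordered:
        # Keep whole scaffold groups together when assigning splits.
        (test if len(test) < n_test else train).extend(g)
    return train, test

train, test = scaffold_split(["CCO", "CCN", "c1ccccc1", "c1ccccc1O", "CCCC"])
```

The invariant worth asserting in a workflow is that the scaffold sets of the two splits are disjoint, which is exactly the validation checkpoint the suggestions above call for.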
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive repetition. The 'When to Use This Skill' section explains things Claude already knows, the 'Core Capabilities' section repeats use cases and components that are then repeated again in 'Common Workflows', and the 'Quick Reference Cheat Sheet' and 'Summary' duplicate navigation information already provided throughout. The document could be cut by 60%+ without losing actionable content. | 1 / 3 |
| Actionability | The Quick Example and Integration Patterns sections provide executable code, but the five Common Workflows are all high-level step descriptions without executable code. The training loop in the Quick Example is missing `import torch`. Many workflows describe what to do abstractly ('Train center identification model') without showing how. | 2 / 3 |
| Workflow Clarity | Workflows are listed as numbered steps, which is good, but they lack validation checkpoints and error recovery loops. For multi-step processes like retrosynthesis planning or molecular generation, there are no explicit validation steps between stages. The troubleshooting section partially compensates but is disconnected from the workflows themselves. | 2 / 3 |
| Progressive Disclosure | The skill references many external files (references/molecular_property_prediction.md, references/protein_modeling.md, etc.) with clear one-level-deep navigation, which is good structure. However, no bundle files are provided, so all those references are broken. Additionally, too much content is inline that should be in reference files — the Core Capabilities section essentially previews each reference file's content redundantly. | 2 / 3 |
| Total | | 7 / 12 Passed |
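The missing validation checkpoints flagged in the Workflow Clarity row can be as small as a metric gate between workflow stages. A minimal sketch, assuming the evaluation step yields a plain metrics dict (the function name, metric key, and threshold below are all illustrative, not from the skill):

```python
def passes_checkpoint(metrics, min_auroc=0.70):
    """Return True only when the trained model clears the metric bar."""
    return metrics.get("auroc", 0.0) >= min_auroc

# Illustrative usage with made-up numbers: proceed to the next stage
# only if the gate passes; otherwise loop back to training.
ok = passes_checkpoint({"auroc": 0.82, "loss": 0.31})
bad = passes_checkpoint({"auroc": 0.55})
```

Inlining a gate like this between numbered steps would connect the troubleshooting advice to the workflows it is meant to serve.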
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 Passed | |