Medicinal chemistry filters. Apply drug-likeness rules (Lipinski, Veber), PAINS filters, structural alerts, and complexity metrics for compound prioritization and library filtering.
Overall: 65
Quality: 58% (Does it follow best practices?)
Impact: 63% (3.50x average score across 3 eval scenarios)
Validation: Passed (no known issues)
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/medchem/SKILL.md`

Quality
Discovery
82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description that clearly lists concrete capabilities and uses precise terminology that medicinal chemists would naturally use. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others. The specificity and distinctiveness are excellent for a specialized scientific domain.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about filtering compounds, checking drug-likeness, applying Lipinski's Rule of Five, PAINS screening, or evaluating chemical libraries.'
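The suggested clause would land in the skill's frontmatter description. A minimal sketch of what that could look like (the YAML frontmatter layout and field names are assumed, not verified against the Tessl skill spec; the wording reuses the suggestion above):

```yaml
# Hypothetical SKILL.md frontmatter -- field names assumed
name: medchem
description: >
  Medicinal chemistry filters: apply drug-likeness rules (Lipinski, Veber),
  PAINS filters, structural alerts, and complexity metrics for compound
  prioritization and library filtering. Use when the user asks about
  filtering compounds, checking drug-likeness, applying Lipinski's Rule of
  Five, PAINS screening, or evaluating chemical libraries.
```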
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: applying drug-likeness rules (Lipinski, Veber), PAINS filters, structural alerts, complexity metrics, compound prioritization, and library filtering. | 3 / 3 |
| Completeness | The 'what' is well covered with specific filters and actions, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. The 'when' is only implied by the domain context. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords a medicinal chemist would use: 'Lipinski', 'Veber', 'PAINS filters', 'structural alerts', 'drug-likeness', 'compound prioritization', 'library filtering', 'medicinal chemistry'. These are the exact terms domain users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche in medicinal chemistry filtering with specific named rules (Lipinski, Veber, PAINS). Very unlikely to conflict with other skills given the specialized domain terminology. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
Implementation
35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill covers a broad range of medchem functionality but is overly verbose, explaining obvious concepts and repeating similar code patterns across many sections. Several API examples appear speculative rather than verified against the actual library, reducing trustworthiness. The structure would benefit significantly from moving detailed module documentation into referenced files and keeping the main skill as a lean overview with one or two key workflow examples.
Suggestions
Cut the content by 50%+: remove the 'When to Use This Skill' section, trim best practices to 2-3 non-obvious items, and consolidate the 8 capability sections into a concise API overview table with one representative code example.
Verify all code examples against the actual medchem API. Several constructs (`mc.complexity.ComplexityFilter`, `mc.query.parse`, `mc.constraints.Constraints`) appear fabricated and would raise errors if executed.
Add validation steps to workflow patterns: check for None molecules after dm.to_mol(), verify result DataFrame shapes, and handle common failure modes like invalid SMILES.
Move detailed per-module API examples into the referenced files (references/api_guide.md) and keep SKILL.md focused on the 1-2 most common workflows with concrete, verified code.
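The validation suggestion above can be sketched as a small helper. This assumes the datamol convention that `dm.to_mol` returns `None` when a SMILES string fails to parse (verify against your installed version); the helper itself is hypothetical and library-agnostic:

```python
def partition_valid(smiles_list, parse):
    """Split SMILES strings into parsed molecules and failures.

    `parse` is any callable that returns None on failure, e.g.
    datamol's dm.to_mol (assumed behavior).
    """
    valid, failed = [], []
    for smi in smiles_list:
        mol = parse(smi)
        if mol is None:
            failed.append(smi)  # keep the raw string for error reporting
        else:
            valid.append(mol)
    return valid, failed

# Usage sketch (datamol import assumed):
# import datamol as dm
# mols, bad_smiles = partition_valid(library["smiles"], dm.to_mol)
# if bad_smiles:
#     print(f"Dropped {len(bad_smiles)} unparseable SMILES")
```

Running this check before any batch filtering step means downstream filters never see `None` molecules, which is the most common failure mode when triaging raw compound libraries.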
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is excessively verbose at ~300+ lines. It explains when to use the skill (Claude can infer this), lists obvious best practices ('Context Matters', 'Document Filtering Decisions'), and provides lengthy sections that could be condensed significantly. Many code examples are repetitive in structure, and the 'Available Rules/Filters/Groups/Catalogs' bullet lists add bulk without adding actionable value. | 1 / 3 |
| Actionability | Code examples are provided throughout and appear mostly executable, but several are likely inaccurate or speculative about the actual API (e.g., `alert_filter.check_mol()`, `mc.complexity.ComplexityFilter`, `mc.query.parse`, `mc.constraints.Constraints` do not clearly match the real medchem API). The result format is described vaguely ('returned as dictionaries') without showing actual output structure, reducing copy-paste reliability. | 2 / 3 |
| Workflow Clarity | The workflow patterns section provides reasonable multi-step sequences for compound triage and lead optimization. However, there are no validation checkpoints: no steps to verify molecule parsing succeeded, no error handling for invalid SMILES, no verification that filter results have expected shapes. For batch operations on compound libraries, this lack of validation is a notable gap. | 2 / 3 |
| Progressive Disclosure | The skill references external files (references/api_guide.md, references/rules_catalog.md, scripts/filter_molecules.py), which is good structure, but no bundle files are actually provided, making these references hollow. The main file itself is monolithic; much of the detailed API usage for each module could be split into reference files, keeping the SKILL.md as a concise overview. | 2 / 3 |
| **Total** | | **7 / 12 (Passed)** |
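One cheap way to catch speculative constructs like `mc.complexity.ComplexityFilter` before they ship in SKILL.md is an import-time smoke check. A sketch using only the standard library (the medchem paths in the comment are the ones this review flags as unverified, listed purely as inputs to check):

```python
import importlib

def api_exists(dotted_path):
    """Return True if a dotted attribute path resolves, e.g. 'medchem.complexity'.

    Imports the top-level module, then walks attributes one segment at a
    time; never instantiates or executes any filtering code.
    """
    module_name, _, rest = dotted_path.partition(".")
    try:
        obj = importlib.import_module(module_name)
    except ImportError:
        return False
    for part in rest.split(".") if rest else []:
        if not hasattr(obj, part):
            return False
        obj = getattr(obj, part)
    return True

# Paths the review flags as possibly fabricated -- check before documenting:
# for path in ("medchem.complexity.ComplexityFilter",
#              "medchem.query.parse",
#              "medchem.constraints.Constraints"):
#     print(path, api_exists(path))
```

A check like this only proves a name exists, not that its signature matches the documented usage, so it complements rather than replaces running the examples end to end.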
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | | 10 / 11 (Passed) |
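The single warning would likely be resolved by declaring a version under the metadata key in the SKILL.md frontmatter. A minimal sketch (the exact schema is inferred from the warning text `'metadata.version' is missing`, not verified against the spec):

```yaml
# Hypothetical frontmatter addition -- schema assumed from the warning
metadata:
  version: 1.0.0
```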