Medicinal chemistry filters. Apply drug-likeness rules (Lipinski, Veber), PAINS filters, structural alerts, and complexity metrics for compound prioritization and library filtering.
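The drug-likeness rules named above reduce to simple threshold checks on computed molecular descriptors. A minimal sketch of Lipinski's Rule of Five and the Veber criteria, assuming the descriptors (molecular weight, logP, H-bond donors/acceptors, rotatable bonds, TPSA) have already been computed with a cheminformatics toolkit such as RDKit; the numeric inputs in the example are illustrative, not taken from any real compound:

```python
def lipinski_violations(mw, logp, hbd, hba):
    """Count Rule of Five violations from precomputed descriptors."""
    return sum([
        mw > 500,    # molecular weight over 500 Da
        logp > 5,    # calculated logP over 5
        hbd > 5,     # more than 5 H-bond donors
        hba > 10,    # more than 10 H-bond acceptors
    ])

def passes_veber(rotatable_bonds, tpsa):
    """Veber criteria: at most 10 rotatable bonds and TPSA at most 140 A^2."""
    return rotatable_bonds <= 10 and tpsa <= 140

# Hypothetical descriptor values for a small, drug-like molecule:
print(lipinski_violations(mw=350.4, logp=2.1, hbd=2, hba=5))  # → 0
print(passes_veber(rotatable_bonds=7, tpsa=90.0))             # → True
```

Most filtering pipelines tolerate one Lipinski violation rather than requiring zero, which is why counting violations is more useful than a single pass/fail boolean.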
Score: 67
Best practices: 62%
Impact: 63% (3.50x average score across 3 eval scenarios)
Passed. No known issues.
Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/medchem/SKILL.md

Quality
Discovery
82%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description that clearly lists concrete capabilities and uses precise terminology that medicinal chemists would naturally use. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others. The specificity and distinctiveness are excellent for a specialized scientific domain.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about filtering compound libraries, checking drug-likeness, applying Lipinski's Rule of Five, or flagging PAINS compounds.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: applying drug-likeness rules (Lipinski, Veber), PAINS filters, structural alerts, complexity metrics, compound prioritization, and library filtering. | 3 / 3 |
| Completeness | The 'what' is well covered with specific filters and actions, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. The 'when' is only implied by the domain context. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords a medicinal chemist would use: 'Lipinski', 'Veber', 'PAINS filters', 'structural alerts', 'drug-likeness', 'compound prioritization', 'library filtering', 'medicinal chemistry'. These are the exact terms domain users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche in medicinal chemistry filtering with specific named rules (Lipinski, Veber, PAINS). Very unlikely to conflict with other skills given the specialized domain terminology. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
42%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill covers the medchem library comprehensively but is far too verbose, spending many tokens on explanations Claude doesn't need and listing information that could be in reference files. The API examples appear partially speculative, which undermines actionability. The progressive disclosure structure is good, but the main file should be much leaner with more content pushed to reference files.
Suggestions
Cut the content by at least 50%: remove the 'When to Use This Skill' section, the 'Best Practices' platitudes, and explanatory text before code blocks. Move the detailed API examples for each module into the referenced api_guide.md file.
Verify all code examples against the actual medchem API—the complexity, constraints, query language, and catalogs sections appear to have fabricated APIs. Only include examples you can confirm are accurate.
Add validation steps to workflow patterns: check for None molecules after dm.to_mol(), validate result shapes, and include error handling for failed parsing in batch operations.
Consolidate the 8 numbered capability sections into a concise table or brief list with one example each, keeping only the most common workflow (Pattern 1) in the main file and moving others to reference files.
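The validation suggestion above can be sketched library-agnostically. In datamol, `dm.to_mol()` returns `None` when a SMILES string fails to parse, so batch workflows should partition inputs before filtering; the same guard works for any parser that signals failure with `None`. The toy parser below is purely illustrative and stands in for the real `dm.to_mol`:

```python
def partition_parsed(smiles_list, to_mol):
    """Split inputs into parsed molecules and failed SMILES so that
    downstream batch filters never operate on None entries."""
    mols, failures = [], []
    for smi in smiles_list:
        mol = to_mol(smi)
        if mol is None:
            failures.append(smi)
        else:
            mols.append(mol)
    return mols, failures

# Toy stand-in for dm.to_mol: rejects empty or obviously bad input.
def toy_to_mol(smi):
    return {"smiles": smi} if smi and not smi.startswith("?") else None

inputs = ["CCO", "?bad", "c1ccccc1"]
mols, failures = partition_parsed(inputs, toy_to_mol)
# Shape check: every input is accounted for exactly once.
assert len(mols) + len(failures) == len(inputs)
```

Logging `failures` alongside the filter results gives the output-integrity trail the review asks for, at the cost of one extra pass over the library.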
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~300+ lines, explaining many concepts Claude already knows (what drug-likeness rules are, what structural alerts are, what complexity metrics are). The 'When to Use This Skill' section is unnecessary padding. Many sections include explanatory text that doesn't add actionable value, and the best practices section states obvious guidelines like 'context matters' and 'document filtering decisions.' | 1 / 3 |
| Actionability | Code examples are provided throughout and appear mostly executable, but several are likely inaccurate or speculative about the actual API (e.g., the result format descriptions are vague like 'Results are returned as dictionaries with pass/fail status', the complexity module API and constraints module API look fabricated, and the query language section appears invented). Without verified API accuracy, these examples could mislead rather than help. | 2 / 3 |
| Workflow Clarity | The workflow patterns section provides reasonable multi-step sequences for compound filtering, but there are no validation checkpoints or error handling steps. For batch operations on compound libraries, there should be validation of molecule parsing (handling None from dm.to_mol), checking result shapes, and verifying output integrity. The workflows assume everything succeeds without feedback loops. | 2 / 3 |
| Progressive Disclosure | The skill has a clear overview structure with well-signaled references to external files (references/api_guide.md, references/rules_catalog.md, scripts/filter_molecules.py). Content is organized into logical sections with increasing specificity, and the references section provides one-level-deep pointers with clear descriptions of what each file contains. | 3 / 3 |
| Total | | 8 / 12 Passed |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 Passed | |
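The single warning can be cleared by declaring a version in the skill's frontmatter. A hypothetical sketch, assuming Tessl nests the field under a `metadata` key as the check name `metadata.version` suggests; all field values shown are illustrative:

```yaml
---
name: medchem
description: Medicinal chemistry filters. Apply drug-likeness rules (Lipinski, Veber), PAINS filters, structural alerts, and complexity metrics for compound prioritization and library filtering.
metadata:
  version: 1.0.0
---
```

Consult the Tessl skill spec for the exact schema before committing this change.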