Access ZINC (230M+ purchasable compounds). Search by ZINC ID/SMILES, similarity searches, 3D-ready structures for docking, analog discovery, for virtual screening and drug discovery.
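The search capabilities listed above can be sketched as a small URL builder. This is illustrative only: the endpoint shape and the `similarity`/`cutoff` parameter names are assumptions modelled on zinc.docking.org's URL conventions, not a confirmed API; check the skill's `references/api_reference.md` for the real routes.

```python
# Sketch of building a ZINC similarity-search URL (endpoint and parameter
# names are assumed, not confirmed against the live API).
from urllib.parse import quote

BASE = "https://zinc.docking.org"  # assumed host

def similarity_search_url(smiles: str, cutoff: int = 70) -> str:
    """Build a similarity query URL. SMILES must be percent-encoded,
    since characters like '=' and '#' are not URL-safe."""
    encoded = quote(smiles, safe="")
    return f"{BASE}/substances/search/?similarity={encoded}&cutoff={cutoff}"
```

For example, `similarity_search_url("CC=O")` percent-encodes the `=` in the SMILES string before it reaches the query string.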
87
Does it follow best practices? 73%
Impact: 97%
2.48x average score across 6 eval scenarios
Advisory: Suggest reviewing before use
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/zinc-database/SKILL.md`

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description with excellent specificity and trigger terms for the computational chemistry/drug discovery domain. The main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The description effectively communicates capabilities but relies on implied rather than explicit usage triggers.
Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about chemical compounds, needs to search ZINC database, mentions SMILES notation, or is doing virtual screening/docking work.'
- Consider adding common user phrasings like 'find compounds', 'chemical library', or 'compound database' to capture more natural language triggers.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Search by ZINC ID/SMILES, similarity searches, 3D-ready structures for docking, analog discovery'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly describes WHAT it does (access ZINC database, search, similarity searches, etc.) but lacks an explicit 'Use when...' clause. The 'for virtual screening and drug discovery' implies purpose but doesn't provide explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'ZINC', 'SMILES', 'docking', 'virtual screening', 'drug discovery', 'compounds', 'analog discovery'. Good coverage of domain-specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with specific domain (ZINC database, 230M+ compounds) and specialized triggers (SMILES, docking, virtual screening). Unlikely to conflict with other skills due to its narrow pharmaceutical/chemistry focus. | 3 / 3 |
| **Total** | | **11 / 12 Passed** |
Implementation
64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides solid, actionable guidance for accessing the ZINC database with executable curl commands and Python examples. The main weaknesses are verbosity in introductory sections, missing validation steps in workflows (especially for batch operations), and a monolithic structure that could benefit from better content splitting. The technical content is accurate and useful for drug discovery workflows.
Suggestions

- Remove or significantly condense the 'When to Use This Skill' and 'Database Versions' sections; Claude can infer appropriate use cases from the capabilities described.
- Add explicit validation steps to workflows, such as checking HTTP response codes and verifying expected data format before processing results.
- Move the Python integration section and detailed tranche system explanation to separate reference files, keeping only essential examples in the main skill.
- Add error handling examples showing how to detect and recover from common API failures (rate limiting, invalid SMILES, etc.).
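The two validation-related suggestions above could be addressed with a pattern like the following. This is a sketch, not the skill's actual code: the substance URL is an assumption modelled on zinc.docking.org's REST conventions, and the retryable status set is a common convention rather than documented ZINC behavior.

```python
# Sketch: check HTTP status, retry transient failures with backoff, and
# verify the payload shape before using it. The endpoint URL is assumed.
import json
import time
import urllib.error
import urllib.request

RETRYABLE = {429, 502, 503}  # rate limiting / transient server errors

def should_retry(status: int) -> bool:
    """True for transient failures worth retrying with backoff."""
    return status in RETRYABLE

def fetch_substance(zinc_id: str, retries: int = 3) -> dict:
    """Fetch one substance record with status and format checks."""
    url = f"https://zinc.docking.org/substances/{zinc_id}.json"  # assumed route
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                data = json.loads(resp.read())
        except urllib.error.HTTPError as err:
            if should_retry(err.code) and attempt < retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff
                continue
            raise
        if "zinc_id" not in data:  # verify expected format before use
            raise ValueError(f"malformed record for {zinc_id}: {data!r}")
        return data
    raise RuntimeError("retries exhausted")
```

The key point for the skill's workflows is the shape of the checkpoint, not the specific endpoint: status check, bounded retry on transient codes, and a schema sanity check before any downstream parsing.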
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the 'When to Use This Skill' section which lists obvious use cases, and the 'Database Versions' section explaining ZINC history that Claude doesn't need. However, the core API examples are reasonably efficient. | 2 / 3 |
| Actionability | The skill provides fully executable curl commands and Python code examples that are copy-paste ready. API endpoints are concrete with specific parameters, and the code examples include proper imports and complete function definitions. | 3 / 3 |
| Workflow Clarity | The four workflows are clearly sequenced with numbered steps, but they lack explicit validation checkpoints. For example, Workflow 1 doesn't verify API response success before parsing, and Workflow 2 doesn't include error handling for failed similarity searches. | 2 / 3 |
| Progressive Disclosure | The skill references 'references/api_reference.md' for advanced documentation, which is good. However, the main document is quite long (~350 lines) with content that could be split out (e.g., Python integration, tranche system details), and the reference is only mentioned once near the end. | 2 / 3 |
| **Total** | | **9 / 12 Passed** |
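One failure mode flagged in this table, invalid SMILES reaching the API, can be caught early with a cheap client-side pre-flight check. The helper below is a heuristic of my own construction, not part of the skill: it only catches gross errors like unbalanced ring/branch brackets, and real validation would parse the string with RDKit.

```python
# Heuristic pre-flight check for SMILES strings before submitting them to
# the API. Catches only gross errors (unbalanced brackets, whitespace);
# proper validation requires a chemistry toolkit such as RDKit.
def looks_like_smiles(s: str) -> bool:
    if not s or any(ch.isspace() for ch in s):
        return False
    pairs = {")": "(", "]": "["}
    stack = []
    for ch in s:
        if ch in "([":
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False  # mismatched or unopened bracket
    return not stack  # unclosed brackets fail too
```

A check like this costs nothing and turns a confusing API error into an immediate, local failure, which is exactly the kind of workflow checkpoint the review asks for.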
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | **10 / 11 Passed** | |
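The one failing check is the missing `metadata.version` field. Assuming Tessl reads it from the SKILL.md YAML frontmatter (the field path is taken from the check name; confirm the exact layout against the Tessl skill spec), the fix is a one-line addition:

```yaml
---
name: zinc-database
description: Access ZINC (230M+ purchasable compounds). ...
metadata:
  version: 0.1.0  # hypothetical value; its presence satisfies metadata_version
---
```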