Access ZINC (230M+ purchasable compounds). Search by ZINC ID/SMILES, similarity searches, 3D-ready structures for docking, analog discovery, for virtual screening and drug discovery.
Install with Tessl CLI
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill zinc-database

Overall score
80%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
83%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description with excellent specificity and trigger terms for the computational chemistry/drug discovery domain. The main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The description effectively communicates capabilities but relies on implied rather than explicit usage triggers.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about ZINC database, compound searches, molecular similarity, or needs structures for virtual screening/docking studies.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Search by ZINC ID/SMILES, similarity searches, 3D-ready structures for docking, analog discovery'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly describes WHAT it does (access ZINC database, search, similarity searches, etc.) but lacks an explicit 'Use when...' clause. The use cases are implied through 'for virtual screening and drug discovery' but not explicitly stated as triggers. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'ZINC', 'SMILES', 'docking', 'virtual screening', 'drug discovery', 'compounds', 'analog discovery'. Good coverage of domain-specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a specific database name (ZINC), domain-specific terms (SMILES, docking, purchasable compounds), and a clear niche in computational chemistry/drug discovery. Unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
73%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a comprehensive and actionable skill for accessing the ZINC database with excellent executable examples and clear API documentation. The main weaknesses are moderate verbosity in introductory sections and missing validation/error handling steps in the workflows. The progressive disclosure and organization are strong, making it easy to navigate from basic to advanced usage.
Suggestions
Remove or condense the 'When to Use This Skill' section - these use cases are self-evident from the overview and add unnecessary tokens
Add validation checkpoints to workflows, such as checking HTTP response codes and handling API errors before proceeding to data processing steps
Trim the 'Database Versions' section to a single line noting ZINC22 is current; the historical context adds little value for task execution
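The second suggestion can be sketched as a small, framework-agnostic checkpoint that gates data processing on a successful, well-formed API reply. This is an illustrative helper, not code from the skill itself; the function name and error messages are assumptions:

```python
import json


def validate_response(status, body):
    """Validation checkpoint for an API reply.

    Verifies the HTTP status code and that the body parses as JSON
    before any downstream data-processing step runs, so workflows
    fail loudly at the fetch step rather than on malformed data.
    """
    if status != 200:
        raise RuntimeError(f"unexpected HTTP status {status}")
    try:
        return json.loads(body)
    except json.JSONDecodeError as err:
        raise RuntimeError("response body is not valid JSON") from err
```

A workflow step would call this immediately after each request (e.g. `data = validate_response(resp.status, resp.text)`) and only proceed to similarity-search post-processing once it returns.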
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the 'When to Use This Skill' section, which lists obvious use cases, and the 'Database Versions' section explaining ZINC history. The overview also repeats information from the description. However, the API examples and code sections are reasonably efficient. | 2 / 3 |
| Actionability | The skill provides fully executable curl commands and Python code examples that are copy-paste ready. API endpoints are clearly specified with concrete parameters, and the workflows include specific, runnable code snippets for common tasks like similarity searches and batch retrieval. | 3 / 3 |
| Workflow Clarity | The four workflows are clearly sequenced with numbered steps, but they lack explicit validation checkpoints. For example, Workflow 1 doesn't verify API response success, and Workflow 2 doesn't include error handling for failed similarity searches. No feedback loops for error recovery are present. | 2 / 3 |
| Progressive Disclosure | The skill is well organized, with clear sections progressing from overview to specific capabilities to workflows. It appropriately references 'references/api_reference.md' for detailed technical information and external resources (ZINC Wiki, documentation) for advanced topics, maintaining one-level-deep references. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation
88%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 14 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 14 / 16 Passed |
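Both warnings could be addressed in the skill's SKILL.md frontmatter. A hedged sketch follows, assuming the common skill-frontmatter field names; the exact key path the validator expects for `metadata.version` is an assumption:

```yaml
---
name: zinc-database
description: >
  Access ZINC (230M+ purchasable compounds). Search by ZINC ID/SMILES,
  run similarity searches, and retrieve 3D-ready structures for docking
  and analog discovery. Use when the user asks about the ZINC database,
  compound searches, molecular similarity, or needs structures for
  virtual screening/docking studies.
metadata:
  version: 1.0.0  # assumed location; supplies the missing metadata_version
---
```

The reworded description keeps the existing trigger terms while adding the explicit 'Use when...' clause the discovery review asks for.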
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.