Unified Python interface to 40+ bioinformatics services. Use when querying multiple databases (UniProt, KEGG, ChEMBL, Reactome) in a single workflow with consistent API. Best for cross-database analysis, ID mapping across services. For quick single-database lookups use gget; for sequence/file manipulation use biopython.
Overall: 88

Quality: 86% (does it follow best practices?)
Impact: 91% (1.49x average score across 3 eval scenarios)
Passed; no known issues
Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly communicates its purpose, trigger conditions, and boundaries. It names specific databases and use cases, provides explicit 'Use when' guidance, and proactively distinguishes itself from related tools (gget, biopython). The description is concise yet comprehensive, making it easy for Claude to select appropriately from a large skill set.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and capabilities: 'querying multiple databases', 'cross-database analysis', 'ID mapping across services', and names specific databases (UniProt, KEGG, ChEMBL, Reactome). Also differentiates from related tools with specific use cases. | 3 / 3 |
| Completeness | Clearly answers both 'what' (unified Python interface to 40+ bioinformatics services, cross-database analysis, ID mapping) and 'when' (explicit 'Use when querying multiple databases in a single workflow'). Also includes negative triggers distinguishing from gget and biopython. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a bioinformatics user would say: 'UniProt', 'KEGG', 'ChEMBL', 'Reactome', 'bioinformatics', 'ID mapping', 'cross-database analysis', 'databases'. These are terms users in this domain would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche (multi-database bioinformatics queries via unified API). Explicitly differentiates itself from related skills (gget for single-database lookups, biopython for sequence/file manipulation), which greatly reduces conflict risk. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid skill with excellent actionability—real, executable code examples covering the major BioServices capabilities. Progressive disclosure is well-handled with clear references to supporting files. The main weaknesses are moderate verbosity (the 'When to Use' section and some filler text could be trimmed) and missing validation/error-recovery steps in multi-step workflows, particularly for batch operations and asynchronous BLAST jobs.
Suggestions

- Add explicit validation checkpoints and polling/retry loops for asynchronous operations like BLAST (e.g., a while loop checking status with sleep intervals and timeout handling).
- Trim or remove the 'When to Use This Skill' bullet list—the section headers and code examples already make the use cases obvious, and this list consumes tokens without adding actionable value.
- Add a feedback loop for batch identifier conversion (e.g., validate output count matches input count, handle unmapped IDs explicitly).
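The polling pattern suggested for asynchronous BLAST jobs can be sketched generically. The helper below is a hypothetical illustration, not part of the skill itself: it accepts any zero-argument status callable (for instance, a lambda wrapping a BLAST job's status query) so the loop, timeout, and error handling can be shown without a live network call.

```python
import time

def poll_until_finished(check_status, timeout=300.0, interval=5.0,
                        done_states=("FINISHED",),
                        error_states=("ERROR", "FAILURE")):
    """Poll check_status() until it reports completion, failure, or timeout.

    check_status: zero-argument callable returning a status string
    such as "RUNNING" or "FINISHED".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = check_status()
        if status in done_states:
            return status
        if status in error_states:
            # Fail fast instead of polling a dead job until timeout.
            raise RuntimeError(f"job ended in state {status!r}")
        time.sleep(interval)
    raise TimeoutError(f"job still not finished after {timeout} seconds")
```

In a real workflow the callable would wrap the service's own status method, and `interval`/`timeout` would be tuned to the service's expected turnaround.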
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-structured but includes some unnecessary content: the 'When to Use This Skill' section is a long bullet list that largely restates what the code examples already demonstrate, the 'Integration with Other Tools' section lists tools without actionable guidance, and some explanatory text (e.g., 'BioServices excels at combining multiple services') is filler. The overview paragraph also explains what REST/SOAP are, which Claude knows. | 2 / 3 |
| Actionability | The skill provides extensive, executable Python code examples for each major capability—UniProt searches, KEGG pathway queries, BLAST jobs, identifier mapping, compound searches, and more. Code is copy-paste ready with real identifiers and realistic usage patterns. CLI scripts are also provided with concrete arguments. | 3 / 3 |
| Workflow Clarity | Multi-step workflows are listed (e.g., compound cross-referencing steps 1-4, protein analysis pipeline steps 1-5) but lack explicit validation checkpoints or error recovery loops. The BLAST section mentions checking status but doesn't show a polling loop. The batch identifier conversion script has no validation step mentioned, which is important for batch operations. | 2 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections progressing from individual service usage to multi-service workflows. References to external files (services_reference.md, workflow_patterns.md, identifier_mapping.md) are clearly signaled and one level deep. Scripts are listed with descriptions. The main file serves as an effective overview without being monolithic. | 3 / 3 |
| Total | | 10 / 12 Passed |
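The missing validation step for batch identifier conversion can be sketched as a small post-processing check. The helper below is a hypothetical example (the function and identifier names are illustrative, not from the skill): it compares a conversion result, given as a source-to-target mapping, against the submitted inputs so unmapped IDs are surfaced instead of silently dropped.

```python
def check_mapping(input_ids, mapping):
    """Compare a batch ID-conversion result against its input.

    input_ids: iterable of source identifiers submitted for conversion
    mapping: dict of source identifier -> converted identifier(s)
    Returns (mapped, unmapped) lists so unmapped IDs can be retried
    or reported explicitly.
    """
    ids = list(input_ids)
    mapped = [i for i in ids if mapping.get(i)]
    unmapped = [i for i in ids if not mapping.get(i)]
    # Feedback loop: every input must be accounted for exactly once.
    if len(mapped) + len(unmapped) != len(ids):
        raise ValueError("mapping check lost identifiers")
    return mapped, unmapped
```

A workflow would call this right after the conversion step and branch on a non-empty `unmapped` list (retry, log, or report), which is the kind of checkpoint the review above asks for.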
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 Passed |