Unified Python interface to 40+ bioinformatics services. Use when querying multiple databases (UniProt, KEGG, ChEMBL, Reactome) in a single workflow with consistent API. Best for cross-database analysis, ID mapping across services. For quick single-database lookups use gget; for sequence/file manipulation use biopython.
Overall score: 86

Quality: 82% (Does it follow best practices?)
Impact: 91% (1.49x average score across 3 eval scenarios)
Passed. No known issues.
Quality
Discovery
Score: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly communicates its purpose, scope, and appropriate use cases. It names specific databases and actions, provides explicit trigger guidance with 'Use when...', and proactively differentiates itself from related tools (gget, biopython) to minimize selection conflicts. The description is concise yet comprehensive.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and capabilities: 'querying multiple databases', 'cross-database analysis', 'ID mapping across services', and names specific databases (UniProt, KEGG, ChEMBL, Reactome). Also differentiates from related tools with specific use cases. | 3 / 3 |
| Completeness | Clearly answers both 'what' (unified Python interface to 40+ bioinformatics services, cross-database analysis, ID mapping) and 'when' (explicit 'Use when querying multiple databases in a single workflow' plus differentiation guidance on when NOT to use it, pointing to gget and biopython for other cases). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a bioinformatics user would say: 'UniProt', 'KEGG', 'ChEMBL', 'Reactome', 'bioinformatics', 'ID mapping', 'cross-database analysis', 'databases'. These are terms users in this domain would naturally use when requesting this kind of work. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche (multi-database bioinformatics queries with consistent API). Explicitly distinguishes itself from related skills (gget for single-database lookups, biopython for sequence/file manipulation), which directly reduces conflict risk. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
Score: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with excellent executable code examples covering the major BioServices capabilities. Its main weaknesses are moderate verbosity (some sections explain things Claude already knows or could be offloaded to reference files) and workflow clarity that lacks explicit validation/feedback loops for multi-step operations. The progressive disclosure structure is well-designed in principle but the main file carries too much detail inline.
Suggestions
Add explicit validation checkpoints to multi-step workflows (e.g., check BLAST job status in a polling loop before retrieving results, verify mapping results are non-empty before proceeding)
Move Best Practices, organism codes, and integration tool lists to a reference file to reduce the main SKILL.md length and improve progressive disclosure
Remove the 'When to Use This Skill' section or reduce it to 2-3 lines — the frontmatter description already covers this, and Claude can infer appropriate usage from the capabilities shown
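The first suggestion above can be sketched as a generic polling helper. The polling logic itself is library-agnostic; the bioservices `NCBIblast` method names shown in the trailing comments are assumptions about that API, not verified here:

```python
import time


def poll_until_done(get_status, jobid, done_states=("FINISHED",),
                    error_states=("ERROR", "FAILURE"),
                    interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll get_status(jobid) until a terminal state is reached.

    Returns the final status string; raises on error states or timeout.
    """
    waited = 0.0
    while waited < timeout:
        status = get_status(jobid)
        if status in done_states:
            return status
        if status in error_states:
            raise RuntimeError(f"Job {jobid} ended in state {status}")
        sleep(interval)
        waited += interval
    raise TimeoutError(f"Job {jobid} still {status!r} after {timeout}s")


# Hypothetical bioservices usage (method names are assumptions):
#   s = NCBIblast()
#   jobid = s.run(program="blastp", sequence=seq, database="uniprotkb", email=email)
#   poll_until_done(s.getStatus, jobid)
#   result = s.getResult(jobid, "out")
```

Injecting `get_status` and `sleep` as parameters keeps the checkpoint testable without a live BLAST job.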
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-organized but includes some unnecessary content Claude already knows (e.g., explaining what BioServices is, listing integration tools like Pandas/NetworkX, explaining output format handling generically, basic error handling patterns). The 'When to Use This Skill' section is overly detailed and could be trimmed. However, the code examples are mostly lean and useful. | 2 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready Python code examples for each major capability. Code includes specific method calls, real identifiers (P43403, hsa:7535, C11222), and concrete parameters. CLI commands for scripts are also specific and complete. | 3 / 3 |
| Workflow Clarity | Multi-step workflows are listed (e.g., compound search workflow steps 1-4, multi-service integration pipelines) but lack explicit validation checkpoints or error recovery feedback loops. The BLAST section mentions checking status but doesn't show a polling loop. The batch identifier conversion has no validation step for confirming successful mappings. | 2 / 3 |
| Progressive Disclosure | The skill references external files (references/services_reference.md, references/workflow_patterns.md, references/identifier_mapping.md, scripts/) with clear signaling, which is good structure. However, no bundle files are provided, so we cannot verify these exist. The main SKILL.md itself is quite long (~250 lines) and some sections (like Best Practices, organism codes) could be moved to reference files to keep the overview leaner. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
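The missing mapping-validation step called out under Workflow Clarity could look like this minimal sketch. `check_mapping` is a hypothetical helper, and the dict shape attributed to bioservices' `UniProt.mapping` in the comments is an assumption:

```python
def check_mapping(query_ids, mapping):
    """Split a batch ID-mapping result into mapped and unmapped IDs.

    `mapping` is assumed to be a dict like {"P43403": ["hsa:7535"], ...},
    i.e. query ID -> list of target IDs (empty or absent when unmapped).
    """
    mapped = {q: mapping[q] for q in query_ids if mapping.get(q)}
    unmapped = [q for q in query_ids if not mapping.get(q)]
    return mapped, unmapped


# Hypothetical usage (service call shape is an assumption, not verified):
#   u = UniProt()
#   result = u.mapping(fr="UniProtKB_AC-ID", to="KEGG", query=",".join(ids))
#   mapped, unmapped = check_mapping(ids, result)
#   if unmapped:
#       print(f"Warning: {len(unmapped)} IDs failed to map: {unmapped}")
```

Checking for unmapped IDs before the next pipeline step prevents silently dropping identifiers in a cross-database workflow.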
Validation
Score: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 (Passed) |