bioservices

Unified Python interface to 40+ bioinformatics services. Use when querying multiple databases (UniProt, KEGG, ChEMBL, Reactome) in a single workflow with consistent API. Best for cross-database analysis, ID mapping across services. For quick single-database lookups use gget; for sequence/file manipulation use biopython.
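As a rough illustration of the "single workflow, consistent API" idea, here is a minimal sketch of a UniProt-to-KEGG hop. The method names (`mapping`, `get`) follow the bioservices UniProt and KEGG wrappers, but exact signatures and return shapes vary between bioservices versions, so treat the wiring as an assumption rather than a verified call:

```python
def map_and_fetch(uniprot, kegg, accession):
    """Map a UniProt accession to a KEGG gene ID, then fetch the KEGG entry.

    `uniprot` and `kegg` are expected to expose bioservices-style
    `mapping()` and `get()` methods; any object with that shape works.
    """
    # bioservices' UniProt.mapping is assumed here to return a dict
    # keyed by the query ID -- check the shape in your installed version.
    mapped = uniprot.mapping(fr="UniProtKB_AC-ID", to="KEGG", query=accession)
    kegg_ids = mapped.get(accession, [])
    if not kegg_ids:
        return None  # accession did not map to any KEGG ID
    return kegg.get(kegg_ids[0])


if __name__ == "__main__":
    # Live services: requires network access and the bioservices package.
    from bioservices import KEGG, UniProt
    print(map_and_fetch(UniProt(), KEGG(), "P43403"))
```

Because the workflow takes the service objects as parameters, the same function runs against the real services or against stubs in tests.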

Score: 88

Quality: 86% (Does it follow best practices?)

Impact: 91%, 1.49x (average score across 3 eval scenarios)

Security (by Snyk): Passed. No known issues.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly communicates its purpose, scope, and appropriate use cases. It names specific databases, describes concrete actions (querying, ID mapping, cross-database analysis), provides explicit trigger guidance, and proactively distinguishes itself from related tools. The negative guidance ('For quick single-database lookups use gget; for sequence/file manipulation use biopython') is particularly valuable for skill selection.

Specificity: 3/3. Lists multiple specific concrete actions and capabilities: 'querying multiple databases', 'cross-database analysis', 'ID mapping across services', and names specific databases (UniProt, KEGG, ChEMBL, Reactome). Also distinguishes from related tools with specific use cases.

Completeness: 3/3. Clearly answers both 'what' (unified Python interface to 40+ bioinformatics services, cross-database analysis, ID mapping) and 'when' (explicit 'Use when querying multiple databases in a single workflow'). Also includes negative guidance on when NOT to use it (use gget or biopython instead).

Trigger Term Quality: 3/3. Excellent coverage of natural terms a bioinformatics user would say: 'UniProt', 'KEGG', 'ChEMBL', 'Reactome', 'bioinformatics', 'ID mapping', 'cross-database analysis', 'databases'. These are terms users in this domain would naturally use.

Distinctiveness / Conflict Risk: 3/3. Highly distinctive with a clear niche (multi-database bioinformatics queries) and explicitly differentiates itself from related skills (gget for single-database lookups, biopython for sequence/file manipulation), minimizing conflict risk.

Total: 12/12

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, well-structured skill with excellent actionability through concrete code examples and good progressive disclosure via clearly referenced supplementary files. Its main weaknesses are moderate verbosity (explaining things Claude already knows, redundant 'When to Use' section) and missing validation/error-recovery steps in multi-step and batch workflows.

Suggestions

Remove or significantly trim the 'When to Use This Skill' section—the capability sections themselves make the use cases obvious, and this list wastes tokens.

Add explicit validation checkpoints to multi-step workflows, especially for batch operations (e.g., validate mapping results count, handle partial failures in batch_id_converter).

Add a polling/wait loop example for the BLAST async workflow showing how to check status and retry before retrieving results.

Trim the 'Output Format Handling' and 'Integration with Other Tools' subsections—Claude already knows what XML, JSON, and Pandas are.
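For the BLAST suggestion above, a generic sketch of the missing poll-and-retry loop. `get_status` and `get_result` stand in for the async job API; in bioservices the NCBIblast wrapper exposes similarly named methods, but that exact wiring is an assumption here. The status strings follow the EBI job-dispatcher convention:

```python
import time


def wait_for_job(get_status, get_result, job_id,
                 poll_interval=5.0, max_wait=300.0, sleep=time.sleep):
    """Poll an async job until it finishes, then fetch its result.

    `get_status(job_id)` should return a string such as "RUNNING",
    "FINISHED", or "ERROR" (the EBI job-dispatcher convention).
    Raises RuntimeError on a terminal failure status and
    TimeoutError if the job does not finish within `max_wait` seconds.
    """
    waited = 0.0
    status = "UNKNOWN"
    while waited < max_wait:
        status = get_status(job_id)
        if status == "FINISHED":
            return get_result(job_id)
        if status in ("ERROR", "FAILURE", "NOT_FOUND"):
            raise RuntimeError(f"job {job_id} ended with status {status}")
        sleep(poll_interval)          # back off before re-checking
        waited += poll_interval
    raise TimeoutError(f"job {job_id} still {status!r} after {max_wait}s")
```

Injecting `sleep` as a parameter keeps the loop testable without real waiting; in production the defaults apply.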
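And for the batch-validation suggestion, a sketch of the checkpoint a batch ID conversion could add: count the results and surface partial failures instead of dropping them. The `mapper(ids)` return shape (`{input_id: [mapped_ids]}`) mirrors the bioservices UniProt mapping wrapper but is an assumption, not a verified signature:

```python
def convert_batch(mapper, ids):
    """Map a batch of IDs and separate successes from failures.

    `mapper(ids)` is expected to return {input_id: [mapped_ids]};
    IDs absent from the result, or mapped to an empty list, count
    as failures rather than being silently discarded.
    """
    result = mapper(ids)
    mapped = {i: result[i] for i in ids if result.get(i)}
    failed = [i for i in ids if not result.get(i)]
    # Validation checkpoint: every input ID must be accounted for.
    assert len(mapped) + len(failed) == len(ids)
    if failed:
        print(f"warning: {len(failed)}/{len(ids)} IDs failed to map: {failed}")
    return mapped, failed
```

Returning the failure list separately lets a pipeline decide whether to retry, log, or abort on partial failure.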

Conciseness: 2/3. The skill is reasonably well-structured but includes some unnecessary verbosity: the 'When to Use This Skill' section is a long bullet list that largely restates what the sections themselves cover, the 'Overview' paragraph explains what REST/SOAP are, and the 'Integration with Other Tools' section lists obvious pairings Claude would already know. The Best Practices section has some filler (e.g., explaining what XML/JSON/TSV are).

Actionability: 3/3. The skill provides fully executable, copy-paste-ready Python code examples for each major capability (UniProt search, KEGG pathways, BLAST, identifier mapping, compound search, etc.). Commands for scripts are concrete with arguments. Key methods are enumerated with clear signatures.

Workflow Clarity: 2/3. Multi-step workflows are listed (e.g., compound cross-referencing steps 1-4, protein analysis pipeline) but lack explicit validation checkpoints or error recovery loops. The BLAST section mentions checking status but doesn't show a polling loop. The batch identifier conversion script has no validation step mentioned despite being a batch operation, which should cap this at 2.

Progressive Disclosure: 3/3. Content is well-organized with a clear overview, individual capability sections with inline examples, and explicit one-level-deep references to `references/services_reference.md`, `references/workflow_patterns.md`, `references/identifier_mapping.md`, and the `scripts/` directory. Navigation is clearly signaled throughout.

Total: 10/12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10/11 passed

Validation for skill structure

metadata_version: Warning. 'metadata.version' is missing.

Total: 10/11

Passed

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
