Fast CLI/Python queries to 20+ bioinformatics databases. Use for quick lookups: gene info, BLAST searches, AlphaFold structures, enrichment analysis. Best for interactive exploration, simple queries. For batch processing or advanced BLAST use biopython; for multi-database Python workflows use bioservices.
Overall score: 82
Quality: 75% (Does it follow best practices?)
Impact: 99% (1.59x average score across 3 eval scenarios)
Risk: Risky; do not use without reviewing

Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/gget/SKILL.md`

Quality
Discovery
100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that concisely covers specific capabilities, includes natural trigger terms bioinformatics users would employ, clearly states both what it does and when to use it, and explicitly distinguishes itself from related skills. The boundary guidance ('For batch processing or advanced BLAST use biopython; for multi-database Python workflows use bioservices') is particularly effective for skill selection in a multi-skill environment.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'gene info, BLAST searches, AlphaFold structures, enrichment analysis' and specifies 'CLI/Python queries to 20+ bioinformatics databases'. Also distinguishes scope with 'interactive exploration, simple queries'. | 3 / 3 |
| Completeness | Clearly answers 'what' (fast CLI/Python queries to 20+ bioinformatics databases with specific examples) and 'when' ('Use for quick lookups', 'Best for interactive exploration, simple queries') with explicit boundary conditions distinguishing it from the biopython and bioservices skills. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'gene info', 'BLAST searches', 'AlphaFold structures', 'enrichment analysis', 'bioinformatics databases', 'quick lookups'. These are terms bioinformatics users naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly differentiates itself from related skills (biopython for batch/advanced BLAST, bioservices for multi-database Python workflows), creating clear boundaries. The focus on 'fast CLI/Python queries' and 'quick lookups' carves a distinct niche. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
50%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is highly actionable with excellent executable examples covering both CLI and Python interfaces for all 20+ modules. However, it is far too verbose for a SKILL.md — it reads like complete API documentation rather than a concise skill overview, with detailed parameter lists for every module that should be in reference files. The workflows are useful but lack validation checkpoints and error recovery steps.
Suggestions
- Drastically reduce the main SKILL.md to a quick-start overview with 2-3 key module examples, moving the detailed per-module parameter documentation into the referenced module_reference.md file.
- Add validation/error handling steps to workflows (e.g., check if gget setup succeeded, verify BLAST returned results before proceeding, handle empty DataFrames).
- Remove parameter descriptions that Claude already knows (e.g., what BLAST does, what E-values are, what FASTA format is) and focus only on gget-specific quirks and gotchas.
- Consolidate the 'Best Practices' and 'Output Formats' sections into a brief table or bullet list — much of this information is already implied by the examples.
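The validation suggestion above can be sketched as a small guard pattern. Here `run_blast` is a hypothetical stand-in for a gget query (such as `gget.blast`, which hits a remote service and may come back empty); the point is the empty-DataFrame checkpoint, not the exact call:

```python
import pandas as pd

def run_blast(sequence):
    # Hypothetical stand-in for a gget query; real calls depend on
    # network/database availability and can return an empty frame.
    return pd.DataFrame(columns=["Description", "E-value"])

def blast_with_check(sequence):
    hits = run_blast(sequence)
    if hits is None or hits.empty:
        # Fail loudly instead of silently chaining an empty result
        raise ValueError(f"BLAST returned no hits for {sequence!r}; "
                         "check the sequence and database availability.")
    return hits

try:
    blast_with_check("MSSPDAGYASDDQS")
except ValueError as err:
    print(f"workflow stopped: {err}")
```

A checkpoint like this would let each workflow step fail with a clear message rather than propagate an empty DataFrame into the next module.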
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is extremely verbose at 500+ lines. It exhaustively documents every module's parameters, which Claude could infer or look up. The parameter lists read like API documentation that belongs in a reference file, not the main SKILL.md. Much of this (what BLAST is, what FASTA format is, basic parameter descriptions) is knowledge Claude already has. | 1 / 3 |
| Actionability | The skill provides fully executable code examples for both CLI and Python for every module. Commands are copy-paste ready with real gene IDs, real sequences, and concrete database names. The workflow examples chain multiple modules together with working code. | 3 / 3 |
| Workflow Clarity | The six workflows provide clear sequential steps, but they lack validation checkpoints. There's no error handling within workflows (e.g., what if BLAST returns no results, what if gget setup fails, what if the database is down). For operations like AlphaFold setup (~4GB download) or COSMIC database downloads, there are no verification steps. | 2 / 3 |
| Progressive Disclosure | The skill references external files (module_reference.md, database_info.md, workflows.md) at the bottom, but the main SKILL.md itself is a monolithic wall containing detailed parameter documentation for every module that should be in those reference files. The overview should be much leaner with the detailed per-module docs split out. | 2 / 3 |
| Total | | 8 / 12 Passed |
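The missing verification step flagged under Workflow Clarity could look like the sketch below. The minimum-size threshold and the throwaway file are illustrative assumptions, not gget's real cache layout for AlphaFold weights:

```python
import os
import tempfile
from pathlib import Path

def verify_download(path, min_bytes):
    """Check that a large download (e.g. multi-GB model weights)
    actually landed on disk before the workflow proceeds."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"expected download missing: {p}")
    size = p.stat().st_size
    if size < min_bytes:
        raise RuntimeError(f"{p} is only {size} bytes; download likely truncated")
    return size

# Illustrative usage with a throwaway file standing in for real weights
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 1024)
print(verify_download(tmp.name, min_bytes=512))
os.unlink(tmp.name)
```

A check like this after `gget setup alphafold` (or a COSMIC download) would catch truncated or failed transfers before a long modelling step starts.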
Validation
81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (870 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 9 / 11 Passed | |
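The `metadata_version` warning could be cleared by adding a version field to the SKILL.md frontmatter. A minimal sketch, assuming Tessl's frontmatter schema accepts a nested `metadata.version` key (the exact placement should be checked against the spec):

```yaml
---
name: gget
description: Fast CLI/Python queries to 20+ bioinformatics databases. Use for quick lookups ...
metadata:
  version: 1.0.0
---
```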