Fast CLI/Python queries to 20+ bioinformatics databases. Use for quick lookups: gene info, BLAST searches, AlphaFold structures, enrichment analysis. Best for interactive exploration, simple queries. For batch processing or advanced BLAST use biopython; for multi-database Python workflows use bioservices.
Overall score: 79 (average across 3 eval scenarios) — Risky: do not use without reviewing.

| Metric | Score | Notes |
|---|---|---|
| Quality | 71% | Does it follow best practices? |
| Impact | 99% | 1.59x average score across 3 eval scenarios |

Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/gget/SKILL.md`

Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that concisely covers specific capabilities, includes natural trigger terms bioinformatics users would employ, clearly states both what it does and when to use it, and explicitly distinguishes itself from related skills. The boundary guidance ('For batch processing or advanced BLAST use biopython; for multi-database Python workflows use bioservices') is particularly effective for skill selection in a multi-skill environment.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'gene info, BLAST searches, AlphaFold structures, enrichment analysis' and specifies 'CLI/Python queries to 20+ bioinformatics databases'. Also distinguishes scope with 'interactive exploration, simple queries'. | 3 / 3 |
| Completeness | Clearly answers 'what' (fast CLI/Python queries to 20+ bioinformatics databases with specific examples) and 'when' ('Use for quick lookups', 'Best for interactive exploration, simple queries') with explicit boundary conditions distinguishing it from the biopython and bioservices skills. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'gene info', 'BLAST searches', 'AlphaFold structures', 'enrichment analysis', 'bioinformatics databases', 'quick lookups'. These are terms bioinformatics users naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly differentiates itself from related skills (biopython for batch/advanced BLAST, bioservices for multi-database Python workflows), creating clear boundaries. The focus on 'fast CLI/Python queries' and 'quick lookups' carves a distinct niche. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is highly actionable with excellent executable examples for both CLI and Python across all 20+ modules, but it severely violates conciseness and progressive disclosure principles. The main SKILL.md reads like a complete API reference manual rather than an overview that leverages the referenced sub-files. Workflows lack validation steps despite involving external API calls and database queries that can fail.
Suggestions:

- Drastically reduce the main SKILL.md to a quick-start overview with 2-3 representative module examples, moving detailed parameter documentation to the referenced module_reference.md file.
- Add validation/error-checking steps to workflows (e.g., check that gget.search returns results before passing IDs to gget.info, and verify gget setup completed before running alphafold).
- Remove the per-module parameter listings from SKILL.md and instead provide a brief one-line description per module with a link to the reference file for full details.
- Consolidate the 6 workflow examples into 2-3 representative ones in SKILL.md and move the rest to the referenced workflows.md file.
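The second suggestion (validate search results before chaining calls) could look like the sketch below. This is a hypothetical helper, not part of gget; it assumes gget.search returns a pandas DataFrame with an `ensembl_id` column, which is worth confirming against the gget docs. The live gget calls are shown commented out because they require network access.

```python
import pandas as pd


def ids_or_fail(results: pd.DataFrame, id_column: str = "ensembl_id") -> list:
    """Extract Ensembl IDs from a gget.search result, failing loudly if empty.

    Hypothetical validation helper -- not part of gget itself.
    """
    if results is None or results.empty:
        raise ValueError(
            "gget.search returned no results; check the search term and species"
        )
    # Drop missing values and deduplicate before passing IDs downstream
    return results[id_column].dropna().unique().tolist()


# Sketch of the validated workflow (network-bound, so commented out):
# import gget
# hits = gget.search(["ACE2"], species="homo_sapiens")
# ens_ids = ids_or_fail(hits)    # stop here if the search came back empty
# details = gget.info(ens_ids)   # only query with IDs we know exist
```

The point of the checkpoint is that gget.info never sees an empty or malformed ID list, so a failed search surfaces as one clear error rather than a confusing downstream failure.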
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines. It exhaustively documents every module's parameters, which Claude could look up or infer. Much of this is reference documentation that belongs in separate files, not the main SKILL.md. The skill even mentions that reference files exist but duplicates their content inline. | 1 / 3 |
| Actionability | Every module includes concrete, executable code examples for both CLI and Python usage. Parameters are specific with real values, and workflows show complete multi-step pipelines with actual function calls and realistic arguments. | 3 / 3 |
| Workflow Clarity | Six workflows are provided with clear sequential steps, but none include validation checkpoints, error handling, or feedback loops. For example, the BLAST workflow doesn't check whether results are empty, and the AlphaFold workflow doesn't verify that setup succeeded before proceeding. | 2 / 3 |
| Progressive Disclosure | Despite referencing separate files (module_reference.md, database_info.md, workflows.md), the SKILL.md dumps comprehensive parameter documentation for every single module inline. This is a monolithic wall of text that should be a concise overview pointing to the reference files for details. | 1 / 3 |
| Total | | 7 / 12 — Passed |
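The empty-BLAST-results gap flagged under Workflow Clarity can be closed with a small guard. The helper below is hypothetical (not part of gget) and assumes gget.blast returns a pandas DataFrame of hits, possibly empty or None when nothing is found; the live call is commented out because it hits NCBI over the network.

```python
from typing import Optional

import pandas as pd


def blast_hits_or_none(results) -> Optional[pd.DataFrame]:
    """Normalize a gget.blast result: return the hits DataFrame, or None
    when the search produced nothing usable. Hypothetical helper."""
    if results is None or len(results) == 0:
        return None
    return results


# Hedged usage (network-bound, so commented out; `my_protein_sequence`
# is a placeholder for a real query sequence):
# import gget
# hits = blast_hits_or_none(gget.blast(my_protein_sequence))
# if hits is None:
#     print("No BLAST hits; try a longer query or a different program")
# else:
#     print(hits.head())
```

Branching on None makes the empty case an explicit path in the workflow instead of an unhandled surprise several steps later.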
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (870 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 — Passed |