gget

Fast CLI/Python queries to 20+ bioinformatics databases. Use for quick lookups: gene info, BLAST searches, AlphaFold structures, enrichment analysis. Best for interactive exploration, simple queries. For batch processing or advanced BLAST use biopython; for multi-database Python workflows use bioservices.

Overall score: 79 (1.59x)

Quality: 71%
Does it follow best practices?

Impact: 99% (1.59x)
Average score across 3 eval scenarios

Security (by Snyk): Risky. Do not use without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/gget/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that concisely covers specific capabilities, includes natural trigger terms bioinformatics users would employ, clearly states both what it does and when to use it, and explicitly distinguishes itself from related skills (biopython, bioservices). The third-person voice is used correctly throughout, and the description is information-dense without being verbose.

Dimension scores:

Specificity (3 / 3): Lists multiple specific, concrete actions: 'gene info, BLAST searches, AlphaFold structures, enrichment analysis' and specifies 'CLI/Python queries to 20+ bioinformatics databases'. Also distinguishes scope with 'interactive exploration, simple queries'.

Completeness (3 / 3): Clearly answers 'what' (fast CLI/Python queries to 20+ bioinformatics databases with specific examples) and 'when' ('Use for quick lookups', 'Best for interactive exploration, simple queries') with explicit boundary conditions distinguishing it from the biopython and bioservices skills.

Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'gene info', 'BLAST searches', 'AlphaFold structures', 'enrichment analysis', 'bioinformatics databases', 'quick lookups'. These are terms bioinformatics users naturally use.

Distinctiveness / Conflict Risk (3 / 3): Explicitly differentiates itself from related skills (biopython for batch/advanced BLAST, bioservices for multi-database Python workflows), creating clear boundaries. The focus on 'fast CLI/Python queries' and 'quick lookups' carves a distinct niche.

Total: 12 / 12. Passed.

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is highly actionable with excellent concrete examples for every module in both CLI and Python, but it severely violates conciseness by inlining what amounts to a complete API reference manual (~500+ lines). The progressive disclosure is broken: it references external files that don't exist while keeping all the detailed content that should be in those files inline. Workflows lack validation checkpoints despite involving external API calls and database queries that can fail.

Suggestions

Move the detailed per-module parameter lists and examples into the referenced module_reference.md file, keeping only a concise module summary table with 1-line descriptions and one example each in SKILL.md

Add validation/error-checking steps to workflows (e.g., 'Verify results are non-empty before proceeding', 'Check gget setup completed successfully')

Create the referenced bundle files (module_reference.md, database_info.md, workflows.md) and actually offload the detailed content there, keeping SKILL.md as a lean overview with quick-start examples

Remove explanations of concepts Claude already knows (e.g., what BLAST is, what FASTA format is, what enrichment analysis does) and focus only on gget-specific syntax and gotchas
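The validation-checkpoint suggestion above can be sketched as a thin wrapper around each workflow step that fails fast on empty or missing results. This is an illustrative pattern only: `run_step` and the `fake_search`/`fake_blast` stand-ins are hypothetical stubs (not part of gget's API), used here so the sketch runs without network access; in a real workflow the stubs would be replaced by actual gget calls such as `gget.search` and `gget.blast`.

```python
def run_step(name, fn, *args, **kwargs):
    """Run one workflow step and verify it returned a non-empty result."""
    result = fn(*args, **kwargs)
    if result is None or (hasattr(result, "__len__") and len(result) == 0):
        raise RuntimeError(f"Step '{name}' returned no results; stopping workflow")
    return result

# Hypothetical stand-ins for gget calls, so the pattern is runnable offline:
def fake_search(term):
    # Simulates a gene search that returns one hit.
    return [{"ensembl_id": "ENSG00000034713", "gene": term}]

def fake_blast(seq):
    # Simulates a BLAST query that returns no hits.
    return []

hits = run_step("search", fake_search, "GABARAPL2")  # passes the checkpoint
try:
    run_step("blast", fake_blast, "MKWVTFISLLFLFSSAYS")
except RuntimeError as e:
    print(e)  # the empty BLAST result is caught at the checkpoint
```

The same wrapper can also verify setup steps (e.g., that `gget setup` completed) before any downstream module is invoked, turning the linear workflows into checked pipelines.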

Dimension scores:

Conciseness (1 / 3): Extremely verbose at 500+ lines. It exhaustively documents every module's parameters, which Claude could look up or infer. The entire module reference section reads like API documentation that should live in a separate reference file, not the main SKILL.md. Much of this (parameter lists, return types for 15+ modules) is information Claude doesn't need inline.

Actionability (3 / 3): Every module includes concrete, executable code examples in both CLI and Python forms. Parameters are specific with real values, and the workflow sections show complete multi-step pipelines with actual function calls and realistic arguments.

Workflow Clarity (2 / 3): Six workflows are provided with clear sequential steps, but none include validation checkpoints or error handling within the workflow itself. There are no feedback loops (e.g., checking whether BLAST returned results before proceeding, or validating that AlphaFold setup succeeded). The workflows are linear sequences without verification steps.

Progressive Disclosure (1 / 3): The skill references a references/ directory with module_reference.md, database_info.md, and workflows.md, but no bundle files are provided, meaning those references don't exist. More critically, SKILL.md itself is a monolithic wall containing exhaustive parameter documentation for 20+ modules that should be in those reference files instead of inline. The content structure contradicts the progressive disclosure pattern it claims to follow.

Total: 7 / 12. Passed.

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

Failed criteria:

skill_md_line_count (Warning): SKILL.md is long (870 lines); consider splitting into references/ and linking.

metadata_version (Warning): 'metadata.version' is missing.

Total: 9 / 11. Passed.

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
