
latchbio-integration

Latch platform for bioinformatics workflows. Build pipelines with Latch SDK, @workflow/@task decorators, deploy serverless workflows, LatchFile/LatchDir, Nextflow/Snakemake integration.


Quality

42%

Does it follow best practices?

Impact

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/latchbio-integration/SKILL.md

Quality

Discovery

50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and distinctiveness by naming concrete Latch SDK features and bioinformatics-specific tooling, making it clearly distinguishable from other skills. However, it critically lacks a 'Use when...' clause, which means Claude has no explicit guidance on when to select this skill. The trigger terms are heavily technical and may fail to match more natural user phrasings.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Latch, latch.bio, building bioinformatics pipelines, or deploying workflows on the Latch platform.'

Include more natural user-facing trigger terms such as 'bioinformatics pipeline', 'genomics workflow', 'latch.bio', or 'cloud bioinformatics' to improve discoverability.
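Applying both suggestions, the revised frontmatter description might read as follows. This is an illustrative sketch only: the field names follow the common SKILL.md frontmatter convention, and the exact wording is a proposal, not the skill's actual metadata.

```yaml
# Illustrative SKILL.md frontmatter -- proposed wording, not the current file.
name: latchbio-integration
description: >
  Latch platform for bioinformatics workflows. Build pipelines with the
  Latch SDK, @workflow/@task decorators, LatchFile/LatchDir, and
  Nextflow/Snakemake integration. Use when the user asks about Latch,
  latch.bio, building a bioinformatics or genomics pipeline, or deploying
  workflows on the Latch platform.
```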

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'Build pipelines with Latch SDK', '@workflow/@task decorators', 'deploy serverless workflows', 'LatchFile/LatchDir', 'Nextflow/Snakemake integration'. These are concrete, actionable capabilities.

3 / 3

Completeness

The description answers 'what does this do' reasonably well but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent (not even implied beyond the domain mention), this scores at 1.

1 / 3

Trigger Term Quality

Includes relevant technical keywords like 'Latch SDK', 'bioinformatics', 'Nextflow', 'Snakemake', 'pipelines', 'workflows', and specific API terms like '@workflow/@task decorators' and 'LatchFile/LatchDir'. However, these are mostly technical jargon rather than natural user language, and common variations like 'bioinformatics pipeline', 'latch.bio', or 'genomics workflow' are missing.

2 / 3

Distinctiveness Conflict Risk

The description targets a very specific niche — the Latch bioinformatics platform with its specific SDK, decorators, and file types. This is highly unlikely to conflict with other skills due to the unique domain and tooling references.

3 / 3

Total: 9 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is overly verbose and padded with content Claude doesn't need — generic best practices, a 'When to Use This Skill' section, explanations of what bioinformatics workflows are, and redundant descriptions of reference files. The code examples provide useful structural templates but are incomplete pseudocode rather than executable. The skill would benefit significantly from cutting 60%+ of the content and making the remaining code examples actually runnable.

Suggestions

Remove the 'When to Use This Skill', 'Core Capabilities' overview, 'Additional Resources', and 'Support' sections entirely — these waste tokens on information Claude can infer or that belongs in frontmatter.

Make code examples executable: define actual processing logic (even minimal), show real imports, and include complete return statements instead of undefined variables like `output_file` and `process(input_file)`.

Add validation checkpoints to the Quick Start workflow: e.g., verify Docker is running before `latch register`, check registration output for success, and show how to verify the workflow appears on the platform.

Either provide the referenced bundle files (references/*.md) or move the inlined Best Practices, Troubleshooting, and Common Patterns content into those files and keep SKILL.md as a lean overview.
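As an illustration of the 'make code examples executable' suggestion, here is a hedged sketch of a task body with concrete logic in place of the undefined `process(input_file)`. The function name and the sorting step are hypothetical stand-ins; the Latch task decorator is omitted so the sketch runs standalone.

```python
from pathlib import Path

def sort_fasta_headers(input_file: str, output_file: str) -> Path:
    """Minimal but complete stand-in for `process(input_file)`:
    read the input, do real (if trivial) work, and return a
    defined output path instead of an undefined `output_file`."""
    lines = Path(input_file).read_text().splitlines()
    Path(output_file).write_text("\n".join(sorted(lines)) + "\n")
    return Path(output_file)
```

Even a trivial body like this gives the agent a complete import-to-return template to adapt, which is the point of the suggestion.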

Dimension / Reasoning / Score

Conciseness

Extremely verbose. The 'When to Use This Skill' section is unnecessary (Claude can infer this). The 'Core Capabilities' overview repeats what's in the reference file descriptions. The 'Detailed Documentation' section with 'Read this for' and 'Key topics' for each reference file is bloated. The 'Best Practices' and 'Troubleshooting' sections contain generic advice Claude already knows. The 'Additional Resources' and 'Support' sections add little value. Much of this content explains concepts rather than providing actionable instructions.

1 / 3

Actionability

The Quick Start section has concrete CLI commands and a basic workflow example. The Common Workflow Patterns provide structural templates. However, the code examples are incomplete pseudocode (e.g., `return output_file` without defining it, `process(input_file)` undefined). The RNA-seq pipeline tasks have empty implementations. Most guidance is descriptive rather than executable.

2 / 3

Workflow Clarity

The Quick Start provides a clear 4-step sequence (install → login → init → register), and the pipeline examples show task chaining. However, there are no validation checkpoints — no step to verify registration succeeded, no guidance on checking workflow execution status, no feedback loops for handling failures during the register/deploy cycle. The troubleshooting section lists issues but doesn't integrate them into the workflow as checkpoints.

2 / 3

Progressive Disclosure

The skill references four detailed reference files (workflow-creation.md, data-management.md, resource-configuration.md, verified-workflows.md) with clear descriptions of what each contains. However, no bundle files are actually provided, so the references are broken. Additionally, the SKILL.md itself is monolithic — it inlines extensive content (Common Workflow Patterns, Best Practices, Troubleshooting) that should be in reference files, while simultaneously describing what's supposedly in those reference files.

2 / 3

Total: 7 / 12

Passed
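The validation-checkpoint suggestion above (verify Docker is running before `latch register`) can be sketched as a small pre-flight check. This assumes only that the `docker` CLI is on PATH; `latch register` itself is referenced in the docstring but not invoked.

```python
import shutil
import subprocess

def docker_ready() -> bool:
    """Checkpoint before running `latch register`: confirm the
    Docker CLI exists and the daemon answers `docker info`."""
    if shutil.which("docker") is None:
        return False
    result = subprocess.run(
        ["docker", "info"], capture_output=True, text=True
    )
    return result.returncode == 0
```

A workflow step that calls this and stops with a clear message on failure is exactly the kind of feedback loop the review says is missing.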

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version

'metadata.version' is missing

Warning

Total: 10 / 11

Passed
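The single warning, the missing 'metadata.version', is resolved by adding a version field to the frontmatter. A sketch, with the field path inferred from the warning's wording and an illustrative value:

```yaml
# Sketch of the field flagged by the metadata_version check.
metadata:
  version: "1.0.0"  # illustrative value, not the skill's real version
```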

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
