
latchbio-integration

Latch platform for bioinformatics workflows. Build pipelines with Latch SDK, @workflow/@task decorators, deploy serverless workflows, LatchFile/LatchDir, Nextflow/Snakemake integration.


Quality: 46% (Does it follow best practices?)

Impact: 91% (1.65x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/latchbio-integration/SKILL.md

Quality

Discovery

50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and distinctiveness, listing concrete Latch-specific capabilities and technical terms that clearly carve out a unique niche. However, it critically lacks a 'Use when...' clause, which hurts completeness significantly. The trigger terms are appropriate for the domain but could benefit from broader natural language variations.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Latch, latch.bio, building bioinformatics pipelines, or deploying workflows on the Latch platform.'

Include broader natural language trigger terms like 'bioinformatics pipeline', 'latch.bio', 'genomics workflow', or 'cloud bioinformatics' to capture more user phrasings.

Dimension scores

Specificity: 3 / 3
Lists multiple specific concrete actions: 'Build pipelines with Latch SDK', '@workflow/@task decorators', 'deploy serverless workflows', 'LatchFile/LatchDir', 'Nextflow/Snakemake integration'. These are concrete, actionable capabilities.

Completeness: 1 / 3
Describes what the skill does (build pipelines, deploy workflows, etc.) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'when' is not even implied clearly, warranting a score closer to 1.

Trigger Term Quality: 2 / 3
Includes relevant technical keywords like 'Latch SDK', 'bioinformatics', 'Nextflow', 'Snakemake', 'pipelines', 'workflows', but these are fairly specialized. Missing common user phrasings like 'latch.bio', 'bioinformatics pipeline', or broader terms a user might naturally say. The terms are good for the niche but could include more variations.

Distinctiveness / Conflict Risk: 3 / 3
Highly distinctive with very specific niche terms like 'Latch SDK', 'LatchFile/LatchDir', '@workflow/@task decorators', and 'Latch platform'. Extremely unlikely to conflict with other skills due to the specificity of the Latch bioinformatics ecosystem.

Total: 9 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill has excellent progressive disclosure with well-organized references to detailed documentation, but suffers from significant verbosity—the 'When to Use This Skill' section, capability overview, and detailed reference descriptions consume many tokens without adding actionable value. Code examples are illustrative but not truly executable, and the workflow lacks validation checkpoints between steps.

Suggestions

Remove the 'When to Use This Skill' section entirely and trim the 'Core Capabilities' overview to a brief bullet list—Claude can infer applicability from the content itself.

Condense the 'Detailed Documentation' section to a simple table or short list of reference files with one-line descriptions instead of multi-line topic lists for each.

Make code examples fully executable by replacing undefined variables (e.g., `output_file`, `qc_output`) with actual implementation or clearly marked placeholder logic.

Add explicit validation steps to the Quick Start and pipeline examples, such as 'Verify registration: `latch get-wf` to confirm workflow appears' and local Docker testing before registration.
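The executability fix suggested above can be sketched in plain Python. The `task` decorator and the file-handling logic here are hypothetical stand-ins (the real skill would use the Latch SDK's `@task` decorator and `LatchFile`); the point is that the function defines and produces its own output instead of returning an undefined `qc_output`:

```python
# Stand-in decorator so the example runs without the Latch SDK installed;
# a real workflow would use latch's @task here.
from pathlib import Path


def task(fn):
    # Hypothetical no-op stand-in for the SDK's @task decorator.
    return fn


@task
def quality_control(input_path: str) -> str:
    # Define the output path and actually produce the file, instead of
    # returning an undefined `qc_output` as the original pseudocode did.
    qc_output = Path(input_path).with_suffix(".qc.txt")
    qc_output.write_text(f"QC report for {input_path}\n")
    return str(qc_output)
```

The same pattern applies to the `process_file`/`output_file` example: every name a task returns should be created inside the task body or clearly marked as a placeholder.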

Dimension scores

Conciseness: 1 / 3
The skill is extremely verbose with significant padding. The 'When to Use This Skill' section is unnecessary (Claude can infer this), the 'Core Capabilities' overview restates what the reference files cover, the 'Detailed Documentation' section extensively describes each reference file's contents (table-of-contents style), and the 'Best Practices' are generic advice Claude already knows. Much of this content could be cut by 60%+ without losing actionable information.

Actionability: 2 / 3
The CLI commands are concrete and the code examples show real decorator patterns, but the workflow examples are incomplete pseudocode (e.g., `process_file` returns `output_file` without defining it, `quality_control` returns `qc_output` without defining it). The Registry example uses `process()` which is undefined. These are illustrative but not executable.

Workflow Clarity: 2 / 3
The Quick Start provides a clear 4-step sequence (install → login → init → register), and the troubleshooting section covers common issues. However, there are no validation checkpoints in the workflow development process—no step to verify registration succeeded, no feedback loop for testing locally before deploying, and the multi-step pipeline examples lack any validation between steps.

Progressive Disclosure: 3 / 3
The skill is well-structured with a clear overview pointing to four one-level-deep reference files, each with clear descriptions of what they cover and when to read them. Navigation is straightforward with well-signaled references to workflow-creation.md, data-management.md, resource-configuration.md, and verified-workflows.md.

Total: 8 / 12 (Passed)
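A registration checkpoint like the one recommended above might be sketched as follows. The `latch get-wf` invocation is kept as a comment because it needs an authenticated Latch environment, and the assumption that the listing prints one workflow per line is mine, not documented behavior; only the parsing helper is shown runnable:

```python
# Post-registration checkpoint sketch. Fetching the listing would look like:
#
#   import subprocess
#   listing = subprocess.run(
#       ["latch", "get-wf"], capture_output=True, text=True, check=True
#   ).stdout
#
# (requires a logged-in Latch session, so it is not executed here).


def workflow_registered(listing: str, name: str) -> bool:
    """Return True if `name` appears in the CLI listing.

    Assumes one workflow entry per line of output; adjust the parsing if
    the real `latch get-wf` format differs.
    """
    return any(name in line for line in listing.splitlines())
```

A workflow author would call this right after `latch register` and fail fast if the workflow name is absent, which gives the Quick Start the missing feedback loop between steps.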

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

