
latchbio-integration

Latch platform for bioinformatics workflows. Build pipelines with Latch SDK, @workflow/@task decorators, deploy serverless workflows, LatchFile/LatchDir, Nextflow/Snakemake integration.

Quality: 55% (Does it follow best practices?)

Impact: 91% (1.65x average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/latchbio-integration/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong on specificity and distinctiveness, clearly identifying the Latch platform and its concrete capabilities for bioinformatics workflows. However, it lacks an explicit 'Use when...' clause, which caps completeness, and the trigger terms lean heavily toward implementation-specific jargon rather than natural user language.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Latch, latch.bio, building bioinformatics pipelines, or deploying genomics workflows.'

Include broader natural trigger terms users might say, such as 'genomics', 'latch.bio', 'bioinformatics pipeline development', or 'cloud workflow deployment'.
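Both suggestions could be applied in the skill's frontmatter. A hypothetical sketch (the field names follow the common SKILL.md convention; the exact description wording is illustrative, not the skill's actual text):

```yaml
# Hypothetical SKILL.md frontmatter; wording is illustrative
name: latchbio-integration
description: >
  Latch platform for bioinformatics workflows. Build pipelines with the
  Latch SDK, @workflow/@task decorators, LatchFile/LatchDir, and
  Nextflow/Snakemake integration. Use when the user asks about Latch,
  latch.bio, building bioinformatics or genomics pipelines, or deploying
  cloud workflows.
```

The first sentence keeps the specific capabilities; the trailing 'Use when...' clause adds the natural-language triggers ('latch.bio', 'genomics', 'cloud workflows') the review found missing.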

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'Build pipelines with Latch SDK', '@workflow/@task decorators', 'deploy serverless workflows', 'LatchFile/LatchDir', 'Nextflow/Snakemake integration'. These are concrete, actionable capabilities.

3 / 3

Completeness

The 'what' is well-covered with specific capabilities, but there is no explicit 'Use when...' clause or equivalent guidance on when Claude should select this skill. The 'when' is only implied by the domain context.

2 / 3

Trigger Term Quality

Includes relevant technical keywords like 'Latch SDK', 'Nextflow', 'Snakemake', 'bioinformatics', 'pipelines', and 'workflows', but these are fairly specialized. Missing common user phrasings like 'bioinformatics pipeline', 'latch.bio', or broader terms like 'genomics workflows'. Some terms like '@workflow/@task decorators' and 'LatchFile/LatchDir' are very implementation-specific rather than natural trigger terms.

2 / 3

Distinctiveness Conflict Risk

Highly distinctive due to the specific platform name 'Latch', the SDK-specific decorators, and the bioinformatics domain focus. Very unlikely to conflict with other skills given the niche technology stack.

3 / 3

Total: 10 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill has excellent progressive disclosure, with well-organized references to detailed documentation files, but it is far more verbose than its content density justifies. The main body includes generic advice, prompt examples, and explanatory text that Claude does not need, and the code examples are incomplete pseudocode rather than executable implementations. Trimming the 'When to Use This Skill', 'Best Practices', and 'Additional Resources' sections and making the code examples actually executable would significantly improve quality.

Suggestions

Remove the 'When to Use This Skill' section entirely—this is metadata that belongs in frontmatter, not skill body content, and wastes ~30 lines of tokens.

Make code examples executable: replace placeholder comments like '# Processing logic' and 'return qc_output' with actual minimal implementations (e.g., subprocess calls to real tools, actual file I/O).

Cut the 'Core Capabilities' overview section significantly—it restates what the reference file descriptions already cover. A 2-3 line overview is sufficient before jumping to Quick Start.

Integrate validation steps into workflows: after 'latch register', add a verification step; in code examples, show how to check task status or logs for errors.
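To make the 'executable code' suggestion concrete, here is a hypothetical replacement for a stub like `quality_control` with its '# Processing logic' placeholder. It is shown as a plain function so it stays self-contained; in the actual skill it would be wrapped in the Latch SDK's task decorator and take LatchFile paths. The QC metric chosen (read count and mean read length) is illustrative.

```python
# Hypothetical rewrite of a placeholder quality_control task:
# real file I/O and a real computation instead of "# Processing logic".
from pathlib import Path


def quality_control(fastq_path: str, output_dir: str) -> str:
    """Count reads and mean read length in a FASTQ file; write a summary."""
    reads = 0
    total_bases = 0
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            # FASTQ records are 4 lines; the sequence is line offset 1
            if i % 4 == 1:
                reads += 1
                total_bases += len(line.strip())
    mean_len = total_bases / reads if reads else 0.0
    out = Path(output_dir) / "qc_summary.txt"
    out.write_text(f"reads={reads}\nmean_length={mean_len:.1f}\n")
    return str(out)
```

A body like this gives Claude a working pattern to adapt (open the input, compute something, write a verifiable artifact, return its path), which is what the review means by replacing pseudocode with actual file I/O.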

Dimension / Reasoning / Score

Conciseness

The content is extremely verbose, running to more than 300 lines. It explains concepts Claude already knows (what RNA-seq is, what Docker is), includes a lengthy 'When to Use This Skill' section whose example prompts waste tokens, lists best practices that amount to generic software engineering advice, and has a 'Core Capabilities' overview that largely restates what the reference files contain. The 'Additional Resources' and 'Support' sections add little value.

1 / 3

Actionability

The CLI commands (install, login, init, register) are concrete and executable. However, the code examples are incomplete pseudocode disguised as real code—functions like `process_file`, `quality_control`, and `alignment` have placeholder comments like '# Processing logic' and 'return qc_output' without actual implementations. The Registry example uses APIs that may not match the actual SDK interface.

2 / 3

Workflow Clarity

The Quick Start section provides a clear sequence (install → login → init → register), and the RNA-seq pipeline shows a multi-step workflow. However, there are no validation checkpoints—no step to verify registration succeeded, no guidance on checking task logs, and the troubleshooting section is separate from the workflow rather than integrated as feedback loops. For a deployment workflow, missing validation caps this at 2.

2 / 3

Progressive Disclosure

The skill clearly organizes content with a concise overview and well-signaled references to four separate reference files (workflow-creation.md, data-management.md, resource-configuration.md, verified-workflows.md), each with clear descriptions of what they contain and key topics. References are one level deep with good navigation cues.

3 / 3

Total: 8 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository
K-Dense-AI/claude-scientific-skills
Reviewed
