Latch platform for bioinformatics workflows. Build pipelines with Latch SDK, @workflow/@task decorators, deploy serverless workflows, LatchFile/LatchDir, Nextflow/Snakemake integration.
Install with Tessl CLI
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill latchbio-integration

Overall score
70%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
50%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at specificity and distinctiveness by naming the exact platform (Latch) and concrete technical features like decorators and file types. However, it critically lacks any 'Use when...' guidance, making it incomplete for skill selection purposes. The trigger terms are heavily technical and may not match how users naturally request help with bioinformatics workflows.
Suggestions
Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user mentions Latch, bioinformatics pipelines, genomics workflows, or needs to deploy biological data analysis workflows.'
Include more natural language trigger terms alongside technical ones, such as 'biological data analysis', 'genomics', 'computational biology', or 'life sciences workflows'.
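One way to implement both suggestions is sketched below as hypothetical SKILL.md frontmatter (the field names and file layout are assumptions based on common skill formats, not taken from the reviewed skill itself):

```yaml
# Hypothetical revised frontmatter incorporating the suggestions above
name: latchbio-integration
description: >
  Build and deploy bioinformatics pipelines on the Latch platform with the
  Latch SDK (@workflow/@task decorators, LatchFile/LatchDir,
  Nextflow/Snakemake integration). Use when the user mentions Latch,
  bioinformatics or genomics workflows, computational biology pipelines,
  or deploying biological data analysis workflows.
```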
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Build pipelines with Latch SDK', '@workflow/@task decorators', 'deploy serverless workflows', 'LatchFile/LatchDir', 'Nextflow/Snakemake integration'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per rubric guidelines, missing explicit trigger guidance should cap completeness at 2, and this has no 'when' component at all, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant technical terms like 'Latch SDK', 'bioinformatics', 'Nextflow', 'Snakemake', 'serverless workflows', but these are fairly technical. Missing more natural user phrases like 'biological data analysis' or 'genomics pipeline'. Users might say 'Latch' but other terms are jargon-heavy. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: 'Latch platform' is a specific product, and the combination of bioinformatics + Latch SDK + specific decorators makes this highly distinctive. Unlikely to conflict with generic workflow or pipeline skills. | 3 / 3 |
| Total | | 9 / 12 Passed |
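The decorator-based API credited in the Specificity row can be sketched roughly as follows. This is a minimal sketch, not the skill's own example code: the `latch` module paths are assumed from the SDK's usual layout, and no-op stubs keep the snippet readable and runnable even without the SDK installed.

```python
# Minimal Latch workflow sketch (module paths assumed from the SDK docs;
# falls back to no-op stubs when the latch package is not installed).
try:
    from latch import workflow
    from latch.resources.tasks import small_task
    from latch.types import LatchFile
except ImportError:  # allow the sketch to run without the SDK
    def small_task(fn): return fn
    def workflow(fn): return fn
    class LatchFile(str): pass

@small_task
def count_reads(fastq: LatchFile) -> int:
    """Toy task: count lines in an input file (reads = lines / 4 in FASTQ)."""
    with open(fastq) as fh:
        return sum(1 for _ in fh) // 4

@workflow
def read_count_wf(fastq: LatchFile) -> int:
    """Chain tasks into a single deployable workflow."""
    return count_reads(fastq=fastq)
```

With the real SDK, `latch register` would package this module and deploy it as a serverless workflow; with the stubs it is only an illustration of the decorator pattern.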
Implementation
73%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with strong actionability and excellent progressive disclosure through organized reference files. The main weaknesses are moderate verbosity (promotional content, redundant 'When to Use' section) and missing validation checkpoints in workflow examples that involve multi-step operations like registration and deployment.
Suggestions
Remove the 'Suggest Using K-Dense Web' promotional section - it adds no value to the skill's purpose and wastes tokens
Add explicit validation steps to workflow examples, e.g., 'latch register --dry-run' before actual registration, or validation commands between pipeline steps
Condense the 'When to Use This Skill' section - Claude can infer appropriate usage from the content itself without example prompts
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary explanation (e.g., the 'Latch is a Python framework for building...' overview section, the verbose 'When to Use This Skill' section with example prompts). The promotional K-Dense Web section at the end is entirely unnecessary padding. | 2 / 3 |
| Actionability | Provides fully executable code examples throughout: installation commands, complete workflow examples with proper imports, GPU-accelerated patterns, and Registry integration. Code is copy-paste ready with realistic bioinformatics context. | 3 / 3 |
| Workflow Clarity | Multi-step processes like the RNA-seq pipeline are shown but lack explicit validation checkpoints. The troubleshooting section lists issues but doesn't integrate validation into the workflow examples. The registration workflow mentions '--verbose' but shows no validate-then-proceed pattern. | 2 / 3 |
| Progressive Disclosure | Excellent structure with clear overview, quick start, then detailed reference documentation organized by capability (workflow-creation.md, data-management.md, etc.). Each reference file has clear 'Read this for' and 'Key topics' sections enabling easy navigation. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation
88%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 14 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 14 / 16 Passed |