lamindb

This skill should be used when working with LaminDB, an open-source data framework for biology that makes data queryable, traceable, reproducible, and FAIR. Use when managing biological datasets (scRNA-seq, spatial, flow cytometry, etc.), tracking computational workflows, curating and validating data with biological ontologies, building data lakehouses, or ensuring data lineage and reproducibility in biological research. Covers data management, annotation, ontologies (genes, cell types, diseases, tissues), schema validation, integrations with workflow managers (Nextflow, Snakemake) and MLOps platforms (W&B, MLflow), and deployment strategies.

Overall score: 73 (1.32x impact)

Quality: 67% (Does it follow best practices?)

Impact: 74% (1.32x) — average score across 3 eval scenarios

Security (by Snyk): Advisory — suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/lamindb/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the tool (LaminDB), its domain (biological data management), specific capabilities, and explicit trigger conditions. It provides excellent keyword coverage for the target audience of computational biologists and bioinformaticians. The description is comprehensive without being unnecessarily verbose, and its specificity to LaminDB and biological data makes it highly distinctive.

Specificity: 3 / 3
Lists multiple specific concrete actions: managing biological datasets (with examples like scRNA-seq, spatial, flow cytometry), tracking computational workflows, curating/validating data with biological ontologies, building data lakehouses, ensuring data lineage and reproducibility. Also enumerates specific integrations (Nextflow, Snakemake, W&B, MLflow).

Completeness: 3 / 3
Clearly answers both 'what' (data framework for biology covering management, annotation, ontologies, schema validation, integrations, deployment) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios (managing biological datasets, tracking workflows, curating with ontologies, building data lakehouses, ensuring lineage).

Trigger Term Quality: 3 / 3
Excellent coverage of natural terms a user would say: 'LaminDB', 'scRNA-seq', 'spatial', 'flow cytometry', 'biological ontologies', 'genes', 'cell types', 'diseases', 'tissues', 'Nextflow', 'Snakemake', 'W&B', 'MLflow', 'data lineage', 'reproducibility', 'FAIR'. These are terms bioinformaticians and computational biologists would naturally use.

Distinctiveness (Conflict Risk): 3 / 3
Highly distinctive due to the specific mention of 'LaminDB' as the core tool, combined with the biological data domain focus and specific technology integrations. Very unlikely to conflict with other skills unless another LaminDB skill exists.

Total: 12 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like product documentation or a README than an actionable skill for Claude. It is excessively verbose, spending significant tokens on capability descriptions, marketing-style bullet lists, and concept explanations that Claude doesn't need. The code examples provide some actionable value, but the lack of validation/error-handling workflows and the sheer volume of descriptive content significantly reduce its effectiveness as a skill file.

Suggestions

Cut the Overview, 'When to Use This Skill', and 'Core Value Proposition' sections entirely—Claude doesn't need marketing copy. Start with a one-line description and jump to actionable content.

Move the detailed capability lists (sections 1-6 under 'Core Capabilities') into the reference files and replace with a brief table or 2-3 line summary pointing to each reference.

Add explicit validation and error-handling steps to code examples (e.g., what to do when curator.validate() returns errors, how to handle missing ontology terms).

Reduce the 'Key Principles' section to 3-4 non-obvious principles—items like 'document thoroughly' and 'track everything' are generic advice that wastes tokens.
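The third suggestion (explicit validation and error-handling around curation) could be sketched as a validate → inspect → fix → re-validate loop. Names such as `validate()`, `non_validated`, `standardize()`, and `add_new_from()` echo the curator API the review mentions, but `StubCurator` below is a hypothetical stand-in so the control flow can run without a live LaminDB instance — treat it as a pattern sketch, not the library's actual behavior.

```python
# Pattern sketch: a curation loop with explicit error handling.
# StubCurator stands in for a real LaminDB curator (which would be
# constructed against a live instance); only the control flow matters here.

class StubCurator:
    """Pretends two cell-type labels fail validation until fixed."""
    def __init__(self):
        # One typo ("T cel") and one genuinely new term ("B cell").
        self.non_validated = {"cell_type": ["T cel", "B cell"]}

    def validate(self) -> bool:
        return not any(self.non_validated.values())

    def standardize(self, key: str) -> None:
        # Pretend "T cel" resolves to a known synonym and is fixed.
        self.non_validated[key] = [v for v in self.non_validated[key] if v != "T cel"]

    def add_new_from(self, key: str) -> None:
        # Pretend remaining terms are registered as new ontology records.
        self.non_validated[key] = []


def curate(curator) -> bool:
    """Validate; on failure, standardize synonyms, register new terms, re-validate."""
    if curator.validate():
        return True
    for key, terms in curator.non_validated.items():
        print(f"unvalidated {key}: {terms}")
        curator.standardize(key)       # first try mapping known synonyms
        if curator.non_validated[key]:
            curator.add_new_from(key)  # then register genuinely new terms
    return curator.validate()          # always re-validate before saving

ok = curate(StubCurator())
```

The point is the shape of the loop — never stop at a failed `validate()` without inspecting what failed and choosing between standardization and registration — rather than the stub's specifics.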

Conciseness: 1 / 3
Extremely verbose with extensive explanations of concepts Claude already knows (FAIR acronym expansion, what ontologies are, what a lakehouse is). The 'When to Use This Skill' section largely duplicates the description. Bullet-point lists of capabilities read like marketing material rather than actionable instructions. The 'Overview' section and 'Core Value Proposition' are unnecessary padding.

Actionability: 2 / 3
The code examples in the 'Common Use Case Workflows' section provide concrete, mostly executable guidance. However, much of the skill is descriptive lists of capabilities rather than instructions. Some code examples have questionable API accuracy (e.g., `artifact.feature_sets.add_ontology()`, `from_anndata` as a method) and the bulk of the content describes rather than instructs.

Workflow Clarity: 2 / 3
The 'Getting Started Checklist' provides a reasonable sequence, and the code examples show multi-step workflows with ln.track()/ln.finish() bookends. However, there are no validation checkpoints or error recovery steps in any workflow. No guidance on what to do when curator.validate() fails, no feedback loops for common errors.

Progressive Disclosure: 2 / 3
Good use of reference files with clear pointers to six separate reference documents. However, the main SKILL.md itself is a monolithic wall of text that includes too much inline detail (full capability lists, multiple code examples, extensive bullet lists) that should be in the reference files. The overview should be much leaner, with more content pushed to references.

Total: 7 / 12 (Passed)
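The workflow-clarity gap — ln.track()/ln.finish() bookends with no validation checkpoints or error recovery between them — could be closed with a structure like the following sketch. `run_tracked` and its arguments are hypothetical stand-ins, not LaminDB API; only the bookend-plus-checkpoint idea comes from the review.

```python
# Pattern sketch: run steps between track()/finish() bookends,
# stopping at the first failed checkpoint but always closing the run.

def run_tracked(track, finish, steps):
    """Execute (name, step) pairs between track() and finish().

    Each step returns True on success; the first failure raises,
    and finish() still runs so the run record is closed cleanly.
    """
    track()
    completed = []
    try:
        for name, step in steps:
            if not step():                       # validation checkpoint per step
                raise RuntimeError(f"step failed: {name}")
            completed.append(name)
    finally:
        finish()                                 # always close the run record
    return completed

# Usage with stand-in bookends and trivially passing steps:
log = []
result = run_tracked(
    lambda: log.append("track"),
    lambda: log.append("finish"),
    [("load", lambda: True), ("curate", lambda: True)],
)
```

A real workflow would substitute the actual tracking calls and make each step a genuine check (e.g., "did curation validate?"), but the try/finally shape is what the review finds missing.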

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure:

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
