
lamindb

This skill should be used when working with LaminDB, an open-source data framework for biology that makes data queryable, traceable, reproducible, and FAIR. Use when managing biological datasets (scRNA-seq, spatial, flow cytometry, etc.), tracking computational workflows, curating and validating data with biological ontologies, building data lakehouses, or ensuring data lineage and reproducibility in biological research. Covers data management, annotation, ontologies (genes, cell types, diseases, tissues), schema validation, integrations with workflow managers (Nextflow, Snakemake) and MLOps platforms (W&B, MLflow), and deployment strategies.

Score: 73 (1.32x)

Quality: 67% (Does it follow best practices?)

Impact: 74%, 1.32x (average score across 3 eval scenarios)

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/lamindb/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the tool (LaminDB), its domain (biological data management), and provides explicit trigger conditions. It includes rich, natural trigger terms spanning data types, ontologies, workflow managers, and MLOps platforms. The description is comprehensive without being padded, and its specificity to biological data frameworks makes it highly distinctive.

Dimension scores

Specificity (3/3): Lists multiple specific, concrete actions: managing biological datasets, tracking computational workflows, curating/validating data with ontologies, building data lakehouses, ensuring data lineage. Also names specific data types (scRNA-seq, spatial, flow cytometry) and integrations (Nextflow, Snakemake, W&B, MLflow).

Completeness (3/3): Clearly answers both 'what' (data framework for biology covering management, annotation, ontologies, schema validation, integrations, deployment) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios (managing biological datasets, tracking workflows, curating with ontologies, building lakehouses, ensuring lineage).

Trigger Term Quality (3/3): Excellent coverage of natural terms a user would say: 'LaminDB', 'scRNA-seq', 'spatial', 'flow cytometry', 'biological ontologies', 'genes', 'cell types', 'diseases', 'tissues', 'Nextflow', 'Snakemake', 'W&B', 'MLflow', 'data lineage', 'reproducibility', 'FAIR'. These are terms bioinformaticians and computational biologists would naturally use.

Distinctiveness / Conflict Risk (3/3): Highly distinctive due to the specific focus on LaminDB and biological data management. The combination of biological ontologies, specific data types (scRNA-seq, flow cytometry), and named integrations creates a clear niche that is unlikely to conflict with other skills.

Total: 12 / 12. Passed.

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like comprehensive product documentation than a focused, actionable skill file. It is significantly over-verbose, spending many tokens on feature catalogs, value propositions, and descriptions that Claude doesn't need. The code examples provide some actionability but lack validation steps and error handling, and their API accuracy is uncertain. The progressive disclosure structure is reasonable in concept but the main file retains too much detail that should be delegated to reference files.

Suggestions

Cut the content by 60-70%: remove the 'Overview' value proposition, 'When to Use This Skill' section (duplicates the description), and the extensive feature/integration catalogs. Keep only what's needed to orient Claude toward the right reference file.

Add validation checkpoints and error handling to the use case workflows, especially around curator.validate() — show what happens on failure and how to fix it.
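The validate-inspect-fix-revalidate loop this suggestion describes can be sketched in plain Python. `validate()`, `non_validated`, and `add_new_from()` are the method names the skill itself attributes to LaminDB curators; since the review could not verify them against the real API, the example uses a small stub curator so only the control flow is shown:

```python
# Sketch of the failure-handling loop the suggestion asks for.
# MinimalCurator is a stub standing in for a LaminDB curator; the method
# names mirror the skill's description and are assumptions, not verified API.

class MinimalCurator:
    """Stub curator: knows a vocabulary and flags values outside it."""

    def __init__(self, records, vocabulary):
        self.records = records
        self.vocabulary = set(vocabulary)
        self.non_validated = []

    def validate(self):
        # Collect every record not covered by the vocabulary.
        self.non_validated = [r for r in self.records if r not in self.vocabulary]
        return not self.non_validated

    def add_new_from(self, values):
        # In LaminDB this would register new terms in a registry;
        # here we just extend the accepted vocabulary.
        self.vocabulary.update(values)


curator = MinimalCurator(["T cell", "B cell", "mystery cell"], {"T cell", "B cell"})

if not curator.validate():
    # Surface the failures instead of saving silently.
    print("not validated:", curator.non_validated)
    curator.add_new_from(curator.non_validated)
    assert curator.validate()  # re-validate before saving the artifact
```

The point is the shape of the loop: check the return value, inspect what failed, repair it, and re-validate before any save step.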

Verify all code examples against the actual LaminDB API and ensure they are copy-paste executable rather than approximations.

Move the detailed use case examples into a reference file (e.g., references/examples.md) and keep only one minimal quick-start example in the main SKILL.md.

Dimension scores

Conciseness (1/3): The skill is extremely verbose at ~300+ lines. It explains what LaminDB is, its value proposition, when to use it, and lists extensive catalogs of features that Claude doesn't need spelled out (e.g., listing every ontology source, every storage system, every integration). The 'When to Use This Skill' section largely duplicates the description. The 10 'Key Principles' are mostly generic best practices Claude already knows. Much of this reads like product documentation rather than actionable skill content.

Actionability (2/3): The skill includes several concrete code examples (Use Cases 1-4) that are mostly executable, which is good. However, many of the code examples have questionable API accuracy (e.g., `artifact.feature_sets.add_ontology()`, `bt.CellType.standardize()` as a class method on a Series) and some appear to be pseudocode-like approximations rather than verified, copy-paste-ready code. The bulk of the content is descriptive lists rather than executable guidance.

Workflow Clarity (2/3): The Getting Started Checklist provides a reasonable sequence, and the use case examples show multi-step workflows with ln.track()/ln.finish() bookends. However, there are no validation checkpoints or error recovery steps in any workflow. The curation workflow mentions curator.validate() but doesn't show what to do when validation fails. For a skill involving data validation and schema enforcement, missing feedback loops is a significant gap.

Progressive Disclosure (2/3): The skill references six separate reference files organized by capability area, which is good structure in principle. However, no bundle files were provided, so we cannot verify these references exist. The main SKILL.md itself contains too much inline content that should be in the reference files (extensive feature lists, multiple use cases, detailed capability descriptions), making the overview far longer than necessary.

Total: 7 / 12. Passed.
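The checkpoint pattern flagged under Workflow Clarity could look like the following. The `ln.track()`/`ln.finish()` bookends come from the skill's own examples; everything else (the `_Run` stub, the `validate` and `clean` helpers) is invented here purely to make the feedback loop visible and is not real LaminDB API:

```python
# Shape of a tracked workflow with an explicit validation checkpoint and a
# recovery step. Only the control flow matters: _Run stands in for lamindb's
# run tracking, and validate/clean are hypothetical helpers.

class _Run:
    """Stub for the ln.track()/ln.finish() run bookends."""

    def track(self):
        self.active = True

    def finish(self):
        self.active = False


ln = _Run()  # stand-in for `import lamindb as ln`


def validate(values):
    # Checkpoint: every value must be numeric before saving.
    return all(isinstance(v, (int, float)) for v in values)


def clean(values):
    # Recovery: drop non-numeric entries instead of failing silently.
    return [v for v in values if isinstance(v, (int, float))]


ln.track()                         # begin the tracked run
data = [1.0, 2.5, "not-a-number"]

if not validate(data):             # checkpoint, not just a blind save
    data = clean(data)             # explicit recovery step
    assert validate(data), "still invalid after cleanup"

# a real workflow would save the artifact here
ln.finish()                        # close the tracked run
```

Wrapping the checkpoint and recovery between the track/finish bookends keeps the failure handling inside the recorded run, so the lineage reflects what was actually done to the data.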

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria results

metadata_version (Warning): 'metadata.version' is missing.

Total: 10 / 11. Passed.

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
