Skill description

This skill should be used when working with LaminDB, an open-source data framework for biology that makes data queryable, traceable, reproducible, and FAIR. Use when managing biological datasets (scRNA-seq, spatial, flow cytometry, etc.), tracking computational workflows, curating and validating data with biological ontologies, building data lakehouses, or ensuring data lineage and reproducibility in biological research. Covers data management, annotation, ontologies (genes, cell types, diseases, tissues), schema validation, integrations with workflow managers (Nextflow, Snakemake) and MLOps platforms (W&B, MLflow), and deployment strategies.
Overall score: 75

- Quality: 71% (Does it follow best practices?)
- Impact: 74% (1.32x average score across 3 eval scenarios)
- Advisory: Suggest reviewing before use

Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/lamindb/SKILL.md`
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly identifies the tool (LaminDB), its domain (biological data management), specific capabilities, and explicit trigger conditions. It uses third person voice appropriately and includes rich, natural trigger terms spanning biological data types, ontology concepts, and integration tools. The description is comprehensive without being unnecessarily verbose.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: managing biological datasets (with examples like scRNA-seq, spatial, flow cytometry), tracking computational workflows, curating/validating data with biological ontologies, building data lakehouses, ensuring data lineage and reproducibility. Also enumerates specific integrations (Nextflow, Snakemake, W&B, MLflow). | 3 / 3 |
| Completeness | Clearly answers both 'what' (data framework for biology covering management, annotation, ontologies, schema validation, integrations, deployment) and 'when' with an explicit 'Use when...' clause listing specific trigger scenarios (managing biological datasets, tracking workflows, curating with ontologies, building data lakehouses, ensuring lineage). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a user would say: 'LaminDB', 'scRNA-seq', 'spatial', 'flow cytometry', 'biological ontologies', 'genes', 'cell types', 'diseases', 'tissues', 'Nextflow', 'Snakemake', 'W&B', 'MLflow', 'data lineage', 'reproducibility', 'FAIR'. These are terms bioinformaticians and computational biologists would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific mention of 'LaminDB' as the core tool, combined with the biological data domain focus and specific technology integrations. Very unlikely to conflict with other skills unless another LaminDB skill exists. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is well-structured with good progressive disclosure to reference files and provides useful code examples for common workflows. However, it is significantly too verbose—much of the content is catalog-style enumeration of features and capabilities that could be dramatically condensed. The workflows lack validation checkpoints and error handling, which is important for data curation and ontology operations.
Suggestions

- Cut the content by at least 50%: remove the 'When to Use This Skill' section (duplicates the description), condense capability-area summaries to 2-3 lines each pointing to references, and remove the 'Key Principles' section, which states obvious best practices.
- Add error handling and validation checkpoints to the code examples—especially show what happens when `curator.validate()` fails and how to handle standardization mismatches.
- Verify the API accuracy of code examples against the current LaminDB API (e.g., `artifact.feature_sets.add_ontology()`, `artifact.features.add_values()`, `bt.CellType.standardize()` as a static method) to ensure they are copy-paste executable.
- Remove the bullet-point catalogs of query operators, ontology sources, storage systems, etc.—these belong in the reference files, not the overview skill.
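The error-handling suggestion above can be sketched as a validate, standardize, re-validate loop. Since no live LaminDB instance is assumed here, `StubCurator` below is a self-contained stand-in for a real curator (e.g., one returned by `ln.Curator.from_anndata(...)`); the `validate()` and `standardize()` method names mirror the ones discussed in this review, but their exact signatures should be checked against the current LaminDB API:

```python
# Sketch of the validate -> standardize -> re-validate checkpoint the
# suggestion asks for. StubCurator stands in for a real LaminDB curator;
# method names and signatures are assumptions to verify against the docs.

class StubCurator:
    """Minimal stand-in: validation fails until 'T cel' is fixed to 'T cell'."""

    def __init__(self, values):
        self.values = values
        self.known = {"T cell", "B cell"}
        self.non_validated = []

    def validate(self):
        # A real curator would check terms against an ontology registry.
        self.non_validated = [v for v in self.values if v not in self.known]
        return not self.non_validated

    def standardize(self):
        # A real curator would map synonyms via the ontology; here we fix
        # one known typo to illustrate the loop making progress.
        fixes = {"T cel": "T cell"}
        self.values = [fixes.get(v, v) for v in self.values]


def curate_with_checkpoint(curator, max_rounds=2):
    """Validate, attempt automatic fixes, and re-validate before saving."""
    for _ in range(max_rounds):
        if curator.validate():
            return True  # safe to save the artifact
        print(f"validation failed for: {curator.non_validated}")
        curator.standardize()  # attempt automatic fixes, then re-validate
    return False  # surface remaining mismatches instead of saving silently


curator = StubCurator(["T cel", "B cell"])
assert curate_with_checkpoint(curator)  # passes after one standardize round
```

The point is the control flow, not the stub: a skill example should show the failure branch and the re-validation step, not just the happy path.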
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanations of concepts Claude already knows or can infer. The 'When to Use This Skill' section duplicates the description, the bullet-point lists of capabilities are exhaustive but largely unnecessary catalog-style content, and sections like 'Core Value Proposition' and 'Key Principles' explain general best practices that don't need spelling out. The file runs well over 300 lines when much of this could be condensed. | 1 / 3 |
| Actionability | The code examples in the 'Common Use Case Workflows' section provide concrete, mostly executable Python code, which is good. However, many of the code examples may not be fully accurate (e.g., the `artifact.feature_sets.add_ontology()` and `artifact.features.add_values()` API calls look potentially incorrect or outdated), and much of the content is descriptive bullet lists rather than executable guidance. | 2 / 3 |
| Workflow Clarity | The 'Getting Started Checklist' provides a reasonable sequence, and the code examples show multi-step workflows with `ln.track()`/`ln.finish()` bookends. However, there are no validation checkpoints or error-recovery steps in any of the workflows: no guidance on what to do when `curator.validate()` fails, no feedback loops for handling ontology mismatches, and no error-handling patterns. | 2 / 3 |
| Progressive Disclosure | The skill effectively uses a hub-and-spoke model with clear references to six well-organized reference files, each clearly signaled with a descriptive summary. Navigation is straightforward, with one-level-deep references and a dedicated 'Reference Files' section summarizing all available documents. | 3 / 3 |
| Total | | 8 / 12 Passed |
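The missing error recovery around the `ln.track()`/`ln.finish()` bookends noted under Workflow Clarity can be sketched as a context manager that records a failure instead of leaving a dangling run. The `track`/`finish` stubs below stand in for the real LaminDB calls, and the failure-recording hook is an assumption to verify against LaminDB's run-tracking documentation:

```python
from contextlib import contextmanager

# Sketch of wrapping a tracked run so failures are recorded rather than
# silently dropped. track()/finish() stand in for ln.track()/ln.finish();
# how a real run records its error state is an assumption.

events = []  # stand-in for the run record LaminDB would persist

def track():
    events.append("track")

def finish():
    events.append("finish")

@contextmanager
def tracked_run():
    track()
    try:
        yield
    except Exception as exc:
        events.append(f"failed: {exc}")  # record the failure on the run
        raise  # re-raise so the caller still sees the error
    else:
        finish()  # only mark the run finished on success

# Successful run: bookends fire in order around the work.
with tracked_run():
    events.append("work")
```

On an exception, the failure is recorded and re-raised, so the lineage record reflects what actually happened instead of showing a run that never ended.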
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 Passed | |
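The single warning above could be cleared by adding a version to the skill's frontmatter. A minimal sketch, assuming Tessl reads a YAML frontmatter block in SKILL.md and that `version` nests under a `metadata` key (all fields other than `metadata.version` are illustrative):

```yaml
---
name: lamindb
description: This skill should be used when working with LaminDB, ...
metadata:
  version: "0.1.0"  # added to clear the metadata_version warning
---
```

Check the Tessl skill spec for the exact field placement before adopting this.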