
geniml

This skill should be used when working with genomic interval data (BED files) for machine learning tasks. Use for training region embeddings (Region2Vec, BEDspace), single-cell ATAC-seq analysis (scEmbed), building consensus peaks (universes), or any ML-based analysis of genomic regions. Applies to BED file collections, scATAC-seq data, chromatin accessibility datasets, and region-based genomic feature learning.

Overall: 74 (1.74x)

Quality: 71% (does it follow best practices?)

Impact: 68%, 1.74x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/geniml/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted skill description that clearly defines a specific niche at the intersection of genomics and machine learning. It names concrete tools and actions, provides explicit trigger guidance with 'Use for...' and 'Applies to...' clauses, and includes rich domain-specific terminology that users in this field would naturally use. The only minor note is that it uses passive/imperative voice rather than third person ('Use for...' instead of 'Trains region embeddings...'), but this does not significantly detract from quality.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: training region embeddings (Region2Vec, BEDspace), single-cell ATAC-seq analysis (scEmbed), building consensus peaks (universes), and ML-based analysis of genomic regions. These are concrete, named tools and tasks. | 3 / 3 |
| Completeness | Clearly answers both 'what' (training region embeddings, scATAC-seq analysis, building consensus peaks, ML-based genomic analysis) and 'when' with explicit triggers ('Use for...', 'Applies to BED file collections, scATAC-seq data, chromatin accessibility datasets'). Opens with an explicit 'should be used when' clause. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a bioinformatics user would say: 'BED files', 'Region2Vec', 'BEDspace', 'scEmbed', 'scATAC-seq', 'chromatin accessibility', 'genomic regions', 'region embeddings', 'consensus peaks', 'universes'. These are the exact terms domain users would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining genomic interval data (BED files) with machine learning. The specific tool names (Region2Vec, BEDspace, scEmbed) and domain-specific terms (chromatin accessibility, consensus peaks) make it very unlikely to conflict with other skills. | 3 / 3 |

Total: 12 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides comprehensive coverage of the geniml library with good progressive disclosure and reasonable code examples, but is significantly too verbose for a skill file. It over-explains concepts, includes generic best practices Claude already knows, and has sections like 'Related Projects' and 'Additional Resources' that add little value. The workflows would benefit from explicit validation checkpoints and the installation commands contain a typo.

Suggestions

Cut the content by at least 50%: remove 'When to Use Which Tool' (Claude can infer this from descriptions), 'Best Practices' generic advice, 'Related Projects', and 'Additional Resources' sections. Keep only what Claude cannot infer.

Fix the installation typo ('uv uv pip install' → 'uv pip install') and verify that code examples match the actual library API to ensure actionability.
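For reference, the corrected form of the install command would look like the following sketch, assuming geniml is installed from PyPI into a uv-managed environment:

```shell
# The duplicated tool name is dropped; uv's pip-compatible
# interface handles the rest of the command unchanged.
uv pip install geniml
```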

Add explicit validation checkpoints to workflows, e.g., after tokenization check coverage percentage before proceeding to training, and after training verify embedding quality before downstream use.
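A checkpoint of this kind can be sketched in a few lines. The function below is a hypothetical illustration of the gating logic, not part of the geniml API; the `coverage_checkpoint` name, the 80% threshold, and the example counts are all made-up assumptions.

```python
def coverage_checkpoint(total_regions: int, tokenized_regions: int,
                        min_coverage: float = 0.8) -> float:
    """Gate the pipeline: fail fast if the universe covers too few regions."""
    if total_regions == 0:
        raise ValueError("no input regions")
    coverage = tokenized_regions / total_regions
    if coverage < min_coverage:
        raise RuntimeError(
            f"tokenization coverage {coverage:.1%} is below {min_coverage:.0%}; "
            "rebuild the universe before training"
        )
    return coverage

# Example: 9,200 of 10,000 input regions matched universe tokens.
print(f"{coverage_checkpoint(10_000, 9_200):.2f}")  # prints 0.92
```

The same pattern applies after training: compute a cheap embedding-quality metric and raise before any downstream step consumes a bad model.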

Move the CLI reference and troubleshooting sections to a separate reference file to reduce the main skill's token footprint.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at roughly 250 lines. It includes extensive explanatory text ('Use for:' descriptions, 'When to Use Which Tool' decision matrix, 'Best Practices' general guidelines, 'Related Projects', 'Additional Resources') that Claude can infer or doesn't need. The 'General Guidelines' section contains generic advice like 'record parameters and random seeds for reproducibility' that Claude already knows. Much of this content could be trimmed significantly. | 1 / 3 |
| Actionability | The skill provides concrete code examples and CLI commands that appear executable, but there are concerns: the installation commands have a typo ('uv uv pip install'), the API calls may not match the actual library interface (e.g., `region2vec()` as a function, the `ScEmbed` class API), and the code examples look plausible but potentially fabricated rather than verified. The CLI reference and code pipelines do provide specific, copy-paste-ready guidance. | 2 / 3 |
| Workflow Clarity | Multi-step workflows are clearly numbered and sequenced (tokenize → train → evaluate), but validation checkpoints are mostly missing. The 'Basic Region Embedding Pipeline' has an evaluate step but no conditional logic for failure. The universe building workflow lacks validation between steps. For operations involving data transformation and model training, explicit validation/feedback loops are absent. | 2 / 3 |
| Progressive Disclosure | The skill effectively uses progressive disclosure with a clear overview structure and well-signaled one-level-deep references to dedicated files (references/region2vec.md, references/bedspace.md, references/scembed.md, references/consensus_peaks.md, references/utilities.md). The main file serves as a navigable overview with enough context to choose the right tool, while deferring detailed content to reference files. | 3 / 3 |

Total: 8 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
