
gtars

High-performance toolkit for genomic interval analysis in Rust with Python bindings. Use when working with genomic regions, BED files, coverage tracks, overlap detection, tokenization for ML models, or fragment analysis in computational genomics and machine learning applications.
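As a concrete illustration of the overlap-detection capability mentioned above, here is a minimal sketch in plain Python. This is for clarity only and is not the gtars API (gtars implements this far more efficiently in Rust); the half-open BED-style coordinate convention is the standard one:

```python
def overlaps(a, b):
    """True if two half-open intervals (chrom, start, end) overlap."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

# BED-style records: (chrom, start, end), 0-based half-open
peaks = [("chr1", 100, 200), ("chr1", 500, 600)]
gene = ("chr1", 150, 550)

hits = [p for p in peaks if overlaps(p, gene)]
print(hits)  # both peaks intersect the gene interval
```

The half-open convention means an interval ending at 150 does not overlap one starting at 150, which matches how BED coordinates are interpreted.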


Quality: 66% (Does it follow best practices?)

Impact: 32%, 1.23x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/gtars/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid skill description with excellent trigger term coverage and completeness, clearly specifying both what the skill does and when to use it. The main weakness is that the capabilities are described more as topic areas than concrete actions—listing verbs like 'parse', 'compute', 'generate' would strengthen specificity. The highly specialized domain makes it very distinctive and unlikely to conflict with other skills.

Suggestions

Replace topic-area nouns with concrete action phrases, e.g., 'parse and manipulate BED files, compute interval overlaps, generate coverage tracks, tokenize genomic regions for ML models' instead of listing categories.

Specificity (2 / 3): Names the domain (genomic interval analysis) and mentions several areas like BED files, coverage tracks, overlap detection, tokenization, and fragment analysis, but these read more as topic areas than concrete actions. It lacks specific verbs describing what the toolkit does (e.g., 'parse BED files', 'compute overlaps', 'generate coverage tracks').

Completeness (3 / 3): Clearly answers both 'what' (high-performance toolkit for genomic interval analysis in Rust with Python bindings) and 'when' (explicit 'Use when...' clause listing specific trigger scenarios like working with BED files, coverage tracks, overlap detection, tokenization for ML, or fragment analysis).

Trigger Term Quality (3 / 3): Includes strong natural keywords users would say: 'genomic regions', 'BED files', 'coverage tracks', 'overlap detection', 'tokenization', 'ML models', 'fragment analysis', 'computational genomics', 'Rust', 'Python bindings'. These cover a wide range of terms a user in this domain would naturally use.

Distinctiveness / Conflict Risk (3 / 3): Highly distinctive niche in genomic interval analysis with specific domain terms like BED files, coverage tracks, and fragment analysis. Very unlikely to conflict with other skills given the specialized computational genomics focus.

Total: 11 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill has good structural organization with clear progressive disclosure to reference documents, but is significantly too verbose—spending many tokens on 'When to use' lists, descriptive context, and sections like Performance Characteristics and Data Formats that Claude can infer. Code examples appear illustrative rather than verified executable, and workflows lack validation checkpoints that would be important for file processing operations.

Suggestions

Remove all 'When to use' bullet lists and the 'Python vs CLI Usage', 'Performance Characteristics', and 'Data Formats' sections—these are inferrable by Claude and waste tokens.

Verify that all Python code examples reflect the actual gtars API (e.g., confirm `gtars.RegionSet.from_bed()` and `igd.build_index()` are real methods) and make them copy-paste executable.

Add validation/verification steps to workflows, e.g., checking output file existence, verifying overlap counts, or validating tokenizer output shape before feeding to ML pipelines.

Condense each module section to just the quick example code and the reference link, removing the descriptive paragraphs that precede them.
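A validation checkpoint of the kind suggested above could be sketched as follows. The function name `check_bed_output` and its thresholds are hypothetical illustrations, not part of gtars; the point is simply to fail fast when a pipeline step produces empty or truncated output:

```python
from pathlib import Path

def check_bed_output(path, min_records=1):
    """Validation checkpoint: fail fast if a pipeline step produced no usable BED output."""
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        raise RuntimeError(f"expected non-empty output at {path}")
    # Count data lines, skipping comments and track definition lines
    n = sum(1 for line in p.open()
            if line.strip() and not line.startswith(("#", "track")))
    if n < min_records:
        raise RuntimeError(f"{path}: only {n} records, expected >= {min_records}")
    return n

# e.g. after an overlap step writes results.bed:
# n = check_bed_output("results.bed", min_records=10)
```

Equivalent checks for tokenizer output (verifying array shape before feeding an ML pipeline) would follow the same pattern: assert on the artifact immediately after the step that produces it.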

Conciseness (1 / 3): The skill is very verbose: extensive 'When to use' lists that Claude can infer, descriptions of each module that are descriptive rather than instructive, a 'Python vs CLI Usage' section stating obvious heuristics, 'Performance Characteristics' and 'Data Formats' sections that add little actionable value, and a repeated pattern of describing before instructing. Many sections explain concepts Claude already knows (e.g., what BED files are, when to use coverage tracks).

Actionability (2 / 3): Code examples are provided for each module and look plausible, but many appear to be pseudocode or illustrative rather than verified executable code. For instance, `gtars.igd.build_index()`, `gtars.RegionSet.from_bed()`, and `peaks.filter_overlapping()` may not reflect the actual API accurately; they look like idealized examples rather than copy-paste ready code. The CLI examples are more concrete but lack validation of actual flag names.

Workflow Clarity (2 / 3): Three workflows are listed with sequential steps, but none include validation checkpoints or error recovery. The coverage track pipeline is just two commands with no verification step. The ML preprocessing workflow ends with a vague comment '(integrate with geniml or custom models)'. No feedback loops for any workflow despite dealing with file processing operations.

Progressive Disclosure (3 / 3): The skill has a clear overview structure with well-signaled one-level-deep references to specific documentation files (references/overlap.md, references/coverage.md, etc.). The reference documentation section provides a clean index. Content is appropriately split between the overview and detailed reference files.

Total: 8 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

