
microbiome-diversity-reporter

Interpret Alpha and Beta diversity metrics from 16S rRNA sequencing results.


Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/microbiome-diversity-reporter/SKILL.md"

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description targets a clear and specific bioinformatics niche (16S rRNA diversity analysis), which gives it strong distinctiveness. However, it lacks a 'Use when...' clause, which limits its usefulness during skill selection, and it would benefit from listing more concrete actions and the common trigger terms that users in this domain naturally use.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about microbiome analysis, 16S rRNA results, diversity indices, or amplicon sequencing outputs.'

Expand trigger terms to include common synonyms and related concepts like 'microbiome', 'OTU tables', 'ASV', 'Shannon diversity', 'UniFrac', 'rarefaction', or 'QIIME output'.

List more specific concrete actions, e.g., 'Interpret Shannon, Simpson, and Chao1 alpha diversity indices; analyze PCoA and NMDS beta diversity plots; compare community composition across sample groups.'
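For reference, the alpha diversity indices and the dissimilarity measure named above can all be computed directly from OTU/ASV count vectors. The sketch below shows the standard formulas in plain Python; it is an illustration of the metrics under discussion, not code taken from the skill itself:

```python
import math

def shannon(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2); values near 1 indicate an even community."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + f1*(f1-1) / (2*(f2+1))."""
    observed = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)  # singleton taxa
    f2 = sum(1 for c in counts if c == 2)  # doubleton taxa
    return observed + f1 * (f1 - 1) / (2 * (f2 + 1))

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two samples (0 = identical, 1 = disjoint)."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1 - 2 * shared / (sum(a) + sum(b))
```

Bray-Curtis matrices of this kind are what PCoA and NMDS ordinations are typically built from, so naming these metrics in the description would connect the skill to the plots it interprets.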

Dimension | Reasoning | Score

Specificity

Names the domain (16S rRNA sequencing) and some actions (interpret Alpha and Beta diversity metrics), but doesn't list specific concrete actions like generating plots, comparing samples, running statistical tests, or identifying taxa.

2 / 3

Completeness

Describes what the skill does (interpret diversity metrics) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also only partially described, warranting a score of 1.

1 / 3

Trigger Term Quality

Includes relevant technical keywords like 'Alpha diversity', 'Beta diversity', '16S rRNA', and 'sequencing results' that a bioinformatics user would naturally use, but misses common variations like 'microbiome', 'OTU', 'ASV', 'Shannon index', 'UniFrac', 'amplicon sequencing', or 'metagenomics'.

2 / 3

Distinctiveness Conflict Risk

The combination of '16S rRNA sequencing' with 'Alpha and Beta diversity metrics' is a very specific niche that is unlikely to conflict with other skills; this is a clearly defined bioinformatics subdomain.

3 / 3

Total: 8 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from extreme verbosity and heavy boilerplate that obscure the genuinely useful domain-specific content (CLI usage, input formats, output examples, parameter table). Circular internal references and generic workflow and error-handling sections, which cover things Claude already knows, waste a significant share of the token budget. The core microbiome diversity analysis guidance is buried under layers of template-driven content that adds no instructional value.

Suggestions

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements) that don't contain domain-specific information—these waste tokens on things Claude already knows.

Consolidate the duplicated and circularly-referencing sections (Example Usage, Usage, Workflow, Implementation Details) into a single clear workflow with the concrete CLI commands and validation steps.

Move the detailed parameter table, input format examples, and output format examples into a separate reference file and link to it from a concise overview in SKILL.md.

Add domain-specific validation checkpoints (e.g., verify rarefaction depth is adequate, check that sample counts match metadata, validate diversity index ranges) instead of generic 'validate inputs' steps.
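To make the last suggestion concrete, domain-specific checkpoints of this kind could be expressed as a small pre-flight helper. This is a hypothetical sketch (the function name and the `min_depth` default are placeholders, not values from the skill):

```python
def validate_diversity_inputs(count_table, metadata_ids, min_depth=1000):
    """Run domain-specific sanity checks before interpreting diversity metrics.

    count_table: dict mapping sample ID -> list of per-taxon read counts
    metadata_ids: collection of sample IDs present in the sample metadata
    min_depth: minimum acceptable sequencing depth (placeholder threshold)
    """
    problems = []
    # Sample IDs in the count table must match the metadata file
    unmatched = sorted(set(count_table) - set(metadata_ids))
    if unmatched:
        problems.append(f"samples absent from metadata: {unmatched}")
    for sample_id, counts in count_table.items():
        depth = sum(counts)
        # Rarefaction adequacy: shallow samples yield unstable diversity indices
        if depth < min_depth:
            problems.append(f"{sample_id}: depth {depth} < {min_depth}")
        # Diversity indices are undefined for empty or negative count vectors
        if depth == 0 or any(c < 0 for c in counts):
            problems.append(f"{sample_id}: invalid count vector")
    return problems
```

Returning a list of findings rather than raising on the first failure lets the agent report every problem with the input in one pass, which suits a reporting skill.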

Dimension | Reasoning | Score

Conciseness

Extremely verbose and repetitive. Contains massive amounts of boilerplate (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that add no value for Claude. Multiple sections reference each other circularly ('See ## Usage above', 'See ## Workflow above'). The generic workflow steps, output requirements, and error handling sections explain things Claude already knows.

1 / 3

Actionability

The Usage section provides concrete CLI commands with specific parameters, input format examples, and example JSON output, which is genuinely useful. However, much of the 'actionable' content is generic boilerplate (e.g., the 5-step Workflow is entirely abstract), and the actual scripts/main.py is referenced but never shown—it's unclear if it actually exists or what it does internally.

2 / 3

Workflow Clarity

The 'Example run plan' provides a reasonable 4-step sequence, and the Usage section shows concrete commands. However, the main 'Workflow' section is entirely generic and abstract with no domain-specific validation checkpoints. There's no validation step for checking output correctness (e.g., verifying diversity metrics are within expected ranges, checking for rarefaction adequacy). The Quick Check is just py_compile, which is minimal.

2 / 3

Progressive Disclosure

The content is a monolithic wall of text with many sections that could be separate files but are all inlined. Sections are poorly organized with circular references ('See ## Usage above' appears before the Usage section). The references/ directory is mentioned but only one file is linked. Boilerplate sections like Risk Assessment, Security Checklist, and Lifecycle Status bloat the main file unnecessarily.

1 / 3

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)

