
pydeseq2

Differential gene expression analysis (Python DESeq2). Identify DE genes from bulk RNA-seq counts, Wald tests, FDR correction, volcano/MA plots, for RNA-seq analysis.

71

Quality: 66% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/pydeseq2/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, domain-specific description with excellent specificity and trigger term coverage for bioinformatics users. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The technical terms are well-chosen and naturally match what users in this domain would say.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about differential gene expression, DESeq2 analysis, bulk RNA-seq count data, or generating volcano/MA plots from expression data.'
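Applied to this skill, the suggestion might look like the following frontmatter sketch (the field names follow the common SKILL.md convention; the exact schema of this repository may differ):

```yaml
---
name: pydeseq2
description: >
  Differential gene expression analysis (Python DESeq2). Identify DE genes
  from bulk RNA-seq counts, Wald tests, FDR correction, volcano/MA plots.
  Use when the user asks about differential gene expression, DESeq2 analysis,
  bulk RNA-seq count data, or generating volcano/MA plots from expression data.
---
```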

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: 'Identify DE genes from bulk RNA-seq counts, Wald tests, FDR correction, volcano/MA plots'. These are precise, domain-specific analytical steps. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific actions and methods, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the domain terms, which caps this at 2 per the rubric guidelines. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords a bioinformatician would use: 'differential gene expression', 'DESeq2', 'RNA-seq', 'bulk RNA-seq counts', 'Wald tests', 'FDR correction', 'volcano plots', 'MA plots', 'DE genes'. Good coverage of domain-specific terms users would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche: differential gene expression analysis using DESeq2 in Python for bulk RNA-seq. The combination of specific method (DESeq2), language (Python), and data type (bulk RNA-seq counts) makes it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent executable code examples covering the full PyDESeq2 workflow, but it is far too verbose for a SKILL.md file. Much of the content (visualization code, troubleshooting, multiple analysis patterns) duplicates what should live in the referenced workflow_guide.md and could be cut to reduce token consumption. Adding explicit validation checkpoints in the core workflow would improve reliability.

Suggestions

Reduce the main SKILL.md to ~100-150 lines by moving visualization code, troubleshooting, common analysis patterns, and quality metrics into the referenced workflow_guide.md file

Remove the 'When to Use This Skill' and 'Overview' sections — Claude can infer applicability from the content itself

Add explicit validation checkpoints in the core workflow, e.g., verify counts are non-negative integers after loading, confirm sample index alignment between counts and metadata before fitting, and check design matrix rank before running deseq2()

Eliminate the 'Key Reminders' section which largely restates information already present in the workflow steps above it
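The validation-checkpoint suggestion could be implemented as a small pre-flight helper run before fitting. A minimal sketch using only pandas/NumPy (the function name and exact checks are illustrative, not part of the skill):

```python
import numpy as np
import pandas as pd

def validate_deseq_inputs(counts: pd.DataFrame, metadata: pd.DataFrame,
                          design_col: str) -> None:
    """Raise early on input problems before fitting (illustrative helper)."""
    # 1. Counts must be non-negative integers (raw counts, not TPM/FPKM).
    if (counts.to_numpy() < 0).any():
        raise ValueError("counts contain negative values")
    if not np.allclose(counts.to_numpy(), np.round(counts.to_numpy())):
        raise ValueError("counts are not integers; did you pass normalized values?")
    # 2. Sample indices must align between counts and metadata.
    if not counts.index.equals(metadata.index):
        raise ValueError("sample index mismatch between counts and metadata")
    # 3. The design factor needs at least two levels, and the design
    #    matrix (intercept + dummies) must have full column rank.
    if metadata[design_col].nunique() < 2:
        raise ValueError(f"'{design_col}' has fewer than two levels")
    design = pd.get_dummies(metadata[design_col], drop_first=True).astype(float)
    design.insert(0, "intercept", 1.0)
    if np.linalg.matrix_rank(design.to_numpy()) < design.shape[1]:
        raise ValueError(f"design matrix for '{design_col}' is rank-deficient")
```

Calling this between data loading and `deseq2()` turns the reactive troubleshooting section into proactive checks.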

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~400+ lines. It explains concepts Claude already knows (what design formulas are, what p-values mean, what volcano plots show), includes redundant patterns (the quick start workflow is essentially repeated in 'Common Analysis Patterns'), and has unnecessary sections like 'When to Use This Skill' and extensive explanations of what deseq2() does internally. Much of this content belongs in reference files, not the main skill. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready Python code throughout. Every pattern includes complete working code with proper imports, and the CLI script usage is well-specified with concrete flags and arguments. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced (Steps 1-6), but there are no explicit validation checkpoints between steps. For a workflow involving data transformation and statistical analysis, there should be validation steps (e.g., verify counts are non-negative integers, confirm sample alignment after filtering, validate design matrix rank before fitting). The troubleshooting section covers issues reactively rather than building verification into the workflow. | 2 / 3 |
| Progressive Disclosure | References to external files (api_reference.md, workflow_guide.md, scripts/run_deseq2_analysis.py) are well-signaled and one level deep, which is good. However, the main SKILL.md contains far too much inline content that should be in those reference files — the visualization code, troubleshooting section, common analysis patterns, and quality metrics could all be offloaded, keeping the main file as a concise overview. | 2 / 3 |
| Total | | 8 / 12 |

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (558 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 |

Passed

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

