
scientific-visualization

Meta-skill for publication-ready figures. Use when creating journal submission figures requiring multi-panel layouts, significance annotations, error bars, colorblind-safe palettes, and specific journal formatting (Nature, Science, Cell). Orchestrates matplotlib/seaborn/plotly with publication styles. For quick exploration use seaborn or plotly directly.
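As an illustration of the kind of setup this skill orchestrates, a minimal matplotlib sketch follows. The column width, font sizes, and palette choice here are illustrative assumptions, not the skill's actual presets:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# Hypothetical single-column width (~89 mm) in inches -- an assumption,
# not an actual journal specification.
SINGLE_COL_IN = 3.5

plt.rcParams.update({
    "font.size": 7,          # small body font typical of print figures
    "axes.linewidth": 0.5,
    "savefig.dpi": 300,      # print-resolution raster fallback
})

# Okabe-Ito palette, widely cited as colorblind-safe
OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#D55E00"]

fig, ax = plt.subplots(figsize=(SINGLE_COL_IN, SINGLE_COL_IN * 0.75))
conditions = [1, 2, 3]
means = [2.1, 3.4, 2.8]
sems = [0.2, 0.3, 0.25]
ax.errorbar(conditions, means, yerr=sems, fmt="o", capsize=2,
            color=OKABE_ITO[1])
ax.set_xlabel("Condition")
ax.set_ylabel("Response (a.u.)")

# Export to an in-memory buffer; a real workflow would write a PDF file
buf = io.BytesIO()
fig.savefig(buf, format="pdf", bbox_inches="tight")
```

For quick exploration, a direct `seaborn` or `plotly` call is simpler, as the description notes.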

Score: 76

Quality: 67% (Does it follow best practices?)

Impact: 94%, 1.13x (average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/scientific-visualization/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a specific niche (publication-ready scientific figures), lists concrete capabilities, includes natural trigger terms researchers would use, and explicitly differentiates itself from general plotting skills. The negative guidance about when NOT to use this skill is a particularly strong feature that reduces conflict risk.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: multi-panel layouts, significance annotations, error bars, colorblind-safe palettes, and specific journal formatting. Also names concrete tools (matplotlib/seaborn/plotly) and concrete journals (Nature, Science, Cell). | 3 / 3 |
| Completeness | Clearly answers both 'what' (creating publication-ready figures with multi-panel layouts, annotations, error bars, etc.) and 'when' (explicit 'Use when creating journal submission figures requiring...' clause). Also includes a helpful negative trigger ('For quick exploration use seaborn or plotly directly') to reduce false matches. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms a user would say: 'publication-ready figures', 'journal submission', 'multi-panel layouts', 'significance annotations', 'error bars', 'colorblind-safe', and specific journal names (Nature, Science, Cell). These are highly natural terms researchers would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche targeting publication-ready scientific figures with journal-specific formatting. The explicit contrast with quick exploration plotting ('For quick exploration use seaborn or plotly directly') actively reduces conflict with general plotting skills. | 3 / 3 |
| **Total** | | 12 / 12 |

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive and covers scientific visualization thoroughly with real code examples, journal-specific guidance, and a useful final checklist. However, it is severely bloated — the seaborn section alone is nearly half the document and heavily duplicates earlier content. The reliance on custom helper scripts that aren't provided in the bundle reduces actionability, and the lack of validation feedback loops in workflows limits workflow clarity.

Suggestions

Reduce the seaborn section to ~20 lines with 1-2 key examples and a reference to the dedicated seaborn SKILL.md, eliminating the massive duplication of box plots, heatmaps, color palettes, and best practices already covered earlier.

Move detailed content (color palette specifications, typography settings, journal dimension tables, common issues/solutions) into the referenced markdown files to make SKILL.md a true overview document.

Add explicit validation and error recovery steps to the workflow — e.g., what to do when check_figure_size() fails, how to programmatically test colorblind accessibility, and a validate-fix-retry loop.

Remove explanatory prose that Claude already knows (e.g., 'Seaborn provides a high-level, dataset-oriented interface...', 'Scientific visualization transforms data into clear, accurate figures') and replace with terse directives.
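The validate-fix-retry loop suggested above could be sketched as follows. `check_figure_size` stands in for the skill's unshipped helper script, and the size limit is an illustrative assumption:

```python
# Hypothetical sketch of a validate-fix-retry loop for figure export.
MAX_WIDTH_IN = 7.2  # assumed double-column width limit, not a real spec

def check_figure_size(width_in, height_in, max_width=MAX_WIDTH_IN):
    """Return a list of problems; an empty list means the figure passes."""
    problems = []
    if width_in > max_width:
        problems.append(f"width {width_in:.2f}in exceeds {max_width}in limit")
    return problems

def export_with_retry(width_in, height_in, max_attempts=3):
    """Validate, apply a fix, and re-validate until the figure passes
    or attempts run out; return the final size and remaining issues."""
    for _ in range(max_attempts):
        problems = check_figure_size(width_in, height_in)
        if not problems:
            return width_in, height_in, []
        # Fix step: scale the figure down proportionally, then re-check
        scale = MAX_WIDTH_IN / width_in
        width_in, height_in = width_in * scale, height_in * scale
    return width_in, height_in, check_figure_size(width_in, height_in)

w, h, issues = export_with_retry(10.0, 6.0)
```

The point is the loop shape (check, fix, re-check, report what remains), not the specific size rule.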

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose at ~600+ lines. Massive duplication: seaborn examples appear in Quick Start, Common Tasks, and then again in a dedicated ~200-line seaborn section that repeats box plots, heatmaps, color palettes, and best practices already covered. Explains concepts Claude already knows (e.g., 'Seaborn provides a high-level, dataset-oriented interface for statistical graphics'). The seaborn section alone could be reduced by 80% by referencing the earlier examples and the linked seaborn SKILL.md. | 1 / 3 |
| Actionability | Contains many concrete code examples, but they depend on custom helper scripts (style_presets.py, figure_export.py, color_palettes.py) that are referenced but not provided in the bundle, making the code not truly executable as-is. The matplotlib rcParams and seaborn examples are directly actionable, but the core workflow relies on unavailable imports. | 2 / 3 |
| Workflow Clarity | The Workflow Summary section provides a clear 6-step sequence with code snippets, and the Final Checklist is excellent. However, there are no explicit validation checkpoints or error recovery steps (e.g., what to do if check_figure_size fails, how to verify colorblind accessibility programmatically). The 'Fix an Existing Figure' task is a checklist without a feedback loop. | 2 / 3 |
| Progressive Disclosure | References to external files (references/, scripts/, assets/) are well-organized and clearly signaled in the Resources section. However, the SKILL.md itself is monolithic with enormous inline content that should be in reference files — particularly the ~200-line seaborn section which duplicates content and even points to a separate seaborn SKILL.md. The body contains far too much detail that undermines the overview-with-references pattern. | 2 / 3 |
| **Total** | | 7 / 12 |

Passed
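On the Workflow Clarity point about verifying colorblind accessibility programmatically: one minimal, dependency-free heuristic is to require that palette colors stay distinguishable after grayscale conversion. The threshold below is an illustrative assumption; a vetted simulation library is preferable for real checks:

```python
def relative_luminance(hex_color):
    """WCAG relative luminance of an sRGB hex color like '#E69F00'."""
    rgb = [int(hex_color[i:i + 2], 16) / 255.0 for i in (1, 3, 5)]
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def grayscale_distinct(palette, min_gap=0.08):
    """True if every pair of colors differs enough in luminance to
    survive grayscale printing (min_gap is an illustrative threshold)."""
    lums = sorted(relative_luminance(c) for c in palette)
    return all(b - a >= min_gap for a, b in zip(lums, lums[1:]))

# A luminance-graded palette passes the check...
ok = grayscale_distinct(["#000000", "#0072B2", "#009E73", "#E69F00"])
# ...while Okabe-Ito orange vs. sky blue are nearly isoluminant and fail,
# even though both are colorblind-safe when hue is available.
bad = grayscale_distinct(["#E69F00", "#56B4E9"])
```

A check like this gives the validate-fix-retry loop something concrete to act on.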

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (778 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| **Total** | | 9 / 11 |

Passed
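The 'metadata.version' warning could presumably be addressed with a frontmatter entry along these lines. The exact schema and field placement are assumptions here; consult the skill spec:

```yaml
---
name: scientific-visualization
description: Meta-skill for publication-ready figures...
metadata:
  version: "1.0.0"   # assumed field location, inferred from the warning name
---
```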

Repository
K-Dense-AI/claude-scientific-skills
Reviewed
