
multi-panel-figure-assembler

Automatically assemble 6 sub-figures (A-F) into a high-resolution composite figure with aligned edges, unified fonts, and labels.

Install with Tessl CLI

npx tessl i github:aipoch/medical-research-skills --skill multi-panel-figure-assembler

Overall score: 65

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at specificity and carves out a clear, distinctive niche for scientific figure assembly. However, it critically lacks any 'Use when...' guidance, making it difficult for Claude to know when to select this skill from a large pool. The trigger terms could also be expanded to include common user phrasings.

Suggestions

- Add a 'Use when...' clause with trigger terms like 'combine figures', 'panel figure', 'multi-panel layout', 'scientific figure', and 'figure assembly'.
- Include common file formats users might mention (e.g., 'PNG', 'TIFF', 'publication-ready figures').
- Add context about the domain (e.g., 'for scientific publications' or 'journal figures') to help Claude match user intent.
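For instance, a revised description along these lines would address all three suggestions. This is a sketch, not the skill's actual metadata; the exact trigger phrasing is illustrative:

```markdown
---
name: multi-panel-figure-assembler
description: >
  Assemble 6 sub-figures (A-F) into a high-resolution, publication-ready
  composite figure with aligned edges, unified fonts, and panel labels.
  Use when the user asks to combine figures, build a panel figure or
  multi-panel layout, or prepare a scientific/journal figure from PNG or
  TIFF images.
---
```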

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'assemble 6 sub-figures (A-F)', 'high-resolution composite figure', 'aligned edges', 'unified fonts', and 'labels'. These are precise, actionable capabilities. | 3 / 3 |
| Completeness | Describes what the skill does well but completely lacks a 'Use when...' clause or any explicit trigger guidance. Per the rubric, missing explicit trigger guidance caps completeness at 2, and this description has none at all. | 1 / 3 |
| Trigger Term Quality | Contains some relevant terms such as 'sub-figures', 'composite figure', and 'labels', but misses common user variations like 'panel figure', 'figure layout', 'multi-panel', 'scientific figure', and file format mentions. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche: assembling exactly 6 sub-figures (A-F) into composites with specific formatting requirements. Unlikely to conflict with general image-editing or document skills. | 3 / 3 |

Total: 9 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides good actionable guidance with executable examples for both CLI and Python usage. However, it suffers from redundancy (duplicate parameter tables) and unnecessary boilerplate sections that don't help Claude use the tool. The core functionality documentation is solid but buried under template-like content that should be removed.

Suggestions

- Remove the duplicate Parameters table; the Command Line Arguments table already covers it with more detail.
- Remove or relocate boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that don't help Claude execute the task.
- Add error-handling guidance: what to do if images fail to load, have incompatible formats, or if the output looks wrong.
- Add a quick validation step to verify output quality (e.g., check file size, open and inspect the result).
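The suggested validation step could be sketched with a small stdlib-only helper. Everything here is illustrative rather than part of the skill: the 10 kB size floor is an arbitrary heuristic, and the check assumes PNG output.

```python
import os

# Eight-byte magic number that every valid PNG file starts with.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def validate_figure(path, min_bytes=10_000):
    """Sanity-check an assembled composite figure before publication use.

    Returns a list of problem descriptions; an empty list means all
    checks passed.
    """
    if not os.path.exists(path):
        return [f"output file not found: {path}"]
    problems = []
    size = os.path.getsize(path)
    if size < min_bytes:
        problems.append(f"file is only {size} bytes; assembly may have failed")
    with open(path, "rb") as fh:
        header = fh.read(len(PNG_SIGNATURE))
    if header != PNG_SIGNATURE:
        problems.append("file does not start with a valid PNG signature")
    return problems
```

If the returned list is non-empty, re-run the assembler or open the file for visual inspection before using it in a manuscript.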

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Contains significant redundancy: the Parameters table duplicates the Command Line Arguments table with less information, and the Risk Assessment, Security Checklist, Evaluation Criteria, and Lifecycle Status sections add boilerplate that doesn't help Claude use the tool. Could be tightened considerably. | 2 / 3 |
| Actionability | Provides fully executable command-line examples and Python code that are copy-paste ready. Clear argument tables with defaults and descriptions. Both CLI and programmatic usage patterns are concrete and complete. | 3 / 3 |
| Workflow Clarity | This is a single-command tool, so complex workflows aren't needed. However, there is no validation guidance (what happens if images have incompatible formats?), and no error-handling examples or verification steps for checking output quality before publication use. | 2 / 3 |
| Progressive Disclosure | Content is reasonably organized into clear sections, but everything lives in one monolithic file. The boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) clutter the main content and could be removed or moved elsewhere. No references to external documentation. | 2 / 3 |

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under metadata | Warning |
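Per the warning's own suggestion, the usual fix is to move any non-spec top-level keys under metadata. In this sketch the offending key name (version) is hypothetical, not the key the validator actually flagged:

```yaml
---
name: multi-panel-figure-assembler
description: Assemble 6 sub-figures (A-F) into a high-resolution composite figure.
# An unrecognized top-level key like this triggers frontmatter_unknown_keys:
# version: 1.2.0
# Nesting it under `metadata` clears the warning:
metadata:
  version: 1.2.0
---
```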

Total: 10 / 11 (Passed)
