
figure-legend-gen

Generate standardized figure legends for scientific charts and graphs.


Quality: 27% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/figure-legend-gen/SKILL.md"

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear niche—generating figure legends for scientific visuals—but is too terse to be effective for skill selection. It lacks a 'Use when...' clause, misses common user trigger terms like 'figure caption' or 'plot description', and doesn't elaborate on the specific capabilities (e.g., formatting conventions, journal styles, statistical annotations).

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for figure captions, figure legends, or descriptions for scientific plots, charts, or graphs in manuscripts or publications.'

Include common trigger term variations such as 'figure caption', 'plot description', 'chart label', 'manuscript figure', and 'publication-ready legend'.

Expand the capability list with specific actions, e.g., 'Generates standardized figure legends including panel descriptions, statistical annotations, axis explanations, and journal-compliant formatting.'
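
Taken together, an improved description might look something like the sketch below. The wording is illustrative, and it assumes the standard SKILL.md frontmatter with `name` and `description` keys:

    ---
    name: figure-legend-gen
    description: >-
      Generate standardized figure legends for scientific charts and graphs,
      including panel descriptions, statistical annotations, axis explanations,
      and journal-compliant formatting. Use when the user asks for figure
      captions, figure legends, plot descriptions, chart labels, or
      publication-ready legends for manuscript figures.
    ---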

Dimension scores:

Specificity: 2 / 3
Names the domain (scientific charts/graphs) and one action (generate figure legends), but doesn't list multiple concrete actions or elaborate on what 'standardized' entails (e.g., formatting, numbering, caption structure).

Completeness: 1 / 3
Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also thin, this scores a 1.

Trigger Term Quality: 2 / 3
Includes relevant terms like 'figure legends', 'scientific charts', and 'graphs', but misses common variations users might say such as 'figure caption', 'plot description', 'chart annotation', or 'manuscript figures'.

Distinctiveness / Conflict Risk: 2 / 3
The combination of 'figure legends' and 'scientific' provides some distinctiveness, but it could overlap with general scientific writing skills or broader chart/visualization description tools.

Total: 7 / 12 (Passed)

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that are not specific to figure legend generation and waste significant token budget. The core domain knowledge—how to actually construct a good scientific figure legend—is buried under layers of generic process scaffolding. The document has structural issues including broken cross-references and duplicate workflow sections that contradict each other in ordering.

Suggestions

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements, Input Validation) that don't contain figure-legend-specific information—these waste tokens on things Claude already knows.

Consolidate the three competing workflow descriptions (Example Usage run plan, Implementation Details, Workflow) into a single clear workflow with specific validation steps for legend quality.

Add a concrete example showing an actual input image description and the expected figure legend output, so Claude knows what good output looks like rather than just how to invoke a script.

Fix broken cross-references ('See ## Prerequisites above' when Prerequisites appears below) and the syntax error in the third CLI example (`--image.png` should be `--input image.png`).
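
On the last point, this review doesn't quote the broken example in full, but going by the flags it names, the corrected invocation would look roughly like this (the `python scripts/main.py` call is inferred from the script path cited under Actionability below):

    # Third CLI example as reviewed (syntax error)
    python scripts/main.py --image.png

    # Corrected
    python scripts/main.py --input image.png

And the input/output pair suggested above could be as simple as the following, entirely hypothetical, illustration:

    Input:  bar chart of mean tumor volume (mm³) in treated vs. control mice,
            measured weekly for four weeks, with error bars.

    Legend: "Figure 1. Treatment slows tumor growth. Mean tumor volume (mm³)
            in treated (n = 10) and control (n = 10) mice over four weeks.
            Error bars, SEM. *p < 0.05, two-tailed t-test."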

Dimension scores:

Conciseness: 1 / 3
Extremely verbose and repetitive. Contains numerous sections that add no value (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria), cross-references to sections that appear later ('See ## Prerequisites above'), redundant commands (--help listed 3 times in Audit-Ready Commands), and generic boilerplate that Claude already knows (the Error Handling, Input Validation, and Response Template sections are generic instructions not specific to figure legend generation).

Actionability: 2 / 3
Provides concrete CLI commands and parameter tables which are useful, but the actual figure legend generation logic is entirely delegated to an opaque `scripts/main.py` with no visibility into what it does. The examples show how to invoke the script but not how to actually construct a figure legend. The third example has a syntax error (`--image.png` instead of `--input image.png`).

Workflow Clarity: 1 / 3
Multiple competing workflow sections exist (Example Usage run plan, Workflow section, Implementation Details) that are all generic and vague ('Confirm the user objective', 'Validate that the request matches the documented scope'). No specific validation checkpoints for the actual figure legend generation process. The workflows read like generic templates rather than task-specific guidance.

Progressive Disclosure: 2 / 3
References to external files like `references/legend_templates.md` and `references/academic_style_guide.md` are appropriately signaled, and the Supported Chart Types table is well-organized. However, the document itself is a monolithic wall of text with many sections that should be consolidated or removed, and cross-references like 'See ## Usage above' point incorrectly (Usage appears below, not above).

Total: 6 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys: Warning
Unknown frontmatter key(s) found; consider removing or moving to metadata (see the sketch below).

Total: 10 / 11 (Passed)
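
The warning message itself points at the fix: move any unrecognized top-level key under `metadata`. A purely illustrative sketch follows; the report doesn't name the offending key, so `license` here is a stand-in:

    ---
    name: figure-legend-gen
    description: ...
    license: MIT        # unknown top-level key, triggers the warning
    ---

    # becomes

    ---
    name: figure-legend-gen
    description: ...
    metadata:
      license: MIT      # custom keys belong under metadata
    ---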

Repository: aipoch/medical-research-skills (Reviewed)
