Generate standardized figure legends for scientific charts and graphs.
Impact: Pending — no eval scenarios have been run.
Status: Passed — no known issues.
Optimize this skill with Tessl
npx tessl skill review --optimize "./scientific-skills/Academic Writing/figure-legend-gen/SKILL.md"

Quality
Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a reasonably specific niche—generating figure legends for scientific visuals—but is too terse to be effective for skill selection. It lacks a 'Use when...' clause, misses common user trigger terms like 'figure caption' or 'plot description', and does not enumerate the specific sub-tasks it handles.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for figure captions, figure legends, or descriptive text for scientific plots, charts, or graphs.'
- Include common trigger term variations such as 'figure caption', 'plot description', 'chart label', 'manuscript figure', and file-type hints like '.png' and '.svg'.
- Expand the capability list with concrete actions, e.g., 'Generates structured figure legends including panel descriptions, statistical annotations, axis explanations, and journal-compliant formatting.'
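Taken together, these suggestions might produce a frontmatter description along the following lines. This is a hypothetical sketch — the field names and exact wording depend on the skill spec in use:

```yaml
---
name: figure-legend-gen
description: >
  Generate standardized, journal-ready figure legends for scientific
  charts, graphs, and plots, including panel descriptions, statistical
  annotations, axis explanations, and journal-compliant formatting.
  Use when the user asks for figure captions, figure legends, plot
  descriptions, or descriptive text for manuscript figures
  (e.g. .png or .svg chart images).
---
```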
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (scientific charts/graphs) and a single action (generate figure legends), but does not list multiple concrete actions or elaborate on what 'standardized' entails (e.g., formatting, numbering, caption structure). | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'what' is also thin, this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant terms like 'figure legends', 'scientific charts', and 'graphs', but misses common variations users might say, such as 'figure caption', 'plot description', 'chart annotation', or 'manuscript figures'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'figure legends' and 'scientific charts' is fairly specific, but could overlap with general scientific-writing or chart-creation skills without clearer trigger boundaries. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate content (security checklists, risk assessments, lifecycle status, evaluation criteria) that is not specific to figure legend generation and wastes significant token budget. The core domain knowledge—what makes a good scientific figure legend, examples of generated output, and chart-type-specific guidance—is largely absent, replaced by generic workflow templates. The Legend Structure section is the most valuable part but is too brief relative to the overall document length.
Suggestions
- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Input Validation template text) and focus tokens on domain-specific content, such as example legends for each chart type.
- Add at least one complete input→output example showing a chart description and the resulting publication-quality legend, so Claude knows exactly what format and content to produce.
- Fix the circular cross-references ('See ## Prerequisites above' appearing before Prerequisites) and consolidate duplicate sections (two workflow sections, repeated --help commands).
- Expand the Legend Structure section with chart-type-specific templates or examples, since this is the core value of the skill — currently it is only 7 bullet points with no concrete examples.
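The input→output example recommended above could take a shape like the following sketch, suitable for embedding directly in SKILL.md. The chart, numbers, and statistics here are invented purely for illustration:

```markdown
### Example

**Input:** Bar chart comparing mean tumor volume across three treatment
groups (n = 8 per group); error bars show SEM; asterisk over group C.

**Output:**
Figure 1. Treatment C reduces tumor volume relative to vehicle control.
Mean tumor volume (mm³) after 21 days of treatment with vehicle,
drug B, or drug C. Bars show group means; error bars indicate SEM
(n = 8 mice per group). *p < 0.05 versus vehicle, one-way ANOVA with
Tukey's post hoc test.
```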
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Contains massive amounts of boilerplate (risk-assessment tables, security checklists, lifecycle status, evaluation criteria) that add no value for generating figure legends. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above'). The audit-ready commands section repeats the same --help command three times. Much content is generic template filler rather than task-specific guidance. | 1 / 3 |
| Actionability | Provides concrete CLI commands and parameter tables, which are somewhat actionable, but the actual figure-legend generation logic is entirely delegated to an opaque scripts/main.py with no indication of what it actually does or how to use the skill without the script. The example commands include a typo ('--image.png' instead of '--input image.png'). No example of actual generated output (a sample legend) is provided, which would be the most actionable element for this skill. | 2 / 3 |
| Workflow Clarity | There is a numbered workflow in the 'Example Usage' section and a separate 'Workflow' section, but both are generic and not specific to figure legend generation. Steps like 'Confirm the user objective' and 'Validate that the request matches the documented scope' are vague process steps, not concrete figure-legend-specific actions. No validation checkpoint verifies the quality of the generated legend output. | 2 / 3 |
| Progressive Disclosure | References to external files (references/legend_templates.md, references/academic_style_guide.md) are present and clearly signaled, which is good. However, the main file itself is a monolithic wall of text with many sections that could be separated or removed entirely. The circular cross-references ('See ## Prerequisites above') are confusing and suggest poor organization. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 — Passed |
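The single frontmatter_unknown_keys warning can usually be cleared by nesting non-spec keys under a metadata block, as the message suggests. A hypothetical before/after — author and version stand in for whichever keys the validator actually flagged:

```yaml
# Before: unknown top-level keys trigger frontmatter_unknown_keys
author: Jane Doe
version: 1.2.0

# After: nest non-spec keys under metadata
metadata:
  author: Jane Doe
  version: 1.2.0
```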