Generate publication-quality figures and tables from experiment results. Use when user says "画图", "作图", "generate figures", "paper figures", or needs plots for a paper.
- Quality: 77% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- No known issues (Passed)
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/paper-figure/SKILL.md

Quality
Discovery: 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description that clearly communicates its purpose and includes explicit trigger guidance with both Chinese and English keywords. Its main weakness is that the 'what' portion could be more specific about the types of figures and tables it can generate. Overall it performs well across all dimensions.
Suggestions
- Add specific concrete actions like 'create scatter plots, bar charts, heatmaps, statistical tables, and format them with journal-ready styling' to improve specificity.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (publication-quality figures and tables from experiment results) and describes some actions (generate figures/tables), but doesn't list multiple specific concrete actions like creating scatter plots, bar charts, heatmaps, formatting axes, etc. | 2 / 3 |
| Completeness | Clearly answers both 'what' (generate publication-quality figures and tables from experiment results) and 'when' (explicit 'Use when' clause with specific trigger phrases and a contextual condition). | 3 / 3 |
| Trigger Term Quality | Includes both Chinese ('画图', '作图') and English ('generate figures', 'paper figures') natural trigger terms, plus the contextual trigger 'plots for a paper'. Good coverage of terms users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | The focus on publication-quality figures from experiment results creates a clear niche distinct from general plotting or data visualization skills. The bilingual trigger terms and 'paper' context further reduce conflict risk. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a highly actionable skill with excellent concrete code examples and a well-thought-out workflow for generating publication-quality figures. Its main weaknesses are verbosity (redundant tables, unnecessary scope explanations) and lack of proper validation/feedback loops in the workflow. The content would benefit from splitting detailed code templates into separate reference files and adding automated verification steps.
Suggestions
- Remove the duplicate figure type reference table (appears in both Step 3 and the bottom section) and consolidate into one location
- Add automated validation in Step 5 — e.g., a Python script that checks file sizes, verifies PDF validity, and renders a preview grid for visual inspection (see the sketch after this list)
- Split detailed code templates (line plot, bar chart, heatmap, etc.) into a separate TEMPLATES.md file, keeping only one representative example in the main skill
- Trim the scope table — Claude doesn't need detailed explanations of what architecture diagrams or photograph screenshots are; a simple bullet list of supported vs unsupported types would suffice
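
A minimal sketch of what such an automated check could look like, assuming figures are written as PDFs into a figures/ directory and that pypdf is available; the directory name, size threshold, and library choice are assumptions for illustration, not details taken from the skill itself.

```python
# check_figures.py -- hedged sketch of an automated output check for Step 5.
# Assumes figures land in ./figures as PDFs and that pypdf is installed;
# adjust the path, threshold, and formats to match the skill's actual output.
from pathlib import Path

from pypdf import PdfReader  # only needed for the PDF validity check

FIG_DIR = Path("figures")
MIN_BYTES = 1024  # flag suspiciously small (likely empty or failed) outputs


def check_figures() -> list[str]:
    problems = []
    pdfs = sorted(FIG_DIR.glob("*.pdf"))
    if not pdfs:
        problems.append(f"no PDF figures found in {FIG_DIR}/")
    for pdf in pdfs:
        size = pdf.stat().st_size
        if size < MIN_BYTES:
            problems.append(f"{pdf.name}: file is only {size} bytes")
            continue
        try:
            reader = PdfReader(pdf)
            if len(reader.pages) == 0:
                problems.append(f"{pdf.name}: PDF has no pages")
        except Exception as exc:  # malformed or truncated PDF
            problems.append(f"{pdf.name}: not a readable PDF ({exc})")
    return problems


if __name__ == "__main__":
    issues = check_figures()
    if issues:
        print("Figure check FAILED:")
        for issue in issues:
            print(f"  - {issue}")
        raise SystemExit(1)
    print(f"Figure check passed for {len(list(FIG_DIR.glob('*.pdf')))} PDF(s).")
```

Run at the end of Step 5, the non-zero exit code gives the agent a concrete signal to re-run or debug the plotting scripts, which would also address the missing feedback loop noted under Workflow Clarity below. A preview grid could be bolted on with a PDF-to-image library, but that part is left out of the sketch.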
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long (~250 lines) with some redundancy — the figure type reference table appears twice (Steps 3 and bottom), the scope table explains things Claude could infer, and the acknowledgements section adds no actionable value. However, most content is substantive code examples and concrete guidance rather than explaining basic concepts. | 2 / 3 |
| Actionability | Excellent actionability with fully executable Python scripts for line plots, bar charts, and style configuration. LaTeX snippets are copy-paste ready, the bash command for running scripts is concrete, and the decision tree for figure type selection provides specific, actionable guidance. | 3 / 3 |
| Workflow Clarity | The 8-step workflow is clearly sequenced and logical, but validation is weak — Step 5 only says 'verify all output files exist and are non-empty' without showing how, and there's no feedback loop for when figure generation scripts fail or produce incorrect output. The quality checklist in Step 8 is manual rather than automated verification. | 2 / 3 |
| Progressive Disclosure | The content is monolithic — everything is in one large file with no references to external files for detailed content. The figure type reference table, detailed code examples for each plot type, and the LaTeX templates could be split into separate reference files. The scope table and constants section are well-structured but the overall document is too long for a single SKILL.md. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 (Passed) |
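
Both warnings are the kind of thing a small frontmatter lint could catch before publishing. A minimal sketch, assuming the frontmatter is standard YAML between --- fences and that name, description, license, allowed-tools, and metadata are the recognized keys; that key list is an assumption here, not taken from the validator's spec.

```python
# lint_frontmatter.py -- hedged sketch, not the validator the review actually runs.
# KNOWN_KEYS is an assumed set of recognized frontmatter keys; adjust to the real spec.
import sys

import yaml  # PyYAML

KNOWN_KEYS = {"name", "description", "license", "allowed-tools", "metadata"}


def lint(path: str) -> list[str]:
    text = open(path, encoding="utf-8").read()
    if not text.startswith("---"):
        return ["no YAML frontmatter block found"]
    frontmatter = yaml.safe_load(text.split("---", 2)[1]) or {}
    notes = [
        f"unknown frontmatter key {key!r}; consider removing it or moving it under metadata"
        for key in sorted(set(frontmatter) - KNOWN_KEYS)
    ]
    tools = frontmatter.get("allowed-tools")
    if tools:
        # Tool-name validity rules differ between agents, so just surface the list for review.
        notes.append(f"allowed-tools entries to double-check by hand: {tools}")
    return notes


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "skills/paper-figure/SKILL.md"
    for note in lint(path):
        print(f"warning: {note}")
```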