paper-figure

Generate publication-quality figures and tables from experiment results. Use when user says "画图", "作图", "generate figures", "paper figures", or needs plots for a paper.

Paper Figure: Publication-Quality Plots from Experiment Data

Generate all figures and tables for a paper based on: $ARGUMENTS

Scope: What This Skill Can and Cannot Do

| Category | Can auto-generate? | Examples |
|----------|--------------------|----------|
| Data-driven plots | ✅ Yes | Line plots (training curves), bar charts (method comparison), scatter plots, heatmaps, box/violin plots |
| Comparison tables | ✅ Yes | LaTeX tables comparing prior bounds, method features, ablation results |
| Multi-panel figures | ✅ Yes | Subfigure grids combining multiple plots (e.g., 3×3 dataset × method) |
| Architecture/pipeline diagrams | ❌ No — manual | Model architecture, data flow diagrams, system overviews. At best can generate a rough TikZ skeleton, but expect to draw these yourself using tools like draw.io, Figma, or TikZ |
| Generated image grids | ❌ No — manual | Grids of generated samples (e.g., GAN/diffusion outputs). These come from running your model, not from this skill |
| Photographs / screenshots | ❌ No — manual | Real-world images, UI screenshots, qualitative examples |

In practice: For a typical ML paper, this skill handles ~60% of figures (all data plots + tables). The remaining ~40% (hero figure, architecture diagram, qualitative results) need to be created manually and placed in figures/ before running /paper-write. The skill will detect these as "existing figures" and preserve them.

Constants

  • STYLE = publication — Visual style preset. Options: publication (default, clean for print), poster (larger fonts), slide (bold colors)
  • DPI = 300 — Output resolution
  • FORMAT = pdf — Output format. Options: pdf (vector, best for LaTeX), png (raster fallback)
  • COLOR_PALETTE = tab10 — Default matplotlib color cycle. Options: tab10, Set2, colorblind (deuteranopia-safe)
  • FONT_SIZE = 10 — Base font size (matches typical conference body text)
  • FIG_DIR = figures/ — Output directory for generated figures
  • REVIEWER_MODEL = gpt-5.4 — Model used via Codex MCP for figure quality review.

Inputs

  1. PAPER_PLAN.md — figure plan table (from /paper-plan)
  2. Experiment data — JSON files, CSV files, or screen logs in figures/ or project root
  3. Existing figures — any manually created figures to preserve

If no PAPER_PLAN.md exists, scan for data files and ask the user which figures to generate.
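That fallback scan can be sketched with the standard library; the function name and search roots below are illustrative, not part of the skill's API:

```python
# Hypothetical fallback: collect candidate data files when PAPER_PLAN.md is absent.
from pathlib import Path

def find_data_files(roots=('figures', '.')):
    """Return JSON/CSV files under the given directories that could back figures."""
    found = []
    for root in roots:
        p = Path(root)
        if not p.is_dir():
            continue
        for ext in ('*.json', '*.csv'):
            found.extend(sorted(p.glob(ext)))
    return found
```

The resulting list can then be presented to the user to confirm which files should become figures.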

Workflow

Step 1: Read Figure Plan

Parse the Figure Plan table from PAPER_PLAN.md:

| ID | Type | Description | Data Source | Priority |
|----|------|-------------|-------------|----------|
| Fig 1 | Architecture | ... | manual | HIGH |
| Fig 2 | Line plot | ... | figures/exp.json | HIGH |

Identify:

  • Which figures can be auto-generated from data
  • Which need manual creation (architecture diagrams, etc.)
  • Which are comparison tables (generate as LaTeX)
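Parsing that plan table can be sketched as follows, assuming the five-column markdown format shown above (helper names are illustrative):

```python
# Parse the Figure Plan markdown table from PAPER_PLAN.md (assumed format).
def parse_figure_plan(text):
    """Return one dict per figure row; header and separator rows are skipped."""
    rows = []
    for line in text.splitlines():
        if not line.strip().startswith('|'):
            continue
        cells = [c.strip() for c in line.strip().strip('|').split('|')]
        # Skip the header row and the |----| separator row
        if cells[0] == 'ID' or set(cells[0]) <= set('-: '):
            continue
        rows.append(dict(zip(('id', 'type', 'description', 'data_source', 'priority'), cells)))
    return rows

def needs_manual(row):
    """A figure whose Data Source is 'manual' cannot be auto-generated."""
    return row['data_source'].lower() == 'manual'
```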

Step 2: Set Up Plotting Environment

Create a shared style configuration script:

```python
# paper_plot_style.py — shared across all figure scripts
import matplotlib
import matplotlib.pyplot as plt

# Constants (see the Constants section above)
FONT_SIZE = 10
DPI = 300
FORMAT = 'pdf'
FIG_DIR = 'figures'

matplotlib.rcParams.update({
    'font.size': FONT_SIZE,
    'font.family': 'serif',
    'font.serif': ['Times New Roman', 'Times', 'DejaVu Serif'],
    'axes.labelsize': FONT_SIZE,
    'axes.titlesize': FONT_SIZE + 1,
    'xtick.labelsize': FONT_SIZE - 1,
    'ytick.labelsize': FONT_SIZE - 1,
    'legend.fontsize': FONT_SIZE - 1,
    'figure.dpi': DPI,
    'savefig.dpi': DPI,
    'savefig.bbox': 'tight',
    'savefig.pad_inches': 0.05,
    'axes.grid': False,
    'axes.spines.top': False,
    'axes.spines.right': False,
    'text.usetex': False,  # set True if LaTeX is available
    'mathtext.fontset': 'stix',
})

# Color palette
COLORS = plt.cm.tab10.colors  # or Set2, or a colorblind-safe palette

def save_fig(fig, name, fmt=FORMAT):
    """Save figure to FIG_DIR with consistent naming."""
    fig.savefig(f'{FIG_DIR}/{name}.{fmt}')
    print(f'Saved: {FIG_DIR}/{name}.{fmt}')
```

Step 3: Auto-Select Figure Type

Use this decision tree for data-driven figures (inspired by Imbad0202/academic-research-skills):

| Data Pattern | Recommended Type | Size |
|--------------|------------------|------|
| X=time/steps, Y=metric | Line plot | 0.48\textwidth |
| Methods × 1 metric | Bar chart | 0.48\textwidth |
| Methods × multiple metrics | Grouped bar / radar | 0.95\textwidth |
| Two continuous variables | Scatter plot | 0.48\textwidth |
| Matrix / grid values | Heatmap | 0.48\textwidth |
| Distribution comparison | Box/violin plot | 0.48\textwidth |
| Multi-dataset results | Multi-panel (subfigure) | 0.95\textwidth |
| Prior work comparison | LaTeX table | full width |
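One way to encode that decision table in code; the pattern keys below are illustrative names, not a fixed schema:

```python
# Map data-pattern keys to (figure type, LaTeX width); keys are illustrative.
FIGURE_RULES = [
    ('time_series',          ('Line plot', r'0.48\textwidth')),
    ('methods_one_metric',   ('Bar chart', r'0.48\textwidth')),
    ('methods_multi_metric', ('Grouped bar / radar', r'0.95\textwidth')),
    ('two_continuous',       ('Scatter plot', r'0.48\textwidth')),
    ('matrix',               ('Heatmap', r'0.48\textwidth')),
    ('distributions',        ('Box/violin plot', r'0.48\textwidth')),
    ('multi_dataset',        ('Multi-panel (subfigure)', r'0.95\textwidth')),
    ('prior_work',           ('LaTeX table', 'full width')),
]

def recommend_figure(pattern):
    """Return (figure type, width) for a data pattern; default to a line plot."""
    return dict(FIGURE_RULES).get(pattern, ('Line plot', r'0.48\textwidth'))
```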

Step 4: Generate Each Figure

For each figure in the plan, create a standalone Python script:

Line plots (training curves, scaling):

```python
# gen_fig2_training_curves.py
from paper_plot_style import *
import json

with open('figures/exp_results.json') as f:
    data = json.load(f)

fig, ax = plt.subplots(1, 1, figsize=(5, 3.5))
ax.plot(data['steps'], data['fac_loss'], label='Factorized', color=COLORS[0])
ax.plot(data['steps'], data['crf_loss'], label='CRF-LR', color=COLORS[1])
ax.set_xlabel('Training Steps')
ax.set_ylabel('Cross-Entropy Loss')
ax.legend(frameon=False)
save_fig(fig, 'fig2_training_curves')
```

Bar charts (comparison, ablation):

```python
# gen_fig3_comparison.py
from paper_plot_style import *

# Illustrative values; in practice, read these from a JSON/CSV results file
methods = ['Baseline', 'Method A', 'Method B', 'Ours']
values = [82.3, 85.1, 86.7, 89.2]

fig, ax = plt.subplots(1, 1, figsize=(5, 3))
bars = ax.bar(methods, values, color=[COLORS[i] for i in range(len(methods))])
ax.set_ylabel('Accuracy (%)')
# Add value labels on bars
for bar, val in zip(bars, values):
    ax.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.3,
            f'{val:.1f}', ha='center', va='bottom', fontsize=FONT_SIZE-1)
save_fig(fig, 'fig3_comparison')
```
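Multi-panel figures (subfigure grids): a minimal sketch, assuming one (xs, ys) series per panel; the helper name, axis labels, and layout are illustrative:

```python
# Assemble a 1×N panel grid (e.g., one panel per dataset); names are illustrative.
import matplotlib
matplotlib.use('Agg')  # headless backend so the script also runs without a display
import matplotlib.pyplot as plt

def multi_panel(results, panel_titles, figsize=(7, 2.4)):
    """results: list of (xs, ys) per panel; returns the assembled Figure."""
    fig, axes = plt.subplots(1, len(results), figsize=figsize,
                             sharey=True, squeeze=False)
    for ax, (xs, ys), title in zip(axes.ravel(), results, panel_titles):
        ax.plot(xs, ys)
        ax.set_title(title, fontsize=9)  # panel label, not a figure title
        ax.set_xlabel('Steps')
    axes.ravel()[0].set_ylabel('Loss')
    fig.tight_layout()
    return fig
```

In the real scripts this would import `paper_plot_style` instead of setting the backend inline, and the figure would go through `save_fig`.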

Comparison tables (LaTeX, for theory papers):

```latex
\begin{table}[t]
\centering
\caption{Comparison of estimation error bounds. $n$: sample size, $D$: ambient dim, $d$: latent dim, $K$: subspaces, $n_k$: modes.}
\label{tab:bounds}
\begin{tabular}{lccc}
\toprule
Method & Rate & Depends on $D$? & Multi-modal? \\
\midrule
\citet{MinimaxOkoAS23} & $n^{-s'/D}$ & Yes (curse) & No \\
\citet{ScoreMatchingdistributionrecovery} & $n^{-2/d}$ & No & No \\
\textbf{Ours} & $\sqrt{\sum n_k d_k / n}$ & No & Yes \\
\bottomrule
\end{tabular}
\end{table}
```

Architecture/pipeline diagrams (MANUAL — outside this skill's scope):

  • These require manual creation using draw.io, Figma, Keynote, or TikZ
  • This skill can generate a rough TikZ skeleton as a starting point, but do not expect publication-quality results
  • If the figure already exists in figures/, preserve it and generate only the LaTeX \includegraphics snippet
  • Flag as [MANUAL] in the figure plan and latex_includes.tex
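For reference, a rough TikZ skeleton of the kind this skill can emit as a starting point; the node names, geometry, and caption are placeholders to be redrawn by hand:

```latex
% Rough TikZ skeleton (starting point only; refine manually)
\begin{figure}[t]
\centering
\begin{tikzpicture}[node distance=1.6cm,
    box/.style={draw, rounded corners, minimum width=2.2cm, minimum height=0.8cm}]
  \node[box] (input) {Input};
  \node[box, right=of input]   (encoder) {Encoder};
  \node[box, right=of encoder] (decoder) {Decoder};
  \draw[->] (input) -- (encoder);
  \draw[->] (encoder) -- (decoder);
\end{tikzpicture}
\caption{Pipeline overview (placeholder).}
\label{fig:architecture}
\end{figure}
```

Note that `right=of` requires the TikZ `positioning` library.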

Step 5: Run All Scripts

```bash
# Run all figure generation scripts
for script in gen_fig*.py; do
    python "$script"
done
```

Verify all output files exist and are non-empty.
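That verification might look like the following; the function name and glob pattern are illustrative:

```shell
# Check that every generated figure exists and is non-empty; dir defaults to figures/
check_figures() {
    local dir="${1:-figures}"
    local ok=1
    for f in "$dir"/fig*.pdf; do
        # -s is false both for missing files and for zero-byte outputs
        [ -s "$f" ] || { echo "Missing or empty: $f" >&2; ok=0; }
    done
    [ "$ok" -eq 1 ]
}
```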

Step 6: Generate LaTeX Include Snippets

For each figure, output the LaTeX code to include it:

```latex
% === Fig 2: Training Curves ===
\begin{figure}[t]
    \centering
    \includegraphics[width=0.48\textwidth]{figures/fig2_training_curves.pdf}
    \caption{Training curves comparing factorized and CRF-LR denoising.}
    \label{fig:training_curves}
\end{figure}
```

Save all snippets to figures/latex_includes.tex for easy copy-paste into the paper.
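Emitting those snippets programmatically can be sketched as below; the helper name and default width are assumptions, and the output matches the template above:

```python
# Build one \begin{figure} snippet per generated plot (helper name is illustrative).
def latex_include(fig_id, filename, caption, label, width=r'0.48\textwidth'):
    """Return a LaTeX figure environment that includes one generated PDF."""
    return '\n'.join([
        '%% === %s ===' % fig_id,
        r'\begin{figure}[t]',
        r'    \centering',
        '    \\includegraphics[width=%s]{figures/%s}' % (width, filename),
        '    \\caption{%s}' % caption,
        '    \\label{fig:%s}' % label,
        r'\end{figure}',
    ])
```

Concatenating the snippets for all figures and writing the result to `figures/latex_includes.tex` completes this step.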

Step 7: Figure Quality Review with REVIEWER_MODEL

Send figure descriptions and captions to GPT-5.4 for review:

```yaml
mcp__codex__codex:
  model: gpt-5.4
  config: {"model_reasoning_effort": "xhigh"}
  prompt: |
    Review these figure/table plans for a [VENUE] submission.

    For each figure:
    1. Is the caption informative and self-contained?
    2. Does the figure type match the data being shown?
    3. Is the comparison fair and clear?
    4. Any missing baselines or ablations?
    5. Would a different visualization be more effective?

    [list all figures with captions and descriptions]
```

Step 8: Quality Checklist

Before finishing, verify each figure (from pedrohcgs/claude-code-my-workflow):

  • Font size readable at printed paper size (not too small)
  • Colors distinguishable in grayscale (print-friendly)
  • No title inside figures — titles go only in LaTeX \caption{} (from pedrohcgs)
  • Legend does not overlap data
  • Axis labels have units where applicable
  • Axis labels are publication-quality (not variable names like emp_rate)
  • Figure width fits single column (0.48\textwidth) or full width (0.95\textwidth)
  • PDF output is vector (not rasterized text)
  • No matplotlib default title (remove plt.title for publications)
  • Serif font matches paper body text (Times / Computer Modern)
  • Colorblind-accessible (if using colorblind palette)
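The grayscale item can be spot-checked numerically. A rough sketch using the Rec. 601 luma coefficients; the 0.15 threshold is a judgment call, not a standard:

```python
# Heuristic check: do palette colors stay distinguishable when printed in grayscale?
def luminance(rgb):
    """Perceived luminance of an (r, g, b) tuple with components in [0, 1] (Rec. 601)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def grayscale_distinguishable(palette, min_gap=0.15):
    """True if every adjacent pair of sorted luminances differs by at least min_gap."""
    lums = sorted(luminance(c) for c in palette)
    return all(b - a >= min_gap for a, b in zip(lums, lums[1:]))
```

For a visual check, https://davidmathlogic.com/colorblind/ (referenced under Key Rules) covers the colorblind case as well.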

Output

```
figures/
├── paper_plot_style.py          # shared style config
├── gen_fig1_architecture.py     # per-figure scripts
├── gen_fig2_training_curves.py
├── gen_fig3_comparison.py
├── fig1_architecture.pdf        # generated figures
├── fig2_training_curves.pdf
├── fig3_comparison.pdf
├── latex_includes.tex           # LaTeX snippets for all figures
└── TABLE_*.tex                  # standalone table LaTeX files
```

Key Rules

  • Every figure must be reproducible — save the generation script alongside the output
  • Do NOT hardcode data — always read from JSON/CSV files
  • Use vector format (PDF) for all plots — PNG only as fallback
  • No decorative elements — no background colors, no 3D effects, no chart junk
  • Consistent style across all figures — same fonts, colors, line widths
  • Colorblind-safe — verify with https://davidmathlogic.com/colorblind/ if needed
  • One script per figure — easy to re-run individual figures when data changes
  • No titles inside figures — captions are in LaTeX only
  • Comparison tables count as figures — generate them as standalone .tex files

Figure Type Reference

| Type | When to Use | Typical Size |
|------|-------------|--------------|
| Line plot | Training curves, scaling trends | 0.48\textwidth |
| Bar chart | Method comparison, ablation | 0.48\textwidth |
| Grouped bar | Multi-metric comparison | 0.95\textwidth |
| Scatter plot | Correlation analysis | 0.48\textwidth |
| Heatmap | Attention, confusion matrix | 0.48\textwidth |
| Box/violin | Distribution comparison | 0.48\textwidth |
| Architecture | System overview | 0.95\textwidth |
| Multi-panel | Combined results (subfigures) | 0.95\textwidth |
| Comparison table | Prior bounds vs. ours (theory) | full width |

Acknowledgements

Design pattern (type × style matrix) inspired by baoyu-skills. Publication style defaults and figure rules from pedrohcgs/claude-code-my-workflow. Visualization decision tree from Imbad0202/academic-research-skills.

Repository
wanshuiyin/Auto-claude-code-research-in-sleep