Create professional research posters in LaTeX using beamerposter, tikzposter, or baposter. Support for conference presentations, academic posters, and scientific communication. Includes layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices for visual communication.
Quality — 58%
Impact — no eval scenarios have been run
Validation — Passed, no known issues
Optimize this skill with Tessl:
npx tessl skill review --optimize ./scientific-skills/latex-posters/SKILL.md
Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and trigger term coverage, naming concrete LaTeX packages and specific capabilities. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The domain is distinctive enough to avoid conflicts with other skills.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create a research poster, conference poster, or academic poster in LaTeX, or mentions beamerposter, tikzposter, or baposter.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices. Also names specific tools (beamerposter, tikzposter, baposter). | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied through the domain context. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'research posters', 'LaTeX', 'beamerposter', 'tikzposter', 'baposter', 'conference presentations', 'academic posters', 'scientific communication', 'poster'. Good coverage of both general and specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: LaTeX research posters with specific package names (beamerposter, tikzposter, baposter). Unlikely to conflict with general LaTeX skills, presentation skills, or other document-creation skills due to the highly specific domain. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill contains genuinely useful LaTeX poster creation guidance with executable code examples, validation workflows, and proper references to external files. However, it is severely undermined by extreme verbosity and repetition—the same AI graphic generation rules (element counts, font sizes, white space percentages) are restated nearly identically 6-8 times throughout the document. The skill is roughly 4x longer than it needs to be, with the AI-generated graphics workflow dominating what should be a LaTeX poster creation skill.
Suggestions
Consolidate all AI graphic generation rules into ONE section with a single reference table, then reference that section from the workflow steps instead of repeating the same limits (3-4 elements, 60% white space, 150pt+ fonts) dozens of times.
Move the detailed PDF review checklist (Section 11, ~200 lines), accessibility guidance, and presentation tips into separate reference files, keeping only a brief summary with links in the main SKILL.md.
Remove explanatory content Claude already knows: what research posters are, what DPI means, what PDF/X-1a is, basic LaTeX compilation concepts, and general design principles like 'tell a story, don't just list facts'.
Reduce the good/bad prompt examples from 8+ pairs to 2-3 representative pairs, using a compact table format rather than full bash code blocks for each example.
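For instance, the single consolidated reference table from the first suggestion might look like the following (limits taken from the review's own quotes; the row labels are illustrative):

| Rule | Limit |
|---|---|
| Elements per graphic | 3-4 max |
| White space | at least 60% |
| Minimum font size | 150pt+ |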
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 800+ lines with massive repetition. The same rules about '3-4 elements max', '60% white space', and '150pt+ fonts' are repeated dozens of times across multiple sections. Content about AI-generated graphics dominates the skill but repeats identical guidance in Step 0, Step 1, Step 2, Step 2b, Visual Element Guidelines, Stage 2, and Common Pitfalls. Explains basic concepts Claude already knows (what a research poster is, what PDF is, what DPI means). The skill could be reduced to ~25% of its length without losing actionable information. | 1 / 3 |
| Actionability | Provides concrete LaTeX code snippets for tikzposter, baposter, and beamerposter that are executable, plus bash commands for compilation and validation. However, much of the 'actionable' content is prompt-engineering examples for an AI image generation tool (generate_schematic.py) rather than LaTeX poster creation itself. The LaTeX templates are incomplete (missing \title and \author definitions in some cases). The core LaTeX guidance is solid but buried under repetitive AI graphic generation instructions. | 2 / 3 |
| Workflow Clarity | A 6-stage workflow is defined (Planning → Generate Visuals → Design → Integration → Refinement → Compilation) with numbered steps and validation checkpoints (Step 2b post-generation review, Step 11 overflow check). However, the workflow is obscured by extreme repetition: the same validation steps appear in at least 4 different places. The overflow-checking workflow in Section 11 is well structured with explicit feedback loops, but the overall document structure makes it hard to follow the actual sequence. | 2 / 3 |
| Progressive Disclosure | References external files (references/latex_poster_packages.md, references/poster_layout_design.md, etc.) and templates in assets/, which is good structure. However, no bundle files are provided to verify these exist. The main SKILL.md is monolithic at 800+ lines; much of the AI graphic generation guidance, the detailed QR code section, the accessibility section, and the presentation tips could live in separate reference files. The inline content is far too long for an overview document. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
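As a minimal sketch of the kind of complete, compilable template the Actionability row asks for, a tikzposter skeleton with explicit \title and \author definitions might look like this (theme, block titles, and placeholder names are illustrative, not taken from the skill itself):

```latex
\documentclass[25pt, a0paper, portrait]{tikzposter}
\usetheme{Default}

% Explicit title metadata -- the pieces the review notes are
% missing from some of the skill's templates.
\title{Poster Title}
\author{Author Name}
\institute{Institute Name}

\begin{document}
\maketitle                 % renders the title block from the metadata above
\begin{columns}
  \column{0.5}             % left half of the poster
  \block{Introduction}{First block of body text.}
  \column{0.5}             % right half of the poster
  \block{Results}{Second block of body text.}
\end{columns}
\end{document}
```

Compiled with pdflatex, this yields a complete A0 portrait poster, so a template that omits the metadata lines stands out immediately.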
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (1595 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 (Passed) |