
latex-posters

Create professional research posters in LaTeX using beamerposter, tikzposter, or baposter. Support for conference presentations, academic posters, and scientific communication. Includes layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices for visual communication.
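For orientation, here is a minimal sketch of the kind of document this skill targets, using beamerposter. The paper size, scale, theme, and two-column layout are illustrative assumptions, not taken from the skill itself:

```latex
% Minimal beamerposter skeleton (illustrative; sizes and theme are assumptions)
\documentclass[final]{beamer}
\usepackage[size=a0,orientation=portrait,scale=1.4]{beamerposter}

\title{Poster Title}
\author{Author Name}
\institute{Institution}

\begin{document}
\begin{frame}[t]
  \begin{columns}[t]
    \begin{column}{0.45\textwidth}
      \begin{block}{Introduction}
        Left-column content, e.g.\ motivation and methods.
      \end{block}
    \end{column}
    \begin{column}{0.45\textwidth}
      \begin{block}{Results}
        Right-column content, e.g.\ figures and conclusions.
      \end{block}
    \end{column}
  \end{columns}
\end{frame}
\end{document}
```

A file like this compiles with pdflatex; tikzposter and baposter offer alternative document classes with their own block and column syntax.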

1.95x average score across 3 eval scenarios

Quality: 58% (Does it follow best practices?)

Impact: 92%

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/latex-posters/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity, naming concrete LaTeX packages and multiple specific capabilities. The trigger terms are well-chosen, covering both technical package names and natural language terms like 'research posters' and 'conference presentations'. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create a research poster, academic poster, or conference poster in LaTeX, or mentions beamerposter, tikzposter, or baposter.'

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: creating research posters in LaTeX, layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices. Also names specific packages (beamerposter, tikzposter, baposter).

3 / 3

Completeness

Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied through the domain context. Per rubric guidelines, missing 'Use when...' caps completeness at 2.

2 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'research posters', 'LaTeX', 'beamerposter', 'tikzposter', 'baposter', 'conference presentations', 'academic posters', 'scientific communication', 'multi-column', 'poster'. Good coverage of both general and specific terms.

3 / 3

Distinctiveness Conflict Risk

Very distinct niche: LaTeX research posters with specific package names (beamerposter, tikzposter, baposter). Unlikely to conflict with general LaTeX skills, general poster design skills, or presentation skills due to the specific combination of domain and tools.

3 / 3

Total: 11 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill contains genuinely useful, actionable content for creating LaTeX research posters with AI-generated graphics, including executable code, validation workflows, and concrete examples. However, it is severely undermined by extreme verbosity and repetition — the same constraints about AI graphic simplicity are restated 5-6 times across different sections, and the document includes extensive explanations of concepts Claude already knows (poster design basics, color theory, what fonts are). The content would be dramatically more effective at 20-30% of its current length with repeated material consolidated and reference content moved to external files.

Suggestions

Consolidate the AI graphic generation guidelines into ONE authoritative section instead of repeating the same 3-4 element / 10 word / 60% white space rules across Steps 0, 1, 2, 2b, Visual Element Guidelines, Stage 2, and Common Pitfalls — this alone would cut 40%+ of the document.

Move the detailed Section 11 quality control checklist (Steps 1-9, ~200 lines) and the poster content patterns/accessibility/presentation tips into reference files, keeping only a brief summary with links in the main SKILL.md.

Remove explanations of concepts Claude already knows: what research posters are, what sans-serif fonts are, what color blindness is, what DPI means, basic LaTeX compilation, etc. Focus only on project-specific conventions and non-obvious guidance.

Reduce the good/bad example pairs for AI graphics from ~15 repetitive examples to 3-4 well-chosen ones that cover the key patterns, and put the full catalog in a reference file.

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~1000+ lines. Massively repetitive — the same rules about '3-4 elements max', '60% white space', '150pt+ fonts' are restated dozens of times across multiple sections. Explains basic concepts Claude already knows (what a research poster is, what PDF is, what sans-serif fonts are, what color blindness is). The AI graphic generation guidelines alone repeat the same constraints in tables, examples, checklists, and prose at least 5-6 times.

1 / 3

Actionability

Contains executable LaTeX code snippets and bash commands that are concrete and copy-paste ready. However, much of the actionability is diluted by the sheer volume of repetitive guidance. The AI graphic generation commands reference a 'scripts/generate_schematic.py' tool with specific examples, and LaTeX templates are provided with real code. But many sections are descriptive lists rather than executable instructions (e.g., design principles, presentation tips).

2 / 3

Workflow Clarity

There is a clear multi-stage workflow (Planning → Generate Visuals → Design → Integrate → Refine → Compile) with numbered steps and validation checkpoints (Step 0 pre-generation review, Step 2b post-generation review, overflow checks). However, the workflow is buried in an enormous document with so much repetition that the actual sequence is hard to follow. The validation steps are good but repeated across multiple sections (Step 0, Step 2b, Section 11) creating confusion about which checklist to use when.

2 / 3

Progressive Disclosure

References to external files exist (references/latex_poster_packages.md, references/poster_layout_design.md, assets/ templates, scripts/), which is good progressive disclosure. However, the SKILL.md itself is monolithic — it contains enormous amounts of inline content that should be in reference files (the entire Section 11 quality control checklist, the repeated AI graphic guidelines, the detailed package configuration examples). The overview is not concise enough to serve as a quick-start guide.

2 / 3

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (1603 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11

Passed

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
