latex-posters

Create professional research posters in LaTeX using beamerposter, tikzposter, or baposter. Support for conference presentations, academic posters, and scientific communication. Includes layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices for visual communication.
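The packages named in the description all build posters on top of standard LaTeX. As a hedged illustration of what this skill produces (a minimal sketch, not taken from the skill itself), a beamerposter skeleton looks roughly like this:

```latex
\documentclass[final]{beamer}
% beamerposter sets the paper size and scales beamer fonts for poster use
\usepackage[size=a0,orientation=portrait,scale=1.4]{beamerposter}
\title{Poster Title}
\author{Author Name}
\begin{document}
\begin{frame}{}
  \begin{columns}[t] % two-column poster layout
    \begin{column}{.45\linewidth}
      \begin{block}{Introduction}
        Content here.
      \end{block}
    \end{column}
    \begin{column}{.45\linewidth}
      \begin{block}{Results}
        \includegraphics[width=\linewidth]{figure.pdf}
      \end{block}
    \end{column}
  \end{columns}
\end{frame}
\end{document}
```

tikzposter and baposter follow the same idea with their own block and column syntax.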

Overall score: 66

Quality: 58% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/latex-posters/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity, naming concrete LaTeX packages and specific capabilities like layout design, color schemes, and figure integration. The trigger terms are comprehensive, covering both technical (beamerposter, tikzposter) and natural language terms (research posters, conference presentations). The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create a research poster, conference poster, or academic poster in LaTeX, or mentions beamerposter, tikzposter, or baposter.'
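Applying that suggestion, a revised description might read as follows (illustrative wording, not the maintainer's):

```
Create professional research posters in LaTeX using beamerposter, tikzposter,
or baposter. Includes layout design, color schemes, multi-column formats, and
figure integration. Use when the user asks to create a research poster,
conference poster, or academic poster in LaTeX, or mentions beamerposter,
tikzposter, or baposter.
```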

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: layout design, color schemes, multi-column formats, figure integration, and poster-specific best practices. Also names specific LaTeX packages (beamerposter, tikzposter, baposter). | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied through the domain context. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'research posters', 'LaTeX', 'beamerposter', 'tikzposter', 'baposter', 'conference presentations', 'academic posters', 'scientific communication', 'poster'. Good coverage of both general and specific terms. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: LaTeX research posters with specific package names. Unlikely to conflict with general LaTeX skills, general poster design skills, or presentation skills due to the specific combination of LaTeX + poster + named packages. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers severely from extreme verbosity and repetition—the same constraints about AI-generated graphics (3-4 elements max, 60% white space, giant fonts) are restated in nearly identical form at least 6-8 times throughout the document. While it contains useful LaTeX templates and a structured workflow, the signal-to-noise ratio is very poor, making it difficult to extract actionable guidance. The document would benefit enormously from being reduced to ~20% of its current length by eliminating redundancy and moving detailed checklists to reference files.

Suggestions

Eliminate the massive repetition of AI graphic generation rules—state them ONCE in a concise table, then reference that table. The same '3-4 elements, 60% white space, 150pt+ fonts' rules appear 6+ times.

Move the detailed PDF review checklist (Steps 1-9), common content patterns, and accessibility guidelines to separate reference files, keeping only a brief summary with links in the main SKILL.md.

Remove explanations of concepts Claude already knows: what research posters are, what PDF is, basic LaTeX compilation, general typography principles, and presentation tips (standing to the side, preparing verbal summaries).

Consolidate the conflicting numbering systems (Step 0-3 vs Stage 1-6) into a single clear workflow with one numbering scheme.
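As a sketch of the first suggestion above, the constraints the review quotes could be stated once in a single table inside SKILL.md and referenced thereafter (wording illustrative):

```markdown
| Constraint           | Rule         |
| -------------------- | ------------ |
| Elements per graphic | 3-4 maximum  |
| White space          | At least 60% |
| Minimum font size    | 150pt        |
```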

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | This skill is extremely verbose at ~1000+ lines with massive repetition. The same rules about '3-4 elements max', '60% white space', and '150pt+ fonts' are repeated dozens of times across multiple sections. Content that Claude already knows (what a research poster is, basic LaTeX compilation, what PDF is, typography basics) is explained at length. The AI graphic generation guidelines alone repeat the same constraints in at least 6 different formats (tables, examples, checklists, rules). | 1 / 3 |
| Actionability | The skill provides concrete LaTeX code examples and bash commands that are mostly executable, but much of the content is pseudocode-like prompt templates for an AI image generation tool rather than actual executable code. The LaTeX templates are useful but incomplete (missing \title and \author definitions, etc.). The generate_schematic.py commands reference a script whose existence and interface are not clearly documented. | 2 / 3 |
| Workflow Clarity | There is a multi-stage workflow (Stages 1-6) with some validation checkpoints (post-generation review, overflow checks, PDF quality control). However, the workflow is buried within an enormous document, and the validation steps, while present, are scattered across multiple sections with heavy duplication. The Step 0/Step 1/Step 2/Step 2b/Step 3 numbering conflicts with the Stage 1-6 numbering, creating confusion about the actual sequence. | 2 / 3 |
| Progressive Disclosure | The skill references external files (references/*.md, assets/*.tex, scripts/*.sh), which is good progressive disclosure, but the main SKILL.md itself is a monolithic wall of text that should have much more content pushed to those reference files. The AI graphic generation guidelines, the detailed PDF review checklist, and the common content patterns could all be separate reference documents, leaving the main file much leaner. | 2 / 3 |
| Total | | 7 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| skill_md_line_count | SKILL.md is long (1595 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 |

Passed

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

