
conference-poster-pitch

Use conference poster pitch for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.


Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/conference-poster-pitch/SKILL.md"

Quality

Discovery

40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description fails to explain what the skill actually does—no concrete actions or outputs are mentioned. While it includes some relevant trigger terms like 'conference poster pitch' and 'academic writing', the rest of the language is abstract and process-oriented rather than capability-oriented. The description needs a clear 'what it does' section with specific actions.

Suggestions

Add specific concrete actions the skill performs, e.g., 'Creates structured conference poster pitches by organizing research into title, abstract, key findings, methods, and conclusions sections.'

Include more natural trigger term variations such as 'research poster', 'poster presentation', 'poster session', 'academic poster', '.pptx poster'.

Restructure to clearly separate what from when, e.g., 'Generates conference poster pitches from research content, organizing findings into visual sections with key takeaways. Use when the user asks to create a poster, prepare a poster presentation, or summarize research for a conference.'
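The restructured description proposed in the last suggestion could sit in the skill's frontmatter roughly as follows. This is a sketch, not the skill's actual header; the field layout assumes a standard SKILL.md frontmatter with `name` and `description` keys.

```yaml
# Illustrative SKILL.md frontmatter; the description text follows the
# "what, then when" structure recommended above.
name: conference-poster-pitch
description: >
  Generates conference poster pitches from research content, organizing
  findings into visual sections with key takeaways. Use when the user asks
  to create a poster, prepare a poster presentation, or summarize research
  for a conference.
```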

Dimension / Reasoning / Score

Specificity

The description does not list any concrete actions. 'Structured execution, explicit assumptions, and clear output boundaries' are abstract process descriptors, not specific capabilities like 'create poster layout' or 'summarize research findings into poster sections.'

1 / 3

Completeness

The 'when' is partially addressed with 'Use [for] academic writing workflows that need structured execution...', but the 'what' is essentially missing—there is no explanation of what the skill actually does or produces. The description reads more like a 'when' clause without a 'what' clause.

2 / 3

Trigger Term Quality

'Conference poster pitch' and 'academic writing' are relevant keywords a user might use, but the description lacks common variations like 'research poster', 'poster presentation', 'poster design', or 'academic poster'. The remaining terms ('structured execution', 'output boundaries') are not natural user language.

2 / 3

Distinctiveness Conflict Risk

'Conference poster pitch' provides some niche specificity, but 'academic writing workflows' is broad enough to overlap with other academic writing skills. The lack of concrete actions makes it harder to distinguish from general academic writing or presentation skills.

2 / 3

Total: 7 / 12

Passed

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate (risk assessment tables, security checklists, lifecycle status, evaluation criteria) that provides no value for Claude's task execution. The core functionality—generating conference poster pitches—is almost entirely delegated to an opaque script with minimal explanation of the actual output or logic. Circular section references and redundant content make the skill harder to follow than necessary.

Suggestions

Remove all boilerplate sections that don't help Claude execute the task (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) to reduce token waste by ~60%.

Add concrete examples of actual pitch output for each duration (30s, 60s, 180s) so Claude understands the expected deliverable format without relying solely on the script.

Eliminate circular cross-references ('See ## X above') and consolidate the workflow into a single clear sequence instead of spreading it across Example Usage, Implementation Details, and Workflow sections.

Either show the core logic of scripts/main.py inline or explain the pitch structure/template so Claude can generate pitches even if the script is unavailable.
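The last suggestion, documenting the pitch structure inline so the script is not a single point of failure, could be sketched as follows. The section names, word-rate constant, and function names are illustrative assumptions, not the actual logic of `scripts/main.py`:

```python
# Hypothetical pitch template, sketching what the skill could document inline.
# Section names and the words-per-second rate are assumptions for illustration.
PITCH_SECTIONS = ["hook", "problem", "method", "key finding", "call to action"]

# Rough budget assuming ~2.5 spoken words per second.
WORDS_PER_SECOND = 2.5

def pitch_word_budget(duration_s: int) -> int:
    """Total word budget for a pitch of the given spoken duration."""
    return int(duration_s * WORDS_PER_SECOND)

def pitch_outline(duration_s: int) -> dict:
    """Split the word budget evenly across the pitch sections."""
    per_section = pitch_word_budget(duration_s) // len(PITCH_SECTIONS)
    return {section: per_section for section in PITCH_SECTIONS}

# Outlines for the three durations the review mentions (30s, 60s, 180s).
for duration in (30, 60, 180):
    print(duration, pitch_outline(duration))
```

With something like this in the skill body, Claude could fall back to generating a pitch directly when the script is unavailable, which is the failure mode the suggestion targets.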

Dimension / Reasoning / Score

Conciseness

Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above', 'See ## Workflow above'). Contains extensive boilerplate (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that adds no actionable value for Claude. The skill's core task—generating an elevator pitch—is buried under layers of generic project management scaffolding that Claude doesn't need.

1 / 3

Actionability

Provides concrete CLI commands with specific parameters (--poster-title, --duration) and usage examples, which is good. However, the actual skill content delegates everything to `scripts/main.py` without showing what the script does or how to generate a pitch without it. The actionable guidance is limited to 'run this script' rather than teaching Claude how to perform the task.

2 / 3

Workflow Clarity

The Workflow section provides a 5-step sequence with a fallback path for failures, and the Example Usage section has a 4-step run plan. However, validation steps are vague ('Validate that the request matches the documented scope') rather than concrete, and there's no explicit checkpoint between steps. The circular section references ('See ## Workflow above') create confusion about the actual execution order.

2 / 3

Progressive Disclosure

The content is a monolithic wall of text with 15+ sections, many of which are boilerplate or redundant. Circular cross-references ('See ## Prerequisites above for related details' placed before the Prerequisites section) are disorienting. The single external reference (references/audit-reference.md) is fine, but the inline content desperately needs trimming rather than splitting—most sections should be removed entirely.

1 / 3

Total: 6 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
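The fix the warning points at is a small frontmatter move. In this sketch the offending key name (`category`) is a made-up example; the real unknown key in the skill may differ:

```yaml
# Before: an unknown top-level key triggers the warning
name: conference-poster-pitch
category: academic-writing   # hypothetical unknown key

# After: nest it under metadata, as the warning suggests
name: conference-poster-pitch
metadata:
  category: academic-writing
```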

Total: 10 / 11

Passed

Repository
aipoch/medical-research-skills
Reviewed

