# conference-poster-pitch

> Use conference poster pitch for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
**Quality score: 37%**
- Impact: Pending (no eval scenarios have been run)
- Does it follow best practices? Passed (no known issues)
Optimize this skill with Tessl: `npx tessl skill review --optimize "./scientific-skills/Academic Writing/conference-poster-pitch/SKILL.md"`

## Quality
### Discovery: 40%

*Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.*
The description fails to explain what the skill actually does—no concrete actions or outputs are mentioned. While it includes some relevant trigger terms like 'conference poster pitch' and 'academic writing', the rest of the language is abstract and process-oriented rather than capability-oriented. The description needs a clear 'what it does' section with specific actions.
**Suggestions**

- Add specific concrete actions the skill performs, e.g., 'Creates structured conference poster pitches by organizing research into title, abstract, key findings, methods, and conclusions sections.'
- Include more natural trigger term variations such as 'research poster', 'poster presentation', 'poster design', 'academic poster', '.pptx poster'.
- Strengthen the 'Use when...' clause with explicit user-facing triggers, e.g., 'Use when the user asks to create a conference poster, summarize research for a poster session, or prepare a poster pitch.'
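Applied together, these suggestions might yield frontmatter along the following lines (a sketch only: the `name` value is assumed from the skill's path, the field layout follows common SKILL.md conventions, and the wording simply combines the suggested phrasings rather than describing the skill's actual capabilities):

```markdown
---
name: conference-poster-pitch
description: >
  Creates structured conference poster pitches by organizing research into
  title, abstract, key findings, methods, and conclusions sections. Use when
  the user asks to create a conference poster, research poster, academic
  poster, or poster presentation, summarize research for a poster session,
  or prepare a poster pitch.
---
```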
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description does not list any concrete actions. 'Structured execution, explicit assumptions, and clear output boundaries' are abstract process descriptors, not specific capabilities like 'create poster layout' or 'summarize research findings into poster sections.' | 1 / 3 |
| Completeness | The 'when' is partially addressed with 'Use [for] academic writing workflows that need structured execution...', but the 'what' is essentially missing—there is no explanation of what the skill actually does or produces. The description reads more like a 'when' clause without a 'what' clause. | 2 / 3 |
| Trigger Term Quality | 'Conference poster pitch' and 'academic writing' are relevant keywords a user might mention, but the description lacks common variations like 'research poster', 'poster presentation', 'poster design', or 'academic poster'. The remaining terms ('structured execution', 'output boundaries') are not natural user language. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Conference poster pitch' provides some niche specificity, but 'academic writing workflows' is broad enough to overlap with other academic writing skills. The lack of concrete actions makes it harder to distinguish from general academic writing or presentation skills. | 2 / 3 |
| **Total** | | 7 / 12 (Passed) |
### Implementation: 35%

*Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.*
This skill suffers from severe verbosity and boilerplate bloat—the actual domain-specific guidance for generating conference poster pitches is minimal compared to the generic process scaffolding. The CLI interface and parameter table are the strongest elements, providing concrete actionability. The circular cross-references between sections, extensive security/risk/lifecycle boilerplate, and abstract workflow steps significantly reduce the skill's effectiveness as a concise, actionable guide.
**Suggestions**

- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template) that don't contain poster-pitch-specific guidance—these waste tokens on things Claude already knows.
- Eliminate circular cross-references ('See ## Prerequisites above', 'See ## Usage above') and consolidate into a single linear flow: purpose → parameters → usage examples → error handling.
- Add domain-specific content about what makes a good poster pitch (structure, key elements per duration, example output) rather than relying entirely on the opaque scripts/main.py.
- Consolidate the duplicated validation commands (Quick Check and Audit-Ready Commands sections are identical) and the duplicated workflow descriptions (Example Usage run plan and Workflow section overlap significantly).
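One way to realize the suggested linear flow is a skeleton like the following (section names and contents are illustrative, not the skill's actual layout; the placeholder text describes what each section should hold):

```markdown
# Conference Poster Pitch

## Purpose
One paragraph: generates a timed elevator pitch for a poster session from
the user's research summary.

## Parameters
Keep the existing parameter table here, unchanged.

## Usage examples
Two or three concrete scripts/main.py invocations with the flags they need
and the shape of the output each produces.

## What makes a good pitch
Structure and length targets per duration, plus one short example pitch,
so output quality can be checked against them.

## Error handling
Fallback path when scripts/main.py fails or required inputs are missing.
```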
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above', 'See ## Workflow above'). Contains extensive boilerplate (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that adds no actionable value for Claude. The actual task—generating elevator pitches for poster sessions—is buried under layers of generic process documentation that Claude already knows. | 1 / 3 |
| Actionability | The CLI commands and parameter table are concrete and executable, and the usage examples with specific flags are helpful. However, much of the guidance is abstract process description ('validate the request, choose the packaged workflow, produce a bounded deliverable') rather than specific instructions for generating poster pitches. The actual content generation logic is entirely delegated to an opaque scripts/main.py with no insight into what makes a good pitch. | 2 / 3 |
| Workflow Clarity | The Workflow section provides a numbered sequence and the error handling section mentions fallback paths, which is good. However, the workflow steps are generic process steps that could apply to any skill, not specific to poster pitch generation. There's no validation checkpoint for output quality (e.g., checking pitch length matches duration, verifying structure). The circular cross-references between sections create confusion about the actual execution order. | 2 / 3 |
| Progressive Disclosure | There is a reference to references/audit-reference.md and the references/ directory, which is appropriate. However, the SKILL.md itself is a monolithic wall of text with many sections that could be consolidated or removed. The content that is inline (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) is boilerplate that doesn't belong in the main skill file and clutters navigation. | 2 / 3 |
| **Total** | | 7 / 12 (Passed) |
## Validation: 90%

*Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.*

Validation for skill structure: 10 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 10 / 11 Passed | |
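The one warning can typically be cleared by nesting nonstandard keys under `metadata`, along these lines (a sketch: `owner` and `version` are hypothetical stand-ins for whichever keys the validator actually flagged):

```markdown
---
name: conference-poster-pitch
description: ...
metadata:
  owner: scientific-skills   # hypothetical key, previously top-level
  version: 0.1.0             # hypothetical key, previously top-level
---
```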