Use conference poster pitch for academic writing workflows that need structured execution, explicit assumptions, and clear output boundaries.
- Quality: 28% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/conference-poster-pitch/SKILL.md"

Quality
Discovery: 22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description critically fails to explain what the skill actually does, focusing only on when to use it with abstract process-oriented language. The core capability (creating/editing/reviewing conference poster pitches?) is never stated, making it nearly impossible for Claude to know what actions this skill enables.
Suggestions
Add concrete actions describing what the skill does (e.g., 'Creates conference poster pitches from research papers, structures key findings into visual layouts, generates presenter talking points').
Expand trigger terms to include natural variations users would say: 'poster presentation', 'research poster', 'conference abstract', 'poster session'.
Restructure to lead with capabilities before the 'Use when' clause, following the pattern: '[What it does]. Use when [triggers].'
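Taken together, these suggestions point toward a description that leads with capabilities. A hypothetical rewrite, using the '[What it does]. Use when [triggers].' pattern, might look like this (all wording below is illustrative, not drawn from the skill itself):

```yaml
# Hypothetical SKILL.md frontmatter sketch; name and wording are assumptions
name: conference-poster-pitch
description: >
  Creates timed elevator pitches (30s/60s/180s) for conference posters and
  structures key findings into presenter talking points. Use when preparing a
  poster presentation, research poster, poster session, or conference abstract.
```

Leading with concrete verbs ('creates', 'structures') gives the agent something to match against, while the trailing 'Use when' clause carries the natural trigger terms.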
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'structured execution', 'explicit assumptions', and 'clear output boundaries' without describing any concrete actions. It doesn't specify what the skill actually does (e.g., creates posters, generates pitches, formats content). | 1 / 3 |
| Completeness | The description only provides a 'Use when' clause but fails to explain WHAT the skill actually does. The 'what' is entirely missing: we don't know if it creates posters, reviews them, generates pitches, or something else. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'conference poster pitch' and 'academic writing' that users might naturally say, but lacks common variations (e.g., 'poster presentation', 'research poster', 'abstract', 'conference submission'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The 'conference poster pitch' and 'academic writing' terms provide some specificity, but 'academic writing workflows' is broad enough to potentially conflict with other academic or writing-focused skills. | 2 / 3 |
| Total | | 6 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily over-engineered with boilerplate that obscures the core task of generating conference poster pitches. The actual actionable content (CLI parameters, example commands) is buried under excessive process documentation, security checklists, and meta-instructions. The skill would benefit from dramatic simplification to focus on what makes a good poster pitch and concrete output examples.
Suggestions
Remove boilerplate sections (Security Checklist, Lifecycle Status, Evaluation Criteria, Risk Assessment) that don't help Claude generate better pitches - these add ~80 lines of noise
Add concrete examples of actual pitch outputs for 30s/60s/180s durations so Claude understands the expected format and quality
Consolidate redundant workflow descriptions into a single clear sequence - currently 'Example Usage', 'Implementation Details', and 'Workflow' sections overlap significantly
Show what the script actually produces or include the pitch generation logic inline if the script is simple, rather than treating it as a black box
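As an illustration of the second suggestion, a 60-second pitch could be given a concrete output template. The structure below is an assumption for illustration only, not something taken from the skill:

```markdown
## 60-Second Pitch (illustrative template)
- Hook (~10s): one sentence on the problem and why it matters
- Approach (~20s): what was done, in plain language
- Result (~20s): the single most important finding, with one number
- Close (~10s): takeaway plus an invitation to visit the poster
```

Including even one such example per duration (30s/60s/180s) would let Claude match the expected format and register instead of inventing one.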
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with excessive boilerplate, redundant sections (multiple 'See X above' cross-references), and unnecessary content like security checklists, lifecycle status, and evaluation criteria that don't help Claude generate poster pitches. The core task (generate an elevator pitch) is buried under ~200 lines of template noise. | 1 / 3 |
| Actionability | Provides concrete CLI commands and parameter documentation, but the actual pitch generation logic is delegated to an external script without showing what the script does or how pitches are structured. No examples of actual pitch output or the format Claude should produce. | 2 / 3 |
| Workflow Clarity | Has numbered workflow steps and error handling sections, but the steps are generic ('confirm user objective', 'validate request') rather than specific to poster pitch generation. Missing validation checkpoints for the actual content quality of generated pitches. | 2 / 3 |
| Progressive Disclosure | References external files (references/audit-reference.md, scripts/main.py) appropriately, but the main document itself is poorly organized, with redundant sections and circular cross-references ('See ## Prerequisites above', 'See ## Usage above'). Content that should be in separate files (security checklist, evaluation criteria) is inline. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

10 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |