Access comprehensive LaTeX templates, formatting requirements, and submission guidelines for major scientific publication venues (Nature, Science, PLOS, IEEE, ACM), academic conferences (NeurIPS, ICML, CVPR, CHI), research posters, and grant proposals (NSF, NIH, DOE, DARPA). This skill should be used when preparing manuscripts for journal submission, conference papers, research posters, or grant proposals and need venue-specific formatting requirements and templates.
Score: 72
Best practices: 67%
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/venue-templates/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its scope with specific venue names, document types, and use cases. It effectively answers both what the skill does and when to use it, with excellent trigger-term coverage across journals, conferences, and grant agencies. As a minor stylistic point, the third person is maintained appropriately throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and domains: LaTeX templates, formatting requirements, and submission guidelines, and enumerates specific venues (Nature, Science, PLOS, IEEE, ACM), conferences (NeurIPS, ICML, CVPR, CHI), and grant agencies (NSF, NIH, DOE, DARPA). | 3 / 3 |
| Completeness | Clearly answers both "what" (access LaTeX templates, formatting requirements, and submission guidelines for specific venues) and "when" (explicitly states "This skill should be used when preparing manuscripts for journal submission, conference papers, research posters, or grant proposals and need venue-specific formatting requirements and templates"). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: specific venue names (Nature, Science, IEEE), document types (manuscripts, conference papers, research posters, grant proposals), and technical terms (LaTeX templates, formatting requirements, submission guidelines). Users searching for any of these venues or document types would naturally match. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused on academic/scientific publication formatting, with specific venue names as triggers. Unlikely to conflict with general writing, coding, or document-processing skills given the very specific domain of scientific publication templates and submission guidelines. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely verbose, spending most of its ~500+ lines listing information Claude already knows (journal names, acronyms, poster dimensions, citation style names) rather than providing actionable, executable guidance. The workflow structure exists but is buried in excessive cataloging. The skill would be dramatically improved by cutting 70%+ of the content and keeping only the workflow, script commands, and file references.
Suggestions

- Cut all venue/journal/conference listing tables and move them to reference files; the SKILL.md should point to references/journals_formatting.md etc. rather than duplicating catalog information inline.
- Remove the "Visual Enhancement with Scientific Schematics" section entirely; it is a cross-promotion for another skill that does not belong in the core workflow.
- Remove expansions of well-known acronyms (NSF, NIH, NeurIPS, etc.) and descriptions of what journals and conferences are; Claude already knows these.
- Add a feedback loop to the validation step: what to do when validate_format.py reports errors, with specific examples of common failures and fixes.
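The last suggestion can be made concrete. A minimal sketch of the missing feedback loop, assuming validate_format.py takes the .tex file as its argument and reports errors on stdout (the script's real CLI is not documented in this review); the error-to-fix mapping is illustrative, not taken from the real script:

```python
import subprocess

# Hypothetical table of common validator failures and fixes; the keywords
# and remedies here are examples, not the script's actual messages.
COMMON_FIXES = {
    "margin": "check \\usepackage[margin=...]{geometry} against the venue spec",
    "font": "switch to the document class's required font package",
    "page limit": "trim content or move material to an appendix",
}

def suggest_fix(error_line: str) -> str:
    """Map a validator error line to a likely fix."""
    for keyword, fix in COMMON_FIXES.items():
        if keyword in error_line.lower():
            return fix
    return "consult the venue's formatting guide"

def run_validation(tex_file: str) -> bool:
    """Run the validator once, printing a suggested fix per reported error."""
    result = subprocess.run(
        ["python", "scripts/validate_format.py", tex_file],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        return True
    for line in result.stdout.splitlines():
        print(f"{line} -> {suggest_fix(line)}")
    return False  # fix the reported issues, then re-run before compiling
```

The point is the loop shape (validate, surface fixes, re-validate), not the particular mapping.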
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Massive amounts of content Claude already knows (what Nature is, what NSF stands for, what citation styles are). Lists of journal names, conference acronyms with expansions, poster sizes, and tables of common knowledge add no value. The "When to Use This Skill" section repeats the overview. The "Visual Enhancement with Scientific Schematics" section is a lengthy tangent. The summary at the end restates everything. | 1 / 3 |
| Actionability | Provides some concrete commands (customize_template.py, validate_format.py, latexmk), but these reference scripts that may or may not exist. No actual LaTeX template content is shown; the skill only points to file paths. The "Example Usage" sections show markdown pseudocode of what a response would look like rather than executable guidance. The workflow steps are procedural but lack executable substance. | 2 / 3 |
| Workflow Clarity | The six-step workflow (Identify → Query → Review → Customize → Validate → Compile) is clearly sequenced and includes a validation step with validate_format.py and a review checklist. However, there is no error-recovery feedback loop: if validation fails, there is no guidance on what to do. The compile step mentions running pdflatex three times but does not explain why, or what to do if compilation fails. | 2 / 3 |
| Progressive Disclosure | References many external files (references/, assets/, scripts/), which is good progressive disclosure in principle. However, the SKILL.md itself is a monolithic wall of text with enormous inline tables and lists that should be in reference files. The writing style guides section adds another large block that could be a simple pointer. The content that IS inline (venue lists, formatting tables) should largely live in the referenced files instead. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
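On the Workflow Clarity point: the "three passes" exist because pdflatex resolves cross-references through the .aux file — pass 1 writes labels, pass 2 resolves \ref and \cite from them, and pass 3 settles page numbers that shifted in pass 2. A sketch of a compile step using latexmk, which automates the reruns until references stabilize (the helper names here are illustrative, not from the skill):

```python
import subprocess

def compile_paper(tex_file: str) -> bool:
    """Compile with latexmk, which reruns pdflatex until references stabilize."""
    result = subprocess.run(
        ["latexmk", "-pdf", "-interaction=nonstopmode", tex_file],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def latex_errors(log_text: str) -> list[str]:
    """On failure, lines beginning with '!' in the .log file are LaTeX errors."""
    return [line for line in log_text.splitlines() if line.startswith("!")]
```

Surfacing the `!`-prefixed log lines is usually enough guidance for an agent to locate a failed compile's cause.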
Validation: 81% (9 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (688 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |
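The metadata_version warning can presumably be cleared by adding a version field to the SKILL.md frontmatter. A sketch, assuming the key path implied by the warning text ('metadata.version'); the exact schema is not confirmed by this review:

```yaml
---
name: venue-templates
description: Access comprehensive LaTeX templates, formatting requirements, and submission guidelines for major scientific publication venues...
metadata:
  version: 1.0.0
---
```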