Access comprehensive LaTeX templates, formatting requirements, and submission guidelines for major scientific publication venues (Nature, Science, PLOS, IEEE, ACM), academic conferences (NeurIPS, ICML, CVPR, CHI), research posters, and grant proposals (NSF, NIH, DOE, DARPA). Use this skill when preparing manuscripts for journal submission, conference papers, research posters, or grant proposals that require venue-specific formatting requirements and templates.
Quality: 67% — does it follow best practices?
Impact: — (no eval scenarios have been run)
Validation: Passed — no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/venue-templates/SKILL.md`

Quality
Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its scope with specific venue names, document types, and concrete capabilities. It includes an explicit 'Use when' clause with natural trigger terms that users would actually say. The enumeration of specific venues and agencies makes it highly distinctive and unlikely to conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and domains: LaTeX templates, formatting requirements, submission guidelines, and enumerates specific venues (Nature, Science, PLOS, IEEE, ACM), conferences (NeurIPS, ICML, CVPR, CHI), and grant agencies (NSF, NIH, DOE, DARPA). | 3 / 3 |
| Completeness | Clearly answers both 'what' (access LaTeX templates, formatting requirements, and submission guidelines for specific venues) and 'when' (explicit 'Use when' clause: 'when preparing manuscripts for journal submission, conference papers, research posters, or grant proposals and need venue-specific formatting requirements and templates'). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: specific venue names (Nature, Science, IEEE), document types (manuscripts, conference papers, research posters, grant proposals), and technical terms (LaTeX templates, formatting requirements, submission guidelines). Users searching for any of these specific venues or document types would naturally match. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused on academic/scientific publication formatting with specific venue names as triggers. Unlikely to conflict with general writing, coding, or document processing skills due to the very specific domain of scientific publication venues and grant agencies. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely verbose and violates token efficiency by inlining extensive reference material (venue lists, formatting tables, citation styles) that belongs in bundled reference files. While it provides a reasonable workflow structure and some concrete commands, the sheer volume of descriptive content—much of which Claude already knows or could look up in referenced files—makes it inefficient. The lack of bundle files means all the referenced templates, scripts, and guides are unverifiable.
Suggestions
- Reduce the SKILL.md body to under 100 lines by moving all venue-specific formatting tables, citation styles, figure requirements, and venue lists into the referenced files (journals_formatting.md, conferences_formatting.md, etc.)
- Remove the 'Visual Enhancement with Scientific Schematics' section entirely: cross-skill promotion doesn't belong in the core workflow and wastes significant tokens
- Add explicit error recovery to the validation step: 'If validation fails → check specific error → fix → re-validate before proceeding'
- Provide at least one actual LaTeX template snippet inline (e.g., a minimal Nature article preamble) rather than only referencing file paths to non-existent bundle files
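To illustrate that last suggestion, a minimal Nature-style article skeleton might look like the sketch below. This is purely illustrative: the document class, options, and packages are generic placeholders, and any real submission would use the class file supplied by the venue's own template.

```latex
% Minimal Nature-style manuscript skeleton (illustrative sketch only;
% the class and packages are placeholders for the venue's real template)
\documentclass[12pt]{article}
\usepackage{graphicx}   % figures
\usepackage{setspace}   % double spacing for review copies
\usepackage{lineno}     % line numbers, commonly required for review
\doublespacing
\linenumbers

\title{Manuscript Title}
\author{First Author \and Second Author}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph abstract within the venue's word limit.
\end{abstract}

\section*{Introduction}
Body text follows the venue's structure and length limits.

\end{document}
```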
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 500+ lines. Massive amounts of content Claude already knows (what Nature is, what NSF stands for, lists of journal names, poster sizes, basic LaTeX compilation commands). Tables listing venues, citation styles, and figure requirements are reference material that belongs in bundled files, not the skill body. The 'When to Use This Skill' section restates the overview. The 'Visual Enhancement with Scientific Schematics' section is a lengthy tangent about another skill. | 1 / 3 |
| Actionability | Provides concrete bash commands for helper scripts (customize_template.py, validate_format.py, query_template.py) and LaTeX compilation, which is good. However, the 'Example Usage' section shows pseudocode-like markdown responses rather than executable code, and the templates/scripts referenced don't exist in any bundle. Much of the content describes rather than instructs. | 2 / 3 |
| Workflow Clarity | The 6-step workflow (Identify → Query → Review → Customize → Validate → Compile) is clearly sequenced and includes a validation step with validate_format.py and a review checklist. However, there's no error recovery/feedback loop: if validation fails, there's no guidance on what to do. The workflow also mixes concrete commands with vague instructions like 'Replace placeholder text.' | 2 / 3 |
| Progressive Disclosure | The skill references many external files (references/, assets/, scripts/) which suggests good intent for progressive disclosure, but no bundle files are provided, making all references unverifiable. The SKILL.md itself is a monolithic wall containing extensive reference tables (page limits, citation styles, figure requirements, venue lists) that should be in the referenced files rather than inline. The Resources section at the end properly lists bundled files, but the body duplicates much of what those files should contain. | 2 / 3 |
| Total | | 7 / 12 — Passed |
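The error-recovery gap noted under Workflow Clarity could be closed with a simple validate → fix → re-validate loop like the sketch below. The helper script name comes from the review's description of the skill; it is stubbed with a no-op here (and the `--venue` flag is hypothetical) so the control flow runs standalone.

```shell
# Sketch of a validate → fix → re-validate loop (assumed workflow, not
# the skill's actual code). The real call would be something like:
#   python scripts/validate_format.py paper.tex --venue nature
validate() { return 0; }  # no-op stub standing in for validate_format.py

attempts=0
until validate paper.tex; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then
    echo "validation still failing after $attempts attempts; stopping" >&2
    break
  fi
  echo "validation failed; fix the reported errors, then re-validate"
done
echo "validation loop finished after $attempts retries"
```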
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (688 lines); consider splitting into references/ and linking | Warning |
| metadata_version | `metadata.version` is missing | Warning |
| Total | | 9 / 11 — Passed |