Use when creating journal cover images, generating scientific artwork prompts, or designing graphical abstracts. Creates detailed prompts for AI image generators to produce publication-quality scientific visuals.
Quality: 52%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Passed. No known issues.
Optimize this skill with Tessl:

npx tessl skill review --optimize "./scientific-skills/Academic Writing/journal-cover-prompter/SKILL.md"

Quality
Discovery
89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted description that clearly communicates both what the skill does and when to use it, with strong domain-specific trigger terms. The 'Use when...' clause is placed first, which is slightly unconventional but still effective. The only minor weakness is that the core action ('creates detailed prompts') could be expanded with more specific sub-actions to better convey the range of capabilities.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (scientific visuals/publication imagery) and some actions (creating journal cover images, generating artwork prompts, designing graphical abstracts), but the core capability is somewhat narrow: it primarily 'creates detailed prompts' rather than listing multiple distinct concrete actions. | 2 / 3 |
| Completeness | Explicitly answers both 'what' (creates detailed prompts for AI image generators to produce publication-quality scientific visuals) and 'when' (use when creating journal cover images, generating scientific artwork prompts, or designing graphical abstracts) with clear trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms users would actually say: 'journal cover images', 'scientific artwork', 'graphical abstracts', 'publication-quality', 'AI image generators'. These cover the key variations a researcher or scientist would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a very clear niche: scientific publication imagery and AI prompt generation for that domain. Unlikely to conflict with general image generation skills or general science skills due to the specific combination of scientific visuals + prompt creation. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate that applies to any task, not specifically to journal cover image generation. The domain-specific content (prompt generation, style selection, technical specs) is reasonable but buried under repetitive scaffolding. The skill would be dramatically improved by removing ~60% of the generic content and instead showing concrete example prompts that would actually be generated for different journal types.
Suggestions
Remove all generic boilerplate sections (Output Requirements, Response Template, Input Validation, Error Handling) that don't contain domain-specific guidance—Claude already knows how to handle missing inputs and structure responses.
Add a concrete example showing an actual generated prompt output (e.g., 'For CRISPR research in Nature style, the generated prompt would be: ...'). This is the core deliverable and currently has zero examples.
Resolve the contradiction between scripts/main.py and scripts/cover_prompter.py—pick one entry point and remove references to the other.
Consolidate the duplicated 'When to Use' / 'Key Features' / 'Workflow' / 'Implementation Details' sections into a single concise workflow with domain-specific validation steps (e.g., verify resolution meets journal requirements).
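The missing example output called out in the second suggestion could look something like the following minimal sketch. The function name, parameters, and prompt template here are illustrative assumptions, not the skill's actual API:

```python
# Illustrative only: a minimal prompt builder showing the kind of concrete
# output example the skill could include. The function name, parameters,
# and template are assumptions, not the skill's real interface.
def build_cover_prompt(research_topic: str, visual_style: str) -> str:
    return (
        f"Journal cover illustration in the style of {visual_style}: "
        f"{research_topic}. Publication-quality scientific artwork, "
        f"high detail, clean composition, no embedded text."
    )

print(build_cover_prompt("CRISPR-Cas9 gene editing", "Nature"))
```

An inline example like this would let an agent see the expected deliverable directly instead of inferring it from abstract descriptions of the prompt structure.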
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The description is copy-pasted multiple times (in 'When to Use' and 'Key Features'). Massive amounts of boilerplate about error handling, input validation, output requirements, and response templates that are generic project management advice Claude already knows. The actual domain-specific content (prompt generation for journal covers) is buried under layers of scaffolding. | 1 / 3 |
| Actionability | The Quick Start and Core Capabilities sections provide concrete Python code examples with specific parameters (research_topic, visual_style, etc.) and CLI usage. However, it's unclear if these code examples are actually executable: they import from 'scripts.cover_prompter' but the earlier sections reference 'scripts/main.py', creating confusion. The prompt structure output is described abstractly rather than showing an actual generated prompt example. | 2 / 3 |
| Workflow Clarity | The 'Workflow' section is entirely generic project management steps ('Confirm the user objective', 'Validate that the request matches documented scope') with no specificity to journal cover image generation. There are no validation checkpoints specific to the domain (e.g., checking image dimensions, verifying prompt quality). The 'Example run plan' references main.py while the actual code examples use cover_prompter.py, creating contradictions. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with heavily duplicated sections. 'Implementation Details' says 'See Workflow above' while adding redundant information. Multiple sections cover overlapping concerns (Quick Check vs Audit-Ready Commands, Error Handling vs Input Validation). The single reference to 'references/audit-reference.md' is vague. Content is poorly organized with no clear hierarchy. | 1 / 3 |
| Total | | 5 / 12 Passed |
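A domain-specific validation checkpoint of the kind the Workflow Clarity critique asks for, such as verifying that an image meets print resolution, might be sketched as follows. The 300 DPI threshold and 8.5 x 11 inch trim size are assumptions for illustration; the skill itself documents no such requirements:

```python
# Hedged sketch of a domain-specific validation step: check that a cover
# image meets a journal's minimum print resolution. The 300 DPI threshold
# and 8.5 x 11 inch trim size are illustrative assumptions.
def meets_print_resolution(width_px: int, height_px: int,
                           trim_w_in: float = 8.5, trim_h_in: float = 11.0,
                           min_dpi: int = 300) -> bool:
    return (width_px / trim_w_in >= min_dpi
            and height_px / trim_h_in >= min_dpi)

print(meets_print_resolution(2550, 3300))  # exactly 300 DPI at 8.5 x 11 in
print(meets_print_resolution(1024, 1024))  # common AI-generator default, too small
```

Concrete checks like this would replace the generic 'Validate that the request matches documented scope' steps with something an agent can actually execute.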
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |