Create publication-quality scientific diagrams using Nano Banana 2 AI with smart iterative refinement. Uses Gemini 3.1 Pro Preview for quality review. Only regenerates if quality is below threshold for your document type. Specialized in neural network architectures, system diagrams, flowcharts, biological pathways, and complex scientific visualizations.
Overall score: 68
Quality: 55% (Does it follow best practices?)
Impact: 95% (2.87x average score across 3 eval scenarios)
Rating: Risky (do not use without reviewing)

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/scientific-schematics/SKILL.md`

Quality
Discovery
67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong on specificity and distinctiveness, clearly identifying its niche in scientific diagram generation with specific tools and diagram types. However, it lacks an explicit 'Use when...' clause, which limits its completeness score, and some of the trigger terms are tool-specific jargon rather than natural user language. Adding explicit trigger guidance and more user-facing keywords would improve skill selection accuracy.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to create scientific figures, diagrams, schematics, or illustrations for papers or publications' (see the frontmatter sketch after these suggestions).
Include more natural user-facing trigger terms such as 'figure', 'illustration', 'schematic', 'diagram for paper', 'research figure', or 'publication figure' to improve matching with how users naturally phrase requests.
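A minimal sketch of how the revised frontmatter might read, assuming SKILL.md uses standard YAML frontmatter; the field names and exact wording below are illustrative, not taken from the skill itself:

```yaml
---
name: scientific-schematics
description: >
  Create publication-quality scientific diagrams with smart iterative
  refinement and automated quality review. Use when the user needs a
  figure, illustration, schematic, or diagram for a paper, poster, or
  publication, including neural network architectures, system diagrams,
  flowcharts, and biological pathways.
---
```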
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and domains: 'neural network architectures, system diagrams, flowcharts, biological pathways, and complex scientific visualizations.' Also mentions specific tools (Nano Banana 2 AI, Gemini 3.1 Pro Preview) and processes (iterative refinement, quality threshold checking). | 3 / 3 |
| Completeness | The 'what' is well covered (create scientific diagrams with iterative refinement and quality review), but there is no explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the domain specializations listed. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes some good natural keywords like 'scientific diagrams', 'neural network architectures', 'flowcharts', 'biological pathways', and 'system diagrams'. However, it misses common user variations like 'diagram', 'figure', 'illustration', 'plot', 'schematic', or file format terms. The tool names 'Nano Banana 2 AI' and 'Gemini 3.1 Pro Preview' are not terms users would naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a clear niche: publication-quality scientific diagrams using a specific AI tool with quality review. The combination of scientific visualization focus, specific tool names, and specialized diagram types makes it highly distinctive and unlikely to conflict with general diagramming or coding skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation
42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides highly actionable, concrete guidance with executable commands and detailed examples, which is its primary strength. However, it is severely undermined by extreme verbosity and repetition—the same workflow is explained at least 3-4 times, and the troubleshooting section is largely redundant. The document desperately needs restructuring into multiple files with a concise overview in SKILL.md, as it currently reads as a monolithic wall of repetitive content.
Suggestions
Reduce the SKILL.md to a concise overview (~100 lines) with Quick Start, and move detailed examples, prompt engineering tips, troubleshooting, and checklists into separate referenced files (e.g., EXAMPLES.md, TROUBLESHOOTING.md, CHECKLIST.md); see the layout sketch after these suggestions.
Eliminate redundant explanations of the smart iteration workflow—describe it once clearly in the overview and reference that section elsewhere instead of repeating it in Quick Start, How to Use, and AI Generation Mode sections.
Add explicit guidance for what to do when max iterations are reached and quality is still below threshold (e.g., manual prompt refinement strategies, fallback approaches); a loop sketch follows these suggestions.
Remove the troubleshooting entries that simply say 'AI handles this automatically' or 'increase iterations'—these provide no actionable value and waste tokens.
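One possible layout for the split suggested above; the file names come from the reviewer's own examples, and placing them under references/ is an assumption based on the one reference file the skill already uses:

```
scientific-schematics/
├── SKILL.md                  # concise overview + Quick Start (~100 lines)
└── references/
    ├── EXAMPLES.md           # CONSORT, Transformer, MAPK, IoT examples
    ├── TROUBLESHOOTING.md    # error-recovery sequences
    ├── CHECKLIST.md          # pre-submission checklists
    └── best_practices.md     # already referenced by SKILL.md
```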
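And a minimal sketch of the missing max-iterations fallback, assuming a generate/review loop like the one the skill describes; every function name here is a hypothetical stand-in, not the skill's actual API:

```python
def generate_diagram(prompt: str) -> bytes:
    """Placeholder: call the image model with the current prompt."""
    raise NotImplementedError

def review_quality(image: bytes) -> float:
    """Placeholder: ask the reviewer model for a quality score."""
    raise NotImplementedError

def refine_prompt(prompt: str, score: float) -> str:
    """Placeholder: tighten the prompt based on reviewer feedback."""
    raise NotImplementedError

def generate_with_fallback(prompt: str, threshold: float, max_iterations: int = 3):
    """Iterate up to max_iterations; if the threshold is never met,
    return the best attempt so the caller can flag it for manual review."""
    best_image, best_score = None, float("-inf")
    for _ in range(max_iterations):
        image = generate_diagram(prompt)
        score = review_quality(image)
        if score >= threshold:
            return image, score  # met the document-type threshold
        if score > best_score:
            best_image, best_score = image, score
        prompt = refine_prompt(prompt, score)  # refine and retry
    # Max iterations reached with quality still below threshold:
    # hand back the best attempt rather than failing silently.
    return best_image, best_score
```

Returning the best-scoring attempt, rather than the last one, gives the user something concrete to refine manually, which is exactly the gap the suggestion above points at.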
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose and repetitive. The same concepts (smart iteration, document-type thresholds, how the workflow works) are explained 3-4 times throughout the document. The troubleshooting section repeats 'AI generation handles this automatically' and 'Increase iterations' for nearly every problem. The overview, quick start, 'How to Use', and 'AI Generation Mode' sections all redundantly describe the same workflow. Massive token waste. | 1 / 3 |
| Actionability | Provides fully executable CLI commands, Python API examples, and detailed prompt examples that are copy-paste ready. The command-line options are well-documented with concrete flags and values. The examples (CONSORT, Transformer, MAPK, IoT) are specific and actionable. | 3 / 3 |
| Workflow Clarity | The iterative generation workflow is clearly described with a decision point and the ASCII diagram is helpful. However, there are no validation steps for the user to verify outputs beyond trusting the automated quality score. The troubleshooting section lacks a clear error-recovery sequence, and there's no explicit guidance on what to do if max iterations are reached and quality is still below threshold. | 2 / 3 |
| Progressive Disclosure | The document is a monolithic wall of text (~500+ lines) with massive amounts of inline content that should be split into separate files. Only one reference file is mentioned (references/best_practices.md). The prompt engineering tips, detailed examples, troubleshooting, checklists, and best practices could all be separate files. No bundle files are provided to support the references that do exist. | 1 / 3 |
| Total | | 7 / 12 Passed |
Validation
81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (619 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |
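Regarding the metadata_version warning above, a sketch of the missing field, assuming it lives in SKILL.md's YAML frontmatter as a nested key (the exact schema is an assumption; check the spec's frontmatter layout):

```yaml
---
name: scientific-schematics
description: Create publication-quality scientific diagrams with smart iterative refinement.
metadata:
  version: 1.0.0
---
```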