Create publication-quality scientific diagrams using Nano Banana 2 AI with smart iterative refinement. Uses Gemini 3.1 Pro Preview for quality review. Only regenerates if quality is below threshold for your document type. Specialized in neural network architectures, system diagrams, flowcharts, biological pathways, and complex scientific visualizations.
Install with Tessl CLI
npx tessl i github:K-Dense-AI/claude-scientific-skills --skill scientific-schematics
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 95%
↑ 3.27x agent success when using this skill
Discovery — 67%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at specificity and distinctiveness, clearly defining its niche in scientific diagram creation with named tools and specific diagram types. However, it lacks an explicit 'Use when...' clause, which limits its completeness score, and it could benefit from more natural trigger terms that users would actually say when requesting diagrams.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when the user needs scientific figures, publication diagrams, neural network visualizations, or asks to draw/create technical illustrations'
Include more natural user terms such as 'figure', 'draw', 'visualize', 'architecture diagram', or 'create a diagram of' to improve trigger term coverage
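Folding both suggestions into the skill's frontmatter might look roughly like this. The sketch below is illustrative only: the field names follow the common SKILL.md frontmatter convention, and the 'Use when...' wording is drawn from the suggestions above, not from the skill itself.

```yaml
---
name: scientific-schematics
description: >
  Create publication-quality scientific diagrams using Nano Banana 2 AI with
  smart iterative refinement and Gemini 3.1 Pro Preview quality review.
  Use when the user needs scientific figures, publication diagrams, neural
  network or architecture visualizations, flowcharts, or biological pathways,
  or asks to draw, visualize, or create a diagram of a technical system.
---
```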
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Create publication-quality scientific diagrams', 'smart iterative refinement', 'quality review', 'regenerates if quality is below threshold'. Also specifies concrete diagram types: neural network architectures, system diagrams, flowcharts, biological pathways. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities and diagram types, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied through the specialized diagram types mentioned. | 2 / 3 |
| Trigger Term Quality | Contains some relevant keywords like 'scientific diagrams', 'neural network architectures', 'flowcharts', 'biological pathways', but lacks common user variations. Users might say 'diagram', 'figure', 'visualization', 'architecture diagram', or 'draw', which aren't explicitly covered. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with specific tool names (Nano Banana 2 AI, Gemini 3.1 Pro Preview) and a clear niche in scientific/publication-quality diagrams. The specialized focus on neural networks, biological pathways, and scientific visualizations creates a distinct identity unlikely to conflict with general diagramming skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation — 50%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides excellent actionable guidance with executable code examples and clear command-line usage, but suffers from severe verbosity and redundancy. The same workflow explanation appears multiple times, and the document could be reduced by 70-80% while retaining all useful information. The structure attempts progressive disclosure but fails by repeating content across sections.
Suggestions
Consolidate the Quick Start, How to Use, and AI Generation sections into a single concise section - currently the same information is repeated 3+ times
Remove marketing-style language and obvious explanations (e.g., 'That's it!', explaining what flowcharts are, listing checkmarks of what AI handles)
Move the extensive examples and troubleshooting to separate reference files, keeping only 1-2 examples inline
Add explicit user validation step: 'Review the generated image and quality report before using in publication'
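One way to act on these suggestions is to restructure SKILL.md so the main file keeps a single merged usage section plus an explicit validation step, with everything else pushed into references/. The outline below is a sketch, not the skill's actual layout; the generate_diagram.py script name and its flags are placeholders:

```markdown
# Scientific Schematics

## Usage  (replaces Quick Start, How to Use, and AI Generation)
Run the generation script with a prompt and a diagram type:
    python generate_diagram.py "transformer encoder block" --type architecture

## Validate the output
Review the generated image and quality report before using it in a publication.

## Reference material
- references/diagram_types.md: per-type examples
- references/best_practices.md: troubleshooting and prompt tips
```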
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with massive redundancy: the same concepts (smart iteration, quality thresholds, how to use the script) are repeated 4-5 times throughout. Explains obvious concepts like what flowcharts are, includes excessive marketing-style language ('That's it!', '✅'), and the document is ~500+ lines when ~100 would suffice. | 1 / 3 |
| Actionability | Provides fully executable bash commands and Python code examples that are copy-paste ready. Includes specific prompts, command-line flags, and concrete examples for multiple diagram types with exact parameters. | 3 / 3 |
| Workflow Clarity | The iterative workflow is explained clearly with a diagram and decision points, but there is no explicit validation step for the user to verify output quality themselves. The 'smart iteration' is automatic, with no user checkpoints or error recovery guidance if the system fails. | 2 / 3 |
| Progressive Disclosure | References external files (references/diagram_types.md, references/best_practices.md) appropriately, but the main document is a monolithic wall of text with excessive inline content. The same information appears in the Quick Start, How to Use, and AI Generation sections when it should be consolidated. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (620 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |
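Of the two warnings, the missing metadata.version is the quicker fix: add a version field nested under metadata in the skill's frontmatter. A minimal sketch, assuming the standard nesting implied by the check name (the version number is illustrative):

```yaml
---
name: scientific-schematics
metadata:
  version: 1.0.0
---
```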
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.