
scientific-schematics

Create publication-quality scientific diagrams using Nano Banana 2 AI with smart iterative refinement. Uses Gemini 3.1 Pro Preview for quality review. Only regenerates if quality is below threshold for your document type. Specialized in neural network architectures, system diagrams, flowcharts, biological pathways, and complex scientific visualizations.

68

Quality: 55% (Does it follow best practices?)
Impact: 95%, 2.87x (Average score across 3 eval scenarios)

Security (by Snyk): Risky. Do not use without reviewing.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/scientific-schematics/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is strong in specificity and distinctiveness, clearly identifying its niche in scientific diagram generation with specific tools and diagram types. Its main weaknesses are the lack of an explicit 'Use when...' clause and incomplete coverage of natural trigger terms users might employ when requesting diagrams or visualizations.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create, generate, or draw scientific diagrams, figures, schematics, or visualizations for papers or publications.'

Include more natural user-facing trigger terms such as 'figure', 'illustration', 'schematic', 'draw', 'generate diagram', and 'paper figure' to improve keyword coverage.
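The two suggestions above could be combined into a single frontmatter change. A hedged sketch, assuming the common SKILL.md frontmatter convention (the exact field names are not confirmed against the Tessl spec):

```yaml
# Hypothetical SKILL.md frontmatter -- field names are assumptions.
name: scientific-schematics
description: >
  Create publication-quality scientific diagrams using Nano Banana 2 AI
  with smart iterative refinement and Gemini 3.1 Pro Preview quality
  review. Use when the user asks to create, generate, or draw scientific
  diagrams, figures, illustrations, schematics, flowcharts, neural
  network architectures, biological pathways, or paper figures.
```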

Dimension scores:

- Specificity (3/3): Lists multiple specific concrete actions and domains: 'neural network architectures, system diagrams, flowcharts, biological pathways, and complex scientific visualizations.' Also mentions specific tools (Nano Banana 2 AI, Gemini 3.1 Pro Preview) and processes (iterative refinement, quality threshold checking).

- Completeness (2/3): The 'what' is well covered (create scientific diagrams with iterative refinement and quality review), but there is no explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the domain specializations listed. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2.

- Trigger Term Quality (2/3): Includes some good natural keywords like 'scientific diagrams', 'neural network architectures', 'flowcharts', 'biological pathways', and 'system diagrams'. However, it misses common user variations like 'diagram', 'figure', 'illustration', 'plot', 'schematic', or file format terms. The tool names 'Nano Banana 2 AI' and 'Gemini 3.1 Pro Preview' are not terms users would naturally say.

- Distinctiveness / Conflict Risk (3/3): The description carves out a very clear niche: publication-quality scientific diagrams using a specific AI tool with quality review. The combination of scientific visualization focus, specific tool names, and specialized diagram types makes it highly unlikely to conflict with other skills.

Total: 10 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides highly actionable, concrete guidance with executable examples and good prompt engineering tips, but is severely undermined by extreme verbosity and repetition. The same information (smart iteration workflow, CLI usage, quality thresholds) is restated 3-4 times throughout the document. The content would benefit enormously from being split into a concise overview with references to detailed sub-documents for examples, troubleshooting, and best practices.

Suggestions

Reduce the document to ~100-150 lines by eliminating redundant sections (merge 'Quick Start', 'How to Use', and 'Command-Line Usage' into one section; remove repeated workflow explanations)

Move the detailed examples, troubleshooting, prompt engineering tips, and checklists into separate referenced files (e.g., EXAMPLES.md, TROUBLESHOOTING.md, PROMPTS.md, CHECKLIST.md)

Consolidate the troubleshooting section: many entries have identical solutions ('increase iterations') and should be merged into a single entry

Remove explanatory content Claude already knows (design principles, what publication standards are, what various diagram types are) and focus only on tool-specific instructions

Dimension scores:

- Conciseness (1/3): Extremely verbose and repetitive. The same concepts (smart iteration, document type thresholds, command-line usage, troubleshooting) are repeated multiple times. The workflow explanation appears at least 3 times in different forms. The troubleshooting section has many near-duplicate entries ('Increase iterations: --iterations 2' appears as a solution to almost every problem). Much content explains things Claude already knows (what CONSORT is, what IoT is, basic design principles).

- Actionability (3/3): Provides fully executable command-line examples, Python API usage, and detailed prompt examples that are copy-paste ready. The CLI options are well-documented with concrete flags and values, and the examples include realistic scientific diagram descriptions.

- Workflow Clarity (2/3): The iterative generation workflow is clearly described with a decision flowchart and steps, but the validation is entirely automated (Gemini review) with no guidance on what to do if max iterations are reached and quality is still below threshold. There's no manual intervention path or fallback strategy for when the automated process fails.

- Progressive Disclosure (1/3): This is a monolithic wall of text at ~450+ lines with massive amounts of inline content that should be in separate files. The troubleshooting section, detailed examples, prompt engineering tips, best practices, and checklists could all be separate reference files. Only one external reference is mentioned (references/best_practices.md). The content is poorly organized with redundant sections (Quick Start, How to Use, Command-Line Usage all cover the same thing).

Total: 7 / 12 (Passed)
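The workflow-clarity gap (no fallback when max iterations are exhausted) could be closed with an explicit loop that surfaces the best attempt for manual review. A minimal sketch: `generate_diagram` and `review_quality` are hypothetical stand-ins for the skill's actual Nano Banana generation and Gemini review calls, not its real API.

```python
# Sketch of a smart-iteration loop with an explicit fallback path.
# generate_diagram() and review_quality() are placeholder stubs; the
# real skill would call the image model and Gemini reviewer here.

def generate_diagram(prompt, feedback=""):
    # Placeholder: the real skill generates an image from the prompt,
    # optionally conditioned on reviewer feedback.
    return f"image({prompt!r}, feedback={feedback!r})"

def review_quality(image):
    # Placeholder: the real skill asks Gemini for a score and critique.
    return 0.6, "labels are too small"

def iterate_with_fallback(prompt, threshold=0.85, max_iterations=2):
    best_image, best_score, feedback = None, 0.0, ""
    for _ in range(max_iterations):
        image = generate_diagram(prompt, feedback=feedback)
        score, feedback = review_quality(image)
        if score > best_score:
            best_image, best_score = image, score
        if score >= threshold:
            return image, score, "accepted"
    # Fallback: max iterations reached while still below threshold --
    # keep the best attempt and flag it for manual review instead of
    # failing silently or regenerating forever.
    return best_image, best_score, "needs_manual_review"
```

The key design choice is that the loop always terminates with a usable artifact and an explicit status, so an agent knows whether human review is needed.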

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 passed

Validation for skill structure:

- skill_md_line_count (Warning): SKILL.md is long (619 lines); consider splitting into references/ and linking
- metadata_version (Warning): 'metadata.version' is missing

Total: 9 / 11 (Passed)
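The metadata_version warning can likely be cleared by declaring a version in the frontmatter. A sketch, with the key path inferred only from the warning text ('metadata.version' is missing), so treat the exact shape as an assumption:

```yaml
# Assumed frontmatter shape, inferred from the validation warning.
metadata:
  version: 1.0.0
```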

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

