
omero-integration

Microscopy data management platform. Access images via Python, retrieve datasets, analyze pixels, manage ROIs/annotations, batch processing, for high-content screening and microscopy workflows.

Overall: 72 (1.49x)

Quality: 62% — Does it follow best practices?

Impact: 88% (1.49x) — Average score across 3 eval scenarios

Security by Snyk: Advisory — Suggest reviewing before use

Optimize this skill with Tessl:

npx tessl skill review --optimize ./scientific-skills/omero-integration/SKILL.md

Quality

Discovery

67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description does a good job listing specific capabilities and occupying a distinct niche in microscopy data management. Its main weakness is the lack of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill. The trigger terms are domain-appropriate but could benefit from broader natural language variations.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about microscopy images, OMERO, high-content screening, or biological image analysis.'

- Include common natural language variations and platform names users might mention, such as 'OMERO', 'biological imaging', 'fluorescence microscopy', or specific file formats like '.tif', '.nd2', '.czi'.
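Putting both suggestions together, a revised description might read something like this (illustrative wording only, not the skill's actual metadata):

```yaml
description: >
  Microscopy data management via the OMERO platform. Access images with
  Python, retrieve datasets, analyze pixel data, manage ROIs/annotations,
  and batch-process high-content screening experiments. Use when the user
  asks about microscopy images, OMERO, biological or fluorescence imaging,
  high-content screening, or microscopy file formats such as .tif, .nd2,
  or .czi.
```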

Specificity — 3 / 3

Lists multiple specific concrete actions: 'Access images via Python', 'retrieve datasets', 'analyze pixels', 'manage ROIs/annotations', 'batch processing'. These are concrete, actionable capabilities.

Completeness — 2 / 3

The 'what' is well-covered with specific actions and capabilities. However, there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill, which caps this at 2 per the rubric guidelines.

Trigger Term Quality — 2 / 3

Includes domain-relevant terms like 'microscopy', 'ROIs', 'annotations', 'high-content screening', 'pixels', and 'batch processing', but these are somewhat specialized. Missing common user-facing terms like specific platform names (e.g., OMERO), file formats (.tif, .nd2), or broader terms like 'biological imaging' or 'fluorescence' that users might naturally say.

Distinctiveness / Conflict Risk — 3 / 3

The description occupies a very clear niche — microscopy data management with terms like 'ROIs/annotations', 'high-content screening', and 'microscopy workflows'. This is highly unlikely to conflict with other skills.

Total: 10 / 12 — Passed

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured index/overview skill that effectively organizes a complex domain into navigable sections with clear references to detailed documentation. Its main weaknesses are verbosity in introductory sections (repeating the description, explaining when to use it), lack of executable code in the capability summaries, and workflows that lack validation checkpoints. The progressive disclosure pattern is its strongest aspect.

Suggestions

- Remove or significantly trim the 'When to Use This Skill' and 'Overview' sections—they largely duplicate the capability list and description that Claude can infer from context.

- Add at least one executable code snippet per capability area (e.g., a 2-3 line example for ROI creation, pixel access, table creation) to make the overview more actionable without requiring reference file lookups.

- Add validation/verification steps to the Common Workflows, such as checking that images were retrieved successfully, verifying pixel array dimensions, or confirming table writes completed.
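To make the second suggestion concrete, here is a hedged sketch of the kind of 2-3 line pixel-analysis snippet a capability summary could carry. The OMERO calls shown in comments assume omero-py's BlitzGateway API; a synthetic NumPy plane stands in for the server round-trip so the analysis lines run standalone:

```python
import numpy as np

# With a live server, the plane would come from omero-py, roughly:
#   conn = BlitzGateway(username, password, host=host, port=4064)
#   image = conn.getObject("Image", image_id)
#   plane = image.getPrimaryPixels().getPlane(theZ=0, theC=0, theT=0)
# Here a synthetic 16-bit plane stands in for that round-trip.
plane = np.random.randint(0, 2**16, size=(512, 512), dtype=np.uint16)

# The kind of 2-3 line analysis snippet the review asks for:
mean_intensity = plane.mean()
mask = plane > mean_intensity        # simple threshold segmentation
coverage = mask.mean()               # fraction of pixels above the mean
```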

Conciseness — 2 / 3

The skill includes some unnecessary explanations (e.g., 'OMERO is an open-source platform for managing, visualizing...' and the 'When to Use This Skill' section largely restates the capability list). The Notes section repeats information already covered. However, the code examples are lean and the reference structure avoids inlining too much detail.

Actionability — 2 / 3

The Quick Start provides executable connection code and the error handling section has a concrete pattern. However, the eight capability areas are described abstractly with no inline code examples—they just point to reference files. The common workflows are high-level step lists without executable code or commands.

Workflow Clarity — 2 / 3

Three workflows are listed with numbered steps and reference file pointers, which provides reasonable sequencing. However, none include validation checkpoints, error recovery steps, or feedback loops. For operations like batch processing and data manipulation, the absence of verification steps is a notable gap.

Progressive Disclosure — 3 / 3

The skill excels at progressive disclosure with a clear overview structure, eight well-signaled one-level-deep references to specific capability files, and a 'Selecting the Right Capability' guide that helps navigate to the right reference. The content is appropriately split between overview and detailed reference files.

Total: 9 / 12 — Passed
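The validation-checkpoint gap flagged under Workflow Clarity could be closed with a pattern like the following. This is a minimal sketch, not the skill's actual code: `fetch_plane` is a hypothetical retrieval callable (with omero-py it would wrap a getPlane() call on a live connection), and a dict stands in for the server:

```python
import numpy as np

def process_batch(image_ids, fetch_plane):
    """Process images one by one, recording failures instead of
    silently skipping them. fetch_plane is a hypothetical retrieval
    callable; with omero-py it would wrap getPlane() on a connection."""
    results, failures = [], []
    for image_id in image_ids:
        plane = fetch_plane(image_id)
        # Validation checkpoint: confirm the retrieval succeeded
        # before running any analysis on the pixel data.
        if plane is None or plane.size == 0:
            failures.append(image_id)
            continue
        results.append((image_id, float(plane.mean())))
    return results, failures

# Simulated store standing in for an OMERO server; image 2 fails to load.
store = {1: np.ones((4, 4)), 2: None, 3: np.full((4, 4), 2.0)}
results, failures = process_batch([1, 2, 3], store.get)
```

Reporting `failures` back to the user at the end of a batch run is exactly the kind of feedback loop the review notes is missing.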

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

metadata_version — 'metadata.version' is missing — Warning

Total: 10 / 11 — Passed

Repository: K-Dense-AI/claude-scientific-skills — Reviewed
