Microscopy data management platform. Access images via Python, retrieve datasets, analyze pixels, manage ROIs/annotations, batch processing, for high-content screening and microscopy workflows.
Overall score: 72

- Quality: 62% (does it follow best practices?)
- Impact: 88% (1.49x average score across 3 eval scenarios)
- Advisory: suggest reviewing before use
Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/omero-integration/SKILL.md`

## Quality
### Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does a good job listing specific capabilities and establishing a clear, distinctive niche in microscopy data management. However, it lacks an explicit 'Use when...' clause, which limits its completeness score, and it could benefit from more natural trigger terms that users would actually say when they need this skill.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user needs to work with microscopy images, OMERO data, high-content screening results, or biological image analysis.'
- Include more natural trigger terms and file-format variations users might mention, such as specific platform names (OMERO, CellProfiler), file formats (.tiff, .nd2, .czi), or common phrases like 'microscope images', 'cell imaging', 'fluorescence'.
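Combining both suggestions, a revised description might read as follows (wording illustrative, not prescriptive; it reuses only terms already proposed in the suggestions above):

```
Microscopy data management platform for OMERO. Access images via Python,
retrieve datasets, analyze pixels, manage ROIs/annotations, and run batch
processing. Use when the user needs to work with microscopy images
(.tiff, .nd2, .czi), OMERO data, high-content screening results,
fluorescence or confocal imaging, or biological image analysis.
```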
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Access images via Python', 'retrieve datasets', 'analyze pixels', 'manage ROIs/annotations', 'batch processing'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with multiple capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied through the domain context of microscopy workflows. | 2 / 3 |
| Trigger Term Quality | Includes some relevant domain keywords like 'microscopy', 'ROIs', 'annotations', 'high-content screening', 'pixels', but misses common user terms like specific platform names (e.g., OMERO), file formats (.tiff, .nd2), or variations like 'microscope images', 'fluorescence', 'confocal'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of 'microscopy', 'ROIs/annotations', 'high-content screening', and 'microscopy workflows' creates a very clear niche that is unlikely to conflict with other skills. This is a highly specialized domain. | 3 / 3 |
| Total | | 10 / 12 — Passed |
### Implementation — 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is well-structured as a hub document with excellent progressive disclosure to reference files, but it trades actionability for breadth. The main content is largely descriptive rather than instructive, with workflows lacking validation checkpoints. Conciseness could be improved by trimming the 'When to Use' section and the repetitive 'Common scenarios' bullets that merely preview reference file contents.
Suggestions
- Add validation/verification steps to the Common Workflows (e.g., 'Verify connection succeeded', 'Confirm image count matches expected', 'Validate table was written correctly') to improve workflow clarity.
- Remove or significantly trim the 'When to Use This Skill' section, since it largely duplicates the capability list that follows, and condense the 'Common scenarios' bullets, which just preview reference file contents.
- Add executable code snippets to at least one workflow (e.g., Workflow 1) showing the complete end-to-end process rather than just listing steps with file references.
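To illustrate the first suggestion, here is a minimal sketch of a workflow with validation checkpoints. The `connect` and `fetch_images` functions are hypothetical stand-ins for the skill's real OMERO calls (omero-py's `BlitzGateway` in practice); the point is the verify-after-each-step pattern, not the API itself.

```python
# Sketch of a checkpointed workflow: verify each step before moving on.
# connect() and fetch_images() are hypothetical stand-ins, not OMERO APIs.

def connect(host: str) -> dict:
    # Stand-in for establishing an OMERO session.
    return {"host": host, "connected": True}

def fetch_images(conn: dict, dataset_id: int) -> list:
    # Stand-in for retrieving the images in a dataset.
    return [{"id": i, "dataset": dataset_id} for i in range(3)]

def run_workflow(host: str, dataset_id: int, expected_count: int) -> list:
    conn = connect(host)
    # Checkpoint 1: verify the connection succeeded before proceeding.
    if not conn.get("connected"):
        raise RuntimeError(f"Could not connect to {host}")

    images = fetch_images(conn, dataset_id)
    # Checkpoint 2: confirm the image count matches what was expected.
    if len(images) != expected_count:
        raise ValueError(
            f"Expected {expected_count} images, got {len(images)}"
        )
    return images

images = run_workflow("omero.example.org", dataset_id=101, expected_count=3)
print(len(images))  # -> 3
```

Each checkpoint turns a silent failure into an explicit error at the step where it occurred, which is exactly what the review flags as missing from the skill's numbered workflow lists.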
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary explanation (e.g., 'OMERO is an open-source platform for managing, visualizing, and analyzing microscopy images' repeats the description; the 'When to Use This Skill' section largely restates the capability list). The 'Common scenarios' bullets under each capability area add bulk without much actionable value, since they just describe what the referenced files cover. However, it's not egregiously verbose. | 2 / 3 |
| Actionability | The Quick Start section provides executable connection code and the error handling section has a concrete pattern. However, the bulk of the skill is descriptive pointers to reference files rather than actionable guidance itself. The workflows are described as numbered lists without executable code or specific commands. | 2 / 3 |
| Workflow Clarity | Three workflows are listed with clear sequential steps and references to detailed docs, but they lack validation checkpoints, error recovery steps, or feedback loops. For operations involving batch processing and data manipulation, the absence of explicit validation steps caps this at 2. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure structure: a concise overview with 8 clearly labeled capability areas, each pointing to a single reference file one level deep. The 'Selecting the Right Capability' section provides clear navigation guidance for different use cases. | 3 / 3 |
| Total | | 9 / 12 — Passed |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 10 / 11 passed:
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 10 / 11 — Passed | |
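The one failing check could likely be resolved by declaring a version in the skill's metadata. A hedged sketch, assuming SKILL.md uses YAML frontmatter: only the `metadata.version` key is confirmed by the check name; the `name` field and the version value are illustrative.

```yaml
---
name: omero-integration   # illustrative; keep the skill's actual fields
metadata:
  version: 1.0.0          # the key the validation warning reports missing
---
```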