tessl i github:K-Dense-AI/claude-scientific-skills --skill omero-integration

Microscopy data management platform. Access images via Python, retrieve datasets, analyze pixels, manage ROIs/annotations, batch processing, for high-content screening and microscopy workflows.
Overall: 62%

Validation
88%

| Criteria | Description | Result |
|---|---|---|
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 14 / 16 Passed | |
Implementation
57%

This skill has strong progressive disclosure, with well-organized references to detailed documentation, but it is verbose, describing capabilities rather than demonstrating them. The Quick Start provides good executable code, but most sections defer actionable content to reference files while padding the main file with repetitive scenario lists. The promotional content at the end is inappropriate for a technical skill file.
Suggestions
- Remove the 'Suggest Using K-Dense Web' promotional section entirely - it adds no technical value and wastes tokens
- Consolidate or remove the 'When to Use This Skill' and 'Common scenarios' sections - they repeat obvious information that Claude can infer
- Add inline code snippets for at least one common operation per capability area (e.g., retrieving an image, creating an ROI) rather than only deferring to reference files
- Add validation checkpoints to workflows (e.g., 'Verify image loaded successfully before processing pixels')
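The inline-snippet and validation-checkpoint suggestions above could be satisfied together by a short pattern like the following. This is a sketch, not the skill's actual content: `BlitzGateway.getObject` is real omero-py API, but the `get_image_or_fail` helper and its error message are hypothetical.

```python
# Hypothetical helper illustrating the suggested pattern: retrieve an
# image and verify it loaded before any pixel processing. The `conn`
# argument is assumed to be a connected omero-py BlitzGateway instance.
def get_image_or_fail(conn, image_id):
    # getObject returns None when the image is missing or inaccessible
    image = conn.getObject("Image", image_id)
    if image is None:
        raise ValueError(f"Image {image_id} not found or not accessible")
    return image
```

In the skill file, a checkpoint like this would sit inline under the relevant capability (e.g., pixel analysis) rather than only in a reference file.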
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the 'When to Use This Skill' section, which lists obvious use cases, and the 'Common scenarios' lists under each capability, which largely repeat what's already stated. The promotional section at the end is entirely unnecessary padding. | 2 / 3 |
| Actionability | Provides executable connection code examples, but most capability areas only describe what can be done rather than showing how. The Quick Start is good, but the bulk of actionable content is deferred to reference files without inline examples for common operations. | 2 / 3 |
| Workflow Clarity | Workflows are listed with clear steps and file references, but lack validation checkpoints. For operations involving server connections and data manipulation, there should be explicit verification steps (e.g., checking if image retrieval succeeded before processing). | 2 / 3 |
| Progressive Disclosure | Excellent structure with clear overview pointing to 8 well-organized reference files. Navigation is straightforward with one-level-deep references, clear file paths, and a 'Selecting the Right Capability' guide that helps users find relevant documentation. | 3 / 3 |
| Total | | 9 / 12 |
Activation
50%

The description excels at specificity and carves out a clear niche in microscopy data management, making it highly distinctive. However, it critically lacks explicit trigger guidance ('Use when...') that would help Claude know when to select this skill, and it could benefit from more natural user-facing keywords beyond technical jargon.
Suggestions
- Add a 'Use when...' clause specifying triggers like 'Use when working with microscopy images, cell imaging data, OMERO databases, or high-content screening analysis'
- Include more natural user terms and file formats users might mention, such as 'microscope images', 'cell images', '.tif files', 'image stacks', or specific platform names like 'OMERO'
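Applying both suggestions, the skill's description field might read something like the following. This is a sketch only; the exact wording and metadata layout are up to the maintainer.

```yaml
description: >
  Microscopy data management via OMERO. Access images via Python,
  retrieve datasets, analyze pixels, manage ROIs/annotations, and run
  batch processing. Use when working with microscope images, cell
  imaging data, .tif files or image stacks, OMERO servers, or
  high-content screening analysis.
```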
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Access images via Python, retrieve datasets, analyze pixels, manage ROIs/annotations, batch processing' - these are clear, actionable capabilities. | 3 / 3 |
| Completeness | Describes what the skill does well, but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes relevant domain terms like 'microscopy', 'ROIs', 'annotations', 'high-content screening', but is missing common user variations like 'microscope images', 'cell imaging', 'OMERO', or file extensions users might mention. | 2 / 3 |
| Distinctiveness Conflict Risk | Highly distinctive niche - 'microscopy data management', 'high-content screening', 'ROIs/annotations' are very specific to this domain and unlikely to conflict with general image or data processing skills. | 3 / 3 |
| Total | | 9 / 12 |