Full-featured computational pathology toolkit. Use for advanced WSI analysis including multiplexed immunofluorescence (CODEX, Vectra), nucleus segmentation, tissue graph construction, and ML model training on pathology data. Supports 160+ slide formats. For simple tile extraction from H&E slides, histolab may be simpler.
Overall score: 77
67%: Does it follow best practices?
Impact: 96% (2.74x average score across 3 eval scenarios)
Passed: no known issues

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./scientific-skills/pathml/SKILL.md`

Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines a specialized computational pathology toolkit with specific capabilities, natural trigger terms, and explicit guidance on when to use it versus alternatives. The inclusion of concrete techniques (CODEX, Vectra, nucleus segmentation, tissue graph construction) and the helpful contrast with histolab for simpler tasks make this highly effective for skill selection. The third-person voice and concise structure are well-executed.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: multiplexed immunofluorescence analysis (CODEX, Vectra), nucleus segmentation, tissue graph construction, ML model training on pathology data, and support for 160+ slide formats. Also mentions tile extraction from H&E slides as a contrast point. | 3 / 3 |
| Completeness | Clearly answers 'what' (computational pathology toolkit with specific capabilities) and 'when' ('Use for advanced WSI analysis including...'). Also provides a helpful negative trigger distinguishing it from simpler tools ('For simple tile extraction from H&E slides, histolab may be simpler'). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural domain-specific terms users would search for: WSI, multiplexed immunofluorescence, CODEX, Vectra, nucleus segmentation, tissue graph, pathology, slide formats, H&E slides, computational pathology, tile extraction. These are precisely the terms a computational pathology researcher would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche in computational pathology with very specific triggers (CODEX, Vectra, nucleus segmentation, tissue graphs, WSI). The explicit contrast with histolab for simpler tasks further reduces conflict risk with related skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is well-structured in concept, with a clear reference file organization, but it suffers from significant verbosity and redundancy: the reference files are listed three separate times, and many sections describe capabilities rather than provide actionable instructions. The Quick Start code example is a strength, but the Common Workflows sections are too vague to be truly useful, reading more like a table of contents than executable guidance.
Suggestions
Eliminate the redundant reference listings—list references once with clear signposting, removing the 'References to Detailed Documentation' and 'Resources' sections entirely since they duplicate 'Core Capabilities'.
Remove the 'When to Use This Skill' and 'Overview' sections, which explain things Claude already knows; start directly with Quick Start or a minimal one-line description.
Convert the 'Common Workflows' from vague numbered descriptions into concrete code examples with validation steps (e.g., checking tile counts after tissue detection, verifying model output shapes).
Trim the Core Capabilities descriptions to just the key transforms/classes and the reference link—remove the explanatory prose about what each capability area does.
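The validation-checkpoint pattern recommended above can be sketched in plain Python. The function names below (`detect_tissue_tiles`, `run_model`) are hypothetical stand-ins for the real PathML pipeline calls, not part of the PathML API; only the checkpoint structure is the point.

```python
def detect_tissue_tiles(slide_path, tile_size=256):
    """Stand-in for tissue detection; returns a list of tile coordinates."""
    # A real implementation would run a PathML preprocessing pipeline here.
    return [(x, y) for x in range(0, 1024, tile_size)
                   for y in range(0, 1024, tile_size)]

def run_model(tiles, n_classes=3):
    """Stand-in for model inference; returns one score vector per tile."""
    return [[0.0] * n_classes for _ in tiles]

tiles = detect_tissue_tiles("slide.svs")

# Checkpoint 1: tissue detection actually produced tiles.
if not tiles:
    raise RuntimeError("Tissue detection returned 0 tiles; check thresholds.")
print(f"Detected {len(tiles)} tiles")

preds = run_model(tiles)

# Checkpoint 2: model output shape matches the tile count.
assert len(preds) == len(tiles), "Expected one prediction per tile"
assert all(len(p) == 3 for p in preds), "Unexpected class dimension"
print("Output shape OK:", len(preds), "x", len(preds[0]))
```

Each workflow step in the skill file could end with a checkpoint like these, so an agent can detect an empty tile set or a shape mismatch before proceeding to the next stage.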
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is verbose and repetitive. The 'When to Use This Skill' section restates what's in the overview. The references are listed three separate times (in Core Capabilities, References to Detailed Documentation, and Resources sections). The overview explains what computational pathology is, which Claude already knows. Many sections describe rather than instruct. | 1 / 3 |
| Actionability | The Quick Start section provides a concrete, executable code example, which is good. However, the 'Common Workflows' sections are vague step descriptions ('Load WSI with appropriate slide class', 'Conduct downstream analysis') rather than executable code. Most of the content describes capabilities rather than providing actionable instructions. | 2 / 3 |
| Workflow Clarity | The Common Workflows section lists numbered steps for H&E, CODEX, and ML training, but they are high-level descriptions without validation checkpoints, error handling, or feedback loops. For operations involving large-scale data processing and model training, the absence of validation steps and error recovery is a notable gap. | 2 / 3 |
| Progressive Disclosure | The skill references six separate reference files, which is good progressive disclosure structure. However, the references are redundantly listed three times in different sections (Core Capabilities, References to Detailed Documentation, and Resources), and the main file contains too much descriptive content that could be trimmed. The overview section is too long for what should be a concise entry point. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 Passed |
b58ad7e
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.