Auto-annotate cell clusters from single-cell RNA data using marker genes, tissue context, and species-specific reference databases.
- Quality: 80% — Does it follow best practices?
- Impact: Pending — No eval scenarios have been run
- Passed — No known issues
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./scientific-skills/Data analysis/scrna-cell-type-annotator/SKILL.md
```

Quality

Discovery — 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong, domain-specific description with excellent specificity and trigger terms for the bioinformatics/single-cell genomics field. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others.
Suggestions

- Add a 'Use when...' clause with trigger phrases like 'Use when annotating cell types, identifying cell populations, or working with scRNA-seq clustering results'
- Consider adding common tool/format references users might mention, such as 'Seurat objects', 'Scanpy', 'h5ad files', or 'UMAP clusters'
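Putting both suggestions together, a revised frontmatter description might look like the sketch below. The wording is illustrative only — it combines the skill's existing description with the suggested trigger clause and tool references, and is not taken from the skill itself:

```yaml
---
name: scrna-cell-type-annotator
description: >
  Auto-annotate cell clusters from single-cell RNA data using marker genes,
  tissue context, and species-specific reference databases. Use when
  annotating cell types, identifying cell populations, or working with
  scRNA-seq clustering results — e.g. Seurat objects, Scanpy/AnnData h5ad
  files, or UMAP clusters.
---
```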
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'auto-annotate cell clusters', 'single-cell RNA data', 'using marker genes, tissue context, and species-specific reference databases'. These are precise, domain-specific capabilities. | 3 / 3 |
| Completeness | Clearly answers 'what' (auto-annotate cell clusters using marker genes and reference databases), but lacks an explicit 'Use when...' clause or equivalent trigger guidance for when Claude should select this skill. | 2 / 3 |
| Trigger Term Quality | Includes natural keywords users in this domain would say: 'cell clusters', 'single-cell RNA', 'marker genes', 'tissue context', 'species-specific', 'reference databases'. These are terms bioinformaticians would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly specialized niche in single-cell RNA analysis with distinct terminology. Unlikely to conflict with other skills due to the specific domain (scRNA-seq, cell annotation, marker genes). | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides strong actionable guidance with clear CLI usage, well-defined parameters, and robust error handling with fallback paths. The workflow is well-sequenced with validation checkpoints. However, it could be more concise by trimming the response template and output requirements sections that describe behaviors Claude would naturally exhibit, and could benefit from splitting detailed reference content into separate files.
Suggestions

- Remove or significantly compress the 'Output Requirements' and 'Response Template' sections — these describe standard Claude behaviors that don't need explicit instruction
- Move the detailed marker database coverage explanation to a separate REFERENCE.md file, keeping only a brief note about PBMC-focused coverage in the main skill
- Consider extracting the Risk Assessment table to a separate file, since it is metadata rather than operational guidance
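Applied together, these suggestions imply a layout along the following lines. REFERENCE.md is named in the suggestions; the RISKS.md filename is a hypothetical choice for the extracted Risk Assessment table:

```
scrna-cell-type-annotator/
├── SKILL.md        # core workflow, CLI usage, brief PBMC-coverage note
├── REFERENCE.md    # detailed marker database coverage and schemas
└── RISKS.md        # Risk Assessment table (hypothetical filename)
```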
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains some unnecessary verbosity, particularly in the detailed response templates and output requirements sections that Claude would naturally handle. The marker database coverage explanation is useful but could be tighter. | 2 / 3 |
| Actionability | Provides concrete CLI commands with clear parameter tables, executable quick check commands, and specific examples. The workflow steps are actionable with explicit validation and fallback paths. | 3 / 3 |
| Workflow Clarity | Clear 5-step workflow with explicit validation-first approach, fallback template for failures, and error handling that specifies exact behaviors. The input validation gate and path traversal checks demonstrate proper checkpoints. | 3 / 3 |
| Progressive Disclosure | Content is reasonably organized with clear sections, but the response template and output requirements sections add bulk that could be referenced externally. No external file references for detailed documentation like marker database schemas or extended examples. | 2 / 3 |
| Total | | 10 / 12 — Passed |
Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
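The frontmatter_unknown_keys warning can typically be cleared by nesting unrecognized top-level keys under a metadata block. The report does not identify which keys triggered the warning, so the keys below are purely hypothetical placeholders:

```yaml
---
name: scrna-cell-type-annotator
description: Auto-annotate cell clusters from single-cell RNA data using marker genes, tissue context, and species-specific reference databases.
metadata:
  # move unrecognized top-level keys here, e.g.:
  author: example-maintainer   # hypothetical key
  version: "1.0"               # hypothetical key
---
```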