cellxgene-census

Query the CELLxGENE Census (61M+ cells) programmatically. Use when you need expression data across tissues, diseases, or cell types from the largest curated single-cell atlas. Best for population-scale queries, reference atlas comparisons. For analyzing your own data use scanpy or scvi-tools.
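For context, a minimal query against the Census might look like the sketch below. It assumes the `cellxgene_census` Python package; the `build_obs_filter` helper and the example tissue and gene names are illustrative, and the live query is wrapped in a function because it needs network access.

```python
# Sketch of a population-scale Census query. The value-filter syntax
# ('tissue_general == "lung"') follows the cellxgene-census documentation;
# build_obs_filter and the example tissue/genes are illustrative.

def build_obs_filter(tissue, cell_type=None):
    """Compose an obs value filter, restricting to primary data so cells
    duplicated across datasets are not double-counted."""
    clauses = [f'tissue_general == "{tissue}"', "is_primary_data == True"]
    if cell_type:
        clauses.append(f'cell_type == "{cell_type}"')
    return " and ".join(clauses)


def fetch_lung_tcells():
    """Not executed here: opens the live Census and pulls an AnnData slice."""
    import cellxgene_census

    with cellxgene_census.open_soma() as census:
        return cellxgene_census.get_anndata(
            census,
            organism="Homo sapiens",
            obs_value_filter=build_obs_filter("lung", "T cell"),
            var_value_filter='feature_name in ["CD4", "CD8A"]',
        )


print(build_obs_filter("lung"))
# tissue_general == "lung" and is_primary_data == True
```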

Overall score: 74

Quality: 70% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/cellxgene-census/SKILL.md

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies a specific domain (CELLxGENE Census), states when to use it, and helpfully distinguishes it from related tools. The main weakness is that the concrete actions could be more granular — it says 'query' but doesn't enumerate specific operations like downloading matrices, filtering by metadata, or performing cross-dataset comparisons.

Suggestions

Add 2-3 more specific concrete actions beyond 'query', such as 'download expression matrices, filter cells by metadata, retrieve gene counts across datasets' to improve specificity.

Specificity (2/3): Names the domain (CELLxGENE Census, single-cell atlas) and a general action ('Query... programmatically', 'expression data across tissues, diseases, or cell types'), but doesn't list multiple concrete actions like 'download expression matrices, filter by metadata, compute differential expression'.

Completeness (3/3): Clearly answers 'what' (query CELLxGENE Census for expression data across tissues/diseases/cell types) and 'when' ('Use when you need expression data across tissues, diseases, or cell types... Best for population-scale queries, reference atlas comparisons'). Also includes a helpful negative boundary ('For analyzing your own data use scanpy or scvi-tools').

Trigger Term Quality (3/3): Includes strong natural keywords a bioinformatics user would say: 'CELLxGENE', 'Census', 'expression data', 'tissues', 'diseases', 'cell types', 'single-cell atlas', 'reference atlas', 'scanpy', 'scvi-tools'. Good coverage of domain-specific terms users would naturally mention.

Distinctiveness / Conflict Risk (3/3): Highly distinctive. CELLxGENE Census is a very specific resource, and the description clearly delineates its niche (population-scale queries on the curated atlas) versus other bioinformatics tools (scanpy, scvi-tools for own data). Unlikely to conflict with other skills.

Total: 11 / 12 (Passed)

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent actionable code examples with proper API usage patterns, but is severely bloated with redundant content, repeated examples, and explanations Claude doesn't need. Much of the inline content (ML integration, scanpy workflows, multi-dataset patterns, common use cases) should be offloaded to the referenced files, leaving the main skill as a concise overview with the core query pattern and best practices.

Suggestions

Cut the skill to ~100 lines by moving sections 5-7 (ML, scanpy, multi-dataset), the 'Common Use Cases' section, and the troubleshooting section into the referenced files, keeping only the core open/explore/query workflow in SKILL.md.

Remove the 'When to Use This Skill' section entirely—this duplicates the skill description and Claude doesn't need to be told when to use a skill it's already reading.

Consolidate the repeated `is_primary_data == True` guidance into a single prominent note rather than repeating it in nearly every code example and a dedicated best practices bullet.

Remove the scanpy standard workflow steps (normalize, log1p, PCA, UMAP)—Claude already knows scanpy; just show the Census-to-AnnData handoff.
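The "core open/explore/query workflow" these suggestions would reduce SKILL.md to could be sketched as below. The helper names are ours; the `cellxgene_census` and SOMA calls follow the package's documented API, and nothing network-bound runs here. Stating the `is_primary_data` guard once, as the third suggestion asks, also falls out naturally:

```python
# One place to state the primary-data guard (addresses the repetition
# noted in the suggestions above).
PRIMARY_ONLY = "is_primary_data == True"


def with_primary(*clauses):
    """Join SOMA value-filter clauses, always appending the primary-data guard."""
    return " and ".join((*clauses, PRIMARY_ONLY))


def explore_then_query():
    """Not executed here (needs network): the open -> explore -> query workflow."""
    import cellxgene_census

    with cellxgene_census.open_soma() as census:
        human = census["census_data"]["homo_sapiens"]

        # Explore: read cheap metadata columns first to size the slice.
        obs = (
            human.obs.read(
                column_names=["cell_type"],
                value_filter=with_primary('tissue_general == "blood"'),
            )
            .concat()
            .to_pandas()
        )
        print(f"{len(obs)} matching cells")

        # Query: pull expression data only once the slice looks tractable.
        return cellxgene_census.get_anndata(
            census,
            organism="Homo sapiens",
            obs_value_filter=with_primary('cell_type == "B cell"'),
        )
```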

Conciseness (1/3): Extremely verbose at 350+ lines. The overview explains what Census is (Claude can read the description), lists bullet points of when to use it (redundant with the description), repeats the same patterns multiple times (e.g., is_primary_data filtering is shown 10+ times), and the 'Common Use Cases' section largely duplicates earlier workflow examples. The scanpy integration section explains standard scanpy steps Claude already knows.

Actionability (3/3): Provides fully executable, copy-paste-ready code examples throughout. Filter syntax is clearly documented with concrete examples, API calls include all required parameters, and both small-scale (get_anndata) and large-scale (axis_query) patterns are shown with complete working code.

Workflow Clarity (2/3): The two-step 'explore then query' workflow and the size estimation before loading are good validation patterns. However, there's no explicit validation/error-handling feedback loop for the large-scale out-of-core processing, and the overall structure reads more like a reference manual than a clear sequential workflow with checkpoints.

Progressive Disclosure (2/3): References to census_schema.md and common_patterns.md are well signaled with clear 'when to read' guidance. However, the main SKILL.md contains far too much inline content that overlaps with what those reference files presumably cover; the bulk of the patterns and examples should live in the reference files rather than be duplicated in the main skill.

Total: 8 / 12 (Passed)
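The large-scale `axis_query` pattern credited under Actionability processes the expression matrix in batches rather than loading it whole. The accumulator below is the pure core of such a loop; in practice the chunks would come from an axis_query result rather than in-memory arrays, and that surrounding plumbing (omitted here) needs the live Census:

```python
import numpy as np


def chunked_gene_means(chunks):
    """Per-gene mean expression accumulated chunk by chunk, so the full
    cell-by-gene matrix is never held in memory at once."""
    totals, n_cells = None, 0
    for chunk in chunks:
        colsums = np.asarray(chunk).sum(axis=0)
        totals = colsums if totals is None else totals + colsums
        n_cells += chunk.shape[0]
    return totals / n_cells


# Illustration with in-memory chunks standing in for axis_query batches:
means = chunked_gene_means([np.array([[1.0, 2.0], [3.0, 4.0]]),
                            np.array([[5.0, 6.0]])])
print(means)  # [3. 4.]
```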

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure:

skill_md_line_count: Warning. SKILL.md is long (510 lines); consider splitting into references/ and linking.

metadata_version: Warning. 'metadata.version' is missing.

Total: 9 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
