cellxgene-census

Query the CELLxGENE Census (61M+ cells) programmatically. Use when you need expression data across tissues, diseases, or cell types from the largest curated single-cell atlas. Best for population-scale queries, reference atlas comparisons. For analyzing your own data use scanpy or scvi-tools.

74

Quality

70%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security (by Snyk)

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/cellxgene-census/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly identifies a specific tool (CELLxGENE Census), states when to use it, and helpfully distinguishes it from related tools. The main weakness is that the concrete actions could be more granular — it says 'query' but doesn't enumerate specific operations like downloading matrices, filtering metadata, or computing statistics. The negative boundary ('For analyzing your own data use scanpy or scvi-tools') is a nice touch for disambiguation.

Suggestions

Add more specific concrete actions beyond 'query', e.g., 'download expression matrices, filter cells by metadata, retrieve gene counts across datasets' to improve specificity.
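The concrete actions the suggestion asks for can be sketched as metadata filters passed to the Census query API. Below is a minimal, hedged sketch: the obs column names (`cell_type`, `tissue_general`, `disease`, `is_primary_data`) follow the Census obs schema, and the `value_filter` syntax is the SOMA convention of Python-like boolean expressions; the actual query call is shown only as a comment because it needs the `cellxgene-census` package and network access.

```python
# Sketch: assembling an obs value filter for a Census metadata query.
# The helper below is illustrative, not part of the cellxgene-census API.

def build_obs_filter(cell_type=None, tissue=None, disease=None, primary_only=True):
    """Assemble an obs value-filter string from optional metadata criteria."""
    clauses = []
    if cell_type:
        clauses.append(f"cell_type == '{cell_type}'")
    if tissue:
        clauses.append(f"tissue_general == '{tissue}'")
    if disease:
        clauses.append(f"disease == '{disease}'")
    if primary_only:
        # Census convention: restrict to primary data to avoid duplicate cells.
        clauses.append("is_primary_data == True")
    return " and ".join(clauses)

obs_filter = build_obs_filter(cell_type="B cell", tissue="lung")
# Real usage (requires cellxgene-census and network access):
# with cellxgene_census.open_soma() as census:
#     adata = cellxgene_census.get_anndata(
#         census, organism="Homo sapiens", obs_value_filter=obs_filter
#     )
```

Enumerating operations like this (download matrices, filter by metadata, restrict to primary data) in the description would directly address the specificity gap noted below.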

Dimension / Reasoning / Score

Specificity: 2 / 3

Names the domain (CELLxGENE Census, single-cell atlas) and a general action ('Query... programmatically', 'expression data across tissues, diseases, or cell types'), but doesn't list multiple concrete actions like 'download expression matrices, filter by metadata, compute differential expression'.

Completeness: 3 / 3

Clearly answers 'what' (query CELLxGENE Census for expression data across tissues/diseases/cell types) and 'when' ('Use when you need expression data across tissues, diseases, or cell types... Best for population-scale queries, reference atlas comparisons'). Also includes a helpful negative boundary ('For analyzing your own data use scanpy or scvi-tools').

Trigger Term Quality: 3 / 3

Includes strong natural keywords a bioinformatics user would say: 'CELLxGENE', 'Census', 'expression data', 'tissues', 'diseases', 'cell types', 'single-cell atlas', 'reference atlas', 'population-scale queries', 'scanpy', 'scvi-tools'. Good coverage of domain-specific terms.

Distinctiveness / Conflict Risk: 3 / 3

Highly distinctive: CELLxGENE Census is a very specific tool/database, and the description clearly delineates its niche versus general single-cell analysis tools (scanpy, scvi-tools). Unlikely to conflict with other skills.

Total: 11 / 12

Passed

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent, actionable code examples covering the full range of CELLxGENE Census usage patterns, from simple queries to PyTorch integration. However, it is severely bloated—repeating patterns, explaining obvious concepts, and inlining content that belongs in reference files. The result is a ~400-line document that could deliver the same value in under 150 lines with better progressive disclosure.

Suggestions

Cut the 'Overview' bullets, 'When to Use This Skill' section, and 'Key points' annotations—Claude already knows what Census is from the skill description and can infer usage context.

Move 'Available Metadata Fields', 'Common Use Cases', and 'Troubleshooting' sections to the referenced files (census_schema.md and common_patterns.md) to reduce the main file to a concise quick-start with pointers.

Consolidate the repeated is_primary_data == True guidance into a single prominent note rather than repeating it in nearly every code block.
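One way to implement that consolidation is a small helper that appends the primary-data clause exactly once, so every code block in the skill can call one function instead of restating the filter. The helper name and shape below are hypothetical; only the `is_primary_data == True` clause itself comes from the Census schema.

```python
# Hypothetical helper consolidating the repeated is_primary_data guidance.

PRIMARY = "is_primary_data == True"

def ensure_primary(value_filter=None):
    """Return value_filter with the primary-data clause appended once."""
    if not value_filter:
        return PRIMARY
    if PRIMARY in value_filter:
        return value_filter  # clause already present; don't duplicate it
    return f"({value_filter}) and {PRIMARY}"
```

With this in place, each example can pass `obs_value_filter=ensure_primary(user_filter)` and the skill needs only a single prominent note explaining why.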

Add an explicit validation/error-recovery step for large-scale queries (e.g., check batch count, handle connection timeouts) to improve workflow clarity for out-of-core processing.
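The validation/error-recovery step suggested above might look like a generic retry-with-backoff wrapper around a batched query loop, with a batch-count check before downstream use. This is a sketch under assumptions: the exception types raised by real Census connections come from the underlying SOMA/HTTP stack, and `ConnectionError` stands in for them here.

```python
import time

# Sketch: retry transient failures per batch, then validate the batch count.

def run_with_retries(fetch_batch, n_batches, max_retries=3, base_delay=1.0):
    """Fetch n_batches batches, retrying each on transient connection errors."""
    results = []
    for i in range(n_batches):
        for attempt in range(max_retries):
            try:
                results.append(fetch_batch(i))
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                time.sleep(base_delay * 2 ** attempt)  # exponential backoff
    # Validation checkpoint: confirm every batch arrived before downstream use.
    if len(results) != n_batches:
        raise RuntimeError(f"expected {n_batches} batches, got {len(results)}")
    return results

# Demo with a fake fetcher whose first call fails transiently.
attempts = {"count": 0}
def flaky_fetch(i):
    attempts["count"] += 1
    if attempts["count"] == 1:
        raise ConnectionError("transient")
    return i

batches = run_with_retries(flaky_fetch, n_batches=3, base_delay=0)
```

In the skill itself, `fetch_batch` would wrap the out-of-core `axis_query` iteration the implementation review mentions.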

Dimension / Reasoning / Score

Conciseness: 1 / 3

The skill is extremely verbose at ~400+ lines. It explains concepts Claude already knows (what Census includes, when to use the skill, basic scanpy workflows, what a context manager is), repeats the same patterns multiple times (e.g., is_primary_data filtering is shown 10+ times), and the 'Common Use Cases' section largely duplicates earlier examples. The overview bullets, 'When to Use' section, and many 'Key points' annotations are unnecessary padding.

Actionability: 3 / 3

The skill provides fully executable, copy-paste ready code examples throughout. Filter syntax is clearly documented with concrete examples, API calls include all required parameters, and both small-scale (get_anndata) and large-scale (axis_query) patterns are shown with complete working code.

Workflow Clarity: 2 / 3

The numbered workflow sections (1-7) provide a reasonable sequence, and the 'Two-Step Workflow: Explore Then Query' and 'Estimate Query Size Before Loading' patterns include validation-like checkpoints. However, there's no explicit validation/error-handling feedback loop for the large-scale out-of-core processing or ML training workflows, and the overall structure reads more like a reference manual than a guided workflow.

Progressive Disclosure: 2 / 3

The skill references two external files (references/census_schema.md and references/common_patterns.md) with clear descriptions of when to read them. However, the main SKILL.md itself contains far too much inline content that overlaps with what those reference files should contain: the 'Available Metadata Fields', detailed code patterns, and common use cases could be offloaded to keep the main file as a concise overview.

Total: 8 / 12

Passed
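The 'Estimate Query Size Before Loading' pattern credited above reduces to simple arithmetic worth making explicit: a dense upper bound on the in-memory footprint of a query result. Real Census expression matrices are sparse, so actual memory use is typically far lower; treat this as a worst-case gate before calling get_anndata. The helper name is illustrative, not part of the Census API.

```python
# Back-of-envelope check behind 'Estimate Query Size Before Loading':
# worst-case dense memory for a cells x genes float32 matrix.

def estimate_dense_bytes(n_cells, n_genes, bytes_per_value=4):
    """Upper-bound memory for an n_cells x n_genes dense matrix (float32)."""
    return n_cells * n_genes * bytes_per_value

# e.g. 100k cells x 60k genes at float32: ~24 GB dense upper bound,
# a clear signal to switch to the out-of-core axis_query path.
gb = estimate_dense_bytes(100_000, 60_000) / 1e9
```

In practice the cell count would come from a cheap metadata-only query (obs rows matching the filter) before any expression data is fetched.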

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count: Warning

SKILL.md is long (510 lines); consider splitting into references/ and linking.

metadata_version: Warning

'metadata.version' is missing.

Total: 9 / 11

Passed

Repository
K-Dense-AI/claude-scientific-skills
Reviewed
