zarr-python

Chunked N-D arrays for cloud storage. Compressed arrays, parallel I/O, S3/GCS integration, NumPy/Dask/Xarray compatible, for large-scale scientific computing pipelines.

Install with Tessl CLI

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill zarr-python

Score: 73 · 1.25x

Quality: 48% — does it follow best practices?

Impact: 90% · 1.25x — average score across 6 eval scenarios

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/zarr-python/SKILL.md

Evaluation results

Geospatial Raster Storage Optimization

Chunk alignment and compression configuration

With context: 98% · Improvement over no context: +57%

| Criteria | Without context | With context |
| --- | --- | --- |
| Column-aligned chunks | 100% | 100% |
| Minimum 1 MB chunk size | 0% | 80% |
| BloscCodec import path | 0% | 100% |
| Shuffle filter for numeric data | 0% | 100% |
| Balanced BloscCodec params | 37% | 100% |
| GzipCodec for max compression | 0% | 100% |
| GzipCodec level setting | 0% | 100% |
| nbytes_stored used for stats | 0% | 100% |
| Compression ratio reported | 100% | 100% |
| float32 dtype used | 100% | 100% |
| zarr.create_array used | 100% | 100% |
| Cleanup performed | 100% | 100% |

Without context: $0.6298 · 3m 31s · 34 turns · 40 in / 8,419 out tokens

With context: $0.9762 · 3m 21s · 39 turns · 76 in / 11,964 out tokens

Environmental Monitoring Data Archive

Hierarchical groups, attributes, and consolidated metadata

With context: 88% · Improvement over no context: +2%

| Criteria | Without context | With context |
| --- | --- | --- |
| zarr.group() for root | 66% | 100% |
| create_group() for sub-groups | 100% | 100% |
| Attribute list type | 100% | 100% |
| Attribute int/string types | 100% | 100% |
| zarr.consolidate_metadata called | 100% | 100% |
| zarr.open_consolidated used | 0% | 0% |
| ZipStore used for export | 100% | 100% |
| ZipStore closed explicitly | 100% | 100% |
| Array attributes present | 100% | 100% |
| root.tree() printed | 100% | 100% |
| Cleanup performed | 100% | 100% |

Without context: $0.6887 · 2m 22s · 40 turns · 48 in / 7,931 out tokens

With context: $1.1194 · 2m 42s · 41 turns · 295 in / 8,834 out tokens

High-Throughput Simulation Results Processing

Parallel I/O with Dask and thread synchronization

With context: 97% · Improvement over no context: +20%

| Criteria | Without context | With context |
| --- | --- | --- |
| ThreadSynchronizer import | 0% | 100% |
| ThreadSynchronizer used | 0% | 100% |
| Not ProcessSynchronizer | 100% | 100% |
| da.from_zarr() used | 100% | 100% |
| da.to_zarr() used | 100% | 100% |
| No full array load | 100% | 100% |
| Parallel compute call | 70% | 70% |
| Mean across timesteps | 100% | 100% |
| Result shape printed | 100% | 100% |
| Cleanup performed | 100% | 100% |
| Dask import used | 100% | 100% |

Without context: $0.3413 · 1m 40s · 22 turns · 26 in / 5,112 out tokens

With context: $0.7596 · 2m · 35 turns · 40 in / 6,636 out tokens

Sensor Network Data Pipeline

Time series appending and Xarray integration

With context: 94% · Improvement over no context: +21%

| Criteria | Without context | With context |
| --- | --- | --- |
| Initial shape zero first dim | 100% | 100% |
| Chunks one along time axis | 0% | 100% |
| z.append() used for ingestion | 33% | 100% |
| xr.open_zarr used | 100% | 100% |
| NetCDF export via to_netcdf | 100% | 100% |
| dtype f4 used | 100% | 100% |
| Three separate variables stored | 100% | 100% |
| Mode append used | 50% | 40% |
| Cleanup performed | 100% | 100% |

Without context: $0.5067 · 2m 6s · 23 turns · 29 in / 7,339 out tokens

With context: $1.5182 · 4m 12s · 45 turns · 542 in / 15,581 out tokens

Genomics Reference Panel Storage

Sharding configuration for large-scale arrays

With context: 95% · Improvement over no context: +7%

| Criteria | Without context | With context |
| --- | --- | --- |
| shards parameter used | 75% | 100% |
| Shard larger than chunk | 53% | 66% |
| Fine-grained chunks set | 100% | 100% |
| Object count comparison printed | 100% | 100% |
| Integer dtype for genotypes | 100% | 100% |
| Data written and read back | 100% | 100% |
| ShardingCodec or shards param from correct location | 100% | 100% |
| Cleanup performed | 100% | 100% |

Without context: $0.3573 · 1m 39s · 20 turns · 25 in / 5,708 out tokens

With context: $0.5047 · 1m 36s · 24 turns · 28 in / 4,949 out tokens

Parallel Scientific Data Conversion

Multi-process synchronization and format conversion

With context: 68% · Improvement over no context: -1%

| Criteria | Without context | With context |
| --- | --- | --- |
| ProcessSynchronizer used | 0% | 0% |
| synchronizer passed to open_array | 0% | 0% |
| Not ThreadSynchronizer for processes | 100% | 100% |
| multiprocessing module used | 100% | 100% |
| Chunk-wise read for stats | 100% | 100% |
| np.load used for npy conversion | 100% | 100% |
| float32 dtype maintained | 100% | 100% |
| Per-timestep means printed | 100% | 100% |
| Cleanup performed | 90% | 80% |

Without context: $0.2262 · 1m 7s · 16 turns · 21 in / 3,520 out tokens

With context: $0.8610 · 2m 44s · 34 turns · 38 in / 9,966 out tokens

Evaluated with — Agent: Claude Code · Model: Claude Sonnet 4.6


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.