Chunked N-D arrays for cloud storage: compressed arrays, parallel I/O, S3/GCS integration, and NumPy/Dask/Xarray compatibility for large-scale scientific computing pipelines.
- 69
- Does it follow best practices? 37%
- Impact: 90%
- 1.25x average score across 6 eval scenarios
- Passed; no known issues

Optimize this skill with Tessl:

    npx tessl skill review --optimize ./scientific-skills/zarr-python/SKILL.md

Chunk alignment and compression configuration
| Check | Before | After |
| --- | --- | --- |
| Column-aligned chunks | 100% | 100% |
| Minimum 1 MB chunk size | 0% | 80% |
| BloscCodec import path | 0% | 100% |
| Shuffle filter for numeric data | 0% | 100% |
| Balanced BloscCodec params | 37% | 100% |
| GzipCodec for max compression | 0% | 100% |
| GzipCodec level setting | 0% | 100% |
| nbytes_stored used for stats | 0% | 100% |
| Compression ratio reported | 100% | 100% |
| float32 dtype used | 100% | 100% |
| zarr.create_array used | 100% | 100% |
| Cleanup performed | 100% | 100% |
Hierarchical groups, attributes, and consolidated metadata
| Check | Before | After |
| --- | --- | --- |
| zarr.group() for root | 66% | 100% |
| create_group() for sub-groups | 100% | 100% |
| Attribute list type | 100% | 100% |
| Attribute int/string types | 100% | 100% |
| zarr.consolidate_metadata called | 100% | 100% |
| zarr.open_consolidated used | 0% | 0% |
| ZipStore used for export | 100% | 100% |
| ZipStore closed explicitly | 100% | 100% |
| Array attributes present | 100% | 100% |
| root.tree() printed | 100% | 100% |
| Cleanup performed | 100% | 100% |
Parallel I/O with Dask and thread synchronization
| Check | Before | After |
| --- | --- | --- |
| ThreadSynchronizer import | 0% | 100% |
| ThreadSynchronizer used | 0% | 100% |
| Not ProcessSynchronizer | 100% | 100% |
| da.from_zarr() used | 100% | 100% |
| da.to_zarr() used | 100% | 100% |
| No full array load | 100% | 100% |
| Parallel compute call | 70% | 70% |
| Mean across timesteps | 100% | 100% |
| Result shape printed | 100% | 100% |
| Cleanup performed | 100% | 100% |
| Dask import used | 100% | 100% |
Time series appending and Xarray integration
| Check | Before | After |
| --- | --- | --- |
| Initial shape zero first dim | 100% | 100% |
| Chunks one along time axis | 0% | 100% |
| z.append() used for ingestion | 33% | 100% |
| xr.open_zarr used | 100% | 100% |
| NetCDF export via to_netcdf | 100% | 100% |
| dtype f4 used | 100% | 100% |
| Three separate variables stored | 100% | 100% |
| Mode append used | 50% | 40% |
| Cleanup performed | 100% | 100% |
Sharding configuration for large-scale arrays
| Check | Before | After |
| --- | --- | --- |
| shards parameter used | 75% | 100% |
| Shard larger than chunk | 53% | 66% |
| Fine-grained chunks set | 100% | 100% |
| Object count comparison printed | 100% | 100% |
| Integer dtype for genotypes | 100% | 100% |
| Data written and read back | 100% | 100% |
| ShardingCodec or shards param from correct location | 100% | 100% |
| Cleanup performed | 100% | 100% |
Multi-process synchronization and format conversion
| Check | Before | After |
| --- | --- | --- |
| ProcessSynchronizer used | 0% | 0% |
| synchronizer passed to open_array | 0% | 0% |
| Not ThreadSynchronizer for processes | 100% | 100% |
| multiprocessing module used | 100% | 100% |
| Chunk-wise read for stats | 100% | 100% |
| np.load used for npy conversion | 100% | 100% |
| float32 dtype maintained | 100% | 100% |
| Per-timestep means printed | 100% | 100% |
| Cleanup performed | 90% | 80% |
Version: 25e1c0f