Chunked N-D arrays for cloud storage. Compressed arrays, parallel I/O, S3/GCS integration, NumPy/Dask/Xarray compatible, for large-scale scientific computing pipelines.
69

Quality: 37% — Does it follow best practices?
Impact: 90% — 1.25x average score across 6 eval scenarios
Evals: Passed — no known issues

Optimize this skill with Tessl:
npx tessl skill review --optimize ./scientific-skills/zarr-python/SKILL.md

Quality
Discovery
32% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description reads like a feature list or tagline rather than a functional skill description. It lacks the critical 'Use when...' clause needed for Claude to know when to select this skill, and notably omits the word 'Zarr' which is likely the primary trigger term users would use. The technical keywords provide some signal but the description needs restructuring to clearly state what actions it enables and when it should be invoked.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Zarr stores, chunked array storage, or reading/writing large N-D arrays to cloud storage (S3/GCS).'
- Include the term 'Zarr' prominently, since it is the most natural and distinctive trigger term users would use when needing this skill.
- Replace the feature-list style with concrete actions, e.g., 'Creates, reads, and writes Zarr stores for chunked N-dimensional arrays. Supports compression, parallel I/O, and integration with S3/GCS backends.'
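Taken together, the suggestions point toward a frontmatter rewrite along these lines. This is a sketch only: the `name` field and the exact frontmatter schema are assumptions, and the description text simply merges the examples above.

```yaml
---
name: zarr-python
description: >
  Creates, reads, and writes Zarr stores for chunked N-dimensional arrays.
  Supports compression, parallel I/O, and integration with NumPy, Dask,
  Xarray, and S3/GCS backends. Use when the user asks about Zarr stores,
  the Zarr format, chunked array storage, or reading/writing large N-D
  arrays to cloud storage.
---
```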
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (chunked N-D arrays, cloud storage) and some actions (compressed arrays, parallel I/O, S3/GCS integration), but these read more like feature bullet points than concrete actions a user would perform. It doesn't list specific tasks like 'create Zarr stores', 'read/write chunked arrays', or 'convert NetCDF to Zarr'. | 2 / 3 |
| Completeness | Describes what the skill covers (chunked N-D arrays, cloud storage, integrations) but completely lacks any 'Use when...' clause or explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' portion is also somewhat weak (feature list rather than clear capability description), so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant technical keywords (S3, GCS, NumPy, Dask, Xarray, N-D arrays, scientific computing) that users in this domain would use, but misses the most obvious trigger term 'Zarr' and common variations like 'zarr store', 'zarr format', '.zarr'. Also missing natural phrases users might say, like 'store large arrays' or 'chunked data'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The combination of chunked N-D arrays, cloud storage, and specific library names (Dask, Xarray) narrows the domain somewhat, but without mentioning 'Zarr' explicitly it could overlap with general NumPy/Dask/Xarray skills or generic cloud storage skills. The niche is implied but not crisply delineated. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation
42% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like comprehensive library documentation than a focused skill file. While the code examples are excellent and fully executable, the content is far too verbose for a SKILL.md—it explains concepts Claude already knows, exhaustively covers API surface area that belongs in reference files, and dumps everything into a single monolithic document. The content would benefit enormously from aggressive trimming to essential patterns and splitting detailed sections into referenced sub-files.
Suggestions
- Reduce the SKILL.md to a Quick Start section and a Performance Optimization checklist (~50-80 lines), moving detailed API coverage (storage backends, compression codecs, group hierarchies, integrations) into separate referenced files like STORAGE.md, COMPRESSION.md, and INTEGRATIONS.md.
- Remove explanations of concepts Claude already knows, such as what NumPy indexing is, what compression does, or how HDF5 works — focus only on Zarr-specific gotchas and non-obvious patterns.
- Add explicit validation steps to workflows, e.g., after writing to cloud storage verify the data can be read back, or after format conversion validate array shapes and checksums.
- Cut the Common Issues section to brief bullet points with solutions rather than full code blocks for each — Claude can generate diagnostic code from concise guidance.
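The 'verify the data can be read back' suggestion can be made concrete with a small, store-agnostic helper. This is an illustrative sketch, not text from the skill: `verify_roundtrip` and the in-memory stand-in store are hypothetical names, and the same pattern would apply unchanged to a real Zarr store on S3/GCS.

```python
import hashlib
import numpy as np

def verify_roundtrip(write_fn, read_fn, arr):
    """Write `arr` with write_fn, read it back with read_fn, and check
    that shape, dtype, and a content checksum all survive the round trip."""
    expected = hashlib.sha256(np.ascontiguousarray(arr).tobytes()).hexdigest()
    write_fn(arr)
    out = np.asarray(read_fn())
    if out.shape != arr.shape or out.dtype != arr.dtype:
        return False
    actual = hashlib.sha256(np.ascontiguousarray(out).tobytes()).hexdigest()
    return actual == expected

# Usage with an in-memory dict standing in for a real store:
store = {}
arr = np.arange(12.0).reshape(3, 4)
ok = verify_roundtrip(lambda a: store.update(data=a.copy()),
                      lambda: store["data"], arr)
```

Dropping a call like this at the end of each workflow gives the validate-fix-retry checkpoint the review asks for, without inflating the SKILL.md.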
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | This is extremely verbose at ~500+ lines, covering extensive API surface area that Claude already knows or can infer. It explains basic concepts like what chunking is and what NumPy indexing looks like, and provides exhaustive examples for every storage backend, compression codec, and integration pattern. Much of this is library documentation, not skill-specific guidance. | 1 / 3 |
| Actionability | The content is highly actionable, with fully executable, copy-paste-ready code examples throughout. Every section includes concrete Python code with specific imports, parameters, and realistic usage patterns rather than pseudocode or vague descriptions. | 3 / 3 |
| Workflow Clarity | While the content includes a performance optimization checklist and common patterns, it lacks explicit validation checkpoints and feedback loops. For example, the cloud-native workflow pattern doesn't verify successful writes, and the format conversion pattern doesn't validate output integrity. The troubleshooting section provides diagnosis steps but no structured validate-fix-retry loops. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with everything inlined into a single file. The extensive API reference, compression details, storage backend examples, integration guides, and troubleshooting could all be split into separate referenced files. The 'Additional Resources' section at the end links to external docs, but the body itself has no internal file references for progressive disclosure. | 1 / 3 |
| Total | | 7 / 12 — Passed |
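As a rough sketch of what the progressive-disclosure fix could look like in practice (the `references/` file names follow the suggestions above; the section contents are placeholders, not the skill's actual text):

```markdown
## Quick Start
<essential create/read/write patterns, ~30 lines>

## Performance Optimization Checklist
<chunking, compression, and cloud-access tips>

## Detailed References
- [Storage backends](references/STORAGE.md)
- [Compression codecs](references/COMPRESSION.md)
- [NumPy/Dask/Xarray integrations](references/INTEGRATIONS.md)
```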
Validation
81% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (778 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 9 / 11 Passed | |