
imaging-data-commons

Query and download public cancer imaging data from NCI Imaging Data Commons using idc-index. Use for accessing large-scale radiology (CT, MR, PET) and pathology datasets for AI training or research. No authentication required. Query by metadata, visualize in browser, check licenses.
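
As a quick illustration of the workflow the description names, here is a minimal sketch of a metadata query built for idc-index. The `IDCClient` calls in the trailing comment (`sql_query`, `download_from_selection`) follow the idc-index documentation but should be verified against your installed version, and the collection id is just an example:

```python
def build_series_query(modality: str, collection: str) -> str:
    """Build a series-level metadata query against the idc-index table.

    idc-index exposes its series metadata as a table named `index`;
    Modality and collection_id are columns in that table.
    """
    return (
        "SELECT SeriesInstanceUID, PatientID, collection_id "
        "FROM index "
        f"WHERE Modality = '{modality}' "
        f"AND collection_id = '{collection}'"
    )

query = build_series_query("CT", "nsclc_radiomics")
print(query)

# Typical usage (requires `pip install idc-index`; not executed here):
#   from idc_index import IDCClient
#   client = IDCClient()
#   df = client.sql_query(query)
#   client.download_from_selection(
#       seriesInstanceUID=df["SeriesInstanceUID"].tolist(),
#       downloadDir="./idc_data",
#   )
```

No authentication is needed for any of these calls, which is part of what the description advertises.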

Overall score: 77

Quality: 75% (Does it follow best practices?)

Impact: 73%, 1.35x (average score across 3 eval scenarios)

Security (by Snyk): Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/imaging-data-commons/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies a specific domain (cancer imaging data from NCI), names the tool (idc-index), lists concrete actions (query, download, visualize, check licenses), and provides explicit usage triggers. The inclusion of imaging modalities (CT, MR, PET) and use cases (AI training, research) adds excellent specificity and trigger term coverage.

Specificity: 3/3. Lists multiple specific, concrete actions (query, download, access radiology/pathology datasets, query by metadata, visualize in browser, check licenses) and specifies the modalities (CT, MR, PET) and the specific tool (idc-index).

Completeness: 3/3. Clearly answers 'what' (query and download public cancer imaging data, query by metadata, visualize, check licenses) and 'when' ('Use for accessing large-scale radiology and pathology datasets for AI training or research'). The 'Use for' clause serves as explicit trigger guidance.

Trigger Term Quality: 3/3. Includes strong natural keywords users would say: 'cancer imaging', 'NCI Imaging Data Commons', 'idc-index', 'radiology', 'CT', 'MR', 'PET', 'pathology', 'AI training', 'research', 'download'. Good coverage of the domain-specific terms a researcher would use.

Distinctiveness / Conflict Risk: 3/3. Highly distinctive, with a clear niche: NCI Imaging Data Commons, idc-index, cancer imaging data. Very unlikely to conflict with other skills given the specific domain, tool name, and data source.

Total: 12 / 12. Passed.

Implementation

50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability with comprehensive, executable code examples covering the full IDC workflow. However, it suffers significantly from verbosity — it tries to be both an overview document and a comprehensive reference, undermining the progressive disclosure structure it sets up with its reference guides. The workflow lacks explicit validation checkpoints for batch operations like large downloads.

Suggestions

Reduce inline content by 50-60%: move CLI documentation to cli_guide.md, cloud storage details to cloud_storage_guide.md, integration pipeline code to use_cases.md, and detailed table schemas to index_tables_guide.md — keep only the core query/download/visualize patterns inline

Add validation checkpoints to the download workflow: verify download count matches query count, check file sizes, and include retry logic for batch operations

Remove the version check/upgrade code block at the top — a single line instruction ('Ensure idc-index ≥0.11.10') suffices; Claude knows how to upgrade packages

Consolidate the repeated version verification reminders (appears 3+ times) into a single best-practice bullet point
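
The validation-checkpoint suggestion above could be sketched as a small post-download audit. This is a hypothetical helper, not part of idc-index; the count comes from the query result and the minimum-size threshold is an assumption:

```python
import os

def verify_download(download_dir: str, expected_count: int,
                    min_bytes: int = 1) -> list[str]:
    """Return a list of problems found after a batch download.

    Checks that the number of downloaded files matches the query
    result count and that no file is suspiciously small (which would
    suggest a truncated transfer).
    """
    problems = []
    files = [
        os.path.join(root, name)
        for root, _, names in os.walk(download_dir)
        for name in names
    ]
    if len(files) != expected_count:
        problems.append(
            f"expected {expected_count} files, found {len(files)}"
        )
    for path in files:
        if os.path.getsize(path) < min_bytes:
            problems.append(f"suspiciously small file: {path}")
    return problems
```

Run it after the batch download returns, passing the row count of the query result as `expected_count`, and retry or re-download only when the returned list is non-empty.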

Conciseness: 1/3. The skill is extremely verbose at 600+ lines. It includes extensive inline content that should live in reference files (full SQL examples, detailed table schemas, integration pipeline code with SimpleITK/numpy, CLI documentation, cloud storage details). Much of this duplicates what the referenced guide files already cover, and Claude doesn't need explanations of what DICOM is or how to stack numpy arrays.

Actionability: 3/3. The skill provides fully executable, copy-paste-ready Python code throughout: SQL queries, download commands, visualization code, CLI examples, and integration patterns. Every code block is concrete, with real function calls, parameters, and expected outputs.

Workflow Clarity: 2/3. The core workflow is stated (query → download → visualize) and the individual steps have clear code, but there are no validation checkpoints for potentially destructive batch download operations. The batch processing section lacks error handling, retry logic, and verification that downloads completed successfully, and the troubleshooting section is separate rather than integrated into the workflows.

Progressive Disclosure: 2/3. The skill has a good Quick Navigation table pointing to 9 reference guides with clear decision triggers, which is excellent. However, the main document contains far too much inline content that belongs in those reference files: full CLI documentation, cloud storage details, integration pipeline code, detailed table schemas, and extensive SQL examples are all inline when they should live in the referenced guides.

Total: 8 / 12. Passed.
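
The retry logic the review finds missing could be as small as a backoff wrapper around each batch. This is a hypothetical sketch, not an idc-index API; it assumes transient failures surface as ordinary exceptions:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on failure.

    Re-raises the last exception once all attempts are exhausted, so
    the caller still sees a hard failure rather than silent data loss.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Wrapping each per-series download call this way, then verifying file counts afterwards, would address both gaps flagged under Workflow Clarity.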

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed

Validation for skill structure

skill_md_line_count: Warning. SKILL.md is long (844 lines); consider splitting into references/ and linking.

Total: 10 / 11. Passed.

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)

