
imaging-data-commons

Query and download public cancer imaging data from NCI Imaging Data Commons using idc-index. Use for accessing large-scale radiology (CT, MR, PET) and pathology datasets for AI training or research. No authentication required. Query by metadata, visualize in browser, check licenses.
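The query-and-download pattern the description promises looks roughly like this. This is a hedged sketch: the idc-index API names (`IDCClient`, `sql_query`, `download_from_selection`) and the collection ID are assumptions for illustration, not verified against the package.

```python
# Hypothetical sketch of the access pattern described above.
# The idc-index calls are commented out (they need the package and
# network access); the SQL against the local metadata index is the
# real substance of a typical request.

# from idc_index import index
# client = index.IDCClient()  # no authentication required

# Query the metadata index for CT series in one collection:
query = (
    "SELECT SeriesInstanceUID, collection_id, Modality "
    "FROM index "
    "WHERE Modality = 'CT' AND collection_id = 'nsclc_radiomics'"
)
# df = client.sql_query(query)

# Download the matching series to a local directory:
# client.download_from_selection(
#     seriesInstanceUID=df["SeriesInstanceUID"].tolist(),
#     downloadDir="./idc_data",
# )
```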

84

Quality

81%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the specific tool (idc-index), data source (NCI Imaging Data Commons), concrete actions (query, download, visualize, check licenses), and use cases (AI training, research). It includes rich domain-specific trigger terms and has a clear 'Use for' clause. The description is concise yet comprehensive, making it easy for Claude to select appropriately.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: query, download, access radiology/pathology datasets, query by metadata, visualize in browser, check licenses. Also specifies modalities (CT, MR, PET) and the specific tool (idc-index).

3 / 3

Completeness

Clearly answers 'what' (query and download public cancer imaging data, query by metadata, visualize, check licenses) and 'when' ('Use for accessing large-scale radiology and pathology datasets for AI training or research'). The 'Use for' clause serves as an explicit trigger guidance.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'cancer imaging', 'NCI Imaging Data Commons', 'idc-index', 'radiology', 'CT', 'MR', 'PET', 'pathology', 'AI training', 'research', 'download'. These cover the domain well and match how researchers would phrase requests.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear niche: NCI Imaging Data Commons, idc-index, cancer imaging data. Very unlikely to conflict with other skills given the specific domain, tool name, and data source.

3 / 3

Total: 12 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability with comprehensive, executable code examples and clear workflow sequencing. However, it significantly undermines its own progressive disclosure architecture by inlining extensive content that should live in the reference guides it already points to. The result is a monolithic document that wastes tokens on content Claude doesn't need upfront, despite having a good reference file structure available.

Suggestions

- Move cloud storage details (bucket names, file organization), CLI documentation, and integration pipeline examples (pydicom, SimpleITK) to their respective reference guides; keep only 1-2 sentence summaries with links in the main file
- Trim the Index Tables section to just the table names, granularity, and load method; move the full join column reference table to index_tables_guide.md
- Remove the 'Related Skills' descriptions of what matplotlib/seaborn/plotly do; Claude already knows these tools, so a simple list of skill names is sufficient
- Consolidate the version verification: it appears in three places (critical block at top, installation section, best practices); keep it only in the critical block
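The duplicated version check could collapse into a single guard along these lines. A sketch only: `get_idc_version()` and the `v<N>` release-tag format are assumptions about idc-index, not verified here.

```python
# Minimal single-place version guard, suitable for the skill's
# "critical block". Assumes idc-index exposes
# IDCClient.get_idc_version() returning tags like "v20" (unverified).

def is_supported(idc_version: str, minimum: str = "v18") -> bool:
    """Compare IDC data-release tags of the form 'v<N>'."""
    return int(idc_version.lstrip("v")) >= int(minimum.lstrip("v"))

# from idc_index import index
# client = index.IDCClient()
# assert is_supported(client.get_idc_version()), "IDC index release too old"
```

Keeping the check in one place, as the suggestion says, avoids the three divergent copies the review found.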

Dimension / Reasoning / Score

Conciseness

The skill is extremely verbose at ~600+ lines. It includes extensive inline content that should be in reference files (full table schemas, detailed join patterns, cloud storage bucket details, CLI documentation, integration examples with SimpleITK/pydicom). Much of this duplicates what the reference guides already cover. The 'Related Skills' section explaining what matplotlib and seaborn do is unnecessary for Claude.

1 / 3

Actionability

The skill provides fully executable, copy-paste ready Python code throughout — from querying metadata, to downloading files, to visualization, to building 3D volumes. SQL queries are complete and specific, CLI commands include real options, and code examples use actual collection IDs and field names.

3 / 3

Workflow Clarity

The core workflow is clearly stated upfront (query → download → visualize). The version verification step is prominently placed first with explicit validation. The 'explore filter values first, then query' pattern is well-sequenced. Batch processing includes size estimation guidance. The BigQuery decision tree (check index tables first → only use BQ if needed) provides a clear decision workflow.

3 / 3

Progressive Disclosure

The skill has a well-organized Quick Navigation table pointing to 10+ reference guides with clear 'when to load' triggers, which is excellent. However, the main document itself contains far too much inline content that belongs in those reference files — cloud storage details, CLI documentation, full join tables, integration pipeline examples, and digital pathology references are all inlined despite having dedicated reference guides.

2 / 3

Total: 9 / 12 (Passed)
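The query → estimate → download sequence credited in the Actionability and Workflow Clarity rows can be sketched roughly as follows. The `series_size_MB` column name and the idc-index call names are assumptions, not verified against the package.

```python
# Rough sketch of the batch workflow the review credits:
# query the index, estimate total size, then download if acceptable.

def estimate_batch_gb(series_sizes_mb) -> float:
    """Sum per-series sizes (in MB) and convert to GB before downloading."""
    return sum(series_sizes_mb) / 1024

# df = client.sql_query(
#     "SELECT SeriesInstanceUID, series_size_MB FROM index WHERE Modality = 'PT'"
# )
# if estimate_batch_gb(df["series_size_MB"]) < 50:  # arbitrary example cap
#     client.download_from_selection(
#         seriesInstanceUID=df["SeriesInstanceUID"].tolist(),
#         downloadDir="./idc_data",
#     )

print(round(estimate_batch_gb([512.0, 1024.0, 256.0]), 2))  # 1.75
```

Estimating first is the point: IDC series can be large, so the skill's size-check step keeps batch downloads deliberate rather than accidental.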

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

skill_md_line_count

SKILL.md is long (863 lines); consider splitting into references/ and linking

Warning

Total: 10 / 11 (Passed)

Repository
K-Dense-AI/claude-scientific-skills
Reviewed
