Query and download public cancer imaging data from NCI Imaging Data Commons using idc-index. Use for accessing large-scale radiology (CT, MR, PET) and pathology datasets for AI training or research. No authentication required. Query by metadata, visualize in browser, check licenses.
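A minimal sketch of the workflow this description promises, based on the public idc-index API (`IDCClient`, `sql_query`, `download_from_selection`); exact column names and method signatures should be checked against the installed version, and the import is guarded so the sketch is safe to read in environments without the package.

```python
# Hedged sketch of a typical idc-index session: query a small selection,
# then download it. Column names and signatures are assumptions based on
# the idc-index documentation.
try:
    from idc_index import index
except ImportError:  # idc-index may not be installed in this environment
    index = None

def build_query(modality: str, limit: int = 5) -> str:
    # Start small: LIMIT keeps the first query cheap before any bulk download.
    return (
        "SELECT SeriesInstanceUID, Modality, collection_id "
        f"FROM index WHERE Modality = '{modality}' LIMIT {limit}"
    )

RUN_DOWNLOAD = False  # flip to True to actually query and download data

if RUN_DOWNLOAD and index is not None:
    client = index.IDCClient()
    df = client.sql_query(build_query("CT"))
    client.download_from_selection(
        seriesInstanceUID=df["SeriesInstanceUID"].tolist(),
        downloadDir="./idc_data",
    )
```

No authentication is needed at any step, which is why the guard only checks for the package, not for credentials.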
92%

Does it follow best practices?

- Impact: — (no eval scenarios have been run)
- Advisory: suggest reviewing before use
Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly identifies a specific domain (cancer imaging data from NCI), names the tool (idc-index), lists concrete actions (query, download, visualize, check licenses), and specifies when to use it. The description is concise yet comprehensive, with excellent trigger terms covering modalities, use cases, and the specific data source.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: query, download, access radiology/pathology datasets, query by metadata, visualize in browser, check licenses. Also specifies modalities (CT, MR, PET) and the specific tool (idc-index). | 3 / 3 |
| Completeness | Clearly answers 'what' (query and download public cancer imaging data, query by metadata, visualize, check licenses) and 'when' ('Use for accessing large-scale radiology and pathology datasets for AI training or research'). The 'Use for' clause serves as explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'cancer imaging', 'NCI Imaging Data Commons', 'idc-index', 'radiology', 'CT', 'MR', 'PET', 'pathology', 'AI training', 'research', 'download'. These cover the domain well and match how researchers would phrase requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche: NCI Imaging Data Commons, idc-index, cancer imaging data. Very unlikely to conflict with other skills given the specific domain, tool name, and data source. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a high-quality, comprehensive skill that provides excellent actionable guidance for querying and downloading cancer imaging data from IDC. Its greatest strengths are the executable code examples throughout, clear progressive disclosure with well-organized reference guides, and thorough coverage of the API surface. The main weakness is moderate verbosity — some sections explain concepts Claude already knows (related tool descriptions) and the document could be tightened by ~15-20% without losing information.
Suggestions
- Trim the 'Related Skills' section: Claude doesn't need explanations of what matplotlib, seaborn, or plotly are; just list the skill names and when to use them in one line each.
- Condense the version check code block at the top; the string-comparison approach and subprocess upgrade logic could be simplified to a brief note with a one-liner pip command.
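The second suggestion could be realized along these lines; this is a hedged sketch, with an illustrative `MIN_VERSION` that is not taken from the original skill, and a deliberately naive version parse that only handles plain `x.y.z` releases.

```python
# Condensed version check: compare the installed idc-index version against
# a minimum and point at the pip one-liner instead of shelling out.
from importlib.metadata import version, PackageNotFoundError

MIN_VERSION = (0, 5, 0)  # illustrative threshold, not from the original skill

def parse(v: str) -> tuple:
    # naive x.y.z parse; enough for plain release version strings
    return tuple(int(p) for p in v.split(".")[:3])

try:
    up_to_date = parse(version("idc-index")) >= MIN_VERSION
except PackageNotFoundError:
    up_to_date = False

if not up_to_date:
    print("Run: pip install --upgrade idc-index")
```

Tuple comparison replaces the fragile string comparison, and the subprocess upgrade logic collapses into the printed one-liner.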
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long (~600+ lines) with some unnecessary verbosity. The version check code block at the top is overly detailed, the 'Related Skills' section explains what matplotlib/seaborn/plotly are (Claude knows this), and some sections like 'Downloaded file names' and cloud storage details could be more concise. However, most content is genuinely informative and domain-specific. | 2 / 3 |
| Actionability | Excellent actionability throughout: nearly every section includes complete, executable Python code with real SQL queries, concrete API calls, and copy-paste ready examples. The code covers querying, downloading, visualization, license checking, citations, batch processing, and DICOM integration with pydicom and SimpleITK. | 3 / 3 |
| Workflow Clarity | The core workflow is clearly stated upfront (query → download → visualize). Multi-step processes like batch downloading include explicit size estimation guidance, the 'Best Practices' section emphasizes starting small with LIMIT, and the troubleshooting section covers common failure modes with solutions. The version verification step at the top serves as a validation checkpoint. | 3 / 3 |
| Progressive Disclosure | Excellent progressive disclosure, with a clear Quick Navigation table mapping 10 reference guides to specific decision triggers ('When to Load'). Core capabilities are inline while advanced topics (BigQuery, cloud storage, DICOMweb, pathology, Parquet) are properly deferred to reference files. References are one level deep and clearly signaled. | 3 / 3 |
| Total | | 11 / 12 Passed |
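The size-estimation guidance praised under Workflow Clarity amounts to summing per-series sizes before committing to a batch download. A hedged sketch, assuming a `series_size_MB` field like the one in the idc-index index schema (the selection here is a hypothetical stand-in, not real query output):

```python
# Estimate total download size for a selection of series before fetching.
# The series_size_MB field name is an assumption about the idc-index schema;
# the selection below is illustrative data, not a real query result.
selection = [
    {"SeriesInstanceUID": "1.2.3.4", "series_size_MB": 512.0},
    {"SeriesInstanceUID": "5.6.7.8", "series_size_MB": 1024.0},
]

total_gb = sum(s["series_size_MB"] for s in selection) / 1024
print(f"Estimated download: {total_gb:.1f} GB across {len(selection)} series")
```

Checking this estimate against available disk space before the download starts is what turns the guidance into an explicit checkpoint.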
Validation: 90% (10 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (863 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 Passed |