
unstructured-medical-text-miner

Mine unstructured clinical text from MIMIC-IV to extract diagnostic logic and treatment details

Score: 50

Quality: 31% (does it follow best practices?)

Impact: 76%, 2.00x (average score across 3 eval scenarios)

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Data analysis/unstructured-medical-text-miner/SKILL.md"

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear, specialized domain (MIMIC-IV clinical text mining) which provides good distinctiveness, but suffers from incomplete guidance. It lacks explicit trigger conditions telling Claude when to use this skill, and could benefit from more specific action verbs and natural user terminology like 'medical records' or 'patient notes'.

Suggestions

- Add a 'Use when...' clause with explicit triggers like 'Use when analyzing MIMIC-IV data, extracting information from clinical notes, or when the user mentions medical records, patient notes, or clinical NLP'
- Include common user terminology variations such as 'medical records', 'patient notes', 'EHR data', 'clinical NLP', or 'healthcare text mining'
- Expand specific actions beyond 'extract' to include concrete outputs like 'identify diagnoses, extract medication dosages, map to ICD codes, summarize treatment timelines'
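Taken together, these suggestions might produce frontmatter along the following lines. This is a sketch only: the description text is illustrative, and the exact field set depends on the skill spec.

```yaml
---
name: unstructured-medical-text-miner
description: >
  Mine unstructured clinical text from MIMIC-IV to identify diagnoses,
  extract medication dosages, map findings to ICD codes, and summarize
  treatment timelines. Use when analyzing MIMIC-IV data, extracting
  information from clinical notes, or when the user mentions medical
  records, patient notes, EHR data, clinical NLP, or healthcare text mining.
---
```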

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (clinical text, MIMIC-IV) and some actions (mine, extract diagnostic logic, treatment details), but lacks comprehensive concrete actions like specific extraction methods or output formats. | 2 / 3 |
| Completeness | Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing explicit trigger guidance caps this at 2, but the 'what' is also only partial. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'MIMIC-IV', 'clinical text', 'diagnostic', and 'treatment', but misses common variations users might say like 'medical records', 'patient notes', 'EHR', 'clinical notes', or 'NLP'. | 2 / 3 |
| Distinctiveness Conflict Risk | MIMIC-IV is a very specific clinical database, and the combination of 'unstructured clinical text' with 'diagnostic logic' creates a clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | | 8 / 12 |

Passed

Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with boilerplate template sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that provide no actionable guidance for Claude. While it includes some concrete code examples and output schemas, it lacks a clear workflow with validation steps for what should be a complex multi-step medical text mining process. The content reads more like product documentation than an actionable skill.

Suggestions

- Remove all boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that don't provide actionable guidance
- Add a clear step-by-step workflow with validation checkpoints, especially for verifying extraction quality on medical data
- Consolidate the multiple output JSON examples into a single reference file and link to it
- Verify and clarify the actual module/script paths - the import path 'skills.unstructured_medical_text_miner.scripts.main' suggests a specific project structure that should be documented or simplified
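To illustrate what a workflow with validation checkpoints might look like, here is a minimal sketch. The function names and regex are hypothetical and use only the standard library, not the skill's actual scripts; a real pipeline would extract far more than dosage mentions.

```python
import re

# Step 1: extract candidate medication-dose mentions from a clinical note.
# (Hypothetical pattern for illustration; real clinical NLP needs much more.)
DOSE_PATTERN = re.compile(r"\b([A-Za-z]+)\s+(\d+(?:\.\d+)?)\s*(mg|mcg|g|mL)\b")

def extract_doses(note: str) -> list[dict]:
    return [
        {"drug": m.group(1).lower(), "dose": float(m.group(2)), "unit": m.group(3)}
        for m in DOSE_PATTERN.finditer(note)
    ]

# Step 2: validation checkpoint -- drop records that fail basic sanity
# checks before they reach downstream analysis, where errors are critical.
def validate(records: list[dict]) -> list[dict]:
    valid = []
    for r in records:
        if r["dose"] <= 0:
            continue  # non-physical dose
        if r["unit"] == "mg" and r["dose"] > 10000:
            continue  # implausible milligram dose; route to manual review
        valid.append(r)
    return valid

# Step 3: run the pipeline end to end on a sample note.
note = "Patient started on metoprolol 25 mg twice daily; aspirin 81 mg daily."
records = validate(extract_doses(note))
```

The point of the sketch is the shape, not the regex: each stage produces a checkable intermediate, so extraction quality can be verified before results are reported.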

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose with extensive boilerplate sections (Risk Assessment, Security Checklist, Evaluation Criteria, Lifecycle Status) that add no instructional value. Explains obvious concepts and includes template-like content that wastes tokens. | 1 / 3 |
| Actionability | Provides code examples and CLI commands that appear executable, but the main script path references a non-standard module structure. The code shows API usage but lacks verification that the library actually exists or works as described. | 2 / 3 |
| Workflow Clarity | No clear multi-step workflow with validation checkpoints. The usage section shows isolated code snippets but doesn't guide through a complete process. Missing validation steps for a complex medical data extraction pipeline where errors could be critical. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections but everything is inline in one massive file. References to external files (requirements.txt, config.yaml) are mentioned but the skill itself is monolithic. Could benefit from splitting detailed output schemas and configuration into separate reference files. | 2 / 3 |
| Total | | 6 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 |

Passed
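Per the warning message, the usual fix is to nest non-standard keys under a metadata mapping rather than leaving them at the top level of the frontmatter. A sketch, with illustrative key names (the actual offending keys are not shown in the report):

```yaml
---
name: unstructured-medical-text-miner
description: Mine unstructured clinical text from MIMIC-IV ...
# Hypothetical unknown top-level keys that would trigger the warning:
#   risk_level: high
#   lifecycle: beta
# Moved under metadata instead:
metadata:
  risk_level: high
  lifecycle: beta
---
```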

Repository: aipoch/medical-research-skills (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.