
clinical-reports

Write comprehensive clinical reports including case reports (CARE guidelines), diagnostic reports (radiology/pathology/lab), clinical trial reports (ICH-E3, SAE, CSR), and patient documentation (SOAP, H&P, discharge summaries). Full support with templates, regulatory compliance (HIPAA, FDA, ICH-GCP), and validation tools.

Quality: 51% (Does it follow best practices?)

Impact: 94% (1.06x average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/clinical-reports/SKILL.md

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description with excellent specificity and domain-appropriate trigger terms that clearly carve out a clinical report writing niche. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. Adding trigger guidance would elevate this from a good to an excellent description.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to write, draft, or review clinical reports, medical documentation, case reports, or clinical trial documents.'

Dimension | Reasoning | Score

Specificity

Lists multiple specific concrete actions and document types: case reports (CARE guidelines), diagnostic reports (radiology/pathology/lab), clinical trial reports (ICH-E3, SAE, CSR), patient documentation (SOAP, H&P, discharge summaries), plus templates, regulatory compliance, and validation tools.

3 / 3

Completeness

The 'what' is thoroughly covered with specific report types and features, but there is no explicit 'Use when...' clause or equivalent trigger guidance telling Claude when to select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2.

2 / 3

Trigger Term Quality

Excellent coverage of natural terms a user in clinical/medical settings would use: 'clinical reports', 'case reports', 'SOAP', 'discharge summaries', 'radiology', 'pathology', 'clinical trial', 'HIPAA', 'FDA', 'H&P', 'CSR', 'SAE'. These are highly specific domain terms users would naturally mention.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive with a clear clinical/medical niche. The specific report types (SOAP, H&P, CSR, SAE), regulatory frameworks (HIPAA, FDA, ICH-GCP), and clinical guidelines (CARE, ICH-E3) make it very unlikely to conflict with non-medical writing skills.

3 / 3

Total: 11 / 12 (Passed)

Implementation

20%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads like a comprehensive medical documentation textbook rather than a concise, actionable skill for Claude. It is extremely verbose, explaining well-known medical concepts at length while providing almost no executable guidance, concrete examples of generated output, or usable code. The content that is inline should largely be in the referenced files, and the SKILL.md should be a lean overview with clear action steps and validation workflows.

Suggestions

Reduce the SKILL.md to a concise overview (~100-200 lines) that summarizes report types and links to reference files for detailed structures. Move all the detailed report structures (radiology, pathology, lab, SOAP, H&P, discharge summary, SAE, CSR) into their respective reference files.

Add concrete, executable examples: show actual script invocations with sample inputs and expected outputs (e.g., `python scripts/validate_case_report.py report.md` with sample output showing pass/fail results).

Remove explanations of concepts Claude already knows (what HIPAA is, what SOAP stands for, what radiology reports are, definitions of medical terminology standards) and focus on project-specific conventions and tool usage.

Add explicit validation checkpoints to workflows: e.g., 'Run `python scripts/check_deidentification.py draft.md` and fix any flagged identifiers before proceeding to submission formatting.'
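The skill's scripts are not shown in this review, so purely as an illustration of the kind of checkpoint suggested above, here is a minimal sketch of what a `scripts/check_deidentification.py` could look like. The script name comes from the review itself; its behavior and patterns here are assumptions, and a real checker would need to cover all 18 HIPAA identifier categories, not the handful shown.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a de-identification checkpoint: scan a draft
for common HIPAA identifier patterns and exit non-zero if any remain."""
import re
import sys

# Illustrative patterns only; a real checker must cover all 18 HIPAA
# identifier categories (names, geographic data, dates, MRNs, etc.).
PATTERNS = {
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date of birth": re.compile(r"\bDOB[:\s]+\S+", re.IGNORECASE),
}

def find_identifiers(text: str) -> list[tuple[str, str]]:
    """Return (label, matched text) pairs for every flagged identifier."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits

if __name__ == "__main__":
    if len(sys.argv) > 1:
        draft = open(sys.argv[1]).read()
        flagged = find_identifiers(draft)
        for label, snippet in flagged:
            print(f"FLAGGED [{label}]: {snippet}")
        sys.exit(1 if flagged else 0)
```

Invoked as `python scripts/check_deidentification.py draft.md`, a non-zero exit code would signal that flagged identifiers must be resolved before proceeding to submission formatting.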

Dimension | Reasoning | Score

Conciseness

Extremely verbose at ~1000+ lines. It extensively explains concepts Claude already knows (what HIPAA is, what SOAP notes are, what radiology reports contain, basic medical terminology standards). The content reads like a medical documentation textbook rather than a concise skill instruction, with massive amounts of definitional content that add no value for Claude.

1 / 3

Actionability

Despite referencing scripts like `scripts/validate_case_report.py` and `scripts/check_deidentification.py`, there are no executable code examples, no concrete command invocations with expected outputs, and no actual template content. The single bash command shown (`python scripts/generate_schematic.py`) is for a different skill. The content describes what reports should contain rather than providing actionable instructions for generating them.

1 / 3

Workflow Clarity

Workflows are listed (Case Report Workflow with phases, Diagnostic Report Workflow, Clinical Trial Report Workflow) with reasonable sequencing, but they lack validation checkpoints and feedback loops. The 'Final Checklist' is a good addition but is disconnected from the workflows. No explicit 'validate then proceed' steps or error recovery guidance for document generation.

2 / 3

Progressive Disclosure

References to external files are well-organized (references/, assets/, scripts/ directories) and clearly signaled at the end. However, the SKILL.md itself is a monolithic wall of text that inlines enormous amounts of content that should be in those reference files. The radiology report structure, pathology report structure, HIPAA identifiers list, ICH-E3 structure, etc. should all be in the referenced files rather than duplicated inline.

2 / 3

Total: 6 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

skill_md_line_count

SKILL.md is long (1132 lines); consider splitting into references/ and linking

Warning

metadata_version

'metadata.version' is missing

Warning
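To address this warning, the skill author could add the field to the SKILL.md frontmatter. A hypothetical sketch, assuming the usual skill frontmatter layout; the version number and the exact shape of the `metadata` block are illustrative, not taken from the skill:

```yaml
---
name: clinical-reports
description: >
  Write comprehensive clinical reports including case reports, diagnostic
  reports, clinical trial reports, and patient documentation. Use when the
  user asks to write, draft, or review clinical reports or medical
  documentation.
metadata:
  version: 1.0.0
---
```

This would also be a natural place to add the 'Use when...' clause recommended in the Discovery section.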

Total: 9 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
