
medical-scribe-dictation

Convert physician verbal dictation into structured SOAP notes. Trigger.

41

Quality

27%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/medical-scribe-dictation/SKILL.md"

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (medical dictation to SOAP notes) but is critically incomplete. The word 'Trigger.' appears to be a placeholder or truncated text rather than a functional trigger clause, leaving the 'when to use' guidance entirely absent. The description also lacks specificity about the concrete actions performed beyond basic conversion.

Suggestions

Replace 'Trigger.' with an explicit 'Use when...' clause, e.g., 'Use when the user provides physician dictation, voice transcripts, or clinical encounter notes that need to be formatted into structured SOAP notes.'

Add natural trigger terms users would say, such as 'medical notes', 'clinical documentation', 'transcription', 'patient encounter', 'progress notes', 'subjective objective assessment plan'.

List more specific actions beyond 'convert', e.g., 'Parses dictation into Subjective, Objective, Assessment, and Plan sections, extracts diagnoses and medications, and formats output as structured clinical documentation.'
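Taken together, the three suggestions above amount to a rewritten frontmatter description. A minimal sketch of what that might look like (the `name`/`description` field layout follows the common SKILL.md convention and is an assumption, not the skill's actual file):

```yaml
---
name: medical-scribe-dictation
description: >
  Convert physician verbal dictation into structured SOAP notes. Parses
  dictation into Subjective, Objective, Assessment, and Plan sections,
  extracts diagnoses and medications, and formats output as structured
  clinical documentation. Use when the user provides physician dictation,
  voice transcripts, or clinical encounter notes (for example, "medical
  notes", "clinical documentation", "transcription", "patient encounter",
  "progress notes").
---
```

Note how the trigger terms from the second suggestion appear verbatim, so an agent matching on user phrasing has concrete strings to hit.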

Dimension / Reasoning / Score

Specificity

Names the domain (physician dictation, SOAP notes) and one action (convert), but does not list multiple specific concrete actions like parsing sections, formatting assessments, or handling medical terminology.

2 / 3

Completeness

The 'what' is present (convert dictation to SOAP notes), but the 'when' clause is essentially absent — 'Trigger.' is a fragment that provides no meaningful guidance on when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause should cap completeness at 2, and this is even weaker than implied triggers.

1 / 3

Trigger Term Quality

Includes relevant terms like 'dictation', 'SOAP notes', and 'physician', but misses common variations users might say such as 'medical notes', 'clinical documentation', 'transcription', 'patient encounter', or 'progress notes'.

2 / 3

Distinctiveness / Conflict Risk

The mention of SOAP notes and physician dictation provides a reasonably specific niche, but the brevity and lack of explicit file types or workflow context could cause overlap with general medical documentation or transcription skills.

2 / 3

Total: 7 / 12

Passed

Implementation

22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate content that is not specific to medical scribe dictation. While it includes some useful domain-specific elements (SOAP output structure, medical terminology handling, specialty support), the majority of the content is templated filler (security checklists, risk assessments, lifecycle status, generic workflow steps) that wastes tokens without adding actionable guidance. The self-referential cross-links to sections that appear later in the document create confusion rather than clarity.

Suggestions

Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Input Validation template, Response Template) and focus on medical-scribe-specific content.

Replace the generic workflow with a concrete, medical-scribe-specific workflow: e.g., 1) Parse transcription → 2) Identify SOAP sections via NLP → 3) Normalize medical terms → 4) Validate completeness → 5) Flag ambiguities for physician review.

Remove the self-referential 'See ## X above for related details' lines and consolidate duplicated content (e.g., merge 'Example Usage' and 'Usage' into one section).

Add a concrete input/output example showing a sample dictation text and the expected SOAP note output, which would dramatically improve actionability.
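The workflow and input/output suggestions above can be sketched together. The following is a minimal, hypothetical Python example, not the skill's actual implementation; the cue-phrase lexicon and the ambiguity check are illustrative assumptions:

```python
import re

# Cue phrases that commonly open each SOAP section in dictation.
# Illustrative list only; a real skill would use a richer lexicon or NLP.
SECTION_CUES = {
    "subjective": r"(?:subjective|patient reports|chief complaint)",
    "objective": r"(?:objective|on exam|vitals)",
    "assessment": r"(?:assessment|impression)",
    "plan": r"(?:plan|follow[- ]up)",
}

def parse_soap(dictation: str) -> dict:
    """Split a dictation transcript into SOAP sections and flag gaps."""
    text = dictation.lower()
    # Step 1-2: locate where each section's cue first appears.
    positions = {}
    for section, cue in SECTION_CUES.items():
        m = re.search(cue, text)
        if m:
            positions[section] = m.start()
    # Step 3: slice the transcript between consecutive cue positions.
    ordered = sorted(positions.items(), key=lambda kv: kv[1])
    note = {s: "" for s in SECTION_CUES}
    for i, (section, start) in enumerate(ordered):
        end = ordered[i + 1][1] if i + 1 < len(ordered) else len(dictation)
        note[section] = dictation[start:end].strip()
    # Step 4-5: validate completeness; empty sections are flagged
    # for physician review rather than silently omitted.
    note["flags"] = [s for s in SECTION_CUES if not note[s]]
    return note

dictation = (
    "Patient reports three days of productive cough. "
    "On exam, temperature 38.1 C, lungs with scattered rhonchi. "
    "Assessment: likely acute bronchitis. "
    "Plan: supportive care, follow-up in one week."
)
note = parse_soap(dictation)
```

The cue-based slicing is deliberately naive; the point is the shape of the pipeline (parse, sectionize, validate, flag), which is exactly what the generic workflow in the reviewed skill fails to specify.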

Dimension / Reasoning / Score

Conciseness

Extremely verbose and repetitive. Contains numerous sections that add no value (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria are boilerplate). Multiple self-referential loops like 'See `## Features` above for related details' and 'See `## Usage` above for related details' that reference sections appearing later. The 'When to Use' section redundantly restates the description and includes a nonsensical bullet about 'academic writing tasks.' Audit-Ready Commands repeats `--help` three times.

1 / 3

Actionability

Provides some concrete code examples (Python API usage, CLI commands) and a clear SOAP output template, but much of the workflow guidance is generic and abstract ('Confirm the user objective,' 'Validate that the request matches the documented scope'). The main.py script is referenced but it's unclear if it actually exists or works. CLI examples use `--input` with inline text which may not be real functionality.

2 / 3

Workflow Clarity

The workflow section is entirely generic boilerplate ('Confirm the user objective, required inputs, and non-negotiable constraints') with no medical-scribe-specific steps. There are no validation checkpoints specific to clinical note generation, no feedback loops for verifying medical terminology accuracy, and no concrete sequence for the actual dictation-to-SOAP conversion process. The 'Example run plan' is also generic.

1 / 3

Progressive Disclosure

References to `references/` directory files (soap-templates.md, medical-abbreviations.json, etc.) provide one-level-deep navigation, which is good. However, the main document itself is a monolithic wall of text with many sections that should be consolidated or removed. The self-referential 'See ## X above' links are confusing and suggest poor organization.

2 / 3

Total: 6 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Repository: aipoch/medical-research-skills (Reviewed)
