
automated-soap-note-generator

1. Confirm the user objective, required inputs, and non-negotiable constraints before doing detailed work.
2. Validate that the request matches the documented scope and stop early if the task would require unsupported as.

30

Quality

13%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/automated-soap-note-generator/SKILL.md"

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads like generic process instructions rather than a skill description. It lacks any concrete actions, domain specificity, trigger terms, or 'use when' guidance. The text also appears truncated ('unsupported as' cuts off), further reducing its utility for skill selection.

Suggestions

Identify and state the specific domain or task this skill handles (e.g., 'Validates API request parameters' or 'Scopes data migration tasks') instead of generic process language.

Add an explicit 'Use when...' clause with natural trigger terms that describe scenarios where this skill should be selected over others.

Replace abstract process steps with concrete, specific actions the skill performs, and ensure the description is complete (the current text appears truncated at 'unsupported as').
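Applying these suggestions, an improved frontmatter description might look like the following sketch. The wording and trigger phrases are illustrative, drawn from the skill's medical-NLP subject matter rather than from the skill file itself:

```yaml
---
name: automated-soap-note-generator
description: >
  Generates structured SOAP (Subjective, Objective, Assessment, Plan) notes
  from unstructured clinical text using medical NER and negation detection.
  Use when asked to convert clinical narratives, visit transcripts, or
  patient encounter summaries into SOAP-format documentation.
---
```

A description in this shape answers both "what does this do" and "when should it be used", and carries natural trigger terms (SOAP note, clinical text, patient encounter) a user would actually say.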

Dimension / Reasoning / Score

Specificity

The description contains no concrete actions or domain-specific capabilities. Phrases like 'confirm the user objective' and 'validate that the request matches the documented scope' are abstract process steps, not specific skill actions.

1 / 3

Completeness

The description fails to answer 'what does this do' in any meaningful way and completely lacks a 'when should Claude use it' clause. It reads like internal process instructions rather than a skill description.

1 / 3

Trigger Term Quality

There are no natural keywords a user would say. Terms like 'non-negotiable constraints', 'documented scope', and 'unsupported as' (which appears truncated) are not phrases users would use when seeking help with a task.

1 / 3

Distinctiveness Conflict Risk

The description is extremely generic—confirming objectives and validating scope could apply to virtually any skill. It provides no domain, file type, or task-specific information to distinguish it from other skills.

1 / 3

Total: 4 / 12

Passed

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is extremely verbose and poorly organized, containing extensive explanatory content about medical NLP concepts that Claude already understands. While it provides some concrete CLI commands and code examples, the massive inline content (entity tables, classification rules, temporal extraction details, negation detection categories) should be split into reference files. The skill would benefit enormously from being reduced to ~50-80 lines with clear pointers to detailed reference materials.

Suggestions

Reduce the main SKILL.md to a concise overview (~50-80 lines) covering quick start, basic CLI usage, and workflow steps, moving detailed NER tables, classification rules, temporal extraction, and negation detection into separate files in references/.

Remove all explanatory content about what SOAP notes are, what NER does, and basic medical documentation concepts—Claude already knows these things.

Add explicit validation checkpoints in the workflow (e.g., 'Verify extracted entities before classification', 'Review SOAP classification accuracy before generating final output') given the medical/high-risk nature of the tool.

Fix broken self-references ('See ## Usage above', 'See ## Workflow above') and consolidate the duplicate parameter tables into a single authoritative location.
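Taken together, these suggestions point toward a much shorter SKILL.md that leans on progressive disclosure. A possible skeleton follows; the section names, CLI flags, and reference file paths are illustrative, not taken from the skill:

```markdown
# automated-soap-note-generator

## Quick start
    python scripts/main.py --input note.txt --format soap

## Workflow
1. Extract entities with medical NER.
2. Checkpoint: verify extracted entities before classification.
3. Classify entities into SOAP sections.
4. Checkpoint: review SOAP classification accuracy before generating output.
5. Generate the final SOAP note.

## References
- references/ner-entities.md (entity type tables)
- references/classification-rules.md (SOAP classification rules)
- references/temporal-extraction.md
- references/negation-detection.md
```

Keeping only the quick start and workflow inline, with validation checkpoints between the high-risk steps, addresses the conciseness, progressive-disclosure, and workflow-clarity findings at once.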

Dimension / Reasoning / Score

Conciseness

Extremely verbose at ~350+ lines. Contains massive amounts of explanatory content Claude already knows (what SOAP notes are, what NER is, what negation detection is, basic medical documentation concepts). Redundant sections (parameters listed twice, workflow described multiple times), self-referential links to non-existent sections ('See ## Usage above'), and extensive tables explaining basic medical concepts that don't add actionable value.

1 / 3

Actionability

Provides some concrete commands (python scripts/main.py with flags) and code examples, but many code examples are illustrative rather than executable (importing from scripts.soap_generator which may not exist as shown). The audit-ready commands section is good, but much of the 'code' is commented-out expected output rather than runnable code. The relationship between the API examples and the CLI interface is unclear.

2 / 3

Workflow Clarity

The workflow section (steps 1-5) provides a reasonable sequence with a fallback path, and the example run plan in Usage is clear. However, there are no explicit validation checkpoints between steps (e.g., no 'verify entities were correctly extracted before classification'). For a medical documentation tool where errors are high-risk, the lack of intermediate validation steps is a significant gap. The error handling section partially compensates but is separate from the workflow.

2 / 3

Progressive Disclosure

This is a monolithic wall of text with no references to external files despite mentioning 'references/' directory. All content—from basic overview to detailed NER entity tables to temporal extraction to output formats—is inlined in a single massive document. The 'See ## Usage above' and 'See ## Workflow above' self-references are broken/confusing. Content like the detailed entity type tables, classification rules, and format comparisons should be in separate reference files.

1 / 3

Total: 6 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
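Per the check's own guidance, the usual remedy is to nest nonstandard keys under metadata rather than leaving them at the top level. The report does not say which keys triggered the warning, so the key names below are hypothetical:

```yaml
---
name: automated-soap-note-generator
description: Generates SOAP notes from clinical text.
# Hypothetical unknown keys, moved from the top level under metadata:
metadata:
  author: aipoch
  version: "1.0"
---
```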

Total: 10 / 11

Passed

Repository: aipoch/medical-research-skills (Reviewed)

