
referral-letter-generator

Generate medical referral letters with patient summary and reason for referral.

46

- Quality: 33% (Does it follow best practices?)
- Impact: Pending (No eval scenarios have been run)
- Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/referral-letter-generator/SKILL.md"

Quality

Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear and distinct niche (medical referral letters) but is incomplete because it lacks explicit trigger guidance ('Use when...'). It offers moderate specificity with some relevant keywords, but would benefit from a more comprehensive list of actions and from the natural trigger terms users are likely to employ.

Suggestions

- Add a 'Use when...' clause with trigger terms like 'referral letter', 'specialist referral', 'GP letter', 'consultation request', or 'refer a patient'.
- Expand the list of specific actions, e.g., 'Generate medical referral letters including patient demographics, clinical history, current medications, reason for referral, and urgency level'.
- Include common keyword variations such as 'doctor referral', 'specialist letter', 'clinical correspondence', or 'consultation letter' to improve trigger term coverage.
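Putting these suggestions together, a revised frontmatter description might look like the following sketch. The frontmatter keys shown are assumed from common SKILL.md conventions, and the wording is illustrative rather than the skill's actual metadata:

```yaml
---
name: referral-letter-generator
description: >
  Generate medical referral letters including patient demographics, clinical
  history, current medications, reason for referral, and urgency level.
  Use when the user asks for a referral letter, specialist referral, GP letter,
  consultation request, or to refer a patient.
---
```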

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (medical referral letters) and some actions (generate, patient summary, reason for referral), but doesn't list comprehensive specific actions like formatting, template selection, or inclusion of medical history details. | 2 / 3 |
| Completeness | Describes what the skill does (generate medical referral letters) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when...' caps completeness at 2, and the 'what' is also only moderately detailed, placing this at 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'medical referral letters', 'patient summary', and 'reason for referral' which users might naturally say, but misses common variations like 'doctor referral', 'specialist referral', 'clinical letter', 'GP letter', or 'consultation request'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Medical referral letters are a clearly distinct niche that is unlikely to conflict with other skills. The combination of 'medical', 'referral', and 'letters' creates a specific enough domain that accidental triggering is unlikely. | 3 / 3 |
| Total | | 8 / 12 |

Passed

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is excessively verbose and poorly organized, with significant redundancy across sections. While it provides some concrete code examples and a useful parameter table, the majority of the content is generic boilerplate that doesn't specifically help Claude generate medical referral letters. The document would benefit enormously from consolidation, removing template boilerplate, and focusing on the actual domain-specific guidance needed.

Suggestions

- Consolidate redundant sections: merge 'Example Usage' and 'Usage', merge the two workflow sections, and remove the generic 'When to Use' section that just restates the description.
- Remove boilerplate sections that don't add value for Claude (Risk Assessment table, Security Checklist, Lifecycle Status, Evaluation Criteria) or move them to a separate metadata file.
- Add a concrete example of a complete generated referral letter output so Claude knows what the target artifact looks like.
- Reorganize so Overview comes first, followed by Usage with concrete examples, then Implementation Details; currently the document structure is illogical, with Overview appearing midway through.
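To illustrate the suggested target artifact, a generated referral letter might look like the sketch below. All patient details, names, and the exact layout are hypothetical and shown only to indicate the kind of output example the skill should include:

```text
Dr. A. Specialist
Cardiology Department

Re: Jane Doe, DOB 01/02/1960

Dear Dr. Specialist,

I would be grateful if you could review this patient, who presents with a
three-month history of exertional chest pain.

Relevant history: hypertension, type 2 diabetes.
Current medications: ramipril 5 mg OD, metformin 500 mg BD.
Reason for referral: assessment for suspected stable angina.
Urgency: routine.

Yours sincerely,
Dr. B. Referrer, GP
```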

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Extremely verbose and repetitive. Multiple sections cover the same ground (e.g., 'When to Use' repeats the description, 'Key Features' restates obvious things, 'Example Usage' and 'Usage' are separate sections with overlapping content). Cross-references like 'See ## Prerequisites above' and 'See ## Workflow above' point to sections that appear later, adding confusion. Boilerplate sections like Risk Assessment, Security Checklist, Lifecycle Status, and Evaluation Criteria add significant token cost with minimal actionable value for Claude. | 1 / 3 |
| Actionability | There are concrete code examples (CLI usage, Python API call, JSON input example) and a parameter table, which is helpful. However, much of the guidance is generic and procedural rather than specific to medical referral letter generation. The actual letter generation logic is delegated entirely to scripts/main.py without showing what the output looks like or how to customize the letter content. | 2 / 3 |
| Workflow Clarity | There are numbered workflow steps in multiple places (Example Usage run plan, Workflow section), but they are generic and lack specific validation checkpoints for the medical referral domain. There is no explicit validation of the generated letter content (e.g., checking that required medical fields are present, verifying the output format). The error handling section mentions fallbacks but doesn't provide concrete recovery steps. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with numerous sections that could be consolidated or split into separate files. There are references to a 'references/' folder and 'assets/' but no clear navigation structure. The document has redundant sections (multiple workflow descriptions, multiple usage sections) that make it hard to navigate. Content is poorly organized, with sections appearing in illogical order (Overview appears after Implementation Details and Workflow). | 1 / 3 |
| Total | | 6 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
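This warning means the SKILL.md frontmatter carries keys outside the spec. A sketch of the fix is below; the offending keys are not named in this report, so `author` and `version` here are purely hypothetical stand-ins:

```yaml
---
name: referral-letter-generator
description: Generate medical referral letters with patient summary and reason for referral.
# Hypothetical nonstandard keys that would trigger the warning:
# author: aipoch
# version: 1.0.0
# Fix: delete them from frontmatter, or move them to a separate metadata file.
---
```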

Total: 10 / 11 Passed

Repository: aipoch/medical-research-skills (Reviewed)
