
lay-summary-gen

Converts complex medical abstracts into plain language summaries for.


Quality

30%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/Academic Writing/lay-summary-gen/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is clearly truncated ('for.' at the end), which undermines its completeness and usefulness. While it identifies a reasonably specific domain (medical abstracts to plain language), it lacks a 'Use when...' clause and natural trigger-term variations, and critical information appears to be missing after 'for.'

Suggestions

Complete the truncated sentence and add a 'Use when...' clause, e.g., 'Use when the user asks to simplify, summarize, or explain medical research, clinical studies, or scientific abstracts in plain language.'

Add natural trigger terms users would say, such as 'research paper,' 'clinical study,' 'simplify medical jargon,' 'layman's terms,' 'ELI5 medical,' or 'patient-friendly summary.'

Expand the capability list beyond just 'converts' — mention specific actions like 'extracts key findings, explains medical terminology, highlights clinical significance, and produces patient-friendly summaries.'
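Taken together, the three suggestions above might produce a description along these lines. This is a hypothetical rewrite for illustration only; the exact wording, and which frontmatter keys are permitted, should follow the SKILL.md spec:

```markdown
---
name: lay-summary-gen
description: >
  Converts complex medical abstracts into plain-language summaries:
  extracts key findings, explains medical terminology, and highlights
  clinical significance. Use when the user asks to simplify, summarize,
  or explain medical research, clinical studies, or research papers in
  layman's terms, or wants a patient-friendly or ELI5 summary.
---
```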

Dimension (Score): Reasoning

Specificity (2 / 3): Names the domain (medical abstracts) and a specific action (converts to plain language summaries), but only describes one action and the description appears truncated ('for.' suggests incomplete text).

Completeness (1 / 3): Provides a partial 'what' (converts medical abstracts to plain language) but has no 'when' clause or explicit trigger guidance. The description also appears truncated ('for.'), making it incomplete. Per rubric, missing 'Use when...' caps completeness at 2, and the truncation further reduces it.

Trigger Term Quality (2 / 3): Includes relevant keywords like 'medical abstracts' and 'plain language summaries,' but misses common variations users might say such as 'research paper,' 'clinical study,' 'ELI5,' 'simplify,' or 'layman's terms.'

Distinctiveness / Conflict Risk (2 / 3): The medical abstract domain is somewhat specific, but 'plain language summaries' could overlap with general summarization or simplification skills. The truncated ending also reduces clarity of its niche.

Total: 7 / 12 (Passed)

Implementation

27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily template-driven boilerplate with very little content specific to medical abstract summarization. The core task—converting complex medical language to plain language—is barely addressed; instead, the skill delegates everything to an opaque script. The document is bloated with generic sections (Risk Assessment, Security Checklist, Lifecycle Status) that consume tokens without teaching Claude anything actionable about lay summary generation.

Suggestions

Remove all boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) and circular cross-references ('See ## X above') to cut the document by at least 50%.

Add a concrete before/after example showing a medical abstract input and the expected plain-language summary output, demonstrating jargon replacement and reading level targeting.

Replace the generic workflow steps with medical-summarization-specific steps: e.g., identify jargon terms, map to plain equivalents, restructure for narrative flow, verify reading level.

Consolidate the scattered sections (Features, Input Parameters, Output Format) into a single concise reference section near the top, and remove duplicate information.
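As a sketch of the before/after example suggested above (the abstract and summary text here are invented for illustration, not drawn from the skill), a single compact pair can demonstrate both jargon replacement and reading-level targeting:

```markdown
**Before (abstract):** Patients receiving the intervention exhibited a
statistically significant reduction in the incidence of myocardial
infarction and all-cause mortality relative to the placebo cohort.

**After (lay summary):** People who got the treatment had noticeably
fewer heart attacks, and fewer of them died from any cause, compared
with people who got a dummy pill.
```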

Dimension (Score): Reasoning

Conciseness (1 / 3): Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Features above', 'See ## Prerequisites above', 'See ## Workflow above'). Contains boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria) that add no actionable value for Claude. Generic content like 'Successfully executes main functionality' and 'Standard input → Expected output' wastes tokens without providing real guidance.

Actionability (2 / 3): The input parameters table, output JSON schema, and bash commands provide some concrete guidance. However, the actual medical summarization logic is entirely absent: there is no example of transforming a medical abstract into a lay summary, no demonstration of jargon replacement, and 'scripts/main.py' is referenced but its behavior is opaque. The skill tells Claude to run a script rather than teaching it how to perform the task.

Workflow Clarity (2 / 3): The Workflow section provides a numbered sequence and the Error Handling section mentions fallback paths, which is good. However, the steps are generic ('Confirm the user objective', 'Validate that the request matches') rather than specific to medical abstract summarization. There are no validation checkpoints for the actual content quality (e.g., verifying reading level, checking jargon elimination). The workflow reads like a template applied to any skill.

Progressive Disclosure (1 / 3): Circular self-references ('See ## Features above', 'See ## Prerequisites above') are confusing and add no value. The document is a monolithic wall of boilerplate sections with no meaningful external references. References to the 'references/' directory are vague, with no specifics about what is there. Content is poorly organized, with related information scattered (e.g., Features listed after Workflow, Prerequisites at the bottom).

Total: 6 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria (Result): Description

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository
aipoch/medical-research-skills
Reviewed

