
biomed-outline-generator

Generates structured biomedical outlines for review articles, discussion sections, and thesis proposals. Use when a user provides biomedical keywords, results/discussion text, or a proposal title plus background and needs a directly usable academic writing scaffold.

84

Quality: 81%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly defines a specific niche (biomedical academic outline generation), lists concrete output types, and provides explicit trigger conditions including the types of inputs a user would provide. It uses proper third-person voice and is concise without being vague.

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: 'structured biomedical outlines for review articles, discussion sections, and thesis proposals.' These are distinct, well-defined output types rather than vague language.

3 / 3

Completeness

Clearly answers both what ('Generates structured biomedical outlines for review articles, discussion sections, and thesis proposals') and when ('Use when a user provides biomedical keywords, results/discussion text, or a proposal title plus background and needs a directly usable academic writing scaffold').

3 / 3

Trigger Term Quality

Includes natural keywords users would say: 'biomedical keywords', 'review articles', 'discussion sections', 'thesis proposals', 'proposal title', 'background', 'academic writing scaffold'. These cover the domain well and match how researchers would phrase requests.

3 / 3

Distinctiveness / Conflict Risk

Highly specific niche combining biomedical domain + outline generation + specific academic document types. Unlikely to conflict with general writing skills or non-biomedical academic skills due to the precise scope and input/output descriptions.

3 / 3

Total: 12 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a solid structural framework for biomedical outline generation, with clear type classification rules, output contracts, and a well-sequenced workflow with validation steps. Its main weaknesses are moderate verbosity (some redundancy between sections and restating things Claude already knows, such as academic-integrity basics), the lack of a concrete output example showing the expected outline format, and keeping all content inline rather than using progressive disclosure for the detailed contracts.

Suggestions

Add at least one complete output example showing a finished outline for one type (e.g., Type I) so Claude can see the exact expected format with numeric hierarchy and markdown headings; a hypothetical sketch follows this list of suggestions.

Consolidate the 'When to Use' and 'Type Recognition Rules' sections, which substantially overlap, into a single type-detection reference.

Remove or significantly trim guidance Claude already knows (e.g., 'do not fabricate citations' is a core Claude behavior; 'The request is clearly non-biomedical' as a when-not-to-use is self-evident from the skill's purpose).
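
As an illustration of the first suggestion, here is a hypothetical sketch of what an embedded output example could look like. It assumes Type I corresponds to the review-article outline; the headings, section names, and numbering are invented for illustration and are not taken from the skill itself.

```markdown
# [Working title of the review]

## 1. Introduction
### 1.1 Clinical and biological background
### 1.2 Knowledge gap and rationale for this review
### 1.3 Scope and objectives

## 2. Mechanistic Evidence
### 2.1 Key pathways and findings reported to date
### 2.2 Conflicting results and open questions

## 3. Clinical and Translational Evidence
### 3.1 Preclinical studies
### 3.2 Human studies and trials

## 4. Challenges, Limitations, and Future Directions

## 5. Conclusion
```

Embedding even one example like this in SKILL.md would pin down the heading depth and numbering style that the skill's completion checklist can then verify against.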

Dimension / Reasoning / Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary sections and verbosity. The 'Optional Validation Shortcut' feels tacked on, the 'When Not to Use' section states obvious things Claude already knows (don't fabricate results), and the type recognition rules partially duplicate the 'When to Use' section. Several sections could be consolidated.

2 / 3

Actionability

The skill provides clear output contracts (what sections each type must include) and input examples, which is good. However, it lacks a concrete output example showing what a generated outline actually looks like. The guidance is structural rather than executable—Claude knows what sections to include but doesn't see a complete example of the expected output format with the numeric hierarchy mentioned.

2 / 3

Workflow Clarity

The 5-step workflow is clearly sequenced with validation at step 1 (domain/sufficiency check), type detection at step 2, building at step 3, enrichment at step 4, and a final safety pass at step 5. The fallback/refusal contract provides a clear error recovery path. The completion checklist serves as a final validation checkpoint.

3 / 3

Progressive Disclosure

The content is well-organized with clear section headers and logical grouping, but it's a monolithic document (~170 lines) that could benefit from splitting detailed output contracts or examples into separate reference files (a sketch of this appears after this table). There are no references to external files for deeper content (the validate_skill.py script reference doesn't count as content organization).

2 / 3

Total: 9 / 12 (Passed)

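The progressive-disclosure point above could be addressed with a small structural change. Below is a minimal sketch, assuming the detailed per-type output contracts move into a separate reference file; the references/output-contracts.md path and wording are hypothetical.

```markdown
<!-- In SKILL.md, replacing the inline output contracts -->
## Output contracts

Before building the outline, read [references/output-contracts.md](references/output-contracts.md)
for the full per-type contract: required sections, ordering, and heading depth.
```

This keeps SKILL.md short while the detailed contracts remain available on demand.
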
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

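Regarding the frontmatter_unknown_keys warning, the usual fix is to drop the unrecognized key or nest it under metadata, as the warning's wording suggests. Below is a minimal sketch of the top of SKILL.md under that assumption; the audience key is a hypothetical stand-in for whatever key the validator actually flagged.

```markdown
---
name: biomed-outline-generator
# description shortened here; the registry listing shows the full text
description: Generates structured biomedical outlines for review articles, discussion sections, and thesis proposals.
# A hypothetical unrecognized top-level key that would trigger the warning:
# audience: biomedical-researchers
# The same key nested under metadata, which the warning points to as the supported location:
metadata:
  audience: biomedical-researchers
---
```
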
Repository: aipoch/medical-research-skills (Reviewed)
