Tessl

academic-highlight-generator

Generates submission-ready Elsevier/SCI Highlights from manuscript text or extracted PDF/DOCX/TXT content. Use when a user needs 3-5 concise, evidence-grounded highlight bullets for a research paper, review, meta-analysis, case report, or bioinformatics manuscript.

93

Quality

92%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that clearly defines a narrow, well-scoped task (generating Elsevier/SCI Highlights), specifies input formats and output expectations, and includes an explicit 'Use when' clause with rich trigger terms spanning multiple manuscript types. It uses proper third-person voice and is concise without being vague.

Dimension | Reasoning | Score

Specificity

Lists multiple concrete actions: generates submission-ready highlights, works from manuscript text or extracted PDF/DOCX/TXT content, produces 3-5 concise evidence-grounded bullet points. Specifies the output format (Elsevier/SCI Highlights) and the types of manuscripts supported.

3 / 3

Completeness

Clearly answers both 'what' (generates submission-ready Elsevier/SCI Highlights from manuscript text or extracted content) and 'when' (explicit 'Use when' clause specifying the user needs 3-5 highlight bullets for various manuscript types). Both components are well-articulated.

3 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'highlights', 'Elsevier', 'SCI', 'manuscript', 'research paper', 'review', 'meta-analysis', 'case report', 'bioinformatics', 'PDF', 'DOCX', 'TXT', 'submission-ready'. Good coverage of domain-specific terms a researcher would naturally use.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive niche: Elsevier/SCI journal highlights is a very specific academic publishing task. The combination of 'Elsevier', 'SCI', 'Highlights', and 'submission-ready' creates a clear, unique trigger profile that is unlikely to conflict with general writing or summarization skills.

3 / 3

Total

12 / 12

Passed

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill with strong actionability, clear workflow sequencing with validation checkpoints, and good progressive disclosure to reference files. Its main weakness is moderate verbosity: some constraints are repeated across the Output Contract, Self-critique, and Quality Checklist sections, and the When to Use/When Not to Use sections could be more concise. Overall, it provides comprehensive, executable guidance for generating academic highlights.

Suggestions

Consolidate the repeated constraints (bullet count, character limits, no fabrication) into a single authoritative section rather than restating them in Output Contract, Self-critique, and Quality Checklist.

Dimension | Reasoning | Score

Conciseness

The skill is reasonably well-structured, but some sections could be tightened: the 'When to Use' and 'When Not to Use' sections contain information that could be condensed, and some rules are restated across multiple sections (e.g., bullet count limits appear in Output Contract, Self-critique, and Quality Checklist). However, it mostly avoids explaining concepts Claude already knows.

2 / 3

Actionability

The skill provides concrete, executable commands (e.g., `python scripts/extract_text.py <file_path>`), exact output format templates, specific character limits (85 chars), explicit coverage priorities per article type, and a structured refusal template. The guidance is specific and copy-paste ready.

3 / 3
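The output contract cited above (3-5 bullets, each capped at 85 characters) is mechanical enough to check automatically. A minimal sketch of such a validator, assuming those two limits are the whole contract (the skill's actual checklist covers more, e.g. the no-fabrication rule, which cannot be checked this way):

```python
def validate_highlights(bullets: list[str]) -> list[str]:
    """Return a list of contract violations; empty list means the bullets pass."""
    problems = []
    # Elsevier-style Highlights: between 3 and 5 bullet points.
    if not 3 <= len(bullets) <= 5:
        problems.append(f"expected 3-5 bullets, got {len(bullets)}")
    # Each bullet at most 85 characters, per the limit the review cites.
    for i, b in enumerate(bullets, 1):
        if len(b) > 85:
            problems.append(f"bullet {i} exceeds 85 characters ({len(b)})")
    return problems

print(validate_highlights(["Novel biomarker identified", "Validated in two cohorts"]))
# -> ['expected 3-5 bullets, got 2']
```

A check like this only covers the structural half of the contract; grounding each bullet in the manuscript still needs the self-critique loop the review describes.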

Workflow Clarity

The workflow is clearly sequenced in 5 numbered steps with explicit validation at step 1 (source sufficiency check), error handling at step 2 (extraction failure), and a self-critique/refinement loop at step 5. The fallback and refusal contract provides a clear error recovery path, and the quality checklist serves as a final validation checkpoint.

3 / 3

Progressive Disclosure

The skill provides a clear overview with well-signaled references to `references/prompts.md` for detailed prompt templates and `scripts/extract_text.py` for text extraction. Content is appropriately split—the SKILL.md contains the workflow and rules while deferring prompt-specific details to reference files, all at one level deep.

3 / 3

Total

11 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total

10 / 11

Passed
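The `frontmatter_unknown_keys` warning above flags top-level YAML keys the skill spec does not recognize. A minimal sketch of such a check, assuming the frontmatter sits between the first two `---` lines and that the allowed key set is roughly `name`, `description`, `license`, and `metadata` (an assumption; the validator's actual key list may differ):

```python
import re

# Assumed allowed key set; the real validator's list may differ.
ALLOWED_KEYS = {"name", "description", "license", "metadata"}

def unknown_frontmatter_keys(skill_md: str) -> set[str]:
    """Return top-level frontmatter keys not in the allowed set."""
    # Frontmatter is the block between the first two `---` lines.
    match = re.match(r"---\n(.*?)\n---", skill_md, re.DOTALL)
    if not match:
        return set()
    keys = set()
    for line in match.group(1).splitlines():
        # Top-level keys only: unindented `key:` at the start of the line.
        m = re.match(r"([A-Za-z_][\w-]*):", line)
        if m:
            keys.add(m.group(1))
    return keys - ALLOWED_KEYS

doc = "---\nname: academic-highlight-generator\nauthor: aipoch\n---\nBody"
print(unknown_frontmatter_keys(doc))  # -> {'author'}
```

Moving any flagged key under a `metadata:` mapping, as the warning suggests, would clear it, since only top-level keys are checked here.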

Repository
aipoch/medical-research-skills
Reviewed

Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.