
abstract-summarizer

Transform lengthy academic papers into concise, structured 250-word abstracts.


Quality: 33%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Security (by Snyk): Advisory
Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize "./scientific-skills/Academic Writing/abstract-summarizer/SKILL.md"

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is concise and identifies a clear task (transforming academic papers into abstracts), but it lacks a 'Use when...' clause, which is critical for skill selection. It also misses common user-facing trigger terms and describes only a single action rather than a richer set of capabilities.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to summarize a research paper, write an abstract, or condense an academic article.'

Include natural trigger term variations such as 'research paper,' 'journal article,' 'manuscript,' 'paper summary,' and 'abstract writing.'

List additional concrete capabilities beyond just transformation, such as 'identifies key findings, methods, and conclusions' or 'adapts to different journal abstract formats.'
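Taken together, these suggestions could yield a frontmatter description along these lines (an illustrative sketch, assuming the conventional SKILL.md name/description frontmatter; the exact wording is not taken from the review):

```yaml
---
name: abstract-summarizer
description: >-
  Transform academic papers, journal articles, and manuscripts into
  concise, structured 250-word abstracts; identifies key findings,
  methods, and conclusions, and adapts to different journal abstract
  formats. Use when the user asks to summarize a research paper, write
  an abstract, or condense an academic article.
---
```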

Specificity (2/3): Names the domain (academic papers) and a specific action (transform into 250-word abstracts), but only describes a single action rather than listing multiple concrete capabilities like structure, formatting options, or field-specific handling.

Completeness (1/3): Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1.

Trigger Term Quality (2/3): Includes relevant keywords like 'academic papers' and 'abstracts,' but misses common variations users might say, such as 'research paper,' 'journal article,' 'summary,' 'paper summary,' 'abstract writing,' or 'manuscript.'

Distinctiveness / Conflict Risk (2/3): The mention of 'academic papers' and '250-word abstracts' provides some specificity, but it could overlap with general summarization or writing skills without clearer trigger boundaries.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from significant verbosity and structural redundancy, with multiple sections restating the same information in slightly different ways. The domain-specific content (quality checklist, common pitfalls, discipline adaptation table) is genuinely valuable and actionable, but it is buried in templated boilerplate sections that add little value. The skill would benefit greatly from removing the generic scaffolding and consolidating its best content into a leaner, better-organized document.

Suggestions

Remove redundant/boilerplate sections: 'Output Requirements', 'Response Template', 'Input Validation', 'Error Handling', and the generic 'Workflow' section all describe things Claude already knows or repeat standard agent behavior. Cut these entirely.

Consolidate the Overview, Key Features, and Core Capabilities sections into a single section that leads with the structured abstract format example and discipline table.

Integrate the Quality Checklist validation steps directly into the workflow as explicit checkpoints (e.g., 'After generating abstract: verify word count ≤ 250, verify all 5 sections present, fact-check all numbers against source').

Remove the duplicate 'Quick Check' and 'Audit-Ready Commands' sections—keep one validation command block at most.
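The validation checkpoint suggested above could be sketched as a small helper that runs after abstract generation (a minimal sketch; the five section names and the "Section:" labeling convention are illustrative assumptions, not the skill's verified format):

```python
# Assumed structured-abstract sections; adjust to the skill's actual format.
REQUIRED_SECTIONS = ["Background", "Objective", "Methods", "Results", "Conclusions"]

def check_abstract(abstract: str, max_words: int = 250) -> list[str]:
    """Return a list of problems found; an empty list means the abstract passes."""
    problems = []
    word_count = len(abstract.split())
    if word_count > max_words:
        problems.append(f"word count {word_count} exceeds {max_words}")
    for section in REQUIRED_SECTIONS:
        # Expect each section to appear as a labeled heading, e.g. "Methods:".
        if f"{section}:" not in abstract:
            problems.append(f"missing section: {section}")
    return problems
```

Fact-checking numbers against the source paper still needs a manual (or model-driven) pass; only the mechanical checks are automatable this way.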

Conciseness (1/3): The skill is extremely verbose at 300+ lines with significant redundancy. There are duplicate sections ('Quick Check' and 'Audit-Ready Commands' repeat the same py_compile command), a generic 'Workflow' section that restates obvious steps, and boilerplate sections ('Output Requirements', 'Response Template', 'Input Validation', 'Error Handling') that explain things Claude already knows. The 'Key Features' and 'Overview' sections overlap heavily. Much of the content is templated filler rather than task-specific guidance.

Actionability (2/3): The skill provides Python code examples and CLI commands that appear concrete, but they reference modules (scripts.summarizer, scripts.batch) whose actual implementation is unknown; these look like illustrative API designs rather than verified executable code. The quality checklist and common pitfalls sections provide genuinely useful, specific guidance for abstract writing. However, the core task (summarizing papers) relies heavily on assumed script interfaces that may not exist as shown.

Workflow Clarity (2/3): There are two workflow sections, a generic 5-step workflow and a 4-step 'Example run plan', both of which are vague and lack specific validation checkpoints tied to the actual summarization task. The Quality Checklist provides good validation criteria but is disconnected from the workflow steps. For a task involving accuracy-critical output (scientific abstracts), the workflow should explicitly integrate the verification steps (number checking, word-count validation) into the sequence with clear feedback loops.

Progressive Disclosure (2/3): The skill references external files in references/ and scripts/ directories with clear listings, which is good. However, the main file itself is monolithic, with too much inline content that could be split out (e.g., the detailed discipline table, batch processing examples, full parameter tables). The structure has many sections, but they are poorly organized, with redundancy between the Overview, Key Features, Core Capabilities, and Workflow sections.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 10 / 11 (Passed)
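One way to resolve the frontmatter warning, assuming a spec that recognizes only name and description at the top level, is to nest any extra keys under metadata (the version and author keys below are hypothetical examples, not the keys the validator actually flagged):

```yaml
---
name: abstract-summarizer
description: Transform lengthy academic papers into concise, structured 250-word abstracts.
# Hypothetical unknown keys, moved under metadata so the validator ignores them:
metadata:
  version: "1.0"
  author: aipoch
---
```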

Repository: aipoch/medical-research-skills (Reviewed)

