
abstract-summarizer

Transform lengthy academic papers into concise, structured 250-word abstracts.


Quality

33%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/Academic Writing/abstract-summarizer/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is concise and identifies a clear task—transforming academic papers into structured abstracts—but it lacks a 'Use when...' clause, which is critical for skill selection among many options. It also misses common trigger term variations and only describes a single action rather than enumerating specific capabilities.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to summarize a research paper, generate an abstract, or condense an academic article.'

Include natural trigger term variations such as 'research paper', 'journal article', 'manuscript', 'paper summary', 'abstract generation'.

List additional concrete capabilities beyond just transformation, such as 'identifies key findings, methods, and conclusions' or 'follows structured abstract formats (background, methods, results, conclusion)'.
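Applied together, the suggestions above might yield a frontmatter description along these lines (illustrative wording only, not the skill's actual metadata):

```yaml
---
name: abstract-summarizer
description: >
  Transform lengthy academic papers into concise, structured 250-word
  abstracts. Identifies key findings, methods, and conclusions, and follows
  structured abstract formats (background, methods, results, conclusion).
  Use when the user asks to summarize a research paper, journal article, or
  manuscript, generate an abstract, or condense an academic article.
---
```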

Dimension / Reasoning / Score

Specificity

Names the domain (academic papers) and a specific action (transform into 250-word abstracts), but only describes a single action rather than listing multiple concrete capabilities like formatting options, citation handling, or section extraction.

2 / 3

Completeness

Describes what the skill does (transform papers into abstracts) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'when' is entirely absent here.

1 / 3

Trigger Term Quality

Includes relevant keywords like 'academic papers' and 'abstracts' that users might naturally say, but misses common variations such as 'research paper', 'journal article', 'summary', 'paper summary', 'abstract writing', or 'manuscript'.

2 / 3

Distinctiveness Conflict Risk

The combination of 'academic papers' and '250-word abstracts' is fairly specific, but could overlap with general summarization skills or other academic writing tools without clearer trigger boundaries.

2 / 3

Total: 7 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill suffers from significant template bloat and redundancy, with many generic sections (Error Handling, Input Validation, Response Template, Output Requirements) that add no domain-specific value. The core content about abstract summarization (Overview, Core Capabilities, Quality Checklist, Common Pitfalls) is reasonably well-structured and contains useful domain knowledge, but it's buried under boilerplate. The code examples appear concrete but may not be truly executable, reducing their actionability.

Suggestions

Remove all generic boilerplate sections (Error Handling, Input Validation, Response Template, Output Requirements, 'When to Use', 'Key Features') that don't contain abstract-summarization-specific guidance — these waste tokens on things Claude already knows.

Consolidate the duplicate workflow sections (Example Usage run plan, Implementation Details, and Workflow) into a single concrete workflow with actual commands and validation steps specific to summarization.

Verify that Python code examples reference real, existing APIs (e.g., `AbstractSummarizer`, `BatchProcessor`) or clearly mark them as illustrative patterns rather than executable code.

Move the detailed Core Capabilities section (structured generation, quantitative preservation, batch processing) to a separate reference file and keep only a concise summary in the main skill file.

Dimension / Reasoning / Score

Conciseness

The skill is extremely verbose, running to more than 300 lines with massive redundancy. There are duplicate sections (Quick Check and Audit-Ready Commands contain the same command), a generic 'When to Use' section that restates the description, and boilerplate sections like 'Output Requirements', 'Response Template', 'Input Validation', and 'Error Handling' that explain things Claude already knows. The 'Implementation Details' section says 'See Workflow above' and then repeats generic guidance. Much of this is template bloat that wastes tokens.

1 / 3

Actionability

The skill provides Python code examples and CLI commands that appear concrete, but many are likely not executable as-is (e.g., importing from `scripts.summarizer` and `scripts.batch` with specific class APIs that may not exist). The code reads more like aspirational pseudocode dressed up as real code. The CLI examples (`--field`, `--input`) and the parameter table are somewhat actionable but incomplete (missing descriptions for `--input`).

2 / 3

Workflow Clarity

There is a numbered workflow (steps 1-5) and a quality checklist with pre/during/post phases, which is good. However, the workflow steps are generic and abstract ('Confirm the user objective', 'Validate that the request matches documented scope') rather than specific to abstract summarization. The quality checklist provides good validation checkpoints but the main workflow lacks concrete validation commands or feedback loops for error recovery.

2 / 3
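The concrete validation checkpoints the workflow lacks could be as small as a word-count gate on the generated abstract. A minimal sketch, assuming the 250-word limit from the skill's own description (the function name is hypothetical, not part of the skill's scripts):

```python
def check_abstract(abstract: str, max_words: int = 250) -> list[str]:
    """Return a list of validation problems; an empty list means the abstract passes."""
    problems = []
    words = abstract.split()
    if not words:
        problems.append("abstract is empty")
    elif len(words) > max_words:
        problems.append(f"abstract is {len(words)} words; limit is {max_words}")
    return problems
```

A check like this gives the workflow an explicit feedback loop: if `check_abstract` returns problems, the agent regenerates or trims before presenting the result.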

Progressive Disclosure

The skill references external files well (references/ directory, scripts/ directory) with clear descriptions, which is good. However, the main file itself is monolithic with too much inline content that could be split out. The Overview, Core Capabilities, Common Pitfalls, and Quality Checklist sections together create a very long document. The structure has clear sections but the sheer volume undermines navigation.

2 / 3

Total: 7 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
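The warning above can typically be cleared by moving non-standard keys under the frontmatter's metadata block, as the message itself suggests. A sketch (the author key is a made-up example of an unknown key, not taken from this skill):

```yaml
---
name: abstract-summarizer
description: Transform lengthy academic papers into concise, structured 250-word abstracts.
metadata:
  author: aipoch  # hypothetical unknown key, moved under metadata
---
```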

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)
