Transform lengthy academic papers into concise, structured 250-word abstracts capturing background, methods, results, and conclusions. Optimized for research papers, theses, and technical reports across scientific disciplines.
Install with the Tessl CLI:

```shell
npx tessl i github:aipoch/medical-research-skills --skill abstract-summarizer
```
AI-powered academic summarization tool that condenses complex research papers into publication-ready structured abstracts while preserving scientific accuracy and key findings.
Related skills: humanities-text-analyzer, math-theorem-simplifier, legal-document-summarizer, creative-writing-editor

Integration: pdf-text-extractor (content extraction), citation-formatter (reference handling), conference-abstract-adaptor (format adjustment), journal-matchmaker (submission prep)

Extract and condense key sections into a standard format:
```python
from scripts.summarizer import AbstractSummarizer

summarizer = AbstractSummarizer()

# Generate from PDF
abstract = summarizer.summarize(
    source="paper.pdf",
    format="structured",     # structured, plain, or executive
    word_limit=250,
    discipline="biomedical"  # affects terminology handling
)

print(abstract.text)
# Output: Background → Objective → Methods → Results → Conclusion
```

Output Structure:
```
**Background**: [Context and problem statement]
**Objective**: [Research goal and hypotheses]
**Methods**: [Study design, sample, key methods]
**Results**: [Primary findings with statistics]
**Conclusion**: [Implications and significance]
---
Word count: 247/250
```

Ensure numbers and statistics are accurately retained:
```python
# Extract and verify quantitative results
quant_results = summarizer.extract_quantitative(
    text=paper_content,
    priority="high"  # keep all numbers vs. representative samples
)

# Validate against the original
validation = summarizer.verify_accuracy(
    abstract=abstract,
    source=paper_content
)
```
Adjust the extraction strategy by field:

```shell
# Biomedical paper
python scripts/main.py --input paper.pdf --field biomedical

# Physics paper
python scripts/main.py --input paper.pdf --field physics

# Social science paper
python scripts/main.py --input paper.pdf --field social-science
```

Field-Specific Handling:
| Field | Focus Areas | Special Handling |
|---|---|---|
| Biomedical | Study design, statistical significance, clinical relevance | Preserve P-values, effect sizes |
| Physics | Theoretical framework, experimental setup, precision | Keep measurement uncertainties |
| CS/Engineering | Algorithm performance, benchmarks, complexity | Retain accuracy percentages |
| Social Science | Methodology, sample demographics, theoretical contribution | Preserve effect descriptions |
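The table above can be mirrored in a small lookup structure. A minimal sketch, assuming a plain dict-based configuration (the names `FIELD_RULES` and `rules_for` are illustrative, not part of the skill's actual API):

```python
# Illustrative field-to-rules mapping mirroring the handling table above.
# FIELD_RULES and rules_for are assumptions, not the skill's real interface.
FIELD_RULES = {
    "biomedical": {
        "focus": ["study design", "statistical significance", "clinical relevance"],
        "preserve": ["P-values", "effect sizes"],
    },
    "physics": {
        "focus": ["theoretical framework", "experimental setup", "precision"],
        "preserve": ["measurement uncertainties"],
    },
    "cs-engineering": {
        "focus": ["algorithm performance", "benchmarks", "complexity"],
        "preserve": ["accuracy percentages"],
    },
    "social-science": {
        "focus": ["methodology", "sample demographics", "theoretical contribution"],
        "preserve": ["effect descriptions"],
    },
}

def rules_for(field: str) -> dict:
    """Look up extraction rules for a field, defaulting to biomedical."""
    return FIELD_RULES.get(field, FIELD_RULES["biomedical"])
```

Defaulting unknown fields to biomedical is one possible fallback; a stricter design could raise an error instead.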
Summarize multiple papers for systematic reviews:

```python
from scripts.batch import BatchProcessor

batch = BatchProcessor()

# Process a directory of papers
summaries = batch.summarize_directory(
    directory="literature_review/",
    output_format="csv",    # or json, markdown
    include_metadata=True   # title, authors, year
)

# Generate a review matrix
matrix = batch.create_summary_matrix(summaries)
matrix.save("review_matrix.csv")
```
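For downstream analysis, the saved review matrix can be reloaded with the standard library. A sketch assuming a plain CSV with a header row (`load_matrix` is a hypothetical helper, not part of the skill's scripts):

```python
import csv

def load_matrix(path: str) -> list[dict]:
    """Read a review-matrix CSV into a list of row dicts keyed by header."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))
```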
Template for RCTs and clinical studies:
```json
{
  "paper_type": "clinical_trial",
  "key_elements": [
    "Study design (RCT, cohort, case-control)",
    "Population (n, inclusion/exclusion)",
    "Intervention details",
    "Primary endpoint",
    "Key results (efficacy, safety)",
    "Clinical significance"
  ],
  "emphasis": "P-values, confidence intervals, adverse events"
}
```

Example Output:
```
**Background**: Current treatments for X disease have limited efficacy.
**Objective**: Evaluate Drug Y's safety and efficacy in patients with X.
**Methods**: Double-blind RCT (n=342) comparing Drug Y vs placebo for 12 weeks.
**Results**: Primary endpoint achieved (67% vs 32% response, p<0.001, OR=4.2).
Adverse events mild (headache 12%, nausea 8%).
**Conclusion**: Drug Y significantly improves outcomes with an acceptable safety profile.
```

Template for laboratory/mechanistic studies:
```json
{
  "paper_type": "basic_science",
  "key_elements": [
    "Research question/hypothesis",
    "Model system (cell line, animal, in vitro)",
    "Key methods (CRISPR, Western blot, etc.)",
    "Mechanistic findings",
    "Biological significance"
  ],
  "emphasis": "Molecular mechanisms, pathway diagrams"
}
```

Example Output:
```
**Background**: The role of Protein X in Disease Y progression is unknown.
**Objective**: Determine if Protein X regulates Pathway Z in Disease Y.
**Methods**: CRISPR knockout in cell lines, Western blot analysis, mouse model.
**Results**: Protein X deletion reduced Pathway Z activation by 78% (p<0.01).
In vivo, knockout mice showed 45% less disease progression.
**Conclusion**: Protein X is a critical regulator of Pathway Z and a potential therapeutic target.
```

Template for systematic reviews and meta-analyses:
```json
{
  "paper_type": "meta_analysis",
  "key_elements": [
    "Search strategy and databases",
    "Number of studies included",
    "Total sample size",
    "Pooled effect size",
    "Heterogeneity assessment",
    "Quality of evidence"
  ],
  "emphasis": "I² values, funnel plots, GRADE assessment"
}
```

Example Output:
```
**Background**: Previous trials of Intervention X show conflicting results.
**Objective**: Systematically evaluate efficacy through meta-analysis.
**Methods**: PRISMA-guided search of PubMed, Embase, Cochrane (through 2024).
23 RCTs (n=4,847) met inclusion criteria.
**Results**: Significant benefit observed (SMD=0.42, 95% CI [0.28, 0.56], p<0.001).
Moderate heterogeneity (I²=45%). Quality: moderate.
**Conclusion**: Intervention X shows modest efficacy with moderate-certainty evidence.
```

Template for methods and computational papers:
```json
{
  "paper_type": "methodology",
  "key_elements": [
    "Problem with existing methods",
    "Novel approach description",
    "Key innovations",
    "Performance benchmarks",
    "Comparison to state-of-the-art"
  ],
  "emphasis": "Accuracy, speed, scalability metrics"
}
```

Example Output:
```
**Background**: Current algorithms for Problem X are computationally expensive.
**Objective**: Develop an efficient method with improved accuracy.
**Methods**: Novel graph neural network architecture with attention mechanism.
Validated on 5 benchmark datasets.
**Results**: 3.2× faster than current methods with 12% accuracy improvement
(p<0.001). Scales to datasets with 10M+ nodes.
**Conclusion**: Method achieves superior performance with practical computational requirements.
```

From PDF to submission-ready abstract:
```shell
# Step 1: Extract text from the PDF
python scripts/extract.py --input paper.pdf --output paper.txt

# Step 2: Generate a structured abstract
python scripts/main.py \
    --input paper.txt \
    --field biomedical \
    --format structured \
    --word-limit 250 \
    --output abstract.md

# Step 3: Verify accuracy
python scripts/verify.py \
    --abstract abstract.md \
    --source paper.txt \
    --check-quantitative \
    --output verification_report.txt

# Step 4: Adapt for a specific journal
python scripts/adapt.py \
    --abstract abstract.md \
    --journal "nature_medicine" \
    --output submission_abstract.txt
```

Python API:
```python
from scripts.summarizer import AbstractSummarizer
from scripts.validator import AccuracyValidator

# Initialize
summarizer = AbstractSummarizer()
validator = AccuracyValidator()

# Summarize
with open("paper.pdf", "rb") as f:
    abstract = summarizer.summarize(
        source=f,
        discipline="clinical",
        word_limit=250
    )

# Verify numbers are accurate
is_accurate = validator.check_quantitative(
    abstract=abstract,
    source_pdf="paper.pdf"
)

if is_accurate:
    abstract.save("final_abstract.txt")
else:
    discrepancies = validator.get_discrepancies()
    print(f"Review needed: {discrepancies}")
```
Accuracy Issues:
❌ Misrepresenting statistics → "Significant improvement" when p>0.05
❌ Oversimplifying complex findings → "Drug works" vs nuanced efficacy data
❌ Missing adverse events → Only reporting positive results
Structure Issues:
❌ Methods too detailed → Protocol steps in abstract
❌ Results without context → Numbers without interpretation
❌ Conclusion overstates → "Cure for cancer" from preclinical data
Word Count Issues:
❌ Exceeding 250 words → Journal rejection
❌ Too short (<150 words) → Missing key information
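Some of these pitfalls can be caught mechanically before human review. A minimal sketch of such a pre-check (`check_abstract` is a hypothetical helper, separate from the skill's verify.py):

```python
import re

def check_abstract(text: str, word_limit: int = 250) -> list[str]:
    """Flag common abstract pitfalls: word count and unsupported significance claims."""
    issues = []
    words = len(text.split())
    if words > word_limit:
        issues.append(f"Exceeds limit: {words}/{word_limit} words")
    elif words < 150:
        issues.append(f"Too short: {words} words (<150)")
    # "significant" claimed but no p-value reported anywhere in the abstract
    if "significant" in text.lower() and not re.search(r"p\s*[<=>]\s*0?\.\d+", text, re.IGNORECASE):
        issues.append("'significant' claimed without a reported p-value")
    return issues
```

A check like this only catches surface-level problems; whether the statistics actually support the claims still requires reading the paper.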
Available in references/ directory:

- abstract_templates.md - Discipline-specific abstract formats
- quantitative_checklist.md - Number verification guidelines
- disciplinary_guidelines.md - Field-specific conventions
- journal_requirements.md - Word limits by publisher
- example_abstracts.md - High-quality examples by type

Located in scripts/ directory:

- main.py - CLI interface for summarization
- summarizer.py - Core abstract generation engine
- extractor.py - PDF and text extraction
- validator.py - Accuracy checking and verification
- batch_processor.py - Multi-document processing
- adapter.py - Journal-specific formatting

📝 Note: This tool generates draft abstracts for efficiency, but all summaries require human review before submission. Always verify that numbers, statistics, and conclusions accurately reflect the original paper.
| Parameter | Type | Default | Description |
|---|---|---|---|
| --input | str | Required | Input file path |
| --text | str | Required | Direct text input |
| --url | str | Required | URL to fetch paper from |
| --output | str | Required | Output file path |
| --format | str | 'structured' | Output format |