Transform lengthy academic papers into concise, structured 250-word abstracts.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize "./scientific-skills/Academic Writing/abstract-summarizer/SKILL.md"
```

See `scripts/main.py.references/` for task-specific guidance.

Requirements:

- Python: 3.10+. Repository baseline for current packaged skills.
- pypdf2: unspecified. Declared in `requirements.txt`.
- requests: unspecified. Declared in `requirements.txt`.

```shell
cd "20260318/scientific-skills/Academic Writing/abstract-summarizer"
python -m py_compile scripts/main.py
python scripts/main.py --help
```

Example run plan:

1. Review the CONFIG block or documented parameters if the script uses fixed settings.
2. Run `python scripts/main.py` with the validated inputs.

See ## Workflow above for related details.

`scripts/main.py.references/` contains supporting rules, prompts, or checklists.

Use this command to verify that the packaged script entry point can be parsed before deeper execution:

```shell
python -m py_compile scripts/main.py
```

Use these concrete commands for validation. They are intentionally self-contained and avoid placeholder paths:

```shell
python -m py_compile scripts/main.py
python scripts/main.py --help
```

AI-powered academic summarization tool that condenses complex research papers into publication-ready structured abstracts while preserving scientific accuracy and key findings.
Key Capabilities:
Extract and condense key sections into standard format:
```python
from scripts.summarizer import AbstractSummarizer

summarizer = AbstractSummarizer()

# Generate from PDF
abstract = summarizer.summarize(
    source="paper.pdf",
    format="structured",     # structured, plain, or executive
    word_limit=250,
    discipline="biomedical"  # affects terminology handling
)

print(abstract.text)
# Output: Background → Objective → Methods → Results → Conclusion
```

Output Structure:
**Background**: [Context and problem statement]
**Objective**: [Research goal and hypotheses]
**Methods**: [Study design, sample, key methods]
**Results**: [Primary findings with statistics]
**Conclusion**: [Implications and significance]
---
Word count: 247/250

Ensure numbers and statistics are accurately retained:
```python
# Extract and verify quantitative results
quant_results = summarizer.extract_quantitative(
    text=paper_content,
    priority="high"  # keep all numbers vs. representative samples
)

# Validate against original
validation = summarizer.verify_accuracy(
    abstract=abstract,
    source=paper_content
)
```

Preserves:
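As a rough sketch of the number-verification idea, here is a minimal regex-based check that every number appearing in an abstract also appears in the source text. The helper name and regex are illustrative assumptions, not the packaged `verify_accuracy` implementation:

```python
import re

# Matches integers, decimals, and percentages, e.g. "42", "0.05", "87.3%".
NUMBER_RE = re.compile(r"\d+(?:\.\d+)?%?")

def unverified_numbers(abstract_text: str, source_text: str) -> list[str]:
    """Return numbers found in the abstract that never appear in the source."""
    source_numbers = set(NUMBER_RE.findall(source_text))
    return [n for n in NUMBER_RE.findall(abstract_text)
            if n not in source_numbers]
```

An empty result means every quantitative claim in the abstract can be traced back to the paper; any returned value flags a number that was altered or invented during summarization.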
Adjust extraction strategy by field:
```shell
# Biomedical paper
python scripts/main.py --input paper.pdf --field biomedical

# Physics paper
python scripts/main.py --input paper.pdf --field physics

# Social science paper
python scripts/main.py --input paper.pdf --field social-science
```

Field-Specific Handling:
| Field | Focus Areas | Special Handling |
|---|---|---|
| Biomedical | Study design, statistical significance, clinical relevance | Preserve P-values, effect sizes |
| Physics | Theoretical framework, experimental setup, precision | Keep measurement uncertainties |
| CS/Engineering | Algorithm performance, benchmarks, complexity | Retain accuracy percentages |
| Social Science | Methodology, sample demographics, theoretical contribution | Preserve effect descriptions |
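The "Special Handling" column above could be driven by a per-field table of patterns that must survive summarization. The following sketch uses hypothetical field keys and illustrative regexes, not the skill's actual configuration:

```python
import re

# Illustrative patterns for values each field requires keeping verbatim.
FIELD_PRESERVE = {
    "biomedical": [r"[Pp]\s*[<=>]\s*0?\.\d+",   # P-values, e.g. "p < 0.05"
                   r"\bd\s*=\s*-?\d+\.\d+"],    # effect sizes (Cohen's d)
    "physics":    [r"\d+(?:\.\d+)?\s*±\s*\d+(?:\.\d+)?"],  # uncertainties
    "cs":         [r"\d+(?:\.\d+)?\s*%"],       # accuracy percentages
}

def must_preserve(field: str, sentence: str) -> bool:
    """True if the sentence contains a value the field requires keeping."""
    return any(re.search(p, sentence) for p in FIELD_PRESERVE.get(field, []))
```

A summarizer could score sentences with such a check so that, for example, a biomedical result reporting "p < 0.05" is never dropped during condensation.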
Summarize multiple papers for systematic reviews:
```python
from scripts.batch import BatchProcessor

batch = BatchProcessor()

# Process directory of papers
summaries = batch.summarize_directory(
    directory="literature_review/",
    output_format="csv",   # or json, markdown
    include_metadata=True  # title, authors, year
)

# Generate review matrix
matrix = batch.create_summary_matrix(summaries)
matrix.save("review_matrix.csv")
```

Output:
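A review matrix of this kind is plain CSV, one row per paper. The following stdlib-only sketch assumes a hypothetical row shape with title/year metadata; it is not the packaged `create_summary_matrix` implementation:

```python
import csv
import io

def write_review_matrix(summaries: list[dict], out) -> None:
    """Write one row per paper: metadata columns plus the abstract text."""
    writer = csv.DictWriter(out, fieldnames=["title", "year", "abstract"])
    writer.writeheader()
    for s in summaries:
        writer.writerow({k: s.get(k, "") for k in ("title", "year", "abstract")})

buf = io.StringIO()
write_review_matrix(
    [{"title": "Trial A", "year": 2023, "abstract": "Background: ..."}], buf
)
```

Writing to a `StringIO` buffer as above makes the output easy to inspect; in practice the same function works with any open text file.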
Pre-Summarization:
During Summarization:
Post-Summarization:
Before Use:
Accuracy Issues:
❌ Misrepresenting statistics → "Significant improvement" when p>0.05
❌ Oversimplifying complex findings → "Drug works" vs nuanced efficacy data
❌ Missing adverse events → Only reporting positive results
Structure Issues:
❌ Methods too detailed → Protocol steps in abstract
❌ Results without context → Numbers without interpretation
❌ Conclusion overstates → "Cure for cancer" from preclinical data
Word Count Issues:
❌ Exceeding 250 words → Journal rejection
❌ Too short (<150 words) → Missing key information
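Both word-count failure modes are mechanical to guard against before submission. A minimal length check might look like this (hypothetical helper, using the 150-250 word window named above):

```python
def word_count_ok(abstract_text: str, lo: int = 150, hi: int = 250) -> bool:
    """Reject abstracts outside the typical 150-250 word journal window."""
    return lo <= len(abstract_text.split()) <= hi
```

Note that specific journals set their own limits, so `lo` and `hi` should be taken from the target venue's author guidelines rather than hard-coded.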
Available in references/ directory:
- `abstract_templates.md` - Discipline-specific abstract formats
- `quantitative_checklist.md` - Number verification guidelines
- `disciplinary_guidelines.md` - Field-specific conventions
- `journal_requirements.md` - Word limits by publisher
- `example_abstracts.md` - High-quality examples by type

Located in scripts/ directory:

- `main.py` - CLI interface for summarization
- `summarizer.py` - Core abstract generation engine
- `extractor.py` - PDF and text extraction
- `validator.py` - Accuracy checking and verification
- `batch_processor.py` - Multi-document processing
- `adapter.py` - Journal-specific formatting

📝 Note: This tool generates draft abstracts for efficiency, but all summaries require human review before submission. Always verify that numbers, statistics, and conclusions accurately reflect the original paper.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `--input` | str | Required* | Path to the input paper file (PDF or text) |
| `--text` | str | Required* | Direct text input |
| `--url` | str | Required* | URL to fetch paper from |
| `--output` | str | Required | Output file path |
| `--format` | str | `'structured'` | Output format |

\* Provide one input source: `--input`, `--text`, or `--url`.
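One common way a CLI enforces that the three input sources are alternatives rather than all simultaneously required is `argparse`'s mutually exclusive group. This is a sketch of how `main.py` could be wired, not necessarily how it is actually written:

```python
import argparse

parser = argparse.ArgumentParser(prog="main.py")

# Exactly one input source must be given.
source = parser.add_mutually_exclusive_group(required=True)
source.add_argument("--input", help="path to the input paper file")
source.add_argument("--text", help="direct text input")
source.add_argument("--url", help="URL to fetch the paper from")

parser.add_argument("--output", help="output file path")
parser.add_argument("--format", default="structured",
                    choices=["structured", "plain", "executive"])

args = parser.parse_args(["--input", "paper.pdf"])
```

With this wiring, passing both `--input` and `--url` (or passing neither) produces a usage error instead of ambiguous behavior.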
Every final response should make these items explicit when they are relevant:
If `scripts/main.py` fails, report the failure point, summarize what still can be completed safely, and provide a manual fallback.

This skill accepts requests that match the documented purpose of abstract-summarizer and include enough context to complete the workflow safely.
Do not continue the workflow when the request is out of scope, missing a critical input, or would require unsupported assumptions. Instead respond:
abstract-summarizer only handles its documented workflow. Please provide the missing required inputs or switch to a more suitable skill.
Use the following fixed structure for non-trivial requests:
If the request is simple, you may compress the structure, but still keep assumptions and limits explicit when they affect correctness.