This skill should be used when extracting structured data from scientific PDFs for systematic reviews, meta-analyses, or database creation. Use when working with collections of research papers that need to be converted into analyzable datasets with validation metrics.
Install with the Tessl CLI:

```shell
npx tessl i github:brunoasm/my_claude_skills --skill extract-from-pdfs
```
Extract standardized, structured data from scientific PDF literature using Claude's vision capabilities. Transform PDF collections into validated databases ready for statistical analysis in Python, R, or other frameworks.
Core capabilities:
Use when:
Do not use for:
Read the setup guide for installation and configuration:

```shell
cat references/setup_guide.md
```

Key setup steps:

```shell
conda env create -f environment.yml
export ANTHROPIC_API_KEY='your-key'
```

Ask the user to provide 2-3 example PDFs so their structure can be analyzed and the schema designed around them.
Create a custom schema from the template:

```shell
cp assets/schema_template.json my_schema.json
```

Customize it for the specific domain:

- `objective` describing what to extract
- `output_schema` with field types and descriptions
- `instructions` for Claude
- `output_example` showing the desired format

See `assets/example_flower_visitors_schema.json` for a real-world ecology example.
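For illustration, a customized schema might look like the sketch below. The field values are invented for a hypothetical ecology use case; the authoritative structure is whatever `assets/schema_template.json` defines.

```python
import json

# Hypothetical minimal schema; adapt field names to assets/schema_template.json
schema = {
    "objective": "Extract flower-visitor interaction records from each paper",
    "output_schema": {
        "plant_species": {"type": "string", "description": "Latin binomial of the plant"},
        "visitor_species": {"type": "string", "description": "Latin binomial of the visitor"},
        "n_visits": {"type": "integer", "description": "Visits observed; null if not reported"},
    },
    "instructions": "Report one record per plant-visitor pair. Use null for missing values.",
    "output_example": [
        {"plant_species": "Salvia pratensis", "visitor_species": "Bombus terrestris", "n_visits": 12}
    ],
}

# Write the schema where the pipeline expects it
with open("my_schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```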
Run the 6-step pipeline (plus optional validation):
```shell
# Step 1: Organize metadata
python scripts/01_organize_metadata.py \
    --source-type bibtex \
    --source library.bib \
    --pdf-dir pdfs/ \
    --output metadata.json

# Step 2: Filter papers (optional but recommended)
# Choose a backend: anthropic-haiku (cheap), anthropic-sonnet (accurate), ollama (free)
python scripts/02_filter_abstracts.py \
    --metadata metadata.json \
    --backend anthropic-haiku \
    --use-batches \
    --output filtered_papers.json

# Step 3: Extract from PDFs
python scripts/03_extract_from_pdfs.py \
    --metadata filtered_papers.json \
    --schema my_schema.json \
    --method batches \
    --output extracted_data.json

# Step 4: Repair JSON
python scripts/04_repair_json.py \
    --input extracted_data.json \
    --schema my_schema.json \
    --output cleaned_data.json

# Step 5: Validate with APIs
python scripts/05_validate_with_apis.py \
    --input cleaned_data.json \
    --apis my_api_config.json \
    --output validated_data.json

# Step 6: Export to analysis format
python scripts/06_export_database.py \
    --input validated_data.json \
    --format python \
    --output results
```

Calculate extraction quality metrics:
```shell
# Step 7: Sample papers for annotation
python scripts/07_prepare_validation_set.py \
    --extraction-results cleaned_data.json \
    --schema my_schema.json \
    --sample-size 20 \
    --strategy stratified \
    --output validation_set.json

# Step 8: Manually annotate (edit validation_set.json)
# Fill the ground_truth field for each sampled paper

# Step 9: Calculate metrics
python scripts/08_calculate_validation_metrics.py \
    --annotations validation_set.json \
    --output validation_metrics.json \
    --report validation_report.txt
```

Validation produces precision, recall, and F1 metrics per field and overall.
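To make the reported numbers concrete, the helper below re-implements the standard per-field definitions. It is an illustration only, not the actual logic of `08_calculate_validation_metrics.py`.

```python
def field_metrics(pairs):
    """pairs: list of (extracted, ground_truth) values for one field; None = missing.

    A true positive is an extracted value that matches the ground truth;
    a false positive is an extracted value that is wrong or spurious;
    a false negative is a ground-truth value that was missed or mis-extracted.
    """
    tp = sum(1 for e, g in pairs if e is not None and g is not None and e == g)
    fp = sum(1 for e, g in pairs if e is not None and (g is None or e != g))
    fn = sum(1 for e, g in pairs if g is not None and (e is None or e != g))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 3 correct extractions, 1 wrong value, 1 missed value
pairs = [("a", "a"), ("b", "b"), ("c", "c"), ("x", "y"), (None, "z")]
p, r, f1 = field_metrics(pairs)  # precision 0.75, recall 0.6
```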
Access comprehensive guides in the references/ directory.

Setup and installation:

```shell
cat references/setup_guide.md
```

Complete workflow with examples:

```shell
cat references/workflow_guide.md
```

Validation methodology:

```shell
cat references/validation_guide.md
```

API integration details:

```shell
cat references/api_reference.md
```

Modify `my_schema.json` to match the research domain:
Use imperative language in instructions. Be specific about data types, required vs optional fields, and edge cases.
Configure external database validation in `my_api_config.json`:

Map extracted fields to validation APIs:

- `gbif_taxonomy` - Biological taxonomy
- `wfo_plants` - Plant names specifically
- `geonames` - Geographic locations
- `geocode` - Address to coordinates
- `pubchem` - Chemical compounds
- `ncbi_gene` - Gene identifiers

See `assets/example_api_config_ecology.json` for an ecology-specific example.
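As a sketch of what taxonomic validation involves, GBIF exposes a public species-matching endpoint. The snippet below is an independent illustration; the actual request logic lives in `scripts/05_validate_with_apis.py` and may differ.

```python
import json
import urllib.parse
import urllib.request

# GBIF's public name-matching endpoint
GBIF_MATCH = "https://api.gbif.org/v1/species/match"

def gbif_match_url(name):
    """Build the query URL for a scientific name."""
    return GBIF_MATCH + "?" + urllib.parse.urlencode({"name": name})

def is_confident_match(response):
    """Accept only exact matches; flag fuzzy or failed matches for manual review."""
    return response.get("matchType") == "EXACT"

def validate_name(name, timeout=10):
    """Query GBIF and report whether the name matched exactly (requires network)."""
    with urllib.request.urlopen(gbif_match_url(name), timeout=timeout) as resp:
        return is_confident_match(json.load(resp))
```

Treating only exact matches as valid mirrors the conservative spirit of the pipeline: uncertain records are surfaced for human review rather than silently accepted.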
Edit filtering criteria in scripts/02_filter_abstracts.py (line 74):
Replace the TODO section with domain-specific criteria.
Use conservative criteria (when in doubt, include paper) to avoid false negatives.
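The conservative-default principle can be made concrete with a small decision helper. This is hypothetical, not taken from the script (where filtering is prompt-based); the key property is that ambiguous answers keep the paper.

```python
def decide(llm_answer):
    """Map a model's relevance answer to an include/exclude decision.

    Conservative default: only an explicit leading "no" excludes a paper;
    empty or ambiguous answers keep it for full-text review.
    """
    tokens = (llm_answer or "").strip().lower().replace(".", " ").replace(",", " ").split()
    return not (tokens and tokens[0] == "no")
```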
Backend selection for filtering (Step 2):
Typical costs for 100 papers:
Optimization strategies:
- `--use-caching`
- `--use-batches`

The validation workflow provides:
Use metrics to:
Recommended sample sizes:
See references/validation_guide.md for detailed guidance on interpreting metrics and improving extraction quality.
Data organization:
- `scripts/01_organize_metadata.py` - Standardize PDFs and metadata

Filtering:

- `scripts/02_filter_abstracts.py` - Filter by abstract (Haiku/Sonnet/Ollama)

Extraction:

- `scripts/03_extract_from_pdfs.py` - Extract from PDFs with Claude vision

Processing:

- `scripts/04_repair_json.py` - Repair and validate JSON
- `scripts/05_validate_with_apis.py` - Enrich with external databases
- `scripts/06_export_database.py` - Export to analysis formats

Validation:

- `scripts/07_prepare_validation_set.py` - Sample papers for annotation
- `scripts/08_calculate_validation_metrics.py` - Calculate P/R/F1 metrics

Templates:

- `assets/schema_template.json` - Blank extraction schema template
- `assets/api_config_template.json` - API validation configuration template

Examples:
- `assets/example_flower_visitors_schema.json` - Ecology extraction example
- `assets/example_api_config_ecology.json` - Ecology API validation example