Use this skill for intelligent document processing and content extraction using LandingAI's Agentic Document Extraction (ADE). Trigger when users need to (1) Parse documents (PDFs, images, spreadsheets, presentations) into structured Markdown with layout understanding, (2) Extract specific structured data from documents using schemas (invoice fields, form data, table data, etc.), (3) Classify and separate multi-document batches by type (invoices vs receipts, statements vs forms, etc.), (4) Process large documents asynchronously (up to 1GB/1000 pages), (5) Get visual grounding (bounding boxes, page numbers) for extracted content — use when users mention bounding boxes, word locations, grounding, highlighting extracted content, or showing where data appears in a document. Also use this skill when the task involves understanding the content of a set of documents: it can help you write code that runs over many documents at once, which increases speed and reduces the cost of loading documents into the agent context window, because a single script can extract all the information needed.
LandingAI's Agentic Document Extraction (ADE) is a document processing SaaS that parses, extracts, and classifies documents without requiring templates or training. It provides three main capabilities: parsing documents into structured Markdown, extracting schema-driven structured data, and splitting/classifying multi-document batches.
Never install packages globally without user approval. Always check for a local Python environment first.
1. .venv/bin/python — uv-managed (this project)
2. venv/bin/python — standard Python venv
3. uv run python — if pyproject.toml exists
4. poetry run python — if poetry.lock exists
5. python3 — system fallback; warn the user

Use the local environment to install: landingai-ade, python-dotenv
The user may have already set up a .env file containing the API key in the same directory as the document-extraction skill. You MUST check this path first (ls -la .*/skills/document-extraction/.env). Also check the directory containing this SKILL.md file.
If not, provide instructions to create one. The script below will search for .env in common locations and load it.
.venv/bin/python - << 'EOF'
import os
from pathlib import Path
from dotenv import load_dotenv

# Load API key: prefer existing env var, then .env file lookup
if os.environ.get("VISION_AGENT_API_KEY"):
    print("API key found in existing environment variable")
else:
    def _find_env():
        for d in [Path.cwd().resolve(), *Path.cwd().resolve().parents]:
            for candidate in [
                # Add the directory where the document-extraction skill is located
                d / '.env',
                d / 'document-extraction/.env',
                d / 'skills/document-extraction/.env',
            ]:
                if candidate.is_file():
                    return candidate
        return None

    env = _find_env()
    if env:
        load_dotenv(env)
        print(f"API key loaded from: {env}")
    else:
        print("Warning: VISION_AGENT_API_KEY not set and no .env found")
EOF

If no key is found, instruct the user to get an API key from https://va.landing.ai/settings/api-key
Copy .env-sample to .env and add your API key:
cp .env-sample .env

Edit .env and add your key:

VISION_AGENT_API_KEY=your_actual_api_key_here

Note: The .env file is gitignored for security. Advanced users can also set the environment variable directly: export VISION_AGENT_API_KEY=<your-key>
EU Endpoint: If using the EU endpoint, set environment="eu" when initializing the client.
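A minimal sketch of that initialization (the `environment` parameter comes from the note above):

```python
from landingai_ade import LandingAIADE

# Route all requests to the EU endpoint
client = LandingAIADE(environment="eu")
```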
from dotenv import load_dotenv
load_dotenv() # Load API key from .env
from landingai_ade import LandingAIADE
from pathlib import Path
client = LandingAIADE()
# Parse a document
response = client.parse(
document=Path("document.pdf"),
model="dpt-2-latest"
)
# Access results
print(f"Pages: {response.metadata.page_count}")
print(f"Chunks: {len(response.chunks)}")
print("\nMarkdown output:")
print(response.markdown[:500]) # First 500 chars
# Save Markdown for extraction
with open("output.md", "w", encoding="utf-8") as f:
    f.write(response.markdown)

from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from landingai_ade.lib import pydantic_to_json_schema
from pydantic import BaseModel, Field
from pathlib import Path
# Define extraction schema using Pydantic
class Invoice(BaseModel):
    invoice_number: str = Field(description="Invoice number")
    invoice_date: str = Field(description="Invoice date")
    total_amount: float = Field(description="Total amount in USD")
    vendor_name: str = Field(description="Vendor name")
# Convert to JSON schema
schema = pydantic_to_json_schema(Invoice)
client = LandingAIADE()
# Extract from parsed markdown
response = client.extract(
schema=schema,
markdown=Path("output.md"), # From parse step
model="extract-latest"
)
# Access extracted data
print(response.extraction)
# Output: {'invoice_number': 'INV-12345', 'invoice_date': '2024-01-15', ...}
# Check extraction metadata (traceability)
print(response.extraction_metadata)

from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from pathlib import Path
client = LandingAIADE()
response = client.parse(
document=Path("/path/to/document.pdf"),
model="dpt-2-latest"
)
# Work with chunks
for chunk in response.chunks:
    print(f"Type: {chunk.type}, Page: {chunk.grounding.page}")
    print(f"Content: {chunk.markdown[:100]}...")

# Parse from a URL instead of a local file
response = client.parse(
document_url="https://example.com/document.pdf",
model="dpt-2-latest"
)

Spreadsheets (CSV, XLSX) return a different response type than documents. Key differences:
| Field | Documents (ParseResponse) | Spreadsheets (SpreadsheetParseResponse) |
|---|---|---|
| `metadata.page_count` | ✓ | ✗ (uses `sheet_count`, `total_rows`, `total_cells`, `total_chunks`, `total_images`) |
| `splits[].pages` | ✓ | ✗ (uses `sheets` — array of sheet indices) |
| `grounding` (top-level) | ✓ | ✗ (not present for spreadsheets) |
| Chunk grounding | Always present | Optional (null for table chunks, present for embedded image chunks) |
response = client.parse(
document=Path("data.xlsx"),
model="dpt-2-latest"
)
# Spreadsheet metadata
print(f"Sheets: {response.metadata.sheet_count}")
print(f"Total rows: {response.metadata.total_rows}")
print(f"Total cells: {response.metadata.total_cells}")
# Splits use 'sheets' instead of 'pages'
for split in response.splits:
    print(f"Sheet indices: {split.sheets}")
    print(f"Markdown: {split.markdown[:200]}...")

Choose the right model for your documents:
| Model | Best For | Chunk Types |
|---|---|---|
| dpt-2-latest | Complex documents with logos, signatures, ID cards | text, table, figure, logo, card, attestation, scan_code, marginalia |
| dpt-2-mini | Simple, digitally-native documents (faster, cheaper) | text, table, figure, marginalia |
| dpt-1 | ⚠️ Deprecated March 31, 2026 — migrate to dpt-2 | text, table, figure, marginalia |
Recommendation: Use dpt-2-latest unless you have simple documents where cost/speed is critical.
Version Pinning: For production, use dated versions (e.g., dpt-2-20260302) for reproducibility.
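For example (a sketch; the dated version string is the one cited above — check the ADE release notes for current versions, and assume an existing `client`):

```python
from pathlib import Path

# Pin a dated model version for reproducible parses in production
response = client.parse(
    document=Path("document.pdf"),
    model="dpt-2-20260302",
)
```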
For files up to 1 GB or 6,000 pages, use Parse Jobs:
import time
from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from pathlib import Path
client = LandingAIADE()
# Step 1: Create parse job
job = client.parse_jobs.create(
document=Path("large_document.pdf"),
model="dpt-2-latest"
)
job_id = job.job_id
print(f"Job {job_id} created")
# Step 2: Poll for completion
while True:
    response = client.parse_jobs.get(job_id)
    if response.status == "completed":
        print(f"Job {job_id} completed")
        break
    if response.status in ("failed", "cancelled"):
        # Surface the documented failure_reason instead of polling forever
        raise RuntimeError(f"Job {job_id} {response.status}: {response.failure_reason}")
    print(f"Progress: {response.progress * 100:.0f}%")
    time.sleep(5)

# Step 3: Access results
# Results are in response.data (or response.output_url for large results)
if response.data:
    print(f"Chunks: {len(response.data.chunks)}")
    with open("output.md", "w", encoding="utf-8") as f:
        f.write(response.data.markdown)
elif response.output_url:
    # Results > 1MB are returned as a presigned URL
    print(f"Download results from: {response.output_url}")

Job Status Response Fields:
- `job_id`, `status` (pending, processing, completed, failed, cancelled), `progress` (0-1)
- `data`: The ParseResponse (or SpreadsheetParseResponse) when complete and the result is < 1MB
- `output_url`: Presigned S3 URL when the result is > 1MB or when `output_save_url` was used. Expires after 1 hour; a new URL is generated on each GET.
- `metadata`: Same as sync parse (filename, page_count, duration_ms, etc.)
- `failure_reason`: Error message if the job failed

If ZDR is enabled for your organization, you must provide an `output_save_url` where parsed results will be saved; the results will not be returned in the API response. ZDR is not enabled by default. Typically `output_save_url` is a presigned URL with write permissions to your S3 bucket, but you can also use other storage solutions that support file uploads via HTTP PUT requests.
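Generating such a presigned PUT URL with boto3, for instance (a sketch; the bucket and key names are placeholders):

```python
import boto3

# Presign an HTTP PUT so ADE can write the result to your bucket
s3 = boto3.client("s3")
output_save_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "your-bucket", "Key": "ade/output.json"},
    ExpiresIn=3600,  # seconds
)
```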
job = client.parse_jobs.create(
document=Path("sensitive_document.pdf"),
model="dpt-2-latest",
output_save_url="https://your-bucket.s3.amazonaws.com/output.json"
)

List all async parse jobs with optional pagination and status filtering:
# List recent jobs
jobs_response = client.parse_jobs.list(page=0, page_size=10)
for job in jobs_response.jobs:
print(f"{job.job_id}: {job.status} ({job.progress:.0%})")
# Filter by status
completed = client.parse_jobs.list(status="completed", page_size=5)
print(f"Completed jobs: {len(completed.jobs)}, more: {completed.has_more}")Available status filters: pending, processing, completed, failed, cancelled
Parse returns a ParseResponse with:
- `markdown`: Complete document in Markdown with HTML anchor tags
- `chunks`: Array of extracted elements (each with unique ID, type, content, and per-chunk grounding)
- `grounding`: Dictionary mapping element IDs to detailed location data (page, bounding box, grounding type, and table cell position). See JSON Response for structure.
- `metadata`: Processing info — filename, org_id, page_count, duration_ms, credit_usage (float), job_id, version, failed_pages
- `splits`: Array of split objects grouping chunks. Always present — contains a single "full" split by default, or per-page splits if `split="page"` was used (see the sketch below). Note: Parse splits use a `class` field (values: "full" or "page"), which is different from the Split API's `classification` field.

Common chunk types: text, table, figure, logo, card, attestation, scan_code, marginalia
For detailed chunk type reference, see references/chunk-types.md
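A minimal sketch of per-page splits (field names follow the JSON Response structure shown later; assumes an existing `client`):

```python
from pathlib import Path

response = client.parse(
    document=Path("document.pdf"),
    model="dpt-2-latest",
    split="page",  # one split object per page
)
for split in response.splits:
    print(f"Pages {split.pages}: {len(split.chunks)} chunk IDs")
    print(split.markdown[:100])
```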
Anchor tag prefix in `chunk.markdown`: Every chunk's `markdown` field is prefixed with an HTML anchor tag embedding the chunk UUID: `<a id='abc123...'></a>\n\nActual content…`. This is how the full document markdown links back to individual chunks. Strip it before string matching, display, or RAG indexing:

import re

_ANCHOR_RE = re.compile(r"<a[^>]*></a>\s*", re.IGNORECASE)

def chunk_text(ch) -> str:
    """Return clean chunk markdown without the anchor prefix."""
    return _ANCHOR_RE.sub("", ch.markdown or "").strip()

# Example: fingerprint match against a section of the full markdown
intro_chunks = [ch for ch in response.chunks if chunk_text(ch)[:80] in intro_markdown]
The SDK provides a built-in save_to parameter on parse(), extract(), and split() that automatically saves the JSON response to a folder:
from pathlib import Path
# Parse and auto-save response JSON to output/ folder
response = client.parse(
document=Path("document.pdf"),
model="dpt-2-latest",
save_to="output/" # Creates output/document_parse_output.json
)
# Response is still returned normally for immediate use
print(response.markdown[:200])

The save_to parameter:

- Saves the response as `{input_filename}_{method}_output.json` (e.g., `document_parse_output.json`)
- Works with `client.parse()`, `client.extract()`, and `client.split()`

For manual serialization (e.g., custom filenames or selective saving), use model_dump():
import json
response_dict = response.model_dump()
with open("parse_response.json", "w", encoding="utf-8") as f:
json.dump(response_dict, f, indent=2, ensure_ascii=False)
# Save markdown separately for extraction
with open("document_parsed.md", "w", encoding="utf-8") as f:
f.write(response.markdown)Important: Always use model_dump() to serialize the complete response. Do not manually construct dictionaries with selected fields, as you may miss important data like the splits array or complete grounding information.
response = client.parse(
document=Path("document.pdf"),
model="dpt-2-latest",
split="page", # Optional: organize chunks by page
password="secret", # Optional: decrypt protected files (ZDR only)
save_to="output/", # Optional: auto-save response JSON
)

Organizations with Zero Data Retention (ZDR) enabled can parse password-protected files by passing the password parameter. Supported formats: PDF, DOC, DOCX, ODT, PPT, PPTX, XLSX.
# Sync parse
response = client.parse(
document=Path("encrypted.pdf"),
password="document_password",
model="dpt-2-latest"
)
# Async parse jobs
job = client.parse_jobs.create(
document=Path("encrypted.pdf"),
password="document_password",
model="dpt-2-latest"
)

Note: Without ZDR the API returns HTTP 422. If the password is wrong the API returns HTTP 422 with a decryption error. The parameter is ignored for unencrypted documents.
Define what to extract using JSON Schema or Pydantic models.
Pydantic approach (recommended for Python):
from pydantic import BaseModel, Field
from landingai_ade.lib import pydantic_to_json_schema
class BankStatement(BaseModel):
    account_holder: str = Field(description="Account holder name")
    account_number: str = Field(description="Account number")
    beginning_balance: float = Field(description="Beginning balance in USD")
    ending_balance: float = Field(description="Ending balance in USD")

schema = pydantic_to_json_schema(BankStatement)

JSON Schema approach:
schema = {
    "type": "object",
    "properties": {
        "account_holder": {
            "type": "string",
            "description": "Account holder name"
        },
        "account_number": {
            "type": "string",
            "description": "Account number"
        },
        "beginning_balance": {
            "type": "number",
            "description": "Beginning balance in USD"
        },
        "ending_balance": {
            "type": "number",
            "description": "Ending balance in USD"
        }
    },
    "required": ["account_holder", "account_number"]
}

from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from pathlib import Path
client = LandingAIADE()
# Step 1: Parse document
parse_response = client.parse(
document=Path("bank_statement.pdf"),
model="dpt-2-latest"
)
# Save markdown
with open("parsed.md", "w", encoding="utf-8") as f:
    f.write(parse_response.markdown)
# Step 2: Extract structured data
extract_response = client.extract(
schema=schema, # Your JSON schema
markdown=Path("parsed.md"),
model="extract-latest"
)
# Access extracted data
print(extract_response.extraction)
# Check traceability (which chunks provided each field)
for field, metadata in extract_response.extraction_metadata.items():
    print(f"{field}: from chunks {metadata.chunk_ids}")

You can extract from a remotely-hosted Markdown file using markdown_url:
extract_response = client.extract(
schema=schema,
markdown_url="https://example.com/parsed_document.md",
model="extract-latest"
)

For detailed schema patterns, see references/extraction-schemas.md
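The `chunk_ids` in `extraction_metadata` are the same UUIDs that key the parse response's top-level `grounding` dictionary, so you can map each extracted field back to a page and bounding box. A minimal sketch (assumes the `parse_response` and `extract_response` from the workflow above):

```python
# Join extract traceability back to parse grounding for visual grounding
for field, meta in extract_response.extraction_metadata.items():
    for chunk_id in meta.chunk_ids or []:
        info = parse_response.grounding.get(chunk_id)
        if info:
            print(f"{field}: page {info.page}, "
                  f"box=({info.box.left:.2f}, {info.box.top:.2f}, "
                  f"{info.box.right:.2f}, {info.box.bottom:.2f})")
```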
Nested objects:
class Address(BaseModel):
    street: str
    city: str
    zip_code: str

class Invoice(BaseModel):
    invoice_number: str
    billing_address: Address  # Nested object

Arrays (lists):
class LineItem(BaseModel):
    description: str
    quantity: int
    amount: float

class Invoice(BaseModel):
    invoice_number: str
    line_items: list[LineItem]  # Array of objects

Enums (restricted values):
class BankStatement(BaseModel):
    account_type: str = Field(
        description="Account type",
        enum=["Checking", "Savings"]  # Only these values allowed
    )

Nullable fields:
class Patient(BaseModel):
    first_name: str
    middle_name: str | None = Field(default=None)  # Optional field
    last_name: str

Classify documents before extracting type-specific fields:
from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from pydantic import BaseModel, Field
from landingai_ade.lib import pydantic_to_json_schema
from pathlib import Path
# Step 1: Define classification schema
class DocumentType(BaseModel):
    document_type: str = Field(
        description="Document classification",
        enum=["Invoice", "Receipt", "Bank Statement", "Other"]
    )
client = LandingAIADE()
# Step 2: Parse document
parse_response = client.parse(
document=Path("document.pdf"),
model="dpt-2-latest"
)
# Step 3: Classify document
classification_schema = pydantic_to_json_schema(DocumentType)
classification_response = client.extract(
schema=classification_schema,
markdown=parse_response.markdown,
model="extract-latest"
)
doc_type = classification_response.extraction["document_type"]
print(f"Classified as: {doc_type}")
# Step 4: Extract based on type
if doc_type == "Invoice":
    schema = pydantic_to_json_schema(InvoiceSchema)  # your own Pydantic model
elif doc_type == "Receipt":
    schema = pydantic_to_json_schema(ReceiptSchema)  # your own Pydantic model
else:
    print("Unsupported document type")
    exit()
# Extract type-specific fields
extract_response = client.extract(
schema=schema,
markdown=parse_response.markdown,
model="extract-latest"
)

Use the Split API when you have multi-document batches in a single file that need to be separated:
Define how to classify and separate documents using split_class:
from dotenv import load_dotenv
load_dotenv()
from landingai_ade import LandingAIADE
from pathlib import Path
client = LandingAIADE()
# Step 1: Parse multi-document PDF
parse_response = client.parse(
document=Path("batch.pdf"),
model="dpt-2-latest"
)
# Step 2: Define split classes
split_classes = [
    {
        "name": "Invoice",
        "description": "Commercial invoices with itemized charges",
        "identifier": "Invoice Number"  # Separate by invoice number
    },
    {
        "name": "Receipt",
        "description": "Payment receipts showing transaction details",
        "identifier": "Receipt Date"
    },
    {
        "name": "Bank Statement",
        "description": "Monthly bank account statements"
    }
]
# Step 3: Split document
split_response = client.split(
markdown=parse_response.markdown,
split_class=split_classes
)
# Step 4: Process each split
for split in split_response.splits:
    print(f"Type: {split.classification}")
    print(f"Identifier: {split.identifier}")
    print(f"Pages: {split.pages}")
    print(f"Content: {split.markdowns[0][:200]}...")

Split Class Components:

- `name`: The label assigned to documents matching this class
- `description`: What the document type contains, used to guide classification
- `identifier` (optional): A field whose value separates multiple instances of the same class (e.g., splitting by invoice number)
Split from URL: You can also split from a remotely-hosted Markdown file:
split_response = client.split(
markdown_url="https://example.com/parsed_document.md",
split_class=split_classes
)

ADE converts documents to structured Markdown:
# Document Title
## Section 1
Paragraph text...
| Column 1 | Column 2 |
|----------|----------|
| Data 1 | Data 2 |
<::Caption: Bar chart showing quarterly revenue::>

Features:

- Figure and chart captions are emitted with the `<::Caption: description::>` syntax

Parse returns structured JSON with five top-level fields:
{
"markdown": "# Document...",
"chunks": [
{
"id": "7d58c5cf-e4f5-4a7e-ba34-0cd7bc6a6506",
"type": "text",
"markdown": "Content...",
"grounding": {
"page": 0,
"box": { "left": 0.1, "top": 0.2, "right": 0.9, "bottom": 0.3 }
}
}
],
"splits": [
{
"class": "full",
"identifier": "full",
"pages": [0],
"markdown": "# Document...",
"chunks": ["7d58c5cf-e4f5-4a7e-ba34-0cd7bc6a6506"]
}
],
"grounding": {
"7d58c5cf-e4f5-4a7e-ba34-0cd7bc6a6506": {
"box": { "left": 0.1, "top": 0.2, "right": 0.9, "bottom": 0.3 },
"page": 0,
"type": "chunkText",
"confidence": 0.95,
"low_confidence_spans": []
},
"0-1": {
"box": { "left": 0.15, "top": 0.4, "right": 0.85, "bottom": 0.7 },
"page": 0,
"type": "table"
},
"0-2": {
"box": { "left": 0.15, "top": 0.4, "right": 0.5, "bottom": 0.55 },
"page": 0,
"type": "tableCell",
"position": { "row": 0, "col": 0, "rowspan": 1, "colspan": 1,
"chunk_id": "ef24b1ea-..." }
}
},
"metadata": {
"filename": "document.pdf",
"org_id": "org-123",
"page_count": 5,
"duration_ms": 1500,
"credit_usage": 2.0,
"job_id": "abc-123",
"version": "dpt-2-20260302",
"failed_pages": []
}
}

Top-level grounding is a dictionary keyed by element ID (UUID for chunks, `{page}-{base62}` for tables/cells). Each value contains `box`, `page`, `type`, and optionally `confidence` and `low_confidence_spans` (see Confidence Scores). Table cell entries also include a `position` field (see Grounding and Traceability).
Grounding types use a chunk prefix to distinguish them from chunk types. The table and tableCell types are grounding-only (no corresponding chunk type):
| Grounding Type | Chunk Type | Description |
|---|---|---|
| `chunkText` | `text` | Text content |
| `chunkTable` | `table` | Table chunk (overall location) |
| `chunkFigure` | `figure` | Figures and images |
| `chunkMarginalia` | `marginalia` | Headers, footers, page numbers |
| `chunkLogo` | `logo` | Company logos (DPT-2) |
| `chunkCard` | `card` | ID cards, licenses (DPT-2) |
| `chunkAttestation` | `attestation` | Signatures, stamps (DPT-2) |
| `chunkScanCode` | `scan_code` | QR codes, barcodes (DPT-2) |
| `table` | (grounding only) | HTML `<table>` element within a table chunk |
| `tableCell` | (grounding only) | Individual cell within a table |
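For example, you can index table cells by their documented `position` fields (a sketch; assumes a parse `response` from earlier):

```python
# Rebuild a simple grid index from tableCell grounding entries
cells = {}
for elem_id, info in response.grounding.items():
    if info.type == "tableCell" and info.position:
        cells[(info.position.row, info.position.col)] = elem_id

if cells:
    rows = max(r for r, _ in cells) + 1
    cols = max(c for _, c in cells) + 1
    print(f"Table grid: {rows} rows x {cols} cols")
```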
Extract returns:
{
"extraction": {
"invoice_number": "INV-12345",
"total": 1250.00
},
"extraction_metadata": {
"invoice_number": {
"chunk_ids": ["chunk-uuid-1"]
},
"total": {
"chunk_ids": ["chunk-uuid-2"],
"cell_ids": ["2-a5"]
}
},
"metadata": {
"filename": "markdown.md",
"org_id": "org-123",
"duration_ms": 850,
"credit_usage": 1.0,
"job_id": "abc-456",
"version": "extract-20251024",
"schema_violation_error": null,
"fallback_model_version": null
}
}

Extract Metadata Fields:
- `schema_violation_error`: null when extraction matches the schema. Contains a detailed error message when the extracted data doesn't fully conform (HTTP 206 response). Extraction still returns partial data and consumes credits.
- `fallback_model_version`: null normally. Contains the model version actually used when the initial extraction attempt failed with the requested version and a fallback was used.

Every parsed element includes precise location information in the top-level grounding dictionary:
- Bounding box coordinates: `left`, `top`, `right`, `bottom`
- Element IDs: UUIDs for chunks; `{page}-{base62}` for tables and table cells
  - Base62 sequence: 0-1, 0-2, ..., 0-9, 0-a, ..., 0-z, 0-A, ..., 0-Z, 0-10, etc. (page 1 restarts at 1-1)
- Each entry has a `type` field using prefixed names (e.g., `chunkText`, `chunkTable`). See Grounding Type Mapping.
- `tableCell` entries include a `position` object with `row`, `col` (zero-indexed), `rowspan`, `colspan`, and `chunk_id` (the parent table chunk UUID)

Per-chunk grounding (on each chunk object) contains only `box` and `page`. The top-level grounding dictionary adds `type` and, for table cells, `position`.
Example:
# Per-chunk grounding (basic location)
for chunk in response.chunks:
    print(f"Chunk {chunk.id} on page {chunk.grounding.page}")
    bbox = chunk.grounding.box
    print(f"Location: ({bbox.left}, {bbox.top}) to ({bbox.right}, {bbox.bottom})")

# Top-level grounding (detailed, with type and position)
# NOTE: grounding values are Pydantic models — use attribute access, not dict access
for elem_id, info in response.grounding.items():
    print(f"{elem_id}: type={info.type}, page={info.page}")
    if info.type == "tableCell" and info.position:
        print(f"  Cell at row={info.position.row}, col={info.position.col}")

Important:
`response.grounding` is a `Dict[str, Grounding]` — the outer container is a dict (so `.items()`, `.get()` work), but each value is a Pydantic model. Use attribute access (`info.type`, `info.box.left`), not dict access (`info["type"]`).
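To show where content appears on the page (the visual-grounding use case), you can draw the boxes onto the source PDF. A hedged sketch with PyMuPDF (`pip install pymupdf`; assumes the 0-1 normalized coordinates shown in the JSON sample and a parse `response` in scope):

```python
import fitz  # PyMuPDF

doc = fitz.open("document.pdf")
for chunk in response.chunks:
    g = chunk.grounding
    page = doc[g.page]
    # Scale normalized box coordinates to page points
    rect = fitz.Rect(
        g.box.left * page.rect.width,
        g.box.top * page.rect.height,
        g.box.right * page.rect.width,
        g.box.bottom * page.rect.height,
    )
    page.draw_rect(rect, color=(1, 0, 0), width=1)  # red outline
doc.save("document_annotated.pdf")
```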
Top-level grounding entries may include confidence information:
- `confidence` (float | None): Overall confidence score (0.0–1.0) for the chunk's transcription
- `low_confidence_spans` (list | None): Specific text spans with low confidence, each containing:
  - `confidence` (float): Span-level confidence score
  - `text` (str): The low-confidence text
  - `span` (list): Position markers within the chunk

# Access confidence scores from top-level grounding
for elem_id, info in response.grounding.items():
    if info.confidence is not None:
        print(f"{elem_id}: confidence={info.confidence:.2f}")
        for span in info.low_confidence_spans or []:
            print(f"  Low confidence ({span.confidence:.2f}): "
                  f"'{span.text}'")

Notes:
- Confidence applies to chunk transcription (chunk-prefixed grounding types carry it; `table`/`tableCell` types may not)
- Pin dated model versions in production for stable behavior (e.g., `dpt-2-20260302`)
- Use descriptive schema field names to improve extraction accuracy (e.g., `invoice_number` not `number`)

For detailed schema patterns, see references/extraction-schemas.md
try:
    response = client.parse(document=Path("doc.pdf"), model="dpt-2-latest")
except Exception as e:
    print(f"Parse error: {e}")
    # Handle error (check file format, file size, API key, etc.)

try:
    extract_response = client.extract(schema=schema, markdown=response.markdown, model="extract-latest")
except Exception as e:
    print(f"Extract error: {e}")
    # Handle error (check schema validity, markdown format, etc.)

Both Parse and Extract APIs can return HTTP 206 (Partial Content) when processing partially succeeds:
Parse 206: Some pages failed to parse. Check metadata.failed_pages:
response = client.parse(document=Path("doc.pdf"), model="dpt-2-latest")
if response.metadata.failed_pages:
    print(f"Failed pages: {response.metadata.failed_pages}")
    # Remaining pages were parsed successfully

Extract 206: Extraction completed but data doesn't fully match schema. Check metadata.schema_violation_error:
response = client.extract(schema=schema, markdown=markdown, model="extract-latest")
err = response.metadata.schema_violation_error
if err:
    print(f"Schema violation: {err}")
    # Extraction still returns partial data; credits are consumed

Note: 206 responses still consume credits. The API returns the best results it could produce.
split="page" parameter when you need page-level organizationpassword parameter (requires ZDR). Without ZDR, remove password protection before parsingFor complete file format reference, see references/file-formats.md
See references/use-cases.md for complete worked examples: invoice processing, form data extraction, multi-document classification, table extraction, and figure cropping with PyMuPDF.
See references/troubleshooting.md for HTTP error codes, parse failures, extraction accuracy issues, schema validation errors, and performance guidance.
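For set-of-documents workflows (the batch use case this skill highlights), a minimal sketch that parses every PDF in a folder and runs the same extraction schema over each (the `schema` variable, folder name, and output path are illustrative):

```python
import json
from pathlib import Path

from dotenv import load_dotenv
from landingai_ade import LandingAIADE

load_dotenv()
client = LandingAIADE()

results = {}
for pdf in sorted(Path("documents/").glob("*.pdf")):
    parsed = client.parse(document=pdf, model="dpt-2-latest")
    extracted = client.extract(
        schema=schema,  # your JSON schema from the Extract section
        markdown=parsed.markdown,
        model="extract-latest",
    )
    results[pdf.name] = extracted.extraction

Path("batch_results.json").write_text(
    json.dumps(results, indent=2, ensure_ascii=False), encoding="utf-8"
)
```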