Guide for implementing Cloudflare R2 - S3-compatible object storage with zero egress fees. Use when implementing file storage, uploads/downloads, data migration to/from R2, configuring buckets, integrating with Workers, or working with R2 APIs and SDKs.
S3-compatible object storage with zero egress bandwidth fees. Built on Cloudflare's global network for high durability (11 nines) and strong consistency.
Required: a Cloudflare account with R2 enabled.
For API access: your Cloudflare account ID and an R2 API token (an Access Key ID and Secret Access Key).
For Wrangler CLI:
npm install -g wrangler
wrangler login

Wrangler:
wrangler r2 bucket create my-bucket

With location hint:
wrangler r2 bucket create my-bucket --location=wnam

Locations: wnam (West NA), enam (East NA), weur (West EU), eeur (East EU), apac (Asia Pacific)
Wrangler:
wrangler r2 object put my-bucket/file.txt --file=./local-file.txt

Workers API:
await env.MY_BUCKET.put('file.txt', fileContents, {
httpMetadata: {
contentType: 'text/plain',
},
});

Wrangler:
wrangler r2 object get my-bucket/file.txt --file=./downloaded.txt

Workers API:
const object = await env.MY_BUCKET.get('file.txt');
// get() returns null for a missing key; check before reading the body
const contents = await object.text();

wrangler.toml:
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-bucket"
preview_bucket_name = "my-bucket-preview"

Upload with metadata:
await env.MY_BUCKET.put('user-uploads/photo.jpg', imageData, {
httpMetadata: {
contentType: 'image/jpeg',
cacheControl: 'public, max-age=31536000',
},
customMetadata: {
uploadedBy: userId,
uploadDate: new Date().toISOString(),
},
});

Download with streaming:
const object = await env.MY_BUCKET.get('large-file.mp4');
if (object === null) {
return new Response('Not found', { status: 404 });
}
return new Response(object.body, {
headers: {
'Content-Type': object.httpMetadata?.contentType ?? 'application/octet-stream',
'ETag': object.etag,
},
});

List objects:
const listed = await env.MY_BUCKET.list({
prefix: 'user-uploads/',
limit: 100,
});
for (const object of listed.objects) {
console.log(object.key, object.size);
}

Delete object:
await env.MY_BUCKET.delete('old-file.txt');

Check if object exists:
const object = await env.MY_BUCKET.head('file.txt');
if (object) {
console.log('Exists:', object.size, 'bytes');
}

Configure the AWS CLI:
aws configure
# Access Key ID: <your-key-id>
# Secret Access Key: <your-secret>
# Region: auto

Operations:
# List buckets
aws s3api list-buckets --endpoint-url https://<accountid>.r2.cloudflarestorage.com
# Upload file
aws s3 cp file.txt s3://my-bucket/ --endpoint-url https://<accountid>.r2.cloudflarestorage.com
# Generate presigned URL (expires in 1 hour)
aws s3 presign s3://my-bucket/file.txt --endpoint-url https://<accountid>.r2.cloudflarestorage.com --expires-in 3600

JavaScript (AWS SDK v3):
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const s3 = new S3Client({
region: "auto",
endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID,
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
},
});
await s3.send(new PutObjectCommand({
Bucket: "my-bucket",
Key: "file.txt",
Body: fileContents,
}));

Python (boto3):
import boto3
s3 = boto3.client(
service_name="s3",
endpoint_url=f'https://{account_id}.r2.cloudflarestorage.com',
aws_access_key_id=access_key_id,
aws_secret_access_key=secret_access_key,
region_name="auto",
)
# Upload file
s3.upload_fileobj(file_obj, 'my-bucket', 'file.txt')
# Download file
s3.download_file('my-bucket', 'file.txt', './local-file.txt')

Configure rclone:
rclone config
# Select: Amazon S3 → Cloudflare R2
# Enter credentials and endpoint

Upload with multipart optimization:
# For large files (>100MB)
rclone copy large-video.mp4 r2:my-bucket/ \
--s3-upload-cutoff=100M \
--s3-chunk-size=100M

Wrangler:
wrangler r2 bucket create my-public-bucket
# Then enable in dashboard: R2 → Bucket → Settings → Public Access

r2.dev (development only, rate-limited):
https://pub-<hash>.r2.dev/file.txt

Custom domain (recommended for production):
Required for: Cloudflare features such as caching and Cache Rules, which are not available on r2.dev URLs.

Set CORS rules with Wrangler:
wrangler r2 bucket cors put my-bucket --rules '[
{
"AllowedOrigins": ["https://example.com"],
"AllowedMethods": ["GET", "PUT", "POST"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3600
}
]'

Important: Origins must match exactly (no trailing slash).
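As a hedged illustration of that exact-match rule (originAllowed is a hypothetical helper, not part of R2), the comparison behaves like plain string equality against the allowed list, so scheme, host, port, and the absence of a trailing slash must all agree:

```javascript
// Illustrative only: models exact-match origin checking as described above.
// A trailing slash or a different scheme makes the match fail.
function originAllowed(origin, allowedOrigins) {
  return allowedOrigins.includes(origin);
}

const allowed = ["https://example.com"];
console.log(originAllowed("https://example.com", allowed));  // true
console.log(originAllowed("https://example.com/", allowed)); // false
console.log(originAllowed("http://example.com", allowed));   // false
```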
For files >100MB or parallel uploads:
Workers API:
const multipart = await env.MY_BUCKET.createMultipartUpload('large-file.mp4');
// Upload parts (5MiB - 5GiB each, max 10,000 parts)
const part1 = await multipart.uploadPart(1, chunk1);
const part2 = await multipart.uploadPart(2, chunk2);
// Complete upload
const object = await multipart.complete([part1, part2]);

Constraints: all parts except the last must be the same size (5 MiB minimum); incomplete uploads persist until completed, aborted, or removed by a lifecycle rule.
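Given the part-size rules noted above (5 MiB minimum per part, 10,000-part cap), it can help to plan part boundaries before uploading. A hedged sketch; partRanges and its default part size are illustrative, not an R2 API:

```javascript
// Illustrative helper: split a total byte count into multipart ranges.
// Parts must be at least 5 MiB (the last part may be smaller), max 10,000 parts.
const MIN_PART = 5 * 1024 * 1024;
const MAX_PARTS = 10000;

function partRanges(totalSize, partSize = 100 * 1024 * 1024) {
  if (partSize < MIN_PART) throw new Error("part size below 5 MiB minimum");
  const count = Math.ceil(totalSize / partSize);
  if (count > MAX_PARTS) throw new Error("exceeds 10,000 part limit");
  const ranges = [];
  for (let i = 0; i < count; i++) {
    const start = i * partSize;
    // Part numbers are 1-indexed, matching uploadPart(partNumber, data)
    ranges.push({ partNumber: i + 1, start, end: Math.min(start + partSize, totalSize) });
  }
  return ranges;
}

// A 250 MiB file with 100 MiB parts yields three parts (100 + 100 + 50 MiB).
console.log(partRanges(250 * 1024 * 1024).length); // 3
```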
Best for: Gradual migration, avoiding upfront egress fees
# Enable for bucket
wrangler r2 bucket sippy enable my-bucket \
--provider=aws \
--bucket=source-bucket \
--region=us-east-1 \
--access-key-id=$AWS_KEY \
--secret-access-key=$AWS_SECRET

Objects migrate when first requested; subsequent requests are served from R2.
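Conceptually, Sippy automates a read-through pattern. A hedged, self-contained sketch with in-memory Maps standing in for R2 and the source bucket (all names illustrative; the real service operates on live buckets, not Maps):

```javascript
// Illustrative read-through migration: check R2 first, fall back to the
// source bucket, and copy the object into R2 so later reads are local.
function readThrough(key, r2, source) {
  if (r2.has(key)) return { value: r2.get(key), from: "r2" };
  if (!source.has(key)) return null;
  const value = source.get(key);
  r2.set(key, value); // migrate on first request
  return { value, from: "source" };
}

const r2 = new Map();
const source = new Map([["file.txt", "hello"]]);
console.log(readThrough("file.txt", r2, source).from); // source
console.log(readThrough("file.txt", r2, source).from); // r2
```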
Super Slurper (run from the dashboard). Best for: complete migration, known object list
Auto-delete or transition storage classes:
Wrangler:
wrangler r2 bucket lifecycle put my-bucket --rules '[
{
"action": {"type": "AbortIncompleteMultipartUpload"},
"filter": {},
"abortIncompleteMultipartUploadDays": 7
},
{
"action": {"type": "Transition", "storageClass": "InfrequentAccess"},
"filter": {"prefix": "archives/"},
"daysFromCreation": 90
}
]'

Trigger Workers on bucket events:
Wrangler:
wrangler r2 bucket notification create my-bucket \
--queue=my-queue \
--event-type=object-create

Supported events:
object-create - new uploads
object-delete - deletions

Message format:
{
"account": "account-id",
"bucket": "my-bucket",
"object": {"key": "file.txt", "size": 1024, "etag": "..."},
"action": "PutObject",
"eventTime": "2024-01-15T12:00:00Z"
}

401 Unauthorized: verify the Access Key ID and Secret Access Key are correct and the token has not expired.
403 Forbidden: the token lacks permission for this bucket or operation; check its R2 permissions and bucket scope.
404 Not Found: confirm the bucket name, object key, and the account ID in the endpoint URL.
Presigned URLs not working: check that the URL has not expired and that the request method and headers match what was signed.
Multipart upload failures: ensure parts meet the 5 MiB minimum and that all parts except the last are the same size.
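Transient upload failures are commonly handled with retries and exponential backoff. A hedged sketch; withRetries is an illustrative helper, not part of any R2 SDK:

```javascript
// Illustrative retry wrapper: retries a failing async operation with
// exponential backoff, rethrowing once attempts are exhausted.
async function withRetries(fn, attempts = 3, baseDelayMs = 100) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
}

// Simulated flaky part upload: fails twice, then succeeds on the third call.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("transient");
  return "etag-abc";
};

withRetries(flaky).then((etag) => console.log(etag, calls)); // etag-abc 3
```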
For detailed documentation, see:
references/api-reference.md - Complete API endpoint documentation
references/sdk-examples.md - SDK examples for all languages
references/workers-patterns.md - Advanced Workers integration patterns
references/pricing-guide.md - Detailed pricing and cost optimization