Edge Cases and Advanced Scenarios

Advanced scenarios, edge cases, and important considerations when using @cfkit/r2.

Response Body Consumption

The getObject() method returns a Response object whose body can only be consumed once:

const obj = await bucket.getObject('photo.jpg');

// ❌ This will fail - the body can only be consumed once
const blob1 = await obj.body.blob();
const blob2 = await obj.body.blob(); // Error: body already consumed

// ✅ Clone the body before the first read to read it multiple times
const obj2 = await bucket.getObject('photo.jpg');
const clonedBody = obj2.body.clone();
const firstRead = await obj2.body.blob();
const secondRead = await clonedBody.blob(); // Works!
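
If you only need to read the data once, another option is to buffer it into memory and derive everything else from that buffer. A minimal sketch, assuming obj.body also exposes the standard arrayBuffer() method alongside the blob() and text() calls shown in these examples:

const obj3 = await bucket.getObject('photo.jpg');

// Read the body a single time into an ArrayBuffer
const buffer = await obj3.body.arrayBuffer();

// Derive as many views of the data as you need from the buffered bytes
const asBlob = new Blob([buffer]);
const asText = new TextDecoder().decode(buffer);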

Handling Non-Existent Objects

Always check existence before operations that throw on missing objects:

// ❌ This throws if object doesn't exist
const obj = await bucket.getObject('missing.jpg'); // Error!

// ✅ Check existence first
if (await bucket.objectExists('photo.jpg')) {
  const obj = await bucket.getObject('photo.jpg');
  // Process object
} else {
  console.log('Object does not exist');
}
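
If the extra round trip of objectExists() is undesirable, a try/catch around getObject() is an alternative sketch; the exact error thrown for a missing object depends on the library, so treat the catch branch as an assumption:

try {
  const obj = await bucket.getObject('photo.jpg');
  // Process object
} catch (error) {
  // Assumption: a missing object surfaces as a thrown error
  console.log('Object does not exist or could not be fetched:', error);
}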

Concurrent Uploads to Same Key

Multiple concurrent uploads to the same key result in the last write winning:

// ⚠️ Race condition: Last write wins
await Promise.all([
  bucket.uploadFile('file.jpg', file1, { contentType: 'image/jpeg' }),
  bucket.uploadFile('file.jpg', file2, { contentType: 'image/jpeg' }),
  bucket.uploadFile('file.jpg', file3, { contentType: 'image/jpeg' })
]);
// Only one of the three files survives - whichever write completes last

// ✅ Use unique keys or implement locking
const timestamp = Date.now();
await Promise.all([
  bucket.uploadFile(`file-${timestamp}-1.jpg`, file1, { contentType: 'image/jpeg' }),
  bucket.uploadFile(`file-${timestamp}-2.jpg`, file2, { contentType: 'image/jpeg' }),
  bucket.uploadFile(`file-${timestamp}-3.jpg`, file3, { contentType: 'image/jpeg' })
]);
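
If several writers genuinely must target the same key, one simple form of locking is to serialize those writes within your own process. The queue below is not part of @cfkit/r2; it only reuses the uploadFile() call from the examples above, and it does not help when writers run in separate processes or workers:

// Minimal per-key write queue: uploads to the same key run one after another
const keyLocks = new Map<string, Promise<unknown>>();

function uploadSerialized(key: string, file: Blob, options: { contentType: string }) {
  const previous = keyLocks.get(key) ?? Promise.resolve();
  const next = previous
    .catch(() => {}) // a failed upload must not block later writes
    .then(() => bucket.uploadFile(key, file, options));
  keyLocks.set(key, next);
  return next;
}

// All three uploads target the same key, but now run in a defined order
await Promise.all([
  uploadSerialized('file.jpg', file1, { contentType: 'image/jpeg' }),
  uploadSerialized('file.jpg', file2, { contentType: 'image/jpeg' }),
  uploadSerialized('file.jpg', file3, { contentType: 'image/jpeg' })
]);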

Expired Pre-signed URLs

Handle expired URLs gracefully:

// Client-side: detect an expired URL when the upload is attempted
async function uploadWithExpiredUrlHandling(url: string, file: File) {
  const response = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file
  });
  
  if (response.status === 403) {
    // URL expired, request new one from server
    throw new Error('Upload URL expired, please refresh');
  }
  
  if (!response.ok) {
    throw new Error(`Upload failed: ${response.statusText}`);
  }
}
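
In practice this usually pairs with retrying once against a freshly issued URL. The requestFreshUploadUrl() callback below is hypothetical - it stands in for whatever server endpoint calls presignedUploadUrl() and returns the new URL:

async function uploadWithRefresh(file: File, requestFreshUploadUrl: () => Promise<string>) {
  try {
    await uploadWithExpiredUrlHandling(await requestFreshUploadUrl(), file);
  } catch (error) {
    // Assumption: an expired URL surfaces as the 'expired' error thrown above
    if (error instanceof Error && error.message.includes('expired')) {
      await uploadWithExpiredUrlHandling(await requestFreshUploadUrl(), file);
    } else {
      throw error;
    }
  }
}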

Metadata Header Requirements

When using pre-signed URLs with metadata, headers must match exactly:

// Server: Generate URL with metadata
const result = await bucket.presignedUploadUrl({
  key: 'file.jpg',
  contentType: 'image/jpeg',
  metadata: {
    'user-id': '123',
    'timestamp': '1234567890'
  }
});

// Client: Must include exact headers (case-sensitive)
await fetch(result.url, {
  method: 'PUT',
  headers: {
    'Content-Type': 'image/jpeg',
    'x-amz-meta-user-id': '123',        // ✅ Exact match
    'x-amz-meta-timestamp': '1234567890' // ✅ Exact match
  },
  body: file
});

// ❌ Missing or incorrect headers cause upload to fail
await fetch(result.url, {
  method: 'PUT',
  headers: {
    'Content-Type': 'image/jpeg'
    // Missing metadata headers - upload will fail!
  },
  body: file
});
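
To keep the client and server in sync, it can help to derive the upload headers from the same metadata object instead of writing them by hand. A small sketch that only encodes the x-amz-meta-<key> convention shown above:

function metadataHeaders(metadata: Record<string, string>): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [key, value] of Object.entries(metadata)) {
    headers[`x-amz-meta-${key}`] = value;
  }
  return headers;
}

// Client: build the metadata headers from the same object the server used
await fetch(result.url, {
  method: 'PUT',
  headers: {
    'Content-Type': 'image/jpeg',
    ...metadataHeaders({ 'user-id': '123', 'timestamp': '1234567890' })
  },
  body: file
});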

ETag Availability

ETag may be undefined in some responses:

const result = await bucket.uploadFile('file.jpg', file, {
  contentType: 'image/jpeg'
});

// ⚠️ ETag may be undefined
if (result.etag) {
  console.log('ETag:', result.etag);
  // Use ETag for conditional operations
} else {
  console.log('ETag not available');
}

Empty Buckets and Objects

Handle empty states gracefully:

// Empty bucket list
const buckets = await r2.listBuckets();
if (buckets.length === 0) {
  console.log('No buckets found');
} else {
  buckets.forEach(b => console.log(`Bucket: ${b.name}`));
}

// Empty file upload
const emptyBlob = new Blob([]);
await bucket.uploadFile('empty.txt', emptyBlob, {
  contentType: 'text/plain'
});
// This works - creates a 0-byte file

String Encoding

String inputs are converted to UTF-8 bytes:

// Upload text content
await bucket.uploadFile('data.txt', 'Hello, 世界!', {
  contentType: 'text/plain; charset=utf-8'
});

// Download and verify encoding
const obj = await bucket.getObject('data.txt');
const text = await obj.body.text();
console.log(text); // "Hello, 世界!"
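
If you prefer the encoding to be explicit rather than implicit, encode the string yourself and upload the resulting bytes. This assumes uploadFile() accepts a Blob, as in the other examples:

// Encode the string to UTF-8 bytes explicitly
const bytes = new TextEncoder().encode('Hello, 世界!');

await bucket.uploadFile('data-explicit.txt', new Blob([bytes]), {
  contentType: 'text/plain; charset=utf-8'
});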

Bucket Existence Race Conditions

A bucket may be deleted between the existence check and the operation that follows it:

// ⚠️ Race condition possible
if (await bucket.exists()) {
  // Bucket might be deleted here
  await bucket.uploadFile('file.jpg', file, { contentType: 'image/jpeg' });
  // May throw error if bucket was deleted
}

// ✅ Handle errors appropriately
try {
  if (await bucket.exists()) {
    await bucket.uploadFile('file.jpg', file, { contentType: 'image/jpeg' });
  }
} catch (error) {
  if (error instanceof Error && error.message.includes('bucket')) {
    console.error('Bucket no longer exists');
    // Handle bucket deletion
  } else {
    throw error;
  }
}

Large File Handling

There is no built-in chunking, so handle large files carefully:

// ⚠️ Large files may timeout or fail
const largeFile = new File([/* large data */], 'large.bin');
await bucket.uploadFile('large.bin', largeFile, {
  contentType: 'application/octet-stream'
});
// May fail for very large files

// ✅ Consider using direct upload for large files
// ✅ Or implement client-side chunking before upload
// ✅ Or use multipart upload if available
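
As an illustration of the client-side chunking option, the sketch below splits a large file into fixed-size parts and uploads each part as its own object. This is not a multipart upload - the parts are independent objects, and reassembling them (or reading them with range requests) is up to your own code:

const CHUNK_SIZE = 50 * 1024 * 1024; // 50 MiB per part - adjust to your limits

async function uploadInChunks(key: string, file: File): Promise<string[]> {
  const partKeys: string[] = [];
  const totalParts = Math.ceil(file.size / CHUNK_SIZE);

  for (let i = 0; i < totalParts; i++) {
    // Blob.slice() does not load the whole file into memory
    const part = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const partKey = `${key}.part${i}`;

    await bucket.uploadFile(partKey, part, {
      contentType: 'application/octet-stream'
    });
    partKeys.push(partKey);
  }

  return partKeys;
}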

Content Type Validation Limitations

Content type validation happens only when the pre-signed URL is generated; R2 does not enforce it at upload time:

// Server: Generate URL with content type restriction
const result = await bucket.presignedUploadUrl({
  key: 'file.jpg',
  contentType: 'image/jpeg',
  allowedContentTypes: ['image/*']
});
// Validation passes: image/jpeg matches image/*

// ⚠️ Client can still upload different content type
// R2 doesn't enforce the restriction
await fetch(result.url, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/pdf' }, // Different type!
  body: pdfFile
});
// This may succeed - R2 doesn't validate
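
Since R2 does not enforce the restriction, any real enforcement has to live in your own code. One small building block is the wildcard match that allowedContentTypes implies; this is a sketch, not the library's internal implementation:

// Returns true when a concrete content type matches a pattern like 'image/*'
function matchesContentType(contentType: string, pattern: string): boolean {
  if (pattern === '*/*' || pattern === contentType) return true;
  if (pattern.endsWith('/*')) {
    return contentType.startsWith(pattern.slice(0, -1));
  }
  return false;
}

// Example: reject a client-reported type before issuing the pre-signed URL
const allowedTypes = ['image/*'];
const requestedType = 'application/pdf';
if (!allowedTypes.some(pattern => matchesContentType(requestedType, pattern))) {
  throw new Error(`Content type ${requestedType} is not allowed`);
}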

CORS Configuration Requirements

Browser uploads require explicit CORS configuration:

{
  "rules": [
    {
      "allowed": {
        "methods": ["PUT", "GET"],
        "origins": ["https://yourdomain.com"],
        "headers": [
          "content-type",
          "x-amz-meta-user-id",
          "x-amz-meta-timestamp"
        ]
      },
      "exposeHeaders": ["ETag"],
      "maxAgeSeconds": 3600
    }
  ]
}

Important: Cloudflare R2 requires explicit header names - wildcards are not supported in the headers array.

Network Failure Handling

Implement retry logic for network failures:

async function uploadWithRetry(
  key: string,
  file: Blob,
  options: UploadFileOptions,
  maxRetries = 3
): Promise<UploadResult> {
  let lastError: Error | null = null;
  
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await bucket.uploadFile(key, file, options);
    } catch (error) {
      lastError = error instanceof Error ? error : new Error(String(error));
      
      // Don't retry on authentication errors
      if (lastError.message.includes('credential') || 
          lastError.message.includes('authentication')) {
        throw lastError;
      }
      
      if (attempt < maxRetries) {
        // Exponential backoff
        await new Promise(resolve => 
          setTimeout(resolve, Math.pow(2, attempt) * 1000)
        );
      }
    }
  }
  
  throw lastError || new Error('Upload failed after retries');
}
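
Usage mirrors a direct uploadFile() call, reusing the options shape from the earlier examples:

const result = await uploadWithRetry('photo.jpg', file, {
  contentType: 'image/jpeg'
});
console.log('Uploaded, ETag:', result.etag ?? 'not available');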

Creation Date Availability

Bucket creation date may be undefined:

const info = await bucket.getInfo();

// ⚠️ creationDate may be undefined
if (info.creationDate) {
  console.log(`Bucket created: ${info.creationDate}`);
} else {
  console.log('Creation date not available');
}

Location Constraint

All R2 buckets return location: "auto":

const info = await bucket.getInfo();
console.log(info.location); // Always "auto"

// R2 uses automatic location - not region-specific
if (info.location === 'auto') {
  console.log('R2 bucket with automatic location');
}