tessl/npm-cfkit--r2

High-level Cloudflare R2 storage API wrapper for generating pre-signed URLs and performing object operations

docs/reference/file-operations.md

Direct File Operations

Direct file operations allow you to upload, download, and delete files from your application code without generating pre-signed URLs. This approach is suitable for server-side operations where credentials can be kept secure.

Key Information for Agents

Core Capabilities:

  • Upload files directly to R2 with server-side credentials
  • Download/retrieve objects with full metadata
  • Delete objects from buckets
  • Support for multiple file input types (Blob, File, ArrayBuffer, string)
  • Metadata attachment during upload
  • ETag retrieval for upload verification

Key Methods:

  • uploadFile(key: string, file: Blob | File | ArrayBuffer | string, options: UploadFileOptions): Promise<UploadResult> - Upload file directly
  • getObject(key: string): Promise<R2Object & { body: Response }> - Retrieve object with metadata and body
  • deleteObject(key: string): Promise<void> - Delete object from bucket

Key Interfaces:

  • UploadFileOptions - contentType: string, metadata?: Record<string, string>
  • UploadResult - key: string, contentType: string, fileSize: number, etag?: string
  • R2Object - key: string, contentType?: string, size: number, lastModified: Date, etag?: string, metadata?: Record<string, string>

Default Behaviors:

  • uploadFile() overwrites existing objects with same key (no versioning)
  • getObject() throws error if object doesn't exist
  • deleteObject() succeeds even if object doesn't exist (idempotent)
  • uploadFile() requires contentType (not optional)
  • metadata is optional in UploadFileOptions (undefined if not provided)
  • etag may be undefined in UploadResult (depends on R2 response)
  • getObject() returns R2Object & { body: Response } - body is a fetch Response object
  • Response body can be consumed once (use .blob(), .text(), or .arrayBuffer())
  • Metadata headers use x-amz-meta- prefix (S3-compatible)

Supported File Types:

  • Blob - Standard Blob object
  • File - File object (extends Blob)
  • ArrayBuffer - Binary data buffer
  • string - Text content (converted to UTF-8 bytes)

Threading Model:

  • All operations are asynchronous (Promise-based)
  • Multiple concurrent uploads to different keys are safe
  • Concurrent uploads to same key may have race conditions (last write wins)
  • getObject() body consumption is not thread-safe (consume once per call)
  • Operations are stateless (no internal locking)

Lifecycle:

  • Uploaded files persist in R2 until explicitly deleted
  • Objects are immutable after upload (cannot modify, only overwrite)
  • Deleted objects are permanently removed (no recovery)
  • Metadata is stored with object and retrieved with getObject()
  • ETag can be used for conditional operations (not implemented in this library)

Common Patterns:

  • Server-side upload: Direct uploadFile() call with credentials
  • Server-side download: getObject() → Read body → Process content
  • Batch operations: Use Promise.all() for multiple operations
  • Error handling: Check existence before getObject() to avoid errors
  • Metadata tracking: Store user ID, timestamp, or other metadata during upload
  • File type validation: Validate file type before upload (not enforced by library)
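The batch pattern can be made failure-tolerant with `Promise.allSettled()`, so one rejected operation does not abort the rest the way `Promise.all()` does. A minimal sketch; `runBatch` and `uploadOne` are our own names, not part of the library:

```typescript
// Sketch of a failure-tolerant batch: collect per-key outcomes instead of
// failing fast. `uploadOne` is a placeholder for whatever per-item operation
// you run (e.g. a bucket.uploadFile or bucket.deleteObject call).
async function runBatch<T>(
  keys: string[],
  uploadOne: (key: string) => Promise<T>
): Promise<{ ok: string[]; failed: string[] }> {
  const results = await Promise.allSettled(keys.map(k => uploadOne(k)));
  const ok: string[] = [];
  const failed: string[] = [];
  results.forEach((r, i) => {
    // Route each key by outcome; rejected reasons could also be logged here.
    (r.status === 'fulfilled' ? ok : failed).push(keys[i]);
  });
  return { ok, failed };
}
```

With a `bucket` instance, a call might look like `runBatch(keys, k => bucket.deleteObject(k))`, reporting which keys succeeded and which need a retry.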

Integration Points:

  • Uses S3-compatible PutObject API for uploads
  • Uses S3-compatible GetObject API for downloads
  • Uses S3-compatible DeleteObject API for deletes
  • Metadata stored as S3-compatible x-amz-meta-* headers
  • ETag returned from R2 (the MD5 hash of the object content for single-part uploads)
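Because metadata rides on S3-compatible `x-amz-meta-*` headers, a metadata record maps one-to-one onto request headers. A sketch of that convention (illustrative only, not the library's internal code; S3-compatible stores normalize metadata keys to lowercase):

```typescript
// Illustrative only: how a metadata record maps onto S3-compatible headers.
// This mirrors the x-amz-meta-* convention; it is not @cfkit/r2 internals.
function toAmzMetaHeaders(metadata: Record<string, string>): Record<string, string> {
  return Object.fromEntries(
    Object.entries(metadata).map(([k, v]) => [`x-amz-meta-${k.toLowerCase()}`, v])
  );
}

console.log(toAmzMetaHeaders({ 'original-filename': 'vacation-photo.jpg' }));
// → { 'x-amz-meta-original-filename': 'vacation-photo.jpg' }
```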

Critical Edge Cases:

  • Non-existent object: getObject() throws error (use objectExists() first)
  • Overwriting existing object: uploadFile() silently overwrites (no warning)
  • Deleting non-existent object: deleteObject() succeeds (idempotent)
  • Empty file: Can upload empty Blob/File/ArrayBuffer (size 0)
  • Large files: No built-in chunking (upload entire file at once)
  • Concurrent uploads: Same key overwrites (last write wins, no locking)
  • Response body consumption: Body can only be read once (clone if needed)
  • Metadata size limits: R2 has limits on metadata header size (check Cloudflare docs)
  • Content type mismatch: Library doesn't validate content type matches file
  • String encoding: String inputs converted to UTF-8 bytes
  • Network failures: Operations throw error (retry logic not built-in)
  • Invalid credentials: All operations throw authentication error
  • Bucket doesn't exist: Operations throw error (check bucket.exists() first)
  • Body cloning: Use obj.body.clone() to read body multiple times
  • ETag undefined: ETag may not be available in all R2 responses

Exception Handling:

  • All operations throw standard JavaScript Error objects
  • getObject() throws error if object doesn't exist
  • uploadFile() throws error on network failure or invalid credentials
  • deleteObject() throws error on network failure (but succeeds if object doesn't exist)
  • Always wrap operations in try-catch blocks
  • Check objectExists() before getObject() to avoid errors

Capabilities

Upload File

Upload a file directly to R2 storage with optional metadata.

/**
 * Upload a file directly to R2
 * @param key - Object key (filename)
 * @param file - File content (Blob, File, ArrayBuffer, or string)
 * @param options - Upload options including content type and optional metadata
 * @returns Upload result with metadata
 */
uploadFile(
  key: string,
  file: Blob | File | ArrayBuffer | string,
  options: UploadFileOptions
): Promise<UploadResult>;

interface UploadFileOptions {
  /** Content type (MIME type) of the file */
  contentType: string;
  /** Optional metadata headers */
  metadata?: Record<string, string>;
}

interface UploadResult {
  /** Object key */
  key: string;
  /** Content type */
  contentType: string;
  /** File size in bytes */
  fileSize: number;
  /** ETag from R2 response */
  etag?: string;
}

Supported file types:

  • Blob
  • File
  • ArrayBuffer
  • string

Usage Examples:

import { R2Client } from '@cfkit/r2';

const r2 = new R2Client({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
  accessKeyId: process.env.R2_ACCESS_KEY_ID!,
  secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!
});

const bucket = r2.bucket('gallery');

// Upload a File object
const file = new File(['content'], 'photo.jpg', { type: 'image/jpeg' });
const result = await bucket.uploadFile('photo.jpg', file, {
  contentType: 'image/jpeg',
  metadata: {
    'original-filename': 'vacation-photo.jpg'
  }
});

console.log(`Uploaded ${result.key} (${result.fileSize} bytes)`);
console.log(`ETag: ${result.etag}`);

// Upload a Blob
const blob = new Blob(['Hello, world!'], { type: 'text/plain' });
await bucket.uploadFile('hello.txt', blob, {
  contentType: 'text/plain'
});

// Upload a string
await bucket.uploadFile('data.txt', 'Some text content', {
  contentType: 'text/plain',
  metadata: {
    'uploaded-at': new Date().toISOString()
  }
});

// Upload an ArrayBuffer
const buffer = new ArrayBuffer(8);
await bucket.uploadFile('data.bin', buffer, {
  contentType: 'application/octet-stream'
});

Get Object

Retrieve an object from R2 storage, including its metadata and content.

/**
 * Get an object from the bucket
 * @param key - Object key (filename)
 * @returns Object metadata and content
 */
getObject(key: string): Promise<R2Object & { body: Response }>;

interface R2Object {
  /** Object key */
  key: string;
  /** Content type */
  contentType?: string;
  /** File size in bytes */
  size: number;
  /** Last modified date */
  lastModified: Date;
  /** ETag */
  etag?: string;
  /** Custom metadata */
  metadata?: Record<string, string>;
}

Usage Examples:

import { R2Client } from '@cfkit/r2';

const r2 = new R2Client({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
  accessKeyId: process.env.R2_ACCESS_KEY_ID!,
  secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!
});

const bucket = r2.bucket('gallery');

// Get object and read as blob
const obj = await bucket.getObject('photo.jpg');
const blob = await obj.body.blob();

console.log('Key:', obj.key);           // 'photo.jpg'
console.log('Content Type:', obj.contentType);   // 'image/jpeg'
console.log('Size:', obj.size);         // File size in bytes
console.log('Last Modified:', obj.lastModified); // Date object
console.log('ETag:', obj.etag);         // ETag value
console.log('Metadata:', obj.metadata); // Custom metadata

// Get object and read as text
const textObj = await bucket.getObject('data.txt');
const text = await textObj.body.text();
console.log('Content:', text);

// Get object and read as array buffer
const binObj = await bucket.getObject('data.bin');
const arrayBuffer = await binObj.body.arrayBuffer();

// Access custom metadata
const obj2 = await bucket.getObject('photo.jpg');
if (obj2.metadata) {
  console.log('Original filename:', obj2.metadata['original-filename']);
  console.log('Uploaded by:', obj2.metadata['uploaded-by']);
}

// Check existence before getting (recommended)
if (await bucket.objectExists('photo.jpg')) {
  const obj = await bucket.getObject('photo.jpg');
  const blob = await obj.body.blob();
  // Process blob
} else {
  console.log('Object does not exist');
}

// Clone body to read multiple times
const obj3 = await bucket.getObject('photo.jpg');
const clonedBody = obj3.body.clone();
const blob1 = await obj3.body.blob();
const blob2 = await clonedBody.blob();

Important Notes:

  • Response body can only be consumed once (use .blob(), .text(), or .arrayBuffer())
  • If you need to read the body multiple times, clone it: const clonedBody = obj.body.clone();
  • Body is a standard fetch Response object with all Response methods available

Delete Object

Delete an object from R2 storage.

/**
 * Delete an object from the bucket
 * @param key - Object key (filename)
 * @returns void
 */
deleteObject(key: string): Promise<void>;

Usage Examples:

import { R2Client } from '@cfkit/r2';

const r2 = new R2Client({
  accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
  accessKeyId: process.env.R2_ACCESS_KEY_ID!,
  secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!
});

const bucket = r2.bucket('gallery');

// Delete an object
await bucket.deleteObject('photo.jpg');
console.log('Object deleted');

// Delete multiple objects
const keysToDelete = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg'];
await Promise.all(
  keysToDelete.map(key => bucket.deleteObject(key))
);

// Delete with existence check
const exists = await bucket.objectExists('photo.jpg');
if (exists) {
  await bucket.deleteObject('photo.jpg');
  console.log('Object deleted');
} else {
  console.log('Object does not exist');
}

// Delete is idempotent (safe to call multiple times)
await bucket.deleteObject('photo.jpg'); // First call
await bucket.deleteObject('photo.jpg'); // Second call succeeds (no error)

Error Handling

All file operations may throw errors. Always wrap them in try-catch blocks:

try {
  const result = await bucket.uploadFile('file.jpg', file, {
    contentType: 'image/jpeg'
  });
  console.log('Upload successful:', result.key);
} catch (error) {
  if (error instanceof Error) {
    console.error('Upload failed:', error.message);
  }
}

Error Handling Patterns:

// Upload with error handling
try {
  const result = await bucket.uploadFile('file.jpg', file, {
    contentType: 'image/jpeg',
    metadata: { 'uploaded-by': 'user-123' }
  });
  console.log('Uploaded:', result.key, result.fileSize, 'bytes');
} catch (error) {
  if (error instanceof Error) {
    console.error('Upload failed:', error.message);
    // Handle error (retry, log, notify user, etc.)
  }
}

// Get object with existence check
try {
  if (await bucket.objectExists('photo.jpg')) {
    const obj = await bucket.getObject('photo.jpg');
    const blob = await obj.body.blob();
    // Process blob
  } else {
    console.log('Object does not exist');
  }
} catch (error) {
  if (error instanceof Error) {
    console.error('Failed to get object:', error.message);
  }
}

// Delete with error handling
try {
  await bucket.deleteObject('photo.jpg');
  console.log('Deleted successfully');
} catch (error) {
  if (error instanceof Error) {
    console.error('Delete failed:', error.message);
  }
}

Common error scenarios:

  • Network failures - Operations throw generic fetch errors
  • Invalid credentials - All operations throw authentication error
  • Bucket does not exist - Operations throw error (check bucket.exists() first)
  • Object does not exist - getObject() throws error (use objectExists() first)
  • Insufficient permissions - Operations throw authorization error
  • Large file uploads - May timeout or fail (no built-in chunking)
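Since retry logic is not built in, transient network failures must be retried by the caller. A minimal sketch of a generic wrapper with exponential backoff; `withRetry` is a hypothetical helper, not part of the library:

```typescript
// Sketch of caller-side retry with exponential backoff. `withRetry` is our
// own helper, not part of @cfkit/r2. Note: this naive version retries every
// error; auth/permission failures generally should not be retried.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
        await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage might look like `await withRetry(() => bucket.uploadFile('file.jpg', file, { contentType: 'image/jpeg' }))`.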

Install with Tessl CLI

npx tessl i tessl/npm-cfkit--r2
