tessl install github:jezweb/claude-skills --skill cloudflare-kv
github.com/jezweb/claude-skills
Store key-value data globally with Cloudflare KV's edge network. Use when: caching API responses, storing configuration, managing user preferences, handling TTL expiration, or troubleshooting KV_ERROR, 429 rate limits, eventual consistency, cacheTtl errors, wrangler types issues, or remote binding configuration.
Review Score: 87%
Validation Score: 12/16
Implementation Score: 77%
Activation Score: 100%
Status: Production Ready ✅ Last Updated: 2026-01-20 Dependencies: cloudflare-worker-base (for Worker setup) Latest Versions: wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0
Recent Updates (2025):
# Create namespace
npx wrangler kv namespace create MY_NAMESPACE
# Output: [[kv_namespaces]] binding = "MY_NAMESPACE" id = "<UUID>"
wrangler.jsonc:
{
"kv_namespaces": [{
"binding": "MY_NAMESPACE", // Access as env.MY_NAMESPACE
"id": "<production-uuid>",
"preview_id": "<preview-uuid>" // Optional: local dev
}]
}
Basic Usage:
import { Hono } from 'hono';

type Bindings = { MY_NAMESPACE: KVNamespace };
const app = new Hono<{ Bindings: Bindings }>();
app.post('/set/:key', async (c) => {
await c.env.MY_NAMESPACE.put(c.req.param('key'), await c.req.text());
return c.json({ success: true });
});
app.get('/get/:key', async (c) => {
const value = await c.env.MY_NAMESPACE.get(c.req.param('key'));
return value ? c.json({ value }) : c.json({ error: 'Not found' }, 404);
});
export default app;
// Get single key
const value = await env.MY_KV.get('key'); // string | null
const data = await env.MY_KV.get('key', { type: 'json' }); // object | null
const buffer = await env.MY_KV.get('key', { type: 'arrayBuffer' });
const stream = await env.MY_KV.get('key', { type: 'stream' });
// Get with cache (minimum 60s)
const value = await env.MY_KV.get('key', { cacheTtl: 300 }); // 5 min edge cache
// Bulk read (counts as 1 operation)
const values = await env.MY_KV.get(['key1', 'key2']); // Map<string, string | null>
// With metadata
const { value, metadata } = await env.MY_KV.getWithMetadata('key');
const result = await env.MY_KV.getWithMetadata(['key1', 'key2']); // Bulk with metadata
// Basic write (max 1/second per key)
await env.MY_KV.put('key', 'value');
await env.MY_KV.put('user:123', JSON.stringify({ name: 'John' }));
// With expiration
await env.MY_KV.put('session', data, { expirationTtl: 3600 }); // 1 hour
await env.MY_KV.put('token', value, { expiration: Math.floor(Date.now()/1000) + 86400 });
// With metadata (max 1024 bytes)
await env.MY_KV.put('config', 'dark', {
metadata: { updatedAt: Date.now(), version: 2 }
});
Critical Limits:
// List with pagination
const result = await env.MY_KV.list({ prefix: 'user:', limit: 1000, cursor });
// result: { keys: [], list_complete: boolean, cursor?: string }
// CRITICAL: Always check list_complete, not keys.length === 0
let cursor: string | undefined;
do {
const result = await env.MY_KV.list({ prefix: 'user:', cursor });
processKeys(result.keys);
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
// Delete single key
await env.MY_KV.delete('key'); // Always succeeds
// Bulk delete (CLI only, up to 10,000 keys)
// npx wrangler kv bulk delete --binding=MY_KV keys.json
async function getCachedData(kv: KVNamespace, key: string, fetchFn: () => Promise<any>, ttl = 300) {
const cached = await kv.get(key, { type: 'json', cacheTtl: ttl });
if (cached) return cached;
const data = await fetchFn();
await kv.put(key, JSON.stringify(data), { expirationTtl: ttl * 2 });
return data;
}
Guidelines: minimum cacheTtl is 60s; use for read-heavy workloads (100:1 read/write ratio).
// Store small values (<1024 bytes) in metadata to avoid separate get() calls
await env.MY_KV.put('user:123', '', {
metadata: { status: 'active', plan: 'pro', lastSeen: Date.now() }
});
// list() returns metadata automatically (no additional get() calls)
const users = await env.MY_KV.list({ prefix: 'user:' });
users.keys.forEach(({ name, metadata }) => console.log(name, metadata.status));
KV performance varies based on key temperature:
| Type | Response Time | When It Happens |
|---|---|---|
| Hot keys | 6-8ms | Read 2+ times/minute per datacenter |
| Cold keys | 100-300ms | Infrequently accessed, fetched from central storage |
Post-August 2025 Improvements: hybrid storage architecture, <5ms p99 latency, and read-your-own-write consistency within the same POP (details in the consistency section below).
Optimization: Use key coalescing to make cold keys benefit from hot key caching:
// ❌ Bad: Many cold keys (300ms each)
await kv.put('user:123:name', 'John');
await kv.put('user:123:email', 'john@example.com');
await kv.put('user:123:plan', 'pro');
// Each read of a cold key: ~100-300ms
const name = await kv.get('user:123:name'); // Cold
const email = await kv.get('user:123:email'); // Cold
const plan = await kv.get('user:123:plan'); // Cold
// ✅ Good: Single hot key (6-8ms)
await kv.put('user:123', JSON.stringify({
name: 'John',
email: 'john@example.com',
plan: 'pro'
}));
// Single read, cached as hot key: ~6-8ms
const user = await kv.get('user:123', { type: 'json' });
CacheTtl helps cold keys: for infrequently-read data, cacheTtl reduces cold read latency.
Trade-off: Coalescing requires read-modify-write for updates
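Because the whole record lives under one key, updating a single field means reading the JSON, merging the change, and writing it back. A minimal sketch, assuming the coalesced user:123 record from the example above (updateUserPlan is an illustrative helper, not part of the KV API):
// Read-modify-write update of a coalesced record.
// Not atomic: concurrent writers can overwrite each other, and writing the
// same key more than once per second returns a 429.
async function updateUserPlan(kv: KVNamespace, userId: string, plan: string) {
  const key = `user:${userId}`;
  const current = (await kv.get(key, { type: 'json' })) as Record<string, unknown> | null;
  await kv.put(key, JSON.stringify({ ...(current ?? {}), plan }));
}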
async function* paginateKV(kv: KVNamespace, options: { prefix?: string } = {}) {
let cursor: string | undefined;
do {
const result = await kv.list({ ...options, cursor });
yield result.keys;
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
}
// Usage
for await (const keys of paginateKV(env.MY_KV, { prefix: 'user:' })) {
processKeys(keys);
}
async function putWithRetry(kv: KVNamespace, key: string, value: string, opts?: KVNamespacePutOptions) {
let attempts = 0, delay = 1000;
while (attempts < 5) {
try {
await kv.put(key, value, opts);
return;
} catch (error) {
if ((error as Error).message.includes('429')) {
attempts++;
if (attempts >= 5) throw new Error('Max retry attempts');
await new Promise(r => setTimeout(r, delay));
delay *= 2; // Exponential backoff
} else throw error;
}
}
}
KV is eventually consistent across Cloudflare's global network (Aug 2025 redesign: hybrid storage, <5ms p99 latency):
How It Works:
Example:
// Tokyo: Write
await env.MY_KV.put('counter', '1');
const value = await env.MY_KV.get('counter'); // "1" ✅ (same POP, RYOW)
// London (within 60s): May be stale ⚠️
const value2 = await env.MY_KV.get('counter'); // Might be old value
// After 60+ seconds: Consistent ✅
Read-Your-Own-Write (RYOW) Guarantee: Since the August 2025 redesign, requests routed through the same Cloudflare point of presence see their own writes immediately. Global consistency across different POPs still takes up to 60 seconds.
Timestamp Mitigation Pattern (for critical consistency needs):
// Use timestamp in key structure to avoid consistency issues
const timestamp = Date.now();
await kv.put(`user:123:${timestamp}`, userData);
// Find latest using list with prefix
const result = await kv.list({ prefix: 'user:123:' });
const latestKey = result.keys.sort((a, b) =>
parseInt(b.name.split(':')[2]) - parseInt(a.name.split(':')[2])
).at(0);
Use KV for: Read-heavy workloads (100:1 ratio), config, feature flags, caching, user preferences
Don't use KV for: Financial transactions, strong consistency, >1/second writes per key, critical data
Need strong consistency? Use Durable Objects
Source: Redesigning Workers KV
# Create namespace
npx wrangler kv namespace create MY_NAMESPACE [--preview]
# Manage keys (add --remote flag to access production data)
npx wrangler kv key put --binding=MY_KV "key" "value" [--ttl=3600] [--metadata='{}']
npx wrangler kv key get --binding=MY_KV "key" [--remote]
npx wrangler kv key list --binding=MY_KV [--prefix="user:"] [--remote]
npx wrangler kv key delete --binding=MY_KV "key"
# Bulk operations (up to 10,000 keys)
npx wrangler kv bulk put --binding=MY_KV data.json
npx wrangler kv bulk delete --binding=MY_KV keys.json
IMPORTANT: CLI commands default to local storage. Add the --remote flag to access production/remote data.
Connect local Workers to production KV namespaces during development:
wrangler.jsonc:
{
"kv_namespaces": [{
"binding": "MY_KV",
"id": "production-uuid",
"remote": true // Connect to live KV
}]
}
How It Works:
Benefits:
⚠️ Warning: Writes affect production data. Consider using a staging namespace with remote: true instead of production.
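For example (a sketch; the staging namespace UUID is a placeholder for a namespace you create separately):
{
  "kv_namespaces": [{
    "binding": "MY_KV",
    "id": "<staging-uuid>", // dedicated staging namespace, not production
    "remote": true          // local dev reads/writes this live namespace
  }]
}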
Version Support: Remote bindings require Wrangler 4.37+.
Source: Remote bindings architecture
| Feature | Free Plan | Paid Plan |
|---|---|---|
| Reads per day | 100,000 | Unlimited |
| Writes per day (different keys) | 1,000 | Unlimited |
| Writes per key per second | 1 | 1 |
| Operations per Worker invocation | 1,000 | 1,000 |
| Namespaces per account | 1,000 | 1,000 |
| Storage per account | 1 GB | Unlimited |
| Key size | 512 bytes | 512 bytes |
| Metadata size | 1024 bytes | 1024 bytes |
| Value size | 25 MiB | 25 MiB |
| Minimum cacheTtl | 60 seconds | 60 seconds |
Critical: 1 write/second per key (429 if exceeded), bulk operations count as 1 operation, namespace limit increased from 200 → 1,000 (Jan 2025)
Cause: Writing to same key >1/second Solution: Use retry with exponential backoff (see Advanced Patterns)
// ❌ Bad
await env.MY_KV.put('counter', '1');
await env.MY_KV.put('counter', '2'); // 429 error!
// ✅ Good
await putWithRetry(env.MY_KV, 'counter', '2');
Cause: Value exceeds 25 MiB Solution: Validate size before writing
if (new TextEncoder().encode(value).byteLength > 25 * 1024 * 1024) throw new Error('Value exceeds 25 MiB');
Cause: Metadata exceeds 1024 bytes when serialized Solution: Validate serialized size
const serialized = JSON.stringify(metadata);
if (new TextEncoder().encode(serialized).byteLength > 1024) throw new Error('Metadata exceeds 1024 bytes');
Cause: cacheTtl <60 seconds Solution: Use minimum 60
// ❌ Error
await env.MY_KV.get('key', { cacheTtl: 30 });
// ✅ Correct
await env.MY_KV.get('key', { cacheTtl: 60 });list() frequentlylist_complete when paginating, not keys.length === 0get() returns null if key doesn't exist)Cause: Writing to same key >1/second Solution: Consolidate writes or use retry with exponential backoff
// ❌ Bad: Rate limit
for (let i = 0; i < 10; i++) await kv.put('counter', String(i));
// ✅ Good: Single write
await kv.put('counter', '9');
// ✅ Good: Retry with backoff
await putWithRetry(kv, 'counter', '9');
Cause: Eventual consistency (~60 seconds propagation) Solution: Accept stale reads, use Durable Objects for strong consistency, or implement app-level cache invalidation
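One app-level invalidation approach (a sketch; the config:version and config:data key names are illustrative) stamps cached entries with a version key that is bumped on every write, so readers stop hitting stale entries once the bump propagates:
// Version-stamped cache keys as app-level invalidation.
// Note: the version key itself is still eventually consistent, so this bounds
// staleness rather than eliminating it.
async function readConfig(kv: KVNamespace) {
  const version = (await kv.get('config:version')) ?? '0';
  return kv.get(`config:data:${version}`, { type: 'json' });
}
async function writeConfig(kv: KVNamespace, data: unknown) {
  const version = String(Date.now());
  await kv.put(`config:data:${version}`, JSON.stringify(data), { expirationTtl: 86400 });
  await kv.put('config:version', version);
}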
Cause: >1000 KV operations in single Worker invocation Solution: Use bulk operations
// ❌ Bad: 5000 operations
for (const key of keys) await kv.get(key); // one read per key
// ✅ Good: 1 operation
const values = await kv.get(keys); // Bulk read
Cause: Deleted/expired keys create "tombstones" that must be iterated through
Solution: Always check list_complete, not keys.length
// ✅ Correct pagination
let cursor: string | undefined;
do {
const result = await kv.list({ cursor });
processKeys(result.keys); // Even if empty
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
CRITICAL: When using prefix, you must include it in all paginated calls:
// ❌ WRONG - Loses prefix on subsequent pages
let result = await kv.list({ prefix: 'user:' });
result = await kv.list({ cursor: result.cursor }); // Missing prefix!
// ✅ CORRECT - Include prefix on every call
let cursor: string | undefined;
do {
const result = await kv.list({ prefix: 'user:', cursor });
processKeys(result.keys);
cursor = result.list_complete ? undefined : result.cursor;
} while (cursor);
Source: List keys documentation
wrangler types Does Not Generate Types for Environment-Nested KV Bindings
Cause: KV namespaces defined within environment configurations (e.g., [env.feature.kv_namespaces]) are not included in generated TypeScript types
Impact: Loss of TypeScript autocomplete and type checking for KV bindings
Source: GitHub Issue #9709
Example Configuration:
# wrangler.toml
[env.feature]
name = "my-worker-feature"
[[env.feature.kv_namespaces]]
binding = "MY_STORAGE_FEATURE"
id = "xxxxxxxxxxxx"Running npx wrangler types creates type definitions for environment variables but not for the KV namespace bindings.
Workaround:
# Generate types for specific environment
npx wrangler types -e feature
Or define KV namespaces at top level instead of nested in environments:
# Top-level (types generated correctly)
[[kv_namespaces]]
binding = "MY_STORAGE"
id = "xxxxxxxxxxxx"Note: Runtime bindings still work correctly; this only affects type generation.
wrangler kv key list Returns Empty Array for Remote Data
Cause: CLI commands default to local storage, not remote/production KV Impact: Users expect to see production data but get an empty array from local storage Source: GitHub Issue #10395
Solution: Use --remote flag to access production/remote data
# ❌ Shows local storage (likely empty)
npx wrangler kv key list --binding=MY_KV
# ✅ Shows remote/production data
npx wrangler kv key list --binding=MY_KV --remote
Why This Happens: By design, wrangler dev uses local KV storage to avoid interfering with production data. CLI commands follow the same default for consistency.
Applies to: All wrangler kv key commands (get, list, delete, put)
Namespace IDs configured correctly (id vs preview_id)
cacheTtl values set for reads (min 60s)
list_complete check used for pagination (not keys.length)
Last Updated: 2026-01-20 Package Versions: wrangler@4.59.2, @cloudflare/workers-types@4.20260109.0 Changes: Added 6 research findings - hot/cold key performance patterns, remote bindings (Wrangler 4.37+), wrangler types environment issue, CLI --remote flag requirement, RYOW consistency details, prefix persistence in pagination