tessl/npm-foundationdb

Node.js bindings for the FoundationDB database with ACID transactions, tuple encoding, and directory layer support

Workspace: tessl
Visibility: Public
Describes: pkg:npm/foundationdb@2.0.x (npm)

To install, run

npx @tessl/cli install tessl/npm-foundationdb@2.0.0


FoundationDB Node.js Bindings

The foundationdb package provides Node.js bindings for the FoundationDB distributed database. It lets JavaScript and TypeScript applications work with FoundationDB clusters through a Promise-based API with ACID transactions and automatic retry, flexible key-value encoding (tuple, JSON, and custom transformers), range queries with several streaming modes, a directory layer for efficient key prefix management, watches for reactive programming, and versionstamps for unique, time-ordered identifiers.

Package Information

  • Package Name: foundationdb
  • Package Type: npm
  • Language: TypeScript (with JavaScript support)

Installation

Prerequisites

The foundationdb package requires the FoundationDB C client library to be installed on your system before installing the npm package. The package uses native bindings and must compile against the C client library.

System Requirements:

  • FoundationDB client library (version 6.0 or later)
  • C++ compiler and build tools
  • Python 3 (for node-gyp)
  • Node.js v16 or higher

Installing FoundationDB Client Library:

Debian/Ubuntu:

# Download and install the foundationdb-clients package
wget https://github.com/apple/foundationdb/releases/download/7.1.27/foundationdb-clients_7.1.27-1_amd64.deb
sudo dpkg -i foundationdb-clients_7.1.27-1_amd64.deb

# Install build tools
sudo apt-get install build-essential python3

macOS:

brew install foundationdb

Other Systems: Download the appropriate client package from https://github.com/apple/foundationdb/releases

Installing the npm Package

Once the prerequisites are installed:

npm install foundationdb

Runtime Requirements

  • A running FoundationDB server (local or remote)
  • For local development, start FoundationDB server or use Docker:
docker run -d -p 4500:4500 foundationdb/foundationdb:7.1.27

Core Imports

import fdb from "foundationdb";

For CommonJS:

const fdb = require("foundationdb");

Named imports:

import {
  setAPIVersion,
  open,
  Database,
  Transaction,
  Subspace,
  directory,
  encoders,
  tuple,
  keySelector,
} from "foundationdb";

Basic Usage

import fdb from "foundationdb";

// Set API version (required before any operations)
fdb.setAPIVersion(620);

// Open database connection
const db = fdb.open();

// Simple key-value operations
await db.set("hello", "world");
const value = await db.get("hello");
console.log(value); // "world"

// Transaction with automatic retry
await db.doTransaction(async (tn) => {
  const val = await tn.get("counter");
  const count = val ? parseInt(val) : 0;
  tn.set("counter", String(count + 1));
});

// Close when done
db.close();

Quick Reference Table

| Operation | Code Example | Use Case |
| --- | --- | --- |
| Initialize API | fdb.setAPIVersion(620) | Required first step before any FDB operations |
| Open Database | const db = fdb.open() | Connect to FoundationDB cluster |
| Simple Read | await db.get("key") | Retrieve single value |
| Simple Write | await db.set("key", "value") | Store single value |
| Delete Key | await db.clear("key") | Remove key |
| Transaction | await db.doTransaction(async (tn) => { ... }) | ACID operations with automatic retry |
| Snapshot Read | await tn.snapshot().get("key") | Non-blocking read without conflicts |
| Range Query | await db.getRangeAllStartsWith("prefix:") | Query by prefix |
| Versionstamp Key | tn.setVersionstampedKey([tuple.unboundVersionstamp()], val) | Time-ordered keys |
| Atomic Increment | await db.add("counter", Buffer.from([1,0,0,0])) | Lock-free counter |
| Batch Iterator | for await (const batch of db.getRangeBatch(...)) { } | Memory-efficient iteration |
| Clear Range | await db.clearRange("start", "end") | Remove multiple keys |
| Watch Key | const watch = await db.getAndWatch("key") | React to changes |
| Directory Create | await directory.createOrOpen(db, ["app", "users"]) | Tenant isolation |
| Tuple Keys | db.withKeyEncoding(fdb.encoders.tuple) | Composite keys |
| JSON Values | db.withValueEncoding(fdb.encoders.json) | Object storage |

Quick Start Guide

Step 1: Basic Setup (30 seconds)

import fdb from "foundationdb";

// REQUIRED: Set API version first
fdb.setAPIVersion(620);

// Open database connection
const db = fdb.open();

// Test connection with simple get/set
await db.set("test", "Hello FoundationDB!");
const value = await db.get("test");
console.log("Success:", value?.toString()); // "Hello FoundationDB!"

// Always close when done
db.close();
fdb.stopNetworkSync();

Step 2: First Transaction (1 minute)

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

try {
  // Transactions provide ACID guarantees
  const result = await db.doTransaction(async (tn) => {
    // Read current value
    const currentValue = await tn.get("myCounter");
    const counter = currentValue
      ? parseInt(currentValue.toString())
      : 0;

    // Write new value
    const newValue = counter + 1;
    tn.set("myCounter", newValue.toString());

    // Return result
    return newValue;
  });

  console.log("Counter updated to:", result);
} finally {
  db.close();
  fdb.stopNetworkSync();
}

Step 3: Working with JSON (2 minutes)

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open().withValueEncoding(fdb.encoders.json);

// Store JavaScript objects directly
await db.set("user:alice", {
  name: "Alice",
  email: "alice@example.com",
  age: 30,
  roles: ["admin", "user"]
});

// Read as JavaScript object
const alice = await db.get("user:alice");
console.log("User:", alice);
console.log("Name:", alice.name);       // "Alice"
console.log("Roles:", alice.roles);     // ["admin", "user"]

// Update with transaction
await db.doTransaction(async (tn) => {
  const user = await tn.get("user:alice");
  if (user) {
    user.email = "alice@newdomain.com";
    tn.set("user:alice", user);
  }
});

Step 4: Atomic Operations (2 minutes)

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Atomic counter (fast, no conflicts)
async function incrementCounter(name: string) {
  const delta = Buffer.allocUnsafe(8);
  delta.writeBigInt64LE(1n, 0);
  await db.add(name, delta);
}

async function getCounter(name: string): Promise<number> {
  const value = await db.get(name);
  if (!value) return 0;
  return Number(value.readBigInt64LE(0));
}

// Multiple concurrent increments work without conflicts
await Promise.all([
  incrementCounter("pageViews"),
  incrementCounter("pageViews"),
  incrementCounter("pageViews")
]);

const count = await getCounter("pageViews");
console.log("Page views:", count); // 3

Getting Started

First Transaction Example

Here's a complete example to get you started with FoundationDB transactions.

import fdb from "foundationdb";

async function main() {
  // 1. Set API version (required first step)
  fdb.setAPIVersion(620);

  // 2. Open database connection
  const db = fdb.open();

  try {
    // 3. Perform a simple transaction
    const result = await db.doTransaction(async (tn) => {
      // Read current value
      const currentValue = await tn.get("myCounter");
      const counter = currentValue
        ? parseInt(currentValue.toString())
        : 0;

      // Write new value
      const newValue = counter + 1;
      tn.set("myCounter", newValue.toString());

      // Return the new value
      return newValue;
    });

    console.log("Counter updated to:", result);

    // 4. Read the value outside transaction
    const value = await db.get("myCounter");
    console.log("Current counter:", value?.toString());

  } finally {
    // 5. Clean up
    db.close();
    fdb.stopNetworkSync();
  }
}

main().catch(console.error);

Common Patterns for Beginners

Pattern 1: User Profile Storage

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open().withValueEncoding(fdb.encoders.json);

// Create user
async function createUser(userId: string, profile: any) {
  await db.set(`user:${userId}`, profile);
  console.log("User created:", userId);
}

// Read user
async function getUser(userId: string) {
  const profile = await db.get(`user:${userId}`);
  return profile;
}

// Update user
async function updateUser(userId: string, updates: any) {
  await db.doTransaction(async (tn) => {
    const current = await tn.get(`user:${userId}`);
    if (current) {
      const updated = { ...current, ...updates };
      tn.set(`user:${userId}`, updated);
    }
  });
}

// Delete user
async function deleteUser(userId: string) {
  await db.clear(`user:${userId}`);
}

// Usage
await createUser("alice", { name: "Alice", email: "alice@example.com" });
const user = await getUser("alice");
console.log("User:", user);

await updateUser("alice", { email: "alice@newdomain.com" });
await deleteUser("alice");

Pattern 2: Shopping Cart

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open().withValueEncoding(fdb.encoders.json);

class ShoppingCart {
  constructor(private userId: string) {}

  private cartKey() {
    return `cart:${this.userId}`;
  }

  async addItem(itemId: string, quantity: number) {
    await db.doTransaction(async (tn) => {
      const cart = await tn.get(this.cartKey()) || {};
      cart[itemId] = (cart[itemId] || 0) + quantity;
      tn.set(this.cartKey(), cart);
    });
  }

  async removeItem(itemId: string) {
    await db.doTransaction(async (tn) => {
      const cart = await tn.get(this.cartKey()) || {};
      delete cart[itemId];
      tn.set(this.cartKey(), cart);
    });
  }

  async getCart() {
    return await db.get(this.cartKey()) || {};
  }

  async clear() {
    await db.clear(this.cartKey());
  }
}

// Usage
const cart = new ShoppingCart("user123");
await cart.addItem("item1", 2);
await cart.addItem("item2", 1);
console.log("Cart:", await cart.getCart());
await cart.clear();

Pattern 3: Simple Counter

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function incrementCounter(name: string): Promise<number> {
  // Using transaction (slow, but simple)
  return await db.doTransaction(async (tn) => {
    const value = await tn.get(name);
    const count = value ? parseInt(value.toString()) : 0;
    const newCount = count + 1;
    tn.set(name, newCount.toString());
    return newCount;
  });
}

async function incrementCounterFast(name: string) {
  // Using atomic operation (fast, no conflicts)
  const delta = Buffer.allocUnsafe(8);
  delta.writeBigInt64LE(1n, 0);
  await db.add(name, delta);
}

async function getCounter(name: string): Promise<number> {
  const value = await db.get(name);
  if (!value) return 0;
  return Number(value.readBigInt64LE(0));
}

// Usage
await incrementCounterFast("pageViews");
await incrementCounterFast("pageViews");
await incrementCounterFast("pageViews");
const count = await getCounter("pageViews");
console.log("Page views:", count); // 3

Pattern 4: Configuration Management

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open()
  .at("config:")
  .withValueEncoding(fdb.encoders.json);

class ConfigManager {
  async set(key: string, value: any) {
    await db.set(key, value);
  }

  async get(key: string, defaultValue?: any) {
    const value = await db.get(key);
    return value !== undefined ? value : defaultValue;
  }

  async getAll() {
    const configs = await db.getRangeAll("", "\xFF");
    const result: Record<string, any> = {};
    for (const [key, value] of configs) {
      result[key.toString()] = value;
    }
    return result;
  }

  async delete(key: string) {
    await db.clear(key);
  }
}

// Usage
const config = new ConfigManager();
await config.set("theme", "dark");
await config.set("language", "en");
await config.set("notifications", { email: true, push: false });

const theme = await config.get("theme");
console.log("Theme:", theme); // "dark"

const allConfigs = await config.getAll();
console.log("All configs:", allConfigs);

Pattern 5: Task Queue

import fdb, { tuple } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open()
  .withKeyEncoding(fdb.encoders.tuple)
  .withValueEncoding(fdb.encoders.json);

class TaskQueue {
  constructor(private queueName: string) {}

  async enqueue(task: any) {
    await db.doTransaction(async (tn) => {
      const key = [this.queueName, tuple.unboundVersionstamp()];
      tn.setVersionstampedKey(key, task);
    });
  }

  async dequeue(maxTasks = 10): Promise<any[]> {
    return await db.doTransaction(async (tn) => {
      const tasks = await tn.getRangeAll(
        [this.queueName],
        [this.queueName, Buffer.from([0xff])],
        { limit: maxTasks }
      );

      const results = [];
      for (const [key, value] of tasks) {
        results.push(value);
        tn.clear(key);
      }

      return results;
    });
  }

  async getQueueSize(): Promise<number> {
    const tasks = await db.getRangeAll(
      [this.queueName],
      [this.queueName, Buffer.from([0xff])]
    );
    return tasks.length;
  }
}

// Usage
const queue = new TaskQueue("emailQueue");

await queue.enqueue({ to: "user1@example.com", subject: "Welcome" });
await queue.enqueue({ to: "user2@example.com", subject: "Newsletter" });

console.log("Queue size:", await queue.getQueueSize());

const tasks = await queue.dequeue(5);
console.log("Dequeued tasks:", tasks);

Step-by-Step Tutorial

Step 1: Installation and Setup

# Install FoundationDB client library (Ubuntu/Debian)
wget https://github.com/apple/foundationdb/releases/download/7.1.27/foundationdb-clients_7.1.27-1_amd64.deb
sudo dpkg -i foundationdb-clients_7.1.27-1_amd64.deb

# Install npm package
npm install foundationdb

Step 2: Basic Connection

import fdb from "foundationdb";

// Required: Set API version
fdb.setAPIVersion(620);

// Open database connection
const db = fdb.open();

// Test connection
try {
  await db.set("test", "Hello FoundationDB!");
  const value = await db.get("test");
  console.log("Success:", value?.toString());
} finally {
  db.close();
  fdb.stopNetworkSync();
}

Step 3: Working with Transactions

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Simple transaction
await db.doTransaction(async (tn) => {
  // All operations are atomic
  tn.set("key1", "value1");
  tn.set("key2", "value2");

  const value = await tn.get("key1");
  console.log("Read in transaction:", value?.toString());
});

// Transaction with return value
const result = await db.doTransaction(async (tn) => {
  tn.set("counter", "10");
  return "Transaction completed";
});
console.log(result);

Step 4: Range Queries

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Insert test data
await db.doTransaction(async (tn) => {
  for (let i = 0; i < 10; i++) {
    tn.set(`user:${i}`, `User ${i}`);
  }
});

// Query all users
const users = await db.getRangeAllStartsWith("user:");
console.log("Found users:", users.length);

for (const [key, value] of users) {
  console.log(key.toString(), "=>", value.toString());
}

Step 5: Using Encoders

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// JSON encoder for objects
const jsonDb = db.withValueEncoding(fdb.encoders.json);

await jsonDb.set("user:alice", {
  name: "Alice",
  age: 30,
  roles: ["admin", "user"]
});

const alice = await jsonDb.get("user:alice");
console.log("User:", alice);
console.log("Name:", alice.name);
console.log("Roles:", alice.roles);

// Tuple encoder for structured keys
const tupleDb = db.withKeyEncoding(fdb.encoders.tuple);

await tupleDb.set(["user", "bob", "profile"], "Bob's profile");
await tupleDb.set(["user", "bob", "settings"], "Bob's settings");

const bobData = await tupleDb.getRangeAllStartsWith(["user", "bob"]);
console.log("Bob's data:", bobData);

Architecture

FoundationDB Node.js bindings are built around several key components:

  • Native Bindings: C++ bindings to the FoundationDB C client library via node-gyp
  • Database Class: Connection manager and transaction orchestrator with automatic retry logic
  • Transaction Class: ACID transaction context with snapshot reads and serializable isolation
  • Subspace System: Hierarchical key prefix management for organizing keyspaces
  • Directory Layer: Dynamic prefix allocation for efficient multi-tenant keyspace management
  • Encoder System: Pluggable transformers for key/value encoding (tuple, JSON, Buffer, custom)
  • Type Safety: Full TypeScript support with generic types throughout the API
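
To make the composition concrete, here is a minimal sketch, using only the API shown in the capability sections below, of how the connection, encoders, and a transaction layer on top of each other (the key names are illustrative):

import fdb from "foundationdb";

fdb.setAPIVersion(620);                      // native bindings: pin the C API version
const db = fdb.open();                       // Database: connection + retry loop

// Encoder system: tuple-encoded keys, JSON-encoded values
const users = db
  .withKeyEncoding(fdb.encoders.tuple)
  .withValueEncoding(fdb.encoders.json);

// Transaction: ACID context created and retried by doTransaction
const alice = await users.doTransaction(async (tn) => {
  tn.set(["users", "alice"], { name: "Alice" });
  return tn.get(["users", "alice"]);
});

db.close();
fdb.stopNetworkSync();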

Capabilities

API Initialization

Core setup functions for configuring and connecting to FoundationDB.

/**
 * Set the FoundationDB API version (required before any operations)
 * @param version - API version (500-720)
 * @param headerVersion - Optional header version for compatibility
 */
function setAPIVersion(version: number, headerVersion?: number): void;

/**
 * Open a database connection
 * @param clusterFile - Optional path to cluster file (default: uses default cluster file)
 * @param dbOpts - Optional database configuration options
 * @returns Database instance
 */
function open(clusterFile?: string, dbOpts?: DatabaseOptions): Database;

/**
 * @deprecated This method will be removed in a future version. Call open() directly - it is synchronous too.
 * Open a database connection (deprecated alias for open)
 * @param clusterFile - Optional path to cluster file
 * @param dbOpts - Optional database configuration options
 * @returns Database instance
 */
function openSync(clusterFile?: string, dbOpts?: DatabaseOptions): Database;

/**
 * @deprecated FDB clusters have been removed from the API. Call open() directly to connect.
 * Create a cluster connection (deprecated, returns stub object)
 * @param clusterFile - Optional path to cluster file
 * @returns Promise resolving to stub cluster object
 */
function createCluster(
  clusterFile?: string
): Promise<{
  openDatabase(dbName?: "DB", opts?: DatabaseOptions): Promise<Database>;
  openDatabaseSync(dbName?: "DB", opts?: DatabaseOptions): Database;
  close(): void;
}>;

/**
 * @deprecated FDB clusters have been removed from the API. Call open() directly to connect.
 * Create a cluster connection synchronously (deprecated, returns stub object)
 * @param clusterFile - Optional path to cluster file
 * @returns Stub cluster object
 */
function createClusterSync(clusterFile?: string): {
  openDatabase(dbName?: "DB", opts?: DatabaseOptions): Promise<Database>;
  openDatabaseSync(dbName?: "DB", opts?: DatabaseOptions): Database;
  close(): void;
};

/**
 * Configure network settings (must be called before opening database)
 * @param netOpts - Network configuration options
 */
function configNetwork(netOpts: NetworkOptions): void;

/**
 * Synchronously stop the FDB network thread (for clean shutdown)
 */
function stopNetworkSync(): void;
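
A minimal start-up/shutdown sketch using these functions; the trace directory is illustrative, and the configNetwork call can be omitted entirely:

import fdb from "foundationdb";

fdb.setAPIVersion(620);                       // always first

// Optional: network options must be set before the first open()
fdb.configNetwork({ trace_enable: "./fdb-traces" });

const db = fdb.open();                        // uses the default cluster file

// ... application work ...

db.close();
fdb.stopNetworkSync();                        // stop the network thread for a clean exit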

Configuration

Database Operations

Core database operations including CRUD operations, scope management, and transaction execution with automatic retry.

class Database<KeyIn = any, KeyOut = any, ValIn = any, ValOut = any> {
  /**
   * Execute a transaction with automatic retry logic
   * @param body - Transaction function
   * @param opts - Optional transaction options
   * @returns Promise resolving to the transaction function's return value
   */
  doTransaction<T>(
    body: (tn: Transaction<KeyIn, KeyOut, ValIn, ValOut>) => Promise<T>,
    opts?: TransactionOptions
  ): Promise<T>;

  /** Get a value by key */
  get(key: KeyIn): Promise<ValOut | undefined>;

  /** Set a key-value pair */
  set(key: KeyIn, value: ValIn): Promise<void>;

  /** Remove a key */
  clear(key: KeyIn): Promise<void>;

  /** Create a scoped database reference at a prefix */
  at(prefix: KeyIn): Database<KeyIn, KeyOut, ValIn, ValOut>;

  /** Create a database reference with a specific key encoder */
  withKeyEncoding<KI, KO>(
    transformer: Transformer<KI, KO>
  ): Database<KI, KO, ValIn, ValOut>;

  /** Create a database reference with a specific value encoder */
  withValueEncoding<VI, VO>(
    transformer: Transformer<VI, VO>
  ): Database<KeyIn, KeyOut, VI, VO>;

  /** Close the database connection */
  close(): void;
}

Database Operations

Transactions

Transaction management for ACID-compliant operations with snapshot reads, version control, and conflict management.

class Transaction<KeyIn = any, KeyOut = any, ValIn = any, ValOut = any> {
  /** Whether this is a snapshot transaction */
  readonly isSnapshot: boolean;

  /** Get a value by key within the transaction */
  get(key: KeyIn): Promise<ValOut | undefined>;

  /** Set a key-value pair within the transaction */
  set(key: KeyIn, value: ValIn): void;

  /** Remove a key within the transaction */
  clear(key: KeyIn): void;

  /** Create a snapshot transaction reference (for non-blocking reads) */
  snapshot(): Transaction<KeyIn, KeyOut, ValIn, ValOut>;

  /** Set transaction options */
  setOption(opt: TransactionOptions | keyof TransactionOptions, value?: any): void;

  /** Get the transaction's versionstamp (resolves after commit) */
  getVersionstamp(): Promise<Buffer>;
}
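
For example, a snapshot read inside a transaction avoids adding read-conflict ranges, which is useful for large or hot reads that should not cause the transaction to conflict (a sketch with illustrative keys):

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

await db.doTransaction(async (tn) => {
  // Snapshot read: no read-conflict range is added for this key
  const stats = await tn.snapshot().get("stats:global");

  // This write still gets full serializable conflict checking
  tn.set("worker:lastRun", Date.now().toString());
});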

Transactions

Range Queries

Range operations for querying multiple key-value pairs with async iterators, batching, and streaming modes.

/**
 * Get all key-value pairs in a range as an array
 * @param start - Start key or key selector
 * @param end - End key or key selector (optional)
 * @param opts - Range options (limit, reverse, streamingMode)
 * @returns Promise resolving to array of [key, value] tuples
 */
getRangeAll(
  start: KeyIn,
  end?: KeyIn,
  opts?: RangeOptions
): Promise<Array<[KeyOut, ValOut]>>;

/**
 * Get all key-value pairs with a prefix
 * @param prefix - Key prefix to match
 * @param opts - Range options
 * @returns Promise resolving to array of [key, value] tuples
 */
getRangeAllStartsWith(
  prefix: KeyIn,
  opts?: RangeOptions
): Promise<Array<[KeyOut, ValOut]>>;

/**
 * Async iterator over key-value pairs in a range
 * @param start - Start key or key selector
 * @param end - End key or key selector (optional)
 * @param opts - Range options
 * @returns AsyncIterableIterator of [key, value] tuples
 */
getRange(
  start: KeyIn,
  end?: KeyIn,
  opts?: RangeOptions
): AsyncIterableIterator<[KeyOut, ValOut]>;

interface RangeOptions {
  /** Maximum number of results */
  limit?: number;
  /** Return results in reverse order */
  reverse?: boolean;
  /** Streaming mode for fetching data */
  streamingMode?: StreamingMode;
  /** Target byte size for results */
  targetBytes?: number;
}
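
A sketch of both styles with an illustrative log: prefix; the async-iterator form runs inside a transaction so results stream lazily:

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Fetch only the 10 most recent entries (reverse + limit)
const latest = await db.getRangeAll("log:", "log:\xFF", {
  limit: 10,
  reverse: true,
});

// Stream a large range lazily instead of buffering it all in memory
await db.doTransaction(async (tn) => {
  for await (const [key, value] of tn.getRange("log:", "log:\xFF")) {
    console.log(key.toString(), "=>", value.toString());
  }
});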

Range Queries

Atomic Operations

Atomic mutations for lock-free concurrent updates including arithmetic, bitwise, and byte operations.

/** Atomic addition (little-endian) */
add(key: KeyIn, oper: Buffer | number): void;

/** Atomic maximum (little-endian comparison) */
max(key: KeyIn, oper: Buffer | number): void;

/** Atomic minimum (little-endian comparison) */
min(key: KeyIn, oper: Buffer | number): void;

/** Atomic bitwise AND */
bitAnd(key: KeyIn, oper: Buffer): void;

/** Atomic bitwise OR */
bitOr(key: KeyIn, oper: Buffer): void;

/** Atomic bitwise XOR */
bitXor(key: KeyIn, oper: Buffer): void;

/** Lexicographic byte minimum */
byteMin(key: KeyIn, value: ValIn): void;

/** Lexicographic byte maximum */
byteMax(key: KeyIn, value: ValIn): void;

/** Generic atomic operation */
atomicOp(op: MutationType, key: KeyIn, oper: ValIn): void;

enum MutationType {
  Add = 2,
  BitAnd = 6,
  BitOr = 7,
  BitXor = 8,
  Max = 12,
  Min = 13,
  ByteMin = 16,
  ByteMax = 17,
}
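
Beyond add (shown in the Quick Start), the comparison mutations are handy for high-water marks; a sketch using an illustrative highScore key and the same little-endian 64-bit encoding as the counter examples:

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Record a new high score only if it is larger than the stored value, no read required
const score = Buffer.allocUnsafe(8);
score.writeBigInt64LE(1234n, 0);
await db.max("highScore", score);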

Atomic Operations

Versionstamps

Versionstamp operations for generating unique temporal identifiers with transaction commit versions.

/**
 * Set a key with an embedded versionstamp
 * @param key - Key with versionstamp placeholder
 * @param value - Value to set
 * @param bakeAfterCommit - Whether to bake the versionstamp after commit
 */
setVersionstampedKey(
  key: KeyIn,
  value: ValIn,
  bakeAfterCommit?: boolean
): void;

/**
 * Set a value with an embedded versionstamp
 * @param key - Key to set
 * @param value - Value with versionstamp placeholder
 * @param bakeAfterCommit - Whether to bake the versionstamp after commit
 */
setVersionstampedValue(
  key: KeyIn,
  value: ValIn,
  bakeAfterCommit?: boolean
): void;

/**
 * Set a key with versionstamp suffix
 * @param key - Base key
 * @param value - Value to set
 * @param suffix - Optional suffix after versionstamp
 */
setVersionstampSuffixedKey(
  key: KeyIn,
  value: ValIn,
  suffix?: Buffer
): void;

/**
 * Set a value with versionstamp prefix
 * @param key - Key to set
 * @param value - Optional base value
 * @param prefix - Optional prefix before versionstamp
 */
setVersionstampPrefixedValue(
  key: KeyIn,
  value?: ValIn,
  prefix?: Buffer
): void;

/**
 * Get a value and extract versionstamp prefix
 * @param key - Key to get
 * @returns Object with versionstamp and value
 */
getVersionstampPrefixedValue(
  key: KeyIn
): Promise<{ stamp: Buffer; value: ValOut } | undefined>;

interface UnboundStamp {
  /** The data containing the versionstamp */
  data: Buffer;
  /** Position of the versionstamp in data */
  stampPos: number;
  /** Position of the transaction code (optional) */
  codePos?: number;
}
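
As a sketch, an append-only audit log can rely on the commit versionstamp for ordering; the audit: prefix and payload are illustrative (see also the Task Queue pattern above and the Event Sourcing pattern later in this document):

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

await db.doTransaction(async (tn) => {
  // The 10-byte commit versionstamp is appended after "audit:", so keys sort by commit order
  tn.setVersionstampSuffixedKey("audit:", JSON.stringify({ event: "login", user: "alice" }));
});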

Versionstamps

Subspaces

Subspace system for hierarchical key organization with automatic prefix management and encoder inheritance.

class Subspace<KeyIn = any, KeyOut = any, ValIn = any, ValOut = any> {
  /** The subspace prefix */
  readonly prefix: Buffer;

  /**
   * Create a child subspace
   * @param prefix - Prefix for the child subspace
   * @param keyXf - Optional key transformer
   * @param valueXf - Optional value transformer
   */
  at<KI, KO, VI, VO>(
    prefix: KeyIn,
    keyXf?: Transformer<KI, KO>,
    valueXf?: Transformer<VI, VO>
  ): Subspace<KI, KO, VI, VO>;

  /** Create a subspace with a specific key encoder */
  withKeyEncoding<KI, KO>(
    keyXf: Transformer<KI, KO>
  ): Subspace<KI, KO, ValIn, ValOut>;

  /** Create a subspace with a specific value encoder */
  withValueEncoding<VI, VO>(
    valXf: Transformer<VI, VO>
  ): Subspace<KeyIn, KeyOut, VI, VO>;

  /** Pack a key according to the subspace's encoding */
  packKey(key: KeyIn): Buffer;

  /** Unpack a key according to the subspace's encoding */
  unpackKey(key: Buffer): KeyOut;

  /** Check if a key is within this subspace */
  contains(key: Buffer): boolean;
}

/** The root subspace with no prefix or transformers */
const root: Subspace;
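
A small sketch of packing and unpacking keys through a subspace built from the exported root instance, assuming at() accepts a raw string prefix as in the Database examples above; the users: prefix is illustrative:

import { root, encoders } from "foundationdb";

// Scope the root subspace to a "users:" prefix with tuple-encoded keys
const users = root.at("users:").withKeyEncoding(encoders.tuple);

const packed = users.packKey(["alice", "profile"]); // Buffer prefixed with "users:"
console.log(users.contains(packed));                // true
console.log(users.unpackKey(packed));               // ["alice", "profile"]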

Subspaces

Directory Layer

Directory layer for dynamic prefix allocation, multi-tenant keyspace management, and hierarchical directory structures.

class Directory<KeyIn = any, KeyOut = any, ValIn = any, ValOut = any> {
  /** The content subspace for this directory */
  readonly content: Subspace<KeyIn, KeyOut, ValIn, ValOut>;

  /**
   * Create or open a directory
   * @param txnOrDb - Transaction or Database
   * @param path - Directory path (array of strings or single string)
   * @param layer - Optional layer identifier
   * @returns Promise resolving to Directory instance
   */
  createOrOpen(
    txnOrDb: Transaction | Database,
    path: string | string[],
    layer?: string | Buffer
  ): Promise<Directory>;

  /**
   * Open an existing directory (throws if not found)
   * @param txnOrDb - Transaction or Database
   * @param path - Directory path
   * @param layer - Optional expected layer
   */
  open(
    txnOrDb: Transaction | Database,
    path: string | string[],
    layer?: string | Buffer
  ): Promise<Directory>;

  /**
   * Create a new directory (throws if exists)
   * @param txnOrDb - Transaction or Database
   * @param path - Directory path
   * @param layer - Optional layer identifier
   * @param prefix - Optional manual prefix
   */
  create(
    txnOrDb: Transaction | Database,
    path: string | string[],
    layer?: string | Buffer,
    prefix?: Buffer
  ): Promise<Directory>;

  /** List subdirectories as an array */
  listAll(
    txnOrDb: Transaction | Database,
    path?: string | string[]
  ): Promise<string[]>;

  /** Remove a directory */
  remove(txnOrDb: Transaction | Database, path?: string | string[]): Promise<void>;

  /** Check if a directory exists */
  exists(txnOrDb: Transaction | Database, path?: string | string[]): Promise<boolean>;

  /** Get the directory's full path */
  getPath(): string[];

  /** Get the directory's layer */
  getLayer(): string;
}

/** Default directory layer instance */
const directory: DirectoryLayer;
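
A minimal sketch of creating and inspecting directories with the default directory instance; the path names are illustrative:

import fdb, { directory } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Allocates (or reuses) a short prefix for this path
const users = await directory.createOrOpen(db, ["myapp", "users"]);
console.log(users.getPath());                        // ["myapp", "users"]
console.log(await directory.exists(db, ["myapp"]));  // true
console.log(await directory.listAll(db, ["myapp"])); // ["users"]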

Directory Layer

Encoders and Transformers

Built-in encoder/decoder objects for key and value transformation with support for tuple, JSON, Buffer, and custom encodings.

interface Transformer<In, Out> {
  /** Encode a value to Buffer or string */
  pack(val: In): Buffer | string;
  /** Decode a Buffer to a value */
  unpack(buf: Buffer): Out;
  /** Optional: Pack with unbound versionstamp */
  packUnboundVersionstamp?(val: In): UnboundStamp;
  /** Optional: Bake versionstamp after commit */
  bakeVersionstamp?(val: any, versionstamp: Buffer, code: number): Out;
  /** Optional: Get range for prefix queries */
  range?(prefix: In): { begin: Buffer | string; end: Buffer | string };
}

/** Built-in encoders */
const encoders: {
  /** 32-bit big-endian integer encoder */
  int32BE: Transformer<number, number>;
  /** JSON encoder */
  json: Transformer<any, any>;
  /** UTF-8 string encoder */
  string: Transformer<string, string>;
  /** Buffer pass-through encoder */
  buf: Transformer<Buffer, Buffer>;
  /** Tuple encoder (from fdb-tuple package) */
  tuple: Transformer<TupleItem[], TupleItem[]>;
};

/** Tuple item types */
type TupleItem =
  | null
  | Buffer
  | string
  | number
  | boolean
  | TupleItem[]
  | UnboundStamp;
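
Any object providing pack and unpack can be passed to withKeyEncoding or withValueEncoding; a sketch of a custom value transformer for 64-bit little-endian bigints (the visits key is illustrative):

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Custom value transformer: 64-bit little-endian bigints
const bigintValue = {
  pack: (v: bigint) => {
    const buf = Buffer.allocUnsafe(8);
    buf.writeBigInt64LE(v, 0);
    return buf;
  },
  unpack: (buf: Buffer) => buf.readBigInt64LE(0),
};

const counters = db.withValueEncoding(bigintValue);
await counters.set("visits", 42n);
console.log(await counters.get("visits")); // 42n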

Encoders and Transformers

Key Selectors

Key selector operations for advanced key queries with relative positioning and range boundaries.

interface KeySelector<Key> {
  /** The reference key */
  key: Key;
  /** Include equal keys */
  orEqual: boolean;
  /** Offset from the reference key */
  offset: number;
}

/**
 * Create a key selector
 */
const keySelector: {
  /** Create a key selector */
  (key: any, orEqual: boolean, offset: number): KeySelector<any>;

  /** Select first key greater than */
  firstGreaterThan(key: any): KeySelector<any>;

  /** Select first key greater than or equal to */
  firstGreaterOrEqual(key: any): KeySelector<any>;

  /** Select last key less than */
  lastLessThan(key: any): KeySelector<any>;

  /** Select last key less than or equal to */
  lastLessOrEqual(key: any): KeySelector<any>;

  /** Add offset to selector */
  add(selector: KeySelector<any>, offset: number): KeySelector<any>;

  /** Next key selector */
  next(selector: KeySelector<any>): KeySelector<any>;

  /** Previous key selector */
  prev(selector: KeySelector<any>): KeySelector<any>;
};
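
Key selectors can be passed wherever a range boundary is expected; a sketch with illustrative keys:

import fdb, { keySelector } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Everything strictly after "user:100" and up to (but excluding) "user:200"
const rows = await db.getRangeAll(
  keySelector.firstGreaterThan("user:100"),
  keySelector.firstGreaterOrEqual("user:200")
);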

Key Selectors

Watches

Watch operations for reactive programming with key change notifications.

interface Watch {
  /** Promise that resolves when key changes (true) or watch is cancelled (false) */
  promise: Promise<boolean>;
  /** Cancel the watch */
  cancel(): void;
}

interface WatchWithValue<Value> extends Watch {
  /** The current value of the watched key */
  value: Value | undefined;
}

/**
 * Watch a key for changes
 * @param key - Key to watch
 * @param opts - Watch options
 * @returns Watch object with promise and cancel function
 */
watch(key: KeyIn, opts?: WatchOptions): Watch;

/**
 * Get a value and watch it for changes
 * @param key - Key to get and watch
 * @returns Watch object with value, promise, and cancel function
 */
getAndWatch(key: KeyIn): Promise<WatchWithValue<ValOut>>;

/**
 * Set a value and watch it for changes
 * @param key - Key to set
 * @param value - Value to set
 * @returns Watch object
 */
setAndWatch(key: KeyIn, value: ValIn): Promise<Watch>;

interface WatchOptions {
  /** Throw all errors instead of resolving to false */
  throwAllErrors?: boolean;
}
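
A brief sketch of reacting to a key change with getAndWatch; the config key is illustrative:

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

const watch = await db.getAndWatch("config:featureFlag");
console.log("current value:", watch.value?.toString());

watch.promise.then((changed) => {
  if (changed) console.log("config:featureFlag was modified");
  // resolves to false if the watch was cancelled instead
});

// Cancel when no longer needed
watch.cancel();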

Watches

Configuration Options

Network, database, and transaction configuration options for fine-tuning performance and behavior.

interface NetworkOptions {
  /** Enable trace output to directory */
  trace_enable?: string;
  /** Max trace file size */
  trace_roll_size?: number;
  /** TLS certificate path */
  TLS_cert_path?: string;
  /** TLS key path */
  TLS_key_path?: string;
  /** Directory for external client libraries */
  external_client_directory?: string;
  // ... and many more
}

interface DatabaseOptions {
  /** Location cache size */
  location_cache_size?: number;
  /** Maximum outstanding watches */
  max_watches?: number;
  /** Transaction timeout in milliseconds */
  transaction_timeout?: number;
  /** Transaction retry limit */
  transaction_retry_limit?: number;
  /** Transaction size limit in bytes */
  transaction_size_limit?: number;
  // ... and more
}

interface TransactionOptions {
  /** Timeout in milliseconds */
  timeout?: number;
  /** Retry limit */
  retry_limit?: number;
  /** Disable read-your-writes */
  read_your_writes_disable?: true;
  /** System immediate priority */
  priority_system_immediate?: true;
  /** Batch priority */
  priority_batch?: true;
  /** Access system keys */
  access_system_keys?: true;
  /** Lock-aware transaction */
  lock_aware?: true;
  // ... and many more
}

enum StreamingMode {
  WantAll = -2,
  Iterator = -1,
  Exact = 0,
  Small = 1,
  Medium = 2,
  Large = 3,
  Serial = 4,
}
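
A sketch of applying options at the database and transaction levels; the specific values are illustrative:

import fdb from "foundationdb";

fdb.setAPIVersion(620);

// Database-level defaults
const db = fdb.open(undefined, {
  transaction_timeout: 10000,      // ms
  transaction_retry_limit: 100,
});

// Per-transaction overrides
await db.doTransaction(async (tn) => {
  tn.set("key", "value");
}, { priority_batch: true });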

Configuration

Error Handling

Error classes and predicates for handling FoundationDB errors.

class FDBError extends Error {
  /** The FDB error code */
  code: number;
  /** Error description */
  message: string;
}

class DirectoryError extends Error {
  /** Error description */
  message: string;
}

enum ErrorPredicate {
  /** Error is retryable */
  Retryable = 50000,
  /** Transaction may have committed */
  MaybeCommitted = 50001,
  /** Error is retryable and not committed */
  RetryableNotCommitted = 50002,
}

Utility Functions

Utility functions for key manipulation and type checking.

/** Utility object */
const util: {
  /** Increment a string/Buffer to get the next key in lexicographic order */
  strInc(val: string | Buffer): Buffer;
};

/**
 * Module type constant indicating the native binding implementation
 * Always 'napi' for N-API based native bindings
 */
const modType: "napi";
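
strInc is mainly useful for turning a prefix into an exclusive range end; a sketch assuming util is available on the default export like encoders and tuple (the user: prefix is illustrative):

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// strInc("user:") is the first key after every key starting with "user:",
// so this is equivalent to getRangeAllStartsWith("user:")
const end = fdb.util.strInc("user:");
const users = await db.getRangeAll("user:", end);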

Tuple Package

The package re-exports the fdb-tuple package for tuple encoding/decoding:

/** Tuple encoding/decoding functions (from fdb-tuple package) */
const tuple: {
  /** Pack tuple items into a Buffer */
  pack(items: TupleItem[]): Buffer;
  /** Unpack a Buffer into tuple items */
  unpack(buffer: Buffer): TupleItem[];
  /** Create an unbound versionstamp marker */
  unboundVersionstamp(code?: number): UnboundStamp;
  /** Get range for tuple prefix */
  range(items: TupleItem[]): { begin: Buffer; end: Buffer };
  // ... and more tuple functions
};
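
A quick sketch of packing, unpacking, and range construction with the re-exported tuple functions:

import { tuple } from "foundationdb";

const key = tuple.pack(["user", 42, true]); // Buffer with the encoded tuple
console.log(tuple.unpack(key));             // ["user", 42, true]

// Range covering every key that starts with the ["user"] tuple prefix
const { begin, end } = tuple.range(["user"]);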

Error Handling

FoundationDB operations can throw FDBError with specific error codes. The library provides automatic retry logic for retryable errors when using doTransaction():

import { FDBError, ErrorPredicate } from "foundationdb";

try {
  await db.get("mykey");
} catch (error) {
  if (error instanceof FDBError) {
    console.log("FDB Error code:", error.code);
    console.log("FDB Error message:", error.message);
  }
}

Type Safety

The library provides full TypeScript support with generic types for key and value transformations:

// Database with specific key/value types
const db: Database<string, string, string, string> = fdb
  .open()
  .withKeyEncoding(fdb.encoders.string)
  .withValueEncoding(fdb.encoders.string);

// Tuple-encoded keys with JSON values
const tupleDb: Database<TupleItem[], TupleItem[], any, any> = fdb
  .open()
  .withKeyEncoding(fdb.encoders.tuple)
  .withValueEncoding(fdb.encoders.json);

Test Framework Integration

Jest Integration

Test FoundationDB operations with Jest's async/await support and lifecycle hooks.

import fdb from "foundationdb";
import { describe, beforeAll, afterAll, beforeEach, test, expect } from "@jest/globals";

describe("FoundationDB Tests", () => {
  let db: ReturnType<typeof fdb.open>;

  beforeAll(() => {
    fdb.setAPIVersion(620);
    db = fdb.open();
  });

  afterAll(() => {
    db.close();
    fdb.stopNetworkSync();
  });

  beforeEach(async () => {
    // Clear test data
    await db.clearRangeStartsWith("test:");
  });

  test("should perform basic operations", async () => {
    await db.set("test:key", "value");
    const result = await db.get("test:key");
    expect(result?.toString()).toBe("value");
  });

  test("should handle transactions", async () => {
    const count = await db.doTransaction(async (tn) => {
      tn.set("test:counter", "5");
      const val = await tn.get("test:counter");
      return parseInt(val?.toString() || "0");
    });
    expect(count).toBe(5);
  });

  test("should handle errors gracefully", async () => {
    await expect(async () => {
      await db.doTransaction(async (tn) => {
        throw new Error("Test error");
      });
    }).rejects.toThrow("Test error");
  });
});

Mocha Integration

Use Mocha with Promise-based tests and proper cleanup.

import fdb from "foundationdb";
import { describe, before, after, beforeEach, it } from "mocha";
import { expect } from "chai";

describe("FoundationDB Operations", function() {
  this.timeout(10000); // 10 second timeout

  let db: ReturnType<typeof fdb.open>;

  before(() => {
    fdb.setAPIVersion(620);
    db = fdb.open();
  });

  after(() => {
    db.close();
    fdb.stopNetworkSync();
  });

  beforeEach(async () => {
    await db.clearRangeStartsWith("test:");
  });

  it("should read and write data", async () => {
    await db.set("test:mocha", "data");
    const value = await db.get("test:mocha");
    expect(value?.toString()).to.equal("data");
  });

  it("should handle concurrent operations", async () => {
    const operations = Array.from({ length: 10 }, (_, i) =>
      db.set(`test:item:${i}`, `value${i}`)
    );
    await Promise.all(operations);

    const items = await db.getRangeAllStartsWith("test:item:");
    expect(items).to.have.lengthOf(10);
  });

  it("should support atomic operations", async () => {
    const delta = Buffer.allocUnsafe(8);
    delta.writeBigInt64LE(1n, 0);

    await Promise.all([
      db.add("test:atomic", delta),
      db.add("test:atomic", delta),
      db.add("test:atomic", delta)
    ]);

    const result = await db.get("test:atomic");
    expect(result?.readBigInt64LE(0)).to.equal(3n);
  });
});

Vitest Integration

Leverage Vitest's fast execution and modern testing features.

import fdb from "foundationdb";
import { describe, beforeAll, afterAll, beforeEach, test, expect } from "vitest";

describe("FoundationDB with Vitest", () => {
  let db: ReturnType<typeof fdb.open>;

  beforeAll(() => {
    fdb.setAPIVersion(620);
    db = fdb.open();
  });

  afterAll(() => {
    db.close();
    fdb.stopNetworkSync();
  });

  beforeEach(async () => {
    await db.clearRangeStartsWith("test:");
  });

  test("concurrent writes with snapshot isolation", async () => {
    const writes = Array.from({ length: 100 }, (_, i) =>
      db.set(`test:concurrent:${i}`, `value${i}`)
    );

    await Promise.all(writes);

    const results = await db.getRangeAllStartsWith("test:concurrent:");
    expect(results).toHaveLength(100);
  });

  test("transaction retry logic", async () => {
    let attempts = 0;

    const result = await db.doTransaction(async (tn) => {
      attempts++;
      const value = await tn.get("test:retry");
      tn.set("test:retry", (parseInt(value?.toString() || "0") + 1).toString());
      return attempts;
    });

    expect(result).toBeGreaterThanOrEqual(1);
  });

  test("range queries with limits", async () => {
    for (let i = 0; i < 50; i++) {
      await db.set(`test:range:${i.toString().padStart(3, "0")}`, `value${i}`);
    }

    const page1 = await db.getRangeAll("test:range:", "test:range:~", { limit: 10 });
    const page2 = await db.getRangeAll("test:range:", "test:range:~", {
      limit: 10,
      reverse: true
    });

    expect(page1).toHaveLength(10);
    expect(page2).toHaveLength(10);
    expect(page1[0][0].toString()).not.toBe(page2[0][0].toString());
  });
});

Test Utilities

Helper functions for common test scenarios.

import fdb, { FDBError } from "foundationdb";

export class FDBTestHelper {
  private db: ReturnType<typeof fdb.open>;
  private testPrefix: string;

  constructor(testPrefix = "test:") {
    fdb.setAPIVersion(620);
    this.db = fdb.open();
    this.testPrefix = testPrefix;
  }

  getDB() {
    return this.db;
  }

  async cleanup() {
    await this.db.clearRangeStartsWith(this.testPrefix);
  }

  async close() {
    this.db.close();
  }

  async withTransaction<T>(
    fn: (tn: any) => Promise<T>,
    opts?: any
  ): Promise<T> {
    return await this.db.doTransaction(fn, opts);
  }

  async expectError(
    fn: () => Promise<any>,
    errorCode?: number
  ): Promise<FDBError> {
    try {
      await fn();
      throw new Error("Expected operation to throw");
    } catch (error) {
      if (error instanceof FDBError) {
        if (errorCode !== undefined && error.code !== errorCode) {
          throw new Error(`Expected error code ${errorCode}, got ${error.code}`);
        }
        return error;
      }
      throw error;
    }
  }

  async populateTestData(count: number, prefix?: string) {
    const pfx = prefix || this.testPrefix;
    const operations = Array.from({ length: count }, (_, i) =>
      this.db.set(`${pfx}${i}`, `value${i}`)
    );
    await Promise.all(operations);
  }

  async assertKeyExists(key: string): Promise<Buffer> {
    const value = await this.db.get(key);
    if (value === undefined) {
      throw new Error(`Expected key "${key}" to exist`);
    }
    return value;
  }

  async assertKeyNotExists(key: string): Promise<void> {
    const value = await this.db.get(key);
    if (value !== undefined) {
      throw new Error(`Expected key "${key}" to not exist`);
    }
  }
}

// Usage in tests
import { describe, beforeAll, afterAll, beforeEach, test, expect } from "vitest";

describe("Using FDBTestHelper", () => {
  let helper: FDBTestHelper;

  beforeAll(() => {
    helper = new FDBTestHelper("test:helper:");
  });

  afterAll(async () => {
    await helper.cleanup();
    await helper.close();
  });

  beforeEach(async () => {
    await helper.cleanup();
  });

  test("populate and verify test data", async () => {
    await helper.populateTestData(10);
    await helper.assertKeyExists("test:helper:0");
    await helper.assertKeyExists("test:helper:9");
    await helper.assertKeyNotExists("test:helper:10");
  });

  test("transaction helper", async () => {
    const result = await helper.withTransaction(async (tn) => {
      tn.set("test:helper:tx", "value");
      return "success";
    });
    expect(result).toBe("success");
  });
});

Advanced Usage Patterns

Connection Pooling Pattern

Manage database connections efficiently across your application.

import fdb from "foundationdb";

class FDBConnectionPool {
  private static instance: FDBConnectionPool;
  private db: ReturnType<typeof fdb.open> | null = null;
  private initialized = false;

  private constructor() {}

  static getInstance(): FDBConnectionPool {
    if (!FDBConnectionPool.instance) {
      FDBConnectionPool.instance = new FDBConnectionPool();
    }
    return FDBConnectionPool.instance;
  }

  async initialize(opts?: { clusterFile?: string; trace?: string }) {
    if (this.initialized) return;

    try {
      fdb.setAPIVersion(620);

      if (opts?.trace) {
        fdb.configNetwork({
          trace_enable: opts.trace,
          trace_format: "json",
        });
      }

      this.db = fdb.open(opts?.clusterFile);
      this.db.setNativeOptions({
        transaction_timeout: 10000,
        transaction_retry_limit: 100,
        max_watches: 20000,
      });

      this.initialized = true;
    } catch (error) {
      console.error("Failed to initialize FDB connection:", error);
      throw error;
    }
  }

  getDatabase(): ReturnType<typeof fdb.open> {
    if (!this.db) {
      throw new Error("Database not initialized. Call initialize() first.");
    }
    return this.db;
  }

  async shutdown() {
    if (this.db) {
      this.db.close();
      fdb.stopNetworkSync();
      this.db = null;
      this.initialized = false;
    }
  }

  isInitialized(): boolean {
    return this.initialized;
  }
}

// Usage across application
export const fdbPool = FDBConnectionPool.getInstance();

// In application startup
await fdbPool.initialize({ trace: "./fdb-traces" });

// In any module
import { fdbPool } from "./fdb-pool";

async function getData(key: string) {
  const db = fdbPool.getDatabase();
  return await db.get(key);
}

// Graceful shutdown
process.on("SIGTERM", async () => {
  await fdbPool.shutdown();
  process.exit(0);
});

Retry with Exponential Backoff

Implement robust retry logic for transient failures.

import fdb, { FDBError } from "foundationdb";

async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 5,
  baseDelay = 100
): Promise<T> {
  let lastError: Error | undefined;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error as Error;

      // Don't retry errors that retrying cannot fix (e.g. 2101: transaction_too_large)
      if (error instanceof FDBError && error.code === 2101) {
        throw error; // Not retryable
      }

      if (attempt < maxRetries) {
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * delay * 0.1;
        await new Promise((resolve) => setTimeout(resolve, delay + jitter));
      }
    }
  }

  throw new Error(`Operation failed after ${maxRetries} retries: ${lastError?.message}`);
}

// Usage
fdb.setAPIVersion(620);
const db = fdb.open();

const result = await withRetry(async () => {
  return await db.doTransaction(async (tn) => {
    const value = await tn.get("critical:data");
    tn.set("critical:data", "updated");
    return value;
  });
});

Circuit Breaker Pattern

Protect against cascading failures with circuit breaker.

import fdb, { FDBError } from "foundationdb";

class CircuitBreaker {
  private failures = 0;
  private lastFailureTime = 0;
  private state: "closed" | "open" | "half-open" = "closed";

  constructor(
    private threshold = 5,
    private timeout = 60000,
    private resetTimeout = 30000
  ) {}

  async execute<T>(operation: () => Promise<T>): Promise<T> {
    if (this.state === "open") {
      if (Date.now() - this.lastFailureTime > this.resetTimeout) {
        this.state = "half-open";
      } else {
        throw new Error("Circuit breaker is OPEN");
      }
    }

    try {
      const result = await operation();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  private onSuccess() {
    this.failures = 0;
    this.state = "closed";
  }

  private onFailure() {
    this.failures++;
    this.lastFailureTime = Date.now();

    if (this.failures >= this.threshold) {
      this.state = "open";
    }
  }

  getState() {
    return this.state;
  }
}

// Usage
fdb.setAPIVersion(620);
const db = fdb.open();
const breaker = new CircuitBreaker(5, 60000, 30000);

async function safeQuery(key: string): Promise<Buffer | undefined> {
  return await breaker.execute(async () => {
    return await db.get(key);
  });
}

Batch Processing with Chunking

Process large datasets efficiently with automatic chunking.

import fdb from "foundationdb";

async function processBatchInChunks<T>(
  db: ReturnType<typeof fdb.open>,
  prefix: string,
  processor: (batch: Array<[Buffer, Buffer]>) => Promise<T[]>,
  chunkSize = 100
): Promise<T[]> {
  const results: T[] = [];
  let startKey = prefix;

  while (true) {
    const chunk = await db.getRangeAll(
      startKey,
      prefix + "~",
      { limit: chunkSize }
    );

    if (chunk.length === 0) break;

    const chunkResults = await processor(chunk);
    results.push(...chunkResults);

    if (chunk.length < chunkSize) break;

    // Continue from after last key
    startKey = chunk[chunk.length - 1][0].toString() + "\x00";
  }

  return results;
}

// Usage
fdb.setAPIVersion(620);
const db = fdb.open();

const processed = await processBatchInChunks(
  db,
  "users:",
  async (batch) => {
    return batch.map(([key, value]) => ({
      key: key.toString(),
      parsed: JSON.parse(value.toString()),
    }));
  },
  50
);

Caching Layer with Invalidation

Implement application-level caching with FDB watches.

import fdb from "foundationdb";

class FDBCache<T> {
  private cache = new Map<string, { value: T; watch: any }>();

  constructor(private db: ReturnType<typeof fdb.open>) {}

  async get(key: string, parser: (buf: Buffer) => T): Promise<T | undefined> {
    // Check cache first
    const cached = this.cache.get(key);
    if (cached) {
      return cached.value;
    }

    // Fetch and cache with watch
    const watch = await this.db.getAndWatch(key);

    if (watch.value === undefined) {
      return undefined;
    }

    const value = parser(watch.value);
    this.cache.set(key, { value, watch });

    // Invalidate on change
    watch.promise.then(() => {
      this.cache.delete(key);
    });

    return value;
  }

  async set(key: string, value: T, serializer: (val: T) => Buffer) {
    await this.db.set(key, serializer(value));
    this.cache.delete(key); // Invalidate cache
  }

  clear() {
    this.cache.forEach(({ watch }) => watch.cancel());
    this.cache.clear();
  }
}

// Usage
fdb.setAPIVersion(620);
const db = fdb.open();
const cache = new FDBCache(db);

const user = await cache.get(
  "user:123",
  (buf) => JSON.parse(buf.toString())
);

await cache.set(
  "user:123",
  { name: "Alice", email: "alice@example.com" },
  (val) => Buffer.from(JSON.stringify(val))
);

Distributed Lock Implementation

Implement distributed locking for coordination.

import fdb from "foundationdb";

class DistributedLock {
  private lockKey: string;
  private lockValue: string;

  constructor(
    private db: ReturnType<typeof fdb.open>,
    lockName: string,
    private ttl = 30000
  ) {
    this.lockKey = `locks:${lockName}`;
    this.lockValue = `${Date.now()}-${Math.random()}`;
  }

  async acquire(timeout = 10000): Promise<boolean> {
    const startTime = Date.now();

    while (Date.now() - startTime < timeout) {
      try {
        const acquired = await this.db.doTransaction(async (tn) => {
          const existing = await tn.get(this.lockKey);

          if (existing === undefined) {
            // Lock is free
            tn.set(this.lockKey, this.lockValue);
            return true;
          }

          // Check if lock expired
          const lockData = existing.toString();
          const lockTime = parseInt(lockData.split("-")[0]);

          if (Date.now() - lockTime > this.ttl) {
            // Lock expired, take it
            tn.set(this.lockKey, this.lockValue);
            return true;
          }

          return false;
        });

        if (acquired) return true;
      } catch (error) {
        // Transaction conflict, retry
      }

      // Wait before retry
      await new Promise((resolve) => setTimeout(resolve, 100));
    }

    return false;
  }

  async release(): Promise<void> {
    await this.db.doTransaction(async (tn) => {
      const existing = await tn.get(this.lockKey);

      if (existing?.toString() === this.lockValue) {
        tn.clear(this.lockKey);
      }
    });
  }

  async withLock<T>(fn: () => Promise<T>, timeout = 10000): Promise<T> {
    const acquired = await this.acquire(timeout);
    if (!acquired) {
      throw new Error(`Failed to acquire lock: ${this.lockKey}`);
    }

    try {
      return await fn();
    } finally {
      await this.release();
    }
  }
}

// Usage
fdb.setAPIVersion(620);
const db = fdb.open();

const lock = new DistributedLock(db, "resource:123", 30000);

await lock.withLock(async () => {
  // Critical section - only one process can execute this at a time
  const value = await db.get("shared:resource");
  await db.set("shared:resource", "updated");
});

Event Sourcing Pattern

Implement event sourcing with ordered event storage.

import fdb, { tuple } from "foundationdb";

interface Event {
  type: string;
  data: any;
  timestamp: number;
  version?: Buffer;
}

class EventStore {
  private db: ReturnType<typeof fdb.open>;

  constructor() {
    fdb.setAPIVersion(620);
    this.db = fdb.open()
      .withKeyEncoding(fdb.encoders.tuple)
      .withValueEncoding(fdb.encoders.json);
  }

  async appendEvent(
    streamId: string,
    eventType: string,
    data: any
  ): Promise<Buffer> {
    return await this.db.doTransaction(async (tn) => {
      const key = ["events", streamId, tuple.unboundVersionstamp()];

      const event: Event = {
        type: eventType,
        data,
        timestamp: Date.now(),
      };

      tn.setVersionstampedKey(key, event);

      const versionstamp = tn.getVersionstamp();
      return versionstamp.promise;
    });
  }

  async getEvents(
    streamId: string,
    fromVersion?: Buffer,
    limit?: number
  ): Promise<Event[]> {
    const start = fromVersion
      ? ["events", streamId, fromVersion]
      : ["events", streamId];

    const events = await this.db.getRangeAll(
      start,
      ["events", streamId, Buffer.from([0xff])],
      { limit }
    );

    return events.map(([key, value]) => ({
      ...(value as any),
      version: key[2] as Buffer,
    }));
  }

  async replay(
    streamId: string,
    handler: (event: Event) => void | Promise<void>
  ): Promise<void> {
    const events = await this.getEvents(streamId);

    for (const event of events) {
      await handler(event);
    }
  }

  async getSnapshot(streamId: string): Promise<any> {
    const events = await this.getEvents(streamId);

    // Rebuild state from events
    let state: any = {};

    for (const event of events) {
      state = this.applyEvent(state, event);
    }

    return state;
  }

  private applyEvent(state: any, event: Event): any {
    // Apply event to state based on event type
    switch (event.type) {
      case "created":
        return { ...event.data, created: true };
      case "updated":
        return { ...state, ...event.data };
      case "deleted":
        return { ...state, deleted: true };
      default:
        return state;
    }
  }
}

// Usage
const store = new EventStore();

// Append events
await store.appendEvent("order:123", "OrderCreated", {
  items: ["item1", "item2"],
  total: 100,
});

await store.appendEvent("order:123", "ItemAdded", {
  item: "item3",
});

await store.appendEvent("order:123", "OrderConfirmed", {
  confirmedAt: Date.now(),
});

// Replay events
await store.replay("order:123", (event) => {
  console.log(`Event: ${event.type}`, event.data);
});

// Get current snapshot
const currentState = await store.getSnapshot("order:123");
console.log("Current state:", currentState);

Error Handling

Common FDB Errors

Understanding and handling common FoundationDB error codes.

import fdb, { FDBError } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function handleFDBErrors() {
  try {
    await db.doTransaction(async (tn) => {
      tn.set("key", "value");
    });
  } catch (error) {
    if (error instanceof FDBError) {
      switch (error.code) {
        case 1007: // transaction_too_old
          console.error("Transaction took too long - keep transactions short or split the work");
          break;

        case 1009: // future_version
          console.error("Request for future version - possible clock skew or lagging storage servers");
          break;

        case 1020: // not_committed (conflict with another transaction)
          console.error("Transaction conflicted with another transaction - doTransaction retries this automatically");
          break;

        case 1021: // commit_unknown_result
          console.error("Commit status unknown - the transaction may have committed; check if data was written");
          break;

        case 1025: // transaction_cancelled
          console.error("Transaction was cancelled");
          break;

        case 1031: // transaction_timed_out
          console.error("Operation exceeded the transaction timeout limit");
          break;

        case 2101: // transaction_too_large
          console.error("Transaction size exceeds limit - split into smaller transactions");
          break;

        default:
          console.error(`FDB Error ${error.code}: ${error.message}`);
      }
    } else {
      console.error("Non-FDB error:", error);
    }
  }
}

Transaction Conflict Errors

Handle transaction conflicts with proper retry logic.

import fdb, { FDBError } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function handleConflicts() {
  let attempts = 0;
  const maxAttempts = 5;

  while (attempts < maxAttempts) {
    try {
      const result = await db.doTransaction(async (tn) => {
        attempts++;

        const value = await tn.get("counter");
        const count = parseInt(value?.toString() || "0");

        // Simulate some processing
        await new Promise(resolve => setTimeout(resolve, 10));

        tn.set("counter", (count + 1).toString());
        return count + 1;
      });

      console.log(`Success after ${attempts} attempts:`, result);
      return result;
    } catch (error) {
      if (error instanceof FDBError && error.code === 1020) {
        console.log(`Attempt ${attempts} failed with conflict, retrying...`);
        if (attempts >= maxAttempts) {
          throw new Error(`Failed after ${maxAttempts} attempts`);
        }
        // doTransaction handles retry automatically, but showing manual retry for illustration
      } else {
        throw error;
      }
    }
  }
}
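
Note that db.doTransaction already retries retryable errors, including conflicts (1020), internally, so the manual loop above is only for illustration. A minimal sketch of the idiomatic version of the same counter update:

// doTransaction re-runs the function body automatically when the commit
// conflicts, so no explicit retry loop is needed
const result = await db.doTransaction(async (tn) => {
  const value = await tn.get("counter");
  const count = parseInt(value?.toString() || "0");
  tn.set("counter", (count + 1).toString());
  return count + 1;
});

console.log("Counter is now", result);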

Timeout Errors

Handle and prevent timeout errors effectively.

import fdb, { FDBError } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function handleTimeouts() {
  try {
    await db.doTransaction(async (tn) => {
      // Long-running operation
      for await (const batch of tn.getRangeBatch("data:", "data:~")) {
        // Process batch
        await processBatch(batch);
      }
    }, {
      timeout: 30000, // 30 second timeout
    });
  } catch (error) {
    if (error instanceof FDBError && error.code === 1031) {
      console.error("Transaction timed out");

      // Strategies to fix:
      // 1. Increase timeout
      // 2. Split into smaller transactions
      // 3. Use snapshot reads where possible
      // 4. Optimize query performance
    }
    throw error;
  }
}

async function processBatch(batch: any) {
  // Batch processing logic
}

Network Errors

Handle network-related errors and connectivity issues.

import fdb, { FDBError } from "foundationdb";

async function handleNetworkErrors() {
  fdb.setAPIVersion(620);

  try {
    const db = fdb.open("/path/to/fdb.cluster");

    await db.doTransaction(async (tn) => {
      tn.set("key", "value");
    });
  } catch (error) {
    // The client retries connections in the background, so an unreachable
    // cluster usually shows up as hanging operations or timeouts rather
    // than an immediate error
    if (error instanceof FDBError) {
      if (error.code === 2104) {
        console.error("Cluster file invalid or corrupted (connection_string_invalid)");
      } else if (error.code === 1031) {
        console.error("Transaction timed out - cluster may be unreachable; check network and cluster file");
      }
    }
    throw error;
  }
}

Directory Layer Errors

Handle directory-specific errors.

import fdb, { directory, DirectoryError } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function handleDirectoryErrors() {
  try {
    // Try to create existing directory
    const dir = await directory.create(db, ["myapp", "users"]);
  } catch (error) {
    if (error instanceof DirectoryError) {
      console.error("Directory operation failed:", error.message);

      // Check specific error messages
      if (error.message.includes("already exists")) {
        console.log("Directory exists, opening instead");
        const dir = await directory.open(db, ["myapp", "users"]);
      } else if (error.message.includes("does not exist")) {
        console.log("Directory missing, creating");
        const dir = await directory.create(db, ["myapp", "users"]);
      } else if (error.message.includes("layer mismatch")) {
        console.error("Directory layer type mismatch");
      }
    }
    throw error;
  }
}

Graceful Error Recovery

Implement graceful error recovery patterns.

import fdb, { FDBError } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function safeOperation<T>(
  operation: () => Promise<T>,
  fallback?: T
): Promise<T | undefined> {
  try {
    return await operation();
  } catch (error) {
    if (error instanceof FDBError) {
      console.error(`FDB Error ${error.code}:`, error.message);

      // Return fallback for transient errors (timed out / transaction too old)
      if (error.code === 1031 || error.code === 1007) {
        console.log("Using fallback value due to transient error");
        return fallback;
      }
    }

    // Re-throw non-recoverable errors
    throw error;
  }
}

// Usage
const value = await safeOperation(
  async () => await db.get("config:setting"),
  Buffer.from("default-value")
);

// With retry wrapper
async function withGracefulRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  let lastError: Error;

  for (let i = 0; i < maxRetries; i++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error as Error;

      if (error instanceof FDBError) {
        // Don't retry errors that won't succeed on a retry, e.g.
        // transaction_too_large (2101) or connection_string_invalid (2104)
        if ([2101, 2104].includes(error.code)) {
          throw error;
        }
      }

      // Exponential backoff
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, i) * 100)
      );
    }
  }

  throw new Error(`Failed after ${maxRetries} retries: ${lastError!.message}`);
}
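
Usage of the retry wrapper, here wrapping the same counter update used throughout this section:

const count = await withGracefulRetry(() =>
  db.doTransaction(async (tn) => {
    const value = await tn.get("counter");
    const n = parseInt(value?.toString() || "0");
    tn.set("counter", (n + 1).toString());
    return n + 1;
  })
);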

Troubleshooting

Performance Issues

Diagnose and fix common performance problems.

Issue: Slow Transactions

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Problem: Large transaction with many operations
async function slowTransaction() {
  await db.doTransaction(async (tn) => {
    // Thousands of operations in single transaction
    for (let i = 0; i < 10000; i++) {
      tn.set(`key:${i}`, `value:${i}`);
    }
  });
}

// Solution: Split into smaller transactions
async function fastTransactions() {
  const batchSize = 100;

  for (let i = 0; i < 10000; i += batchSize) {
    await db.doTransaction(async (tn) => {
      for (let j = i; j < i + batchSize && j < 10000; j++) {
        tn.set(`key:${j}`, `value:${j}`);
      }
    });
  }
}

// Solution: Use appropriate transaction options
await db.doTransaction(async (tn) => {
  // Process data
}, {
  timeout: 30000,
  size_limit: 10000000, // bytes; FDB caps transactions at 10 MB
});

Issue: High Conflict Rate

// Problem: Many transactions competing for same keys
async function highConflict() {
  await Promise.all(
    Array.from({ length: 100 }, () =>
      db.doTransaction(async (tn) => {
        const value = await tn.get("counter");
        const count = parseInt(value?.toString() || "0");
        tn.set("counter", (count + 1).toString());
      })
    )
  );
}

// Solution: Use atomic operations
async function lowConflict() {
  const delta = Buffer.allocUnsafe(8);
  delta.writeBigInt64LE(1n, 0);

  await Promise.all(
    Array.from({ length: 100 }, () =>
      db.add("counter", delta)
    )
  );
}

// Solution: Shard hot keys
class ShardedCounter {
  constructor(private db: ReturnType<typeof fdb.open>, private shards = 10) {}

  async increment() {
    const shard = Math.floor(Math.random() * this.shards);
    const delta = Buffer.allocUnsafe(8);
    delta.writeBigInt64LE(1n, 0);
    await this.db.add(`counter:${shard}`, delta);
  }

  async getTotal(): Promise<number> {
    let total = 0;
    for (let i = 0; i < this.shards; i++) {
      const value = await this.db.get(`counter:${i}`);
      if (value) {
        total += Number(value.readBigInt64LE(0));
      }
    }
    return total;
  }
}
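
A brief usage sketch of the ShardedCounter above, mirroring the 100 concurrent increments from the earlier example:

const counter = new ShardedCounter(db);

// Increments land on random shards, so concurrent writers rarely conflict
await Promise.all(
  Array.from({ length: 100 }, () => counter.increment())
);

console.log("Total:", await counter.getTotal()); // 100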

Issue: Memory Usage

// Problem: Loading too much data at once
async function highMemory() {
  const allData = await db.getRangeAll("data:", "data:~");
  // Process huge dataset - may cause OOM
}

// Solution: Use streaming with batches
async function lowMemory() {
  for await (const batch of db.getRangeBatch("data:", "data:~")) {
    // Process one batch at a time
    await processBatch(batch);
  }
}

// Solution: Use appropriate streaming mode
import { StreamingMode } from "foundationdb";

async function optimizedStreaming() {
  for await (const batch of db.getRangeBatch("data:", "data:~", {
    streamingMode: StreamingMode.Small, // Smaller batches
  })) {
    await processBatch(batch);
  }
}

async function processBatch(batch: any) {
  // Process batch
}

Connection Problems

Diagnose and resolve connection issues.

Issue: Cannot Connect to Cluster

import fdb from "foundationdb";
import fs from "fs";

// Check 1: Verify cluster file
console.log("Checking cluster file...");
try {
  const clusterFile = "/etc/foundationdb/fdb.cluster";
  const content = fs.readFileSync(clusterFile, "utf8");
  console.log("Cluster file content:", content);
} catch (error) {
  console.error("Cannot read cluster file:", error);
}

// Check 2: Test connection
fdb.setAPIVersion(620);
try {
  const db = fdb.open();
  await db.get("test");
  console.log("Connection successful");
} catch (error) {
  console.error("Connection failed:", error);
}

// Check 3: Verify network configuration
// Note: configNetwork must be called before the network starts (i.e. before
// the first open()), so run this check in a fresh process
fdb.configNetwork({
  trace_enable: "./fdb-traces",
  trace_format: "json",
});

// Check traces for connection errors

Issue: Transaction Timeouts

// Diagnostic: Check transaction size
await db.doTransaction(async (tn) => {
  // Perform operations

  const size = await tn.getApproximateSize();
  console.log("Transaction size:", size);

  if (size > 5000000) {
    console.warn("Transaction is large - approaching the 10 MB limit (transaction_too_large)");
  }
});

// Solution: Increase timeout or split transaction
db.setNativeOptions({
  transaction_timeout: 30000, // 30 seconds
});

// Or per-transaction
await db.doTransaction(async (tn) => {
  // Operations
}, {
  timeout: 60000, // 60 seconds
});

Issue: Watch Not Triggering

// Problem: Watch created outside transaction
// const watch = db.watch("key"); // ERROR

// Solution: Create watch in transaction
const watch = await db.doTransaction(async (tn) => {
  return tn.watch("key");
});

// Problem: Awaiting watch inside transaction
await db.doTransaction(async (tn) => {
  const watch = tn.watch("key");
  // await watch.promise; // DEADLOCK!
  return watch;
}).then(async (watch) => {
  await watch.promise; // Correct
});

// Problem: Too many watches
db.setNativeOptions({
  max_watches: 20000, // Increase limit
});
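
Putting the pieces together, a minimal watch loop sketch using the same pattern (the key name "config:flag" is illustrative):

async function watchLoop() {
  while (true) {
    // Read the current value and register the watch in one transaction
    const watch = await db.doTransaction(async (tn) => {
      const value = await tn.get("config:flag");
      console.log("Current value:", value?.toString());
      return tn.watch("config:flag");
    });

    // Await the watch outside the transaction, then loop to re-read
    await watch.promise;
    console.log("config:flag changed");
  }
}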

Data Corruption Issues

Prevent and diagnose data integrity problems.

Issue: Encoding Mismatch

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Problem: Writing with one encoder, reading with another
await db.withValueEncoding(fdb.encoders.json)
  .set("key", { value: 123 });

const wrong = await db.withValueEncoding(fdb.encoders.string)
  .get("key"); // Wrong encoding!

// Solution: Consistent encoder usage
const jsonDb = db.withValueEncoding(fdb.encoders.json);

await jsonDb.set("key", { value: 123 });
const correct = await jsonDb.get("key"); // { value: 123 }

// Solution: Document encoding choices
class UserStore {
  private db: ReturnType<typeof fdb.open>;

  constructor(db: ReturnType<typeof fdb.open>) {
    this.db = db
      .at("users:")
      .withKeyEncoding(fdb.encoders.string)
      .withValueEncoding(fdb.encoders.json);
  }

  async save(id: string, user: any) {
    await this.db.set(id, user);
  }

  async load(id: string) {
    return await this.db.get(id);
  }
}
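
A short usage sketch for the UserStore class above (the user fields are illustrative):

const users = new UserStore(db);

await users.save("alice", { name: "Alice", email: "alice@example.com" });
const alice = await users.load("alice"); // { name: "Alice", email: "alice@example.com" }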

Issue: Lost Updates

// Problem: Not using transactions properly
async function lostUpdate() {
  const value = await db.get("counter");
  const count = parseInt(value?.toString() || "0");
  // Another process might update here!
  await db.set("counter", (count + 1).toString());
}

// Solution: Use transactions
async function safeUpdate() {
  await db.doTransaction(async (tn) => {
    const value = await tn.get("counter");
    const count = parseInt(value?.toString() || "0");
    tn.set("counter", (count + 1).toString());
  });
}

// Better: Use atomic operations
const delta = Buffer.allocUnsafe(8);
delta.writeBigInt64LE(1n, 0);
await db.add("counter", delta);

Issue: Directory Conflicts

import fdb, { directory } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Problem: Racing directory creation
async function racingCreation() {
  try {
    const dir = await directory.create(db, ["myapp", "tenant"]);
  } catch (error) {
    // Second process fails
  }
}

// Solution: Use createOrOpen
async function safeCreation() {
  const dir = await directory.createOrOpen(db, ["myapp", "tenant"]);
  // Works for all processes
}

// Solution: Handle errors gracefully
async function robustCreation() {
  try {
    const dir = await directory.create(db, ["myapp", "tenant"]);
  } catch (error) {
    if (error instanceof Error && error.message.includes("already exists")) {
      const dir = await directory.open(db, ["myapp", "tenant"]);
      return dir;
    }
    throw error;
  }
}

Debugging Tips

Tools and techniques for debugging issues.

Enable Tracing

import fdb from "foundationdb";

// Enable detailed tracing
fdb.configNetwork({
  trace_enable: "./fdb-traces",
  trace_format: "json",
  trace_log_group: "myapp",
});

fdb.setAPIVersion(620);
const db = fdb.open();

// Check traces at ./fdb-traces/*.json
// Look for: errors, warnings, slow_task events

Transaction Debugging

import fdb, { TransactionOptionCode } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

await db.doTransaction(async (tn) => {
  // Enable transaction logging
  tn.setOption(TransactionOptionCode.DebugTransactionIdentifier, "my-tx-123");
  tn.setOption(TransactionOptionCode.LogTransaction);

  // Perform operations
  tn.set("key", "value");
}, {
  debug_transaction_identifier: "test-transaction",
  log_transaction: true,
});

// Check traces for transaction details

Performance Profiling

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

async function profileOperation() {
  const start = Date.now();

  await db.doTransaction(async (tn) => {
    const opStart = Date.now();
    const value = await tn.get("key");
    console.log(`Get took ${Date.now() - opStart}ms`);

    const setStart = Date.now();
    tn.set("key", "value");
    console.log(`Set took ${Date.now() - setStart}ms`);
  });

  console.log(`Total transaction: ${Date.now() - start}ms`);
}

// Monitor transaction sizes
await db.doTransaction(async (tn) => {
  for (let i = 0; i < 1000; i++) {
    tn.set(`key:${i}`, `value:${i}`);

    if (i % 100 === 0) {
      console.log(`Size at ${i}:`, await tn.getApproximateSize());
    }
  }
});

Best Practices

Transaction Management

1. Keep Transactions Short

Minimize transaction duration to reduce conflicts; FoundationDB aborts transactions that run longer than about five seconds with transaction_too_old (1007).

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Bad: Long-running computation in transaction
await db.doTransaction(async (tn) => {
  const data = await tn.get("data");

  // Expensive computation
  const result = await expensiveComputation(data);

  tn.set("result", result);
});

// Good: Compute outside transaction
const data = await db.get("data");
const result = await expensiveComputation(data);

await db.doTransaction(async (tn) => {
  tn.set("result", result);
});

2. Use Atomic Operations for Counters

Avoid read-modify-write patterns for counters.

// Bad: Read-modify-write
await db.doTransaction(async (tn) => {
  const value = await tn.get("counter");
  const count = parseInt(value?.toString() || "0");
  tn.set("counter", (count + 1).toString());
});

// Good: Atomic add
const delta = Buffer.allocUnsafe(8);
delta.writeBigInt64LE(1n, 0);
await db.add("counter", delta);
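
The atomic counter is stored as a little-endian 64-bit integer, so read it back with readBigInt64LE (as in the ShardedCounter example earlier):

const buf = await db.get("counter");
const count = buf ? buf.readBigInt64LE(0) : 0n;
console.log("Counter value:", count);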

3. Use Snapshot Reads When Possible

Reduce conflicts by using snapshot reads for non-critical data.

await db.doTransaction(async (tn) => {
  // Critical read (causes conflicts)
  const critical = await tn.get("critical:data");

  // Non-critical read (no conflicts)
  const metadata = await tn.snapshot().get("metadata");

  // Write based on critical data
  tn.set("result", processData(critical));
});

4. Batch Related Operations

Group related operations in single transactions.

// Bad: Multiple transactions
await db.set("user:alice:name", "Alice");
await db.set("user:alice:email", "alice@example.com");
await db.set("user:alice:age", "30");

// Good: Single transaction
await db.doTransaction(async (tn) => {
  tn.set("user:alice:name", "Alice");
  tn.set("user:alice:email", "alice@example.com");
  tn.set("user:alice:age", "30");
});

5. Handle Large Datasets with Chunking

Split large operations into manageable chunks.

async function processLargeDataset() {
  let startKey = "data:";
  const chunkSize = 1000;

  while (true) {
    const chunk = await db.getRangeAll(
      startKey,
      "data:~",
      { limit: chunkSize }
    );

    if (chunk.length === 0) break;

    await db.doTransaction(async (tn) => {
      for (const [key, value] of chunk) {
        // Process item
        tn.set(key.toString() + ":processed", "true");
      }
    });

    if (chunk.length < chunkSize) break;
    startKey = chunk[chunk.length - 1][0].toString() + "\x00";
  }
}

Key Design Patterns

6. Use Hierarchical Key Structure

Organize keys hierarchically for efficient queries.

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Good: Hierarchical structure
await db.set("app:users:alice:profile", "...");
await db.set("app:users:alice:settings", "...");
await db.set("app:users:bob:profile", "...");
await db.set("app:orders:12345:items", "...");

// Query all user data
const aliceData = await db.getRangeAllStartsWith("app:users:alice:");

// Query all orders
const orders = await db.getRangeAllStartsWith("app:orders:");

7. Use Tuple Encoding for Composite Keys

Leverage tuple encoding for structured keys.

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open().withKeyEncoding(fdb.encoders.tuple);

// Store with composite keys
await db.set(["user", "alice", "profile"], "data");
await db.set(["user", "alice", "settings"], "data");
await db.set(["order", 12345, "items"], "data");

// Query by prefix
const aliceData = await db.getRangeAllStartsWith(["user", "alice"]);
const orders = await db.getRangeAllStartsWith(["order"]);

8. Use Directories for Multi-Tenancy

Isolate tenant data with directory layer.

import fdb, { directory } from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Create tenant directories
const tenant1 = await directory.createOrOpen(db, ["tenants", "acme"]);
const tenant2 = await directory.createOrOpen(db, ["tenants", "techcorp"]);

// Scoped databases
const acmeDb = db.at(tenant1);
const techcorpDb = db.at(tenant2);

// Isolated data
await acmeDb.set("data", "Acme data");
await techcorpDb.set("data", "Techcorp data");
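
To decommission a tenant, the directory layer can remove the directory and everything stored under its prefix. A hedged sketch, assuming the standard directory layer remove method:

// Removes the tenant directory, its contents, and any subdirectories
await directory.remove(db, ["tenants", "acme"]);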

Error Handling and Resilience

9. Implement Retry Logic with Backoff

Handle transient failures gracefully.

async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 5,
  baseDelay = 100
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === maxRetries) throw error;

      const delay = baseDelay * Math.pow(2, attempt);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error("Should not reach here");
}

const result = await withRetry(() => db.get("key"));

10. Monitor Transaction Metrics

Track performance and errors for optimization.

class MetricsCollector {
  private metrics = {
    transactionCount: 0,
    errorCount: 0,
    avgDuration: 0,
  };

  async trackTransaction<T>(
    operation: () => Promise<T>
  ): Promise<T> {
    const start = Date.now();

    try {
      const result = await operation();
      this.metrics.transactionCount++;

      const duration = Date.now() - start;
      this.metrics.avgDuration =
        (this.metrics.avgDuration * (this.metrics.transactionCount - 1) + duration) /
        this.metrics.transactionCount;

      return result;
    } catch (error) {
      this.metrics.errorCount++;
      throw error;
    }
  }

  getMetrics() {
    return { ...this.metrics };
  }
}

const metrics = new MetricsCollector();

await metrics.trackTransaction(() =>
  db.doTransaction(async (tn) => {
    tn.set("key", "value");
  })
);

console.log("Metrics:", metrics.getMetrics());

Resource Management

11. Close Connections Properly

Ensure clean shutdown of database connections.

import fdb from "foundationdb";

fdb.setAPIVersion(620);
const db = fdb.open();

// Use try-finally for cleanup
try {
  await db.set("key", "value");
} finally {
  db.close();
  fdb.stopNetworkSync();
}

// Handle graceful shutdown
process.on("SIGTERM", () => {
  console.log("Shutting down...");
  db.close();
  fdb.stopNetworkSync();
  process.exit(0);
});

process.on("SIGINT", () => {
  console.log("Interrupted, cleaning up...");
  db.close();
  fdb.stopNetworkSync();
  process.exit(0);
});

12. Use Connection Pooling

Reuse database connections across application.

class DatabasePool {
  private static db: ReturnType<typeof fdb.open> | null = null;

  static initialize() {
    if (!this.db) {
      fdb.setAPIVersion(620);
      this.db = fdb.open();
      this.db.setNativeOptions({
        transaction_timeout: 10000,
        max_watches: 20000,
      });
    }
  }

  static getDatabase() {
    if (!this.db) {
      throw new Error("Database not initialized");
    }
    return this.db;
  }

  static shutdown() {
    if (this.db) {
      this.db.close();
      fdb.stopNetworkSync();
      this.db = null;
    }
  }
}

// Initialize once at startup
DatabasePool.initialize();

// Use throughout application
const db = DatabasePool.getDatabase();
await db.set("key", "value");

// Shutdown on exit
process.on("exit", () => {
  DatabasePool.shutdown();
});