tessl/npm-openai

The official TypeScript library for the OpenAI API

docs/client-configuration.md

Client Configuration

Client configuration in the OpenAI Node.js SDK covers initialization options for the OpenAI and AzureOpenAI clients, per-request customization through RequestOptions, and environment variable setup for both standard and Azure deployments.

Overview

The SDK provides flexible configuration at multiple levels:

  1. Client-level configuration - Set when instantiating OpenAI or AzureOpenAI
  2. Request-level configuration - Override per-request via RequestOptions
  3. Environment variables - Automatic discovery of credentials and settings

This approach allows you to balance convenience (environment variables for simple scripts) with flexibility (request-level overrides for complex applications).

OpenAI Client

Constructor

class OpenAI {
  constructor(options?: ClientOptions);
}

The main API client for standard OpenAI services.

Basic usage:

import { OpenAI } from 'openai';

const client = new OpenAI({
  apiKey: 'sk-...',
});

withOptions Method

withOptions(options: Partial<ClientOptions>): this;

Creates a new client instance that reuses the options given to the current client, applying any overrides you pass. This is useful for deriving clients with different configurations while preserving most settings from the parent client.

Usage:

import { OpenAI } from 'openai';

// Base client with default settings
const baseClient = new OpenAI({
  apiKey: 'sk-...',
  organization: 'org-...',
  timeout: 60000,
});

// Create a new client with modified timeout, keeping other settings
const slowClient = baseClient.withOptions({
  timeout: 120000,
});

// Create a new client with different organization
const otherOrgClient = baseClient.withOptions({
  organization: 'org-other',
});

Common use cases:

  • Per-tenant clients: Create clients with different organization/project IDs
  • Variable timeouts: Create clients with different timeout settings for long-running operations
  • Custom headers: Add request-specific headers while preserving base configuration
  • Testing: Create test clients with modified settings without duplicating configuration

ClientOptions Interface

/**
 * API key setter function that returns a promise resolving to a string token.
 * Invoked before each request to allow dynamic key rotation or refresh.
 */
type ApiKeySetter = () => Promise<string>;

interface ClientOptions {
  apiKey?: string | ApiKeySetter | undefined;
  organization?: string | null | undefined;
  project?: string | null | undefined;
  baseURL?: string | null | undefined;
  timeout?: number | undefined;
  maxRetries?: number | undefined;
  defaultHeaders?: HeadersLike | undefined;
  defaultQuery?: Record<string, string | undefined> | undefined;
  dangerouslyAllowBrowser?: boolean | undefined;
  fetchOptions?: MergedRequestInit | undefined;
  fetch?: Fetch | undefined;
  logLevel?: LogLevel | undefined;
  logger?: Logger | undefined;
  webhookSecret?: string | null | undefined;
}

Configuration Options

apiKey (string | ApiKeySetter)

API key for authentication. Can be a static string or an async function that returns a token.

  • Defaults to process.env['OPENAI_API_KEY']
  • Function form enables runtime credential rotation
  • The function must return a non-empty string; otherwise an OpenAIError is thrown
  • When the function throws, the error is wrapped in an OpenAIError with the original as cause

// Static string
const client = new OpenAI({ apiKey: 'sk-...' });

// Dynamic token provider
const dynamicClient = new OpenAI({
  apiKey: async () => {
    const token = await getAccessToken();
    return token;
  }
});

organization (string | null)

OpenAI organization ID for API requests.

  • Defaults to process.env['OPENAI_ORG_ID']
  • Set to null to explicitly disable
  • Sent in OpenAI-Organization header

project (string | null)

OpenAI project ID for API requests.

  • Defaults to process.env['OPENAI_PROJECT_ID']
  • Set to null to explicitly disable
  • Sent in OpenAI-Project header

baseURL (string)

Override the default base URL for API requests.

  • Defaults to process.env['OPENAI_BASE_URL'] or https://api.openai.com/v1
  • Must be a valid HTTPS URL
  • Useful for proxies, custom deployments, or testing

timeout (number)

Maximum time in milliseconds to wait for a response.

  • Defaults to 600000 (10 minutes)
  • Requests that fail with retryable errors (408, 429, 5xx) are retried up to maxRetries times
  • Because each attempt gets its own timeout, the total wait can exceed the configured timeout in the worst case
  • Per-request override available via RequestOptions

maxRetries (number)

Maximum number of retry attempts for transient failures.

  • Defaults to 2
  • Applies to: timeouts (408), conflicts (409), rate limits (429), server errors (5xx)
  • Uses exponential backoff with jitter (0.5s to 8s)
  • Per-request override available via RequestOptions
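
As a rough illustration of the backoff schedule described above (a sketch of exponential backoff with jitter, not the SDK's exact internals):

```typescript
// Illustrative exponential backoff with jitter: the delay doubles per
// attempt starting from 0.5s, is capped at 8s, and a random fraction is
// subtracted so concurrent clients don't retry in lockstep.
function retryDelayMs(attempt: number): number {
  const base = 500 * Math.pow(2, attempt); // 500ms, 1s, 2s, 4s, ...
  const capped = Math.min(base, 8000);     // never exceed 8s
  const jitter = 1 - Math.random() * 0.25; // up to 25% reduction
  return capped * jitter;
}
```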

defaultHeaders (HeadersLike)

Default HTTP headers sent with every request.

  • Applied to all requests unless overridden per-request
  • Set header to null in request options to remove
  • Useful for custom headers (tracing, correlation IDs, etc.)

defaultQuery (Record<string, string | undefined>)

Default query parameters added to all requests.

  • Applied to all requests unless overridden per-request
  • Set parameter to undefined in request options to remove
  • Useful for API keys in query strings or experiment flags
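
The merge semantics described above can be sketched as a plain function (illustrative only, not the SDK's internal code):

```typescript
// Defaults apply to every request; a per-request value wins for the same
// key, and a key explicitly set to undefined is removed entirely.
function mergeQuery(
  defaults: Record<string, string | undefined>,
  perRequest: Record<string, string | undefined>,
): Record<string, string> {
  const merged = { ...defaults, ...perRequest };
  const result: Record<string, string> = {};
  for (const [key, value] of Object.entries(merged)) {
    if (value !== undefined) result[key] = value;
  }
  return result;
}
```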

dangerouslyAllowBrowser (boolean)

Enable browser execution of the client.

  • Defaults to false
  • Disabled to protect against accidental credential exposure
  • Only set true if credentials are properly protected (e.g., API gateway, proxy)
  • Passing azureADTokenProvider to AzureOpenAI enables this option automatically

fetchOptions (MergedRequestInit)

Additional options passed to fetch calls.

  • Common use cases: custom Agent, SSL certificates, keepalive settings
  • Properties overridden by per-request fetchOptions
  • Platform-specific options may not be available in all environments

fetch (Fetch)

Custom fetch implementation.

  • Defaults to platform-native fetch (Node.js, browser, Cloudflare)
  • Use for: testing, custom HTTP handling, request/response interception
  • Must be compatible with standard Fetch API
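
For example, a wrapper that logs each request before delegating to any Fetch-compatible implementation (a sketch; the wrapper is what you would pass as the fetch option):

```typescript
// Wrap a Fetch-compatible function so each request is recorded before
// being delegated unchanged to the underlying implementation.
function withLogging(baseFetch: typeof fetch): typeof fetch {
  return async (input, init) => {
    console.log(`${init?.method ?? 'GET'} ${String(input)}`);
    return baseFetch(input, init);
  };
}
```

A client could then be constructed with `new OpenAI({ fetch: withLogging(fetch) })`.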

logLevel (LogLevel)

Control logging verbosity.

type LogLevel = 'off' | 'error' | 'warn' | 'info' | 'debug';
  • Defaults to process.env['OPENAI_LOG'] or 'off'
  • Useful for debugging request/response issues
  • Filters log messages by level: 'off' disables logging, 'error' only shows errors, etc.

logger (Logger)

Custom logger implementation.

interface Logger {
  error: (message: string, ...rest: unknown[]) => void;
  warn: (message: string, ...rest: unknown[]) => void;
  info: (message: string, ...rest: unknown[]) => void;
  debug: (message: string, ...rest: unknown[]) => void;
}
  • Defaults to globalThis.console
  • Implement interface with debug(), info(), warn(), error() methods
  • Use with logLevel to control verbosity
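
A minimal implementation that captures messages in memory (handy in tests; pass it as the logger option alongside logLevel):

```typescript
// Collects log lines instead of printing them; each entry is prefixed
// with its level so output can be filtered or asserted on later.
const logLines: string[] = [];
const memoryLogger = {
  error: (message: string, ...rest: unknown[]) => { logLines.push(`[error] ${message}`); },
  warn:  (message: string, ...rest: unknown[]) => { logLines.push(`[warn] ${message}`); },
  info:  (message: string, ...rest: unknown[]) => { logLines.push(`[info] ${message}`); },
  debug: (message: string, ...rest: unknown[]) => { logLines.push(`[debug] ${message}`); },
};

memoryLogger.info('request started');
```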

webhookSecret (string)

Secret for verifying webhook signatures.

  • Defaults to process.env['OPENAI_WEBHOOK_SECRET']
  • Used with client.webhooks.verifySignature()

AzureOpenAI Client

Constructor

class AzureOpenAI extends OpenAI {
  constructor(options?: AzureClientOptions);
}

Azure-specific OpenAI client with Azure authentication and deployment support.

Basic usage:

import { AzureOpenAI } from 'openai';

const client = new AzureOpenAI({
  endpoint: 'https://my-resource.openai.azure.com/',
  apiKey: 'your-azure-api-key',
  apiVersion: '2024-08-01-preview',
});

AzureClientOptions Interface

interface AzureClientOptions extends ClientOptions {
  apiVersion?: string | undefined;
  endpoint?: string | undefined;
  deployment?: string | undefined;
  apiKey?: string | undefined;
  azureADTokenProvider?: (() => Promise<string>) | undefined;
}

Azure-Specific Options

apiVersion (string)

Azure OpenAI API version for requests.

  • Defaults to process.env['OPENAI_API_VERSION']
  • Required - throws OpenAIError if missing
  • Format typically: '2024-08-01-preview'
  • Sent as api-version query parameter

endpoint (string)

Azure OpenAI endpoint URL including resource name.

  • Format: https://{resource-name}.openai.azure.com/
  • Defaults to process.env['AZURE_OPENAI_ENDPOINT']
  • Mutually exclusive with baseURL
  • Automatically appends /openai to construct base URL
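
The base-URL derivation described above amounts to roughly the following (illustrative, not the SDK's exact code):

```typescript
// Strip any trailing slash from the endpoint, then append /openai.
function azureBaseURL(endpoint: string): string {
  return `${endpoint.replace(/\/+$/, '')}/openai`;
}
```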

deployment (string)

Model deployment name for Azure.

  • If provided, constructs paths as /deployments/{deployment}
  • If omitted, each request's model parameter is used as the deployment name
  • Cannot be used with Assistants APIs
  • Simplifies requests to single deployment

apiKey (string)

Azure API key for authentication.

  • Defaults to process.env['AZURE_OPENAI_API_KEY']
  • Mutually exclusive with azureADTokenProvider
  • Sent in api-key header

azureADTokenProvider (() => Promise<string>)

Function providing Microsoft Entra access tokens.

  • Called before each request
  • Enables: managed identity, service principal, user credentials
  • Mutually exclusive with apiKey
  • Automatically enables dangerouslyAllowBrowser

Example with Microsoft Entra:

import { AzureOpenAI } from 'openai';
import { DefaultAzureCredential } from '@azure/identity';

const credential = new DefaultAzureCredential();

const client = new AzureOpenAI({
  endpoint: 'https://my-resource.openai.azure.com/',
  apiVersion: '2024-08-01-preview',
  azureADTokenProvider: async () => {
    const token = await credential.getToken('https://cognitiveservices.azure.com/.default');
    return token.token;
  }
});

RequestOptions

Per-request configuration for individual API calls.

type RequestOptions = {
  method?: HTTPMethod;
  path?: string;
  query?: object | undefined | null;
  body?: unknown;
  headers?: HeadersLike;
  maxRetries?: number;
  timeout?: number;
  fetchOptions?: MergedRequestInit;
  signal?: AbortSignal | undefined | null;
  idempotencyKey?: string;
  defaultBaseURL?: string | undefined;
  stream?: boolean | undefined;
}

Common Per-Request Options

headers (HeadersLike)

Request-specific HTTP headers.

  • Merged with defaultHeaders from client config
  • Set to null to remove default header
  • Overrides defaults for same header name

maxRetries (number)

Retry count for this specific request.

  • Overrides client-level maxRetries
  • Useful for one-off requests needing different retry behavior

timeout (number)

Timeout in milliseconds for this request.

  • Overrides client-level timeout
  • Per-request timeouts enable fine-grained control

signal (AbortSignal)

Abort signal for canceling request.

  • Passed to fetch call
  • Automatically canceled on client-level timeout
  • Enables: CancellationToken pattern, request cancellation UI

const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000);

try {
  const response = await client.chat.completions.create({
    model: 'gpt-4',
    messages: [...],
  }, { signal: controller.signal });
} finally {
  clearTimeout(timeout);
}

idempotencyKey (string)

Unique key for idempotency tracking.

  • Sent with the request so the server can deduplicate retried POST requests
  • Reusing the same key across retries prevents duplicate side effects
  • Format: any unique string (UUID recommended)
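
A simple way to produce such keys (the prefix and helper name here are illustrative, not part of the SDK):

```typescript
import { randomUUID } from 'node:crypto';

// One key per logical operation; reuse the same key when retrying that
// operation so the server can recognize and deduplicate it.
function makeIdempotencyKey(prefix: string): string {
  return `${prefix}-${randomUUID()}`;
}
```

It would then be passed per request, e.g. as `{ idempotencyKey: makeIdempotencyKey('create-run') }` in the request options.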

Environment Variables

The SDK automatically reads these environment variables if not provided in options:

OPENAI_API_KEY (required for standard OpenAI)

API key for OpenAI platform.

export OPENAI_API_KEY='sk-...'

OPENAI_ORG_ID (optional)

OpenAI organization ID.

export OPENAI_ORG_ID='org-...'

OPENAI_PROJECT_ID (optional)

OpenAI project ID.

export OPENAI_PROJECT_ID='proj_...'

OPENAI_BASE_URL (optional)

Custom base URL for requests.

export OPENAI_BASE_URL='https://api.example.com/v1'

OPENAI_LOG (optional)

Logging level: 'off', 'error', 'warn', 'info', or 'debug'.

export OPENAI_LOG='debug'

OPENAI_WEBHOOK_SECRET (optional)

Secret for webhook signature verification.

export OPENAI_WEBHOOK_SECRET='whsec_...'

Azure-specific environment variables:

  • AZURE_OPENAI_API_KEY - Azure API key
  • AZURE_OPENAI_ENDPOINT - Azure endpoint URL
  • OPENAI_API_VERSION - Azure API version

Configuration Examples

Basic Setup

import { OpenAI } from 'openai';

// Uses OPENAI_API_KEY from environment
const client = new OpenAI();

// Explicit API key
const explicitClient = new OpenAI({
  apiKey: 'sk-...',
});

Azure Setup

import { AzureOpenAI } from 'openai';

// Using API key
const client = new AzureOpenAI({
  endpoint: 'https://my-resource.openai.azure.com/',
  apiKey: 'your-key',
  apiVersion: '2024-08-01-preview',
});

// Scoped to a single deployment
const deploymentClient = new AzureOpenAI({
  endpoint: 'https://my-resource.openai.azure.com/',
  apiKey: 'your-key',
  apiVersion: '2024-08-01-preview',
  deployment: 'gpt-4-deployment',
});

Custom Base URL

import { OpenAI } from 'openai';

// Use proxy or custom API
const client = new OpenAI({
  apiKey: 'sk-...',
  baseURL: 'https://proxy.example.com/openai/v1',
});

// Per-request override
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [...],
}, {
  defaultBaseURL: 'https://alternate-proxy.example.com/v1',
});

Proxy Configuration

import { OpenAI } from 'openai';
import { HttpsProxyAgent } from 'https-proxy-agent';

const httpsAgent = new HttpsProxyAgent('http://proxy.example.com:8080');

const client = new OpenAI({
  apiKey: 'sk-...',
  fetchOptions: {
    // Node.js agent-based proxying; whether `agent` is honored depends on
    // the fetch implementation in use
    agent: httpsAgent,
  },
});

Timeouts and Retries

import { OpenAI } from 'openai';

// Custom timeout and retries
const client = new OpenAI({
  apiKey: 'sk-...',
  timeout: 30000,      // 30 seconds
  maxRetries: 3,       // Retry up to 3 times
});

// Per-request override
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [...],
}, {
  timeout: 60000,      // 60 seconds for this request
  maxRetries: 1,       // Only 1 retry for this request
});

Default Headers

import { OpenAI } from 'openai';

const client = new OpenAI({
  apiKey: 'sk-...',
  defaultHeaders: {
    'X-Custom-Header': 'value',
    'X-Correlation-ID': generateTraceId(),
  },
});

// Remove default header for specific request
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [...],
}, {
  headers: {
    'X-Custom-Header': null,  // Remove this header
  },
});

Browser Usage

import { OpenAI } from 'openai';

// Enable browser usage (only with appropriate security measures)
const client = new OpenAI({
  apiKey: 'sk-...',
  dangerouslyAllowBrowser: true,
});

// WARNING: This exposes your API key to the client browser.
// Only use with:
// - Proxies that hide the real API key
// - API gateways that enforce auth
// - Time-limited tokens

Per-Request Options

import { OpenAI } from 'openai';

const client = new OpenAI({ apiKey: 'sk-...' });

// Override headers, timeout, retries per request
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }],
}, {
  headers: {
    'X-Request-ID': generateRequestId(),
  },
  timeout: 120000,
  maxRetries: 0,  // Don't retry this request
  signal: abortController.signal,
});

Dynamic API Keys

import { OpenAI } from 'openai';

async function getAccessToken() {
  // Fetch from your token service
  return 'sk-...';
}

const client = new OpenAI({
  apiKey: getAccessToken,  // Function called before each request
});

// Keys automatically refreshed on each request
const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [...],
});

API Key Management Best Practices

Security

  1. Never commit API keys to version control

    • Use .gitignore for .env files
    • Use environment variables or secrets management
  2. Protect keys in browser environments

    • Only use dangerouslyAllowBrowser: true with backend proxy
    • Never expose keys directly to clients
    • Implement server-side authentication layer
  3. Rotate credentials regularly

    • Use dynamic token providers for service accounts
    • Implement key rotation schedules
    • Monitor key usage and disable unused keys

Environment Setup

# Development (.env file - add to .gitignore)
OPENAI_API_KEY=sk-...
OPENAI_ORG_ID=org-...

# Production (use secrets management)
# AWS: AWS Secrets Manager, Parameter Store
# Azure: Key Vault
# Google Cloud: Secret Manager
# GitHub: Secrets (for CI/CD)

Secrets Management Example

import { OpenAI } from 'openai';
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

async function initializeClient() {
  const secretsClient = new SecretsManagerClient({ region: 'us-east-1' });

  const response = await secretsClient.send(
    new GetSecretValueCommand({ SecretId: 'openai-api-key' })
  );

  const apiKey = response.SecretString;
  if (!apiKey) {
    throw new Error('Secret openai-api-key has no string value');
  }

  return new OpenAI({ apiKey });
}

const client = await initializeClient();

Monitoring and Auditing

  1. Log all API requests (without exposing keys)
  2. Monitor quota and usage
  3. Set rate limits and alarms
  4. Audit key access in organization

import { OpenAI } from 'openai';

const client = new OpenAI({
  apiKey: 'sk-...',
  logLevel: 'info',  // Log requests for audit trail
});

Organization and Project Scoping

Use organization and project IDs to scope API access:

const client = new OpenAI({
  apiKey: 'sk-...',
  organization: 'org-...',  // Restrict to organization
  project: 'proj_...',      // Further restrict to project
});

This enables:

  • Cost allocation per project
  • Access control per team
  • Audit logs per project
  • Quota management

Related Documentation

  • ChatCompletions - Per-request parameters
  • Assistants - Specialized configuration
  • Realtime - WebSocket-specific options
