
# OpenAI Node.js Library

The official TypeScript/JavaScript client library for interacting with the OpenAI API. This library provides comprehensive access to OpenAI's AI models including GPT-4, GPT-3.5, DALL-E, Whisper, and more, with full TypeScript support, streaming capabilities, and compatibility with both Node.js and edge runtimes.

## Package Information

- **Package Name**: openai
- **Package Type**: npm
- **Language**: TypeScript
- **Installation**: `npm install openai`
- **Repository**: https://github.com/openai/openai-node
- **Documentation**: https://platform.openai.com/docs

## Core Imports

```typescript
import OpenAI from "openai";
```

For CommonJS:

```javascript
const OpenAI = require("openai");
```

Named imports for specific functionality:

```typescript
import OpenAI, { AzureOpenAI, toFile } from "openai";
```

## Basic Usage

```typescript
import OpenAI from "openai";

// Initialize client
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // This is the default and can be omitted
});

// Create a chat completion
const completion = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Say hello!" }],
});

console.log(completion.choices[0].message.content);
```

## Architecture

The OpenAI library is organized around several key components:

- **Client Classes**: `OpenAI` (main client) and `AzureOpenAI` (Azure-specific client)
- **Resource Hierarchy**: API organized by resource (chat, images, audio, etc.) with sub-resources where appropriate
- **Type Safety**: Comprehensive TypeScript types for all operations, parameters, and responses
- **Streaming Support**: First-class streaming via Server-Sent Events for real-time responses
- **Pagination**: Async iteration support for all list operations
- **Error Handling**: Granular error classes for different HTTP status codes
- **Request Configuration**: Flexible per-request options (headers, timeouts, retries)

## Capabilities

### Text Completions (Legacy)

The legacy text completions endpoint for non-chat use cases. This API is deprecated in favor of Chat Completions for new projects.

```typescript { .api }
function completions.create(
  params: CompletionCreateParams
): Promise<Completion> | Stream<Completion>;

interface CompletionCreateParams {
  model: string;
  prompt: string | string[];
  max_tokens?: number;
  temperature?: number;
  stream?: boolean;
  // ... additional parameters
}
```

**Note:** This API is deprecated. Use [Chat Completions](./chat-completions.md) for all new projects. The Chat Completions API provides better performance, supports conversations, and includes newer features like function calling.

### Chat Completions

The standard interface for conversational AI using GPT models. Supports function calling, vision, streaming, and stored completions.

```typescript { .api }
// Create completion
function create(
  params: ChatCompletionCreateParams
): Promise<ChatCompletion> | Stream<ChatCompletionChunk>;

// Type-safe parsing with JSON schemas or Zod
function parse<Params extends ChatCompletionParseParams, ParsedT>(
  params: Params
): Promise<ParsedChatCompletion<ParsedT>>;

// Streaming helper
function stream<Params extends ChatCompletionCreateParamsStreaming, ParsedT>(
  body: Params,
  options?: RequestOptions
): ChatCompletionStream<ParsedT>;

// Tool calling automation
function runTools<Params extends ChatCompletionToolRunnerParams<any>, ParsedT>(
  body: Params,
  options?: RunnerOptions
): ChatCompletionRunner<ParsedT> | ChatCompletionStreamingRunner<ParsedT>;

// Stored completions management
function retrieve(completionID: string): Promise<ChatCompletion>;
function update(
  completionID: string,
  params: ChatCompletionUpdateParams
): Promise<ChatCompletion>;
function list(params?: ChatCompletionListParams): Promise<ChatCompletionsPage>;
function delete(completionID: string): Promise<ChatCompletionDeleted>;

// Stored completion messages
function messages.list(
  completionID: string,
  params?: MessageListParams
): Promise<ChatCompletionStoreMessagesPage>;

interface MessageListParams {
  order?: 'asc' | 'desc'; // Sort by timestamp
  limit?: number;
  after?: string;
  before?: string;
}

interface ChatCompletionCreateParams {
  model: string;
  messages: ChatCompletionMessageParam[];
  stream?: boolean;
  temperature?: number;
  max_tokens?: number;
  tools?: ChatCompletionTool[];
  store?: boolean; // Set to true to enable retrieval later
  // ... additional parameters
}
```
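A minimal sketch of the `stream()` helper, assuming a `client` initialized as in Basic Usage; the `"content"` event name and the `finalChatCompletion()` accumulator are SDK helper conventions, so check [Chat Completions](./chat-completions.md) for the authoritative surface.

```typescript
// Event-based streaming with the stream() helper (sketch)
const runner = client.chat.completions.stream({
  model: "gpt-4",
  messages: [{ role: "user", content: "Write a haiku about TypeScript." }],
});

// Print content deltas as they arrive
runner.on("content", (delta) => process.stdout.write(delta));

// Wait for the fully accumulated completion
const completion = await runner.finalChatCompletion();
console.log(completion.choices[0].message.content);
```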

[Chat Completions](./chat-completions.md)

### Responses API

OpenAI's primary interface for multi-turn conversations with advanced features including persistent sessions, tool use, and computer/web capabilities.

```typescript { .api }
// Response management
function create(
  params: ResponseCreateParams
): Promise<Response> | Stream<ResponseStreamEvent>;

function retrieve(responseID: string): Promise<Response>;

function cancel(responseID: string): Promise<Response>;

function delete(responseID: string): Promise<void>;

// Type-safe parsing helper
function parse<Params extends ResponseParseParams, ParsedT>(
  body: Params
): Promise<ParsedResponse<ParsedT>>;

// Streaming helper
function stream<Params extends ResponseCreateParamsStreaming, ParsedT>(
  body: Params,
  options?: RequestOptions
): ResponseStream<ParsedT>;

// Input items - list input items for a response
function inputItems.list(responseID: string, params?: InputItemListParams): Promise<ResponseItemsPage>;

// Input tokens - count input tokens before creating a response
function inputTokens.count(params?: InputTokenCountParams): Promise<InputTokenCountResponse>;

interface ResponseCreateParams {
  model: string;
  input: ResponseInput;
  instructions?: string;
  tools?: Tool[];
  // ... additional parameters
}
```
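A minimal usage sketch, assuming a `client` initialized as in Basic Usage; the model name is an assumption, and `output_text` is a convenience accessor on the SDK's `Response` object.

```typescript
// Create a response from a simple string input (sketch)
const response = await client.responses.create({
  model: "gpt-4o", // assumed model name; use any Responses-capable model
  instructions: "You are a concise assistant.",
  input: "Summarize what the Responses API does in one sentence.",
});

// output_text aggregates the text output items for convenience
console.log(response.output_text);
```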

[Responses API](./responses-api.md)

### Embeddings

Generate vector embeddings from text for semantic search, clustering, and recommendations.

```typescript { .api }
function create(
  params: EmbeddingCreateParams
): Promise<CreateEmbeddingResponse>;

interface EmbeddingCreateParams {
  model: string;
  input: string | string[];
  encoding_format?: "float" | "base64";
}
```
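A minimal usage sketch, assuming a `client` initialized as in Basic Usage; the embedding model name is an assumption.

```typescript
// Embed a batch of strings and read back the vectors (sketch)
const result = await client.embeddings.create({
  model: "text-embedding-3-small", // assumed model name
  input: ["red shoes", "blue sneakers"],
});

for (const item of result.data) {
  console.log(item.index, item.embedding.length); // embedding is a number[]
}
```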

[Embeddings](./embeddings.md)

### Files and Uploads

File management for fine-tuning, assistants, and other API features. Supports both simple and multipart uploads.

```typescript { .api }
// Files API
function create(params: FileCreateParams): Promise<FileObject>;
function retrieve(fileID: string): Promise<FileObject>;
function list(params?: FileListParams): Promise<FileObjectsPage>;
function delete(fileID: string): Promise<FileDeleted>;
function content(fileID: string): Promise<Response>;

// Wait for file processing to complete
function waitForProcessing(
  id: string,
  options?: { pollInterval?: number; maxWait?: number }
): Promise<FileObject>;

// Uploads API for large files (multipart upload)
function uploads.create(params: UploadCreateParams): Promise<Upload>;
function uploads.cancel(uploadID: string): Promise<Upload>;
function uploads.complete(uploadID: string, params: UploadCompleteParams): Promise<Upload>;
function uploads.parts.create(uploadID: string, params: PartCreateParams): Promise<UploadPart>;
```
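A minimal upload sketch, assuming a `client` initialized as in Basic Usage; the local path is hypothetical.

```typescript
import fs from "node:fs";

// Upload a JSONL file for fine-tuning, then wait until processing finishes (sketch)
const file = await client.files.create({
  file: fs.createReadStream("training.jsonl"), // hypothetical local path
  purpose: "fine-tune",
});

const ready = await client.files.waitForProcessing(file.id);
console.log(ready.id, ready.status);
```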

[Files and Uploads](./files-uploads.md)

### Images

Generate, edit, and create variations of images using DALL-E models.

```typescript { .api }
function generate(params: ImageGenerateParams): Promise<ImagesResponse>;
function edit(params: ImageEditParams): Promise<ImagesResponse>;
function createVariation(
  params: ImageCreateVariationParams
): Promise<ImagesResponse>;
```
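A minimal generation sketch, assuming a `client` initialized as in Basic Usage; the model name is an assumption.

```typescript
// Generate a single image and read its URL (sketch)
const images = await client.images.generate({
  model: "dall-e-3", // assumed model name
  prompt: "A watercolor fox in a snowy forest",
  n: 1,
  size: "1024x1024",
});

console.log(images.data?.[0]?.url); // URL or base64 payload, depending on response_format
```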

[Images](./images.md)

### Audio

Speech-to-text transcription, translation, and text-to-speech generation.

```typescript { .api }
// Text-to-speech
function speech.create(params: SpeechCreateParams): Promise<Response>;

// Speech-to-text transcription
function transcriptions.create(
  params: TranscriptionCreateParams
): Promise<Transcription>;

// Translation to English
function translations.create(
  params: TranslationCreateParams
): Promise<Translation>;
```
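A minimal round-trip sketch, assuming a `client` initialized as in Basic Usage; the local path and model names are assumptions.

```typescript
import fs from "node:fs";

// Transcribe a local audio file (sketch)
const transcription = await client.audio.transcriptions.create({
  file: fs.createReadStream("meeting.mp3"), // hypothetical local path
  model: "whisper-1",
});
console.log(transcription.text);

// Synthesize speech and write the binary response to disk
const speech = await client.audio.speech.create({
  model: "tts-1", // assumed model name
  voice: "alloy",
  input: transcription.text,
});
fs.writeFileSync("reply.mp3", Buffer.from(await speech.arrayBuffer()));
```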

[Audio](./audio.md)

### Assistants API

Build AI assistants with persistent threads, code interpreter, file search, and function calling capabilities. The SDK provides complete CRUD operations for assistants, threads, messages, and runs, with streaming support.

```typescript { .api }
// Assistant management
function beta.assistants.create(params: AssistantCreateParams): Promise<Assistant>;
function beta.assistants.retrieve(assistantID: string): Promise<Assistant>;
function beta.assistants.update(assistantID: string, params: AssistantUpdateParams): Promise<Assistant>;
function beta.assistants.list(query?: AssistantListParams): PagePromise<AssistantsPage, Assistant>;
function beta.assistants.delete(assistantID: string): Promise<AssistantDeleted>;

// Thread management
function beta.threads.create(params?: ThreadCreateParams): Promise<Thread>;
function beta.threads.retrieve(threadID: string): Promise<Thread>;
function beta.threads.update(threadID: string, params: ThreadUpdateParams): Promise<Thread>;
function beta.threads.delete(threadID: string): Promise<ThreadDeleted>;
function beta.threads.createAndRun(params: ThreadCreateAndRunParams): Promise<Run | Stream<AssistantStreamEvent>>;

// Message management
function beta.threads.messages.create(threadID: string, params: MessageCreateParams): Promise<Message>;
function beta.threads.messages.retrieve(threadID: string, messageID: string): Promise<Message>;
function beta.threads.messages.update(threadID: string, messageID: string, params: MessageUpdateParams): Promise<Message>;
function beta.threads.messages.list(threadID: string, params?: MessageListParams): PagePromise<MessagesPage, Message>;
function beta.threads.messages.delete(threadID: string, messageID: string): Promise<MessageDeleted>;

// Run management
function beta.threads.runs.create(threadID: string, params: RunCreateParams): Promise<Run | Stream<AssistantStreamEvent>>;
function beta.threads.runs.retrieve(threadID: string, runID: string): Promise<Run>;
function beta.threads.runs.update(threadID: string, runID: string, params: RunUpdateParams): Promise<Run>;
function beta.threads.runs.list(threadID: string, params?: RunListParams): PagePromise<RunsPage, Run>;
function beta.threads.runs.cancel(threadID: string, runID: string): Promise<Run>;
function beta.threads.runs.submitToolOutputs(threadID: string, runID: string, params: RunSubmitToolOutputsParams): Promise<Run | Stream<AssistantStreamEvent>>;

// Run steps
function beta.threads.runs.steps.retrieve(threadID: string, runID: string, stepID: string): Promise<RunStep>;
function beta.threads.runs.steps.list(threadID: string, runID: string, params?: StepListParams): PagePromise<RunStepsPage, RunStep>;
```
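A minimal end-to-end sketch, assuming a `client` initialized as in Basic Usage; the model name is an assumption, and `createAndPoll` is the polling helper described under Polling Helpers below.

```typescript
// Ask an assistant a question on a fresh thread and poll the run to completion (sketch)
const assistant = await client.beta.assistants.create({
  model: "gpt-4o", // assumed model name
  instructions: "You are a helpful math tutor.",
});

const thread = await client.beta.threads.create();
await client.beta.threads.messages.create(thread.id, {
  role: "user",
  content: "What is the derivative of x^2?",
});

const run = await client.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistant.id,
});

if (run.status === "completed") {
  const messages = await client.beta.threads.messages.list(thread.id);
  console.log(messages.data[0].content);
}
```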

[Assistants API](./assistants.md)

### ChatKit (Beta)

Build conversational interfaces with OpenAI's ChatKit framework. ChatKit provides session management and thread-based conversations with workflow support.

```typescript { .api }
// Create ChatKit session
function beta.chatkit.sessions.create(
  params: SessionCreateParams
): Promise<ChatSession>;

// Cancel session
function beta.chatkit.sessions.cancel(sessionID: string): Promise<ChatSession>;

// Manage threads
function beta.chatkit.threads.retrieve(threadID: string): Promise<ChatKitThread>;
function beta.chatkit.threads.list(
  params?: ThreadListParams
): Promise<ChatKitThreadsPage>;
function beta.chatkit.threads.delete(
  threadID: string
): Promise<ThreadDeleteResponse>;
function beta.chatkit.threads.listItems(
  threadID: string,
  params?: ThreadListItemsParams
): Promise<ChatKitThreadItemListDataPage>;

interface SessionCreateParams {
  user: string;
  workflow: ChatSessionWorkflowParam;
  configuration?: ChatSessionChatKitConfigurationParam;
  expires_after?: ChatSessionExpiresAfterParam;
  rate_limits?: ChatSessionRateLimitsParam;
}
```

**Note:** ChatKit is a beta feature. The API may change. Threads are created implicitly through the session workflow; there is no direct `threads.create()` method. See [Assistants API](./assistants.md) for related thread and message management.

### Realtime API

WebSocket-based real-time voice conversations with low latency and streaming audio. Includes SIP call management, client secret creation, and beta session management.

```typescript { .api }
// Client secrets for WebSocket auth
function realtime.clientSecrets.create(
  params: ClientSecretCreateParams
): Promise<ClientSecretCreateResponse>;

// SIP call management
function realtime.calls.accept(callID: string, params: CallAcceptParams): Promise<void>;
function realtime.calls.hangup(callID: string): Promise<void>;
function realtime.calls.refer(callID: string, params: CallReferParams): Promise<void>;
function realtime.calls.reject(callID: string, params: CallRejectParams): Promise<void>;

// Beta: Ephemeral session management
function beta.realtime.sessions.create(params: SessionCreateParams): Promise<SessionCreateResponse>;
function beta.realtime.transcriptionSessions.create(params: TranscriptionSessionCreateParams): Promise<TranscriptionSession>;

// WebSocket clients
import { OpenAIRealtimeWebSocket } from "openai/realtime/websocket"; // Browser
import { OpenAIRealtimeWS } from "openai/realtime/ws"; // Node.js with 'ws'
```

[Realtime API](./realtime.md)

### Fine-tuning

Create and manage fine-tuning jobs to customize models on your own data. Supports supervised learning, DPO, and reinforcement learning with checkpoint and permission management.

```typescript { .api }
// Job management
function fineTuning.jobs.create(params: FineTuningJobCreateParams): Promise<FineTuningJob>;
function fineTuning.jobs.retrieve(jobID: string): Promise<FineTuningJob>;
function fineTuning.jobs.list(params?: FineTuningJobListParams): Promise<FineTuningJobsPage>;
function fineTuning.jobs.cancel(jobID: string): Promise<FineTuningJob>;
function fineTuning.jobs.pause(jobID: string): Promise<FineTuningJob>;
function fineTuning.jobs.resume(jobID: string): Promise<FineTuningJob>;
function fineTuning.jobs.listEvents(jobID: string, params?: JobEventListParams): Promise<FineTuningJobEventsPage>;

// Checkpoint access
function fineTuning.jobs.checkpoints.list(jobID: string, params?: CheckpointListParams): Promise<FineTuningJobCheckpointsPage>;

// Checkpoint permissions (nested under fineTuning.checkpoints)
function fineTuning.checkpoints.permissions.create(checkpointID: string, body: PermissionCreateParams): Promise<PermissionCreateResponsesPage>;
function fineTuning.checkpoints.permissions.retrieve(checkpointID: string, params?: PermissionRetrieveParams): Promise<PermissionRetrieveResponse>;
function fineTuning.checkpoints.permissions.delete(permissionID: string, params: PermissionDeleteParams): Promise<PermissionDeleteResponse>;

// Alpha features - grader validation
function fineTuning.alpha.graders.run(body: GraderRunParams): Promise<GraderRunResponse>;
function fineTuning.alpha.graders.validate(body: GraderValidateParams): Promise<GraderValidateResponse>;

// Methods resource (type definitions for fine-tuning methods)
// Note: fineTuning.methods provides TypeScript type definitions for supervised, DPO, and reinforcement learning configurations
interface Methods {
  supervised: SupervisedMethod;
  dpo: DpoMethod;
  reinforcement: ReinforcementMethod;
}
```
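A minimal job-creation sketch, assuming a `client` initialized as in Basic Usage; the file ID and base model name are assumptions.

```typescript
// Start a supervised fine-tuning job from an uploaded JSONL file (sketch)
const job = await client.fineTuning.jobs.create({
  training_file: "file-abc123", // hypothetical file ID from client.files.create
  model: "gpt-4o-mini-2024-07-18", // assumed base model name
});

// Follow progress via job events
for await (const event of client.fineTuning.jobs.listEvents(job.id)) {
  console.log(event.message);
}
```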

[Fine-tuning](./fine-tuning.md)

### Vector Stores

Store and search embeddings for retrieval-augmented generation (RAG) with the Assistants API. Includes file and file batch management for vector stores.

```typescript { .api }
// Vector store management
function vectorStores.create(params: VectorStoreCreateParams): Promise<VectorStore>;
function vectorStores.retrieve(vectorStoreID: string): Promise<VectorStore>;
function vectorStores.update(vectorStoreID: string, params: VectorStoreUpdateParams): Promise<VectorStore>;
function vectorStores.list(params?: VectorStoreListParams): Promise<VectorStoresPage>;
function vectorStores.delete(vectorStoreID: string): Promise<VectorStoreDeleted>;
function vectorStores.search(storeID: string, params: VectorStoreSearchParams): Promise<VectorStoreSearchResponsesPage>;

// File management within vector stores
function vectorStores.files.create(vectorStoreID: string, body: FileCreateParams): Promise<VectorStoreFile>;
function vectorStores.files.retrieve(fileID: string, params: FileRetrieveParams): Promise<VectorStoreFile>;
function vectorStores.files.update(fileID: string, params: FileUpdateParams): Promise<VectorStoreFile>;
function vectorStores.files.list(vectorStoreID: string, params?: FileListParams): Promise<VectorStoreFilesPage>;
function vectorStores.files.delete(fileID: string, params: FileDeleteParams): Promise<VectorStoreFileDeleted>;
function vectorStores.files.content(fileID: string, params: FileContentParams): Promise<FileContentResponsesPage>;

// Batch file operations
function vectorStores.fileBatches.create(vectorStoreID: string, body: FileBatchCreateParams): Promise<VectorStoreFileBatch>;
function vectorStores.fileBatches.retrieve(batchID: string, params: FileBatchRetrieveParams): Promise<VectorStoreFileBatch>;
function vectorStores.fileBatches.cancel(batchID: string, params: FileBatchCancelParams): Promise<VectorStoreFileBatch>;
function vectorStores.fileBatches.listFiles(batchID: string, params: FileBatchListFilesParams): Promise<VectorStoreFilesPage>;
```
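A minimal sketch of creating a store, attaching a previously uploaded file, and running a search, assuming a `client` initialized as in Basic Usage; the file ID is hypothetical, and search parameters beyond `query` are covered in the linked docs.

```typescript
// Create a vector store and attach an already-uploaded file (sketch)
const store = await client.vectorStores.create({ name: "support-docs" });

await client.vectorStores.files.create(store.id, {
  file_id: "file-abc123", // hypothetical ID from client.files.create
});

// Semantic search over the store
const page = await client.vectorStores.search(store.id, {
  query: "How do I reset my password?",
});
console.log(page.data);
```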

[Vector Stores](./vector-stores.md)

### Batches and Evaluations

Batch processing for cost-effective async operations, plus an evaluation framework for testing model performance.

```typescript { .api }
// Batches
function batches.create(params: BatchCreateParams): Promise<Batch>;
function batches.retrieve(batchID: string): Promise<Batch>;
function batches.list(params?: BatchListParams): Promise<BatchesPage>;
function batches.cancel(batchID: string): Promise<Batch>;

// Evaluations
function evals.create(params: EvalCreateParams): Promise<EvalCreateResponse>;
function evals.retrieve(evalID: string): Promise<EvalRetrieveResponse>;
function evals.update(
  evalID: string,
  params: EvalUpdateParams
): Promise<EvalUpdateResponse>;
function evals.list(params?: EvalListParams): Promise<EvalListResponsesPage>;
function evals.delete(evalID: string): Promise<EvalDeleteResponse>;

// Evaluation Runs
function evals.runs.create(
  evalID: string,
  params: RunCreateParams
): Promise<RunCreateResponse>;
function evals.runs.retrieve(
  runID: string,
  params: RunRetrieveParams
): Promise<RunRetrieveResponse>;
function evals.runs.cancel(
  runID: string,
  params: RunCancelParams
): Promise<RunCancelResponse>;
function evals.runs.list(
  evalID: string,
  params?: RunListParams
): Promise<RunListResponsesPage>;
function evals.runs.delete(
  runID: string,
  params: RunDeleteParams
): Promise<RunDeleteResponse>;

// Output items from evaluation runs
function evals.runs.outputItems.retrieve(
  itemID: string,
  params: OutputItemRetrieveParams
): Promise<OutputItemRetrieveResponse>;
function evals.runs.outputItems.list(
  runID: string,
  params?: OutputItemListParams
): Promise<OutputItemListResponsesPage>;
```
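A minimal batch sketch, assuming a `client` initialized as in Basic Usage; the input file ID is hypothetical and must point to a JSONL file uploaded for batch processing.

```typescript
// Submit a batch of chat completion requests from an uploaded JSONL file (sketch)
const batch = await client.batches.create({
  input_file_id: "file-abc123", // hypothetical uploaded file ID
  endpoint: "/v1/chat/completions",
  completion_window: "24h",
});

// Check back later and download results once the batch completes
const current = await client.batches.retrieve(batch.id);
if (current.status === "completed" && current.output_file_id) {
  const output = await client.files.content(current.output_file_id);
  console.log(await output.text());
}
```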

[Batches and Evaluations](./batches-evals.md)

### Graders

Type definitions for automated grading of model outputs with the Evaluations API.

**Note:** The `client.graders.graderModels` resource exists but contains no methods; it only provides TypeScript type definitions for use with the Evals API.

```typescript { .api }
// Grader type definitions used in Evaluations API:

interface LabelModelGrader {
  input: Array<LabelModelGraderInput>;
  labels: string[];
  model: string;
  name: string;
  passing_labels: string[];
  type: 'label_model';
}

interface ScoreModelGrader {
  input: Array<ScoreModelGraderInput>;
  model: string;
  name: string;
  passing_threshold: number;
  score_range: [number, number];
  type: 'score_model';
}

interface PythonGrader {
  code: string;
  name: string;
  type: 'python';
}

interface StringCheckGrader {
  expected_strings: string[];
  name: string;
  type: 'string_check';
}

interface TextSimilarityGrader {
  model: string;
  name: string;
  similarity_threshold: number;
  type: 'text_similarity';
}

interface MultiGrader {
  graders: Array<LabelModelGrader | ScoreModelGrader | PythonGrader | StringCheckGrader | TextSimilarityGrader>;
  name: string;
  type: 'multi';
}
```

These type definitions are used with the Evaluations API for automated grading of evaluation runs. See [Batches and Evaluations](./batches-evals.md) for usage examples.

### Moderations

Classify content for safety violations using OpenAI's moderation models.

```typescript { .api }
function moderations.create(
  params: ModerationCreateParams
): Promise<ModerationCreateResponse>;

interface ModerationCreateParams {
  model?: string;
  input: string | string[] | ModerationMultiModalInput[];
}
```
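A minimal moderation sketch, assuming a `client` initialized as in Basic Usage; the model name is an assumption and can be omitted to use the default.

```typescript
// Classify a piece of text and inspect the flagged categories (sketch)
const moderation = await client.moderations.create({
  model: "omni-moderation-latest", // assumed model name
  input: "I want to hurt someone.",
});

const result = moderation.results[0];
console.log(result.flagged, result.categories);
```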

Content categories checked: hate, hate/threatening, harassment, harassment/threatening, self-harm, self-harm/intent, self-harm/instructions, sexual, sexual/minors, violence, violence/graphic, illicit, illicit/violent.

### Models

List and retrieve available models, and delete fine-tuned models.

```typescript { .api }
function models.retrieve(model: string): Promise<Model>;
function models.list(): Promise<ModelsPage>;
function models.delete(model: string): Promise<ModelDeleted>;
```

## Client Configuration

Initialize and configure the OpenAI client with various options.

```typescript { .api }
class OpenAI {
  constructor(options?: ClientOptions);
}

type ApiKeySetter = () => Promise<string>;

interface ClientOptions {
  apiKey?: string | ApiKeySetter;
  organization?: string;
  project?: string;
  webhookSecret?: string;
  baseURL?: string;
  timeout?: number;
  maxRetries?: number;
  defaultHeaders?: Record<string, string>;
  defaultQuery?: Record<string, string>;
  dangerouslyAllowBrowser?: boolean;
}
```
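A minimal configuration sketch showing client-wide defaults plus per-request overrides; the custom header is purely illustrative.

```typescript
import OpenAI from "openai";

// Client-wide defaults
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  maxRetries: 2,
  timeout: 60_000, // milliseconds
});

// RequestOptions can be passed as the final argument to request methods
const completion = await client.chat.completions.create(
  { model: "gpt-4", messages: [{ role: "user", content: "Ping" }] },
  { timeout: 15_000, headers: { "x-request-source": "docs-example" } },
);
console.log(completion.id);
```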

[Client Configuration](./client-configuration.md)

## Error Handling

The library provides granular error classes for different failure scenarios:

```typescript { .api }
class OpenAIError extends Error {}
class APIError extends OpenAIError {}
class APIConnectionError extends OpenAIError {}
class APIConnectionTimeoutError extends APIConnectionError {}
class APIUserAbortError extends APIConnectionError {}
class RateLimitError extends APIError {} // HTTP 429
class BadRequestError extends APIError {} // HTTP 400
class AuthenticationError extends APIError {} // HTTP 401
class PermissionDeniedError extends APIError {} // HTTP 403
class NotFoundError extends APIError {} // HTTP 404
class ConflictError extends APIError {} // HTTP 409
class UnprocessableEntityError extends APIError {} // HTTP 422
class InternalServerError extends APIError {} // HTTP 5xx
class InvalidWebhookSignatureError extends OpenAIError {} // Webhook signature verification failed
```
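A minimal handling sketch; the error classes are also exposed as static members of the `OpenAI` class, which is the pattern assumed here.

```typescript
import OpenAI from "openai";

const client = new OpenAI();

try {
  await client.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (err) {
  if (err instanceof OpenAI.RateLimitError) {
    console.error("Rate limited; retry later", err.status); // HTTP 429
  } else if (err instanceof OpenAI.APIError) {
    console.error(err.status, err.name, err.message);
  } else {
    throw err; // not an API error (e.g. connection or programming error)
  }
}
```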

## Streaming

Many endpoints support streaming responses via Server-Sent Events:

```typescript
// Chat completion streaming
const stream = await client.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Count to 10" }],
  stream: true,
});

for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}
```

## Pagination

List methods return paginated results with async iteration:

```typescript
// Iterate all items
for await (const file of client.files.list()) {
  console.log(file);
}

// Iterate pages
for await (const page of client.files.list().iterPages()) {
  console.log(page.data);
}

// Manual pagination
const page = await client.files.list();
if (page.hasNextPage()) {
  const nextPage = await page.getNextPage();
}
```

## Azure OpenAI

Use the `AzureOpenAI` client for Azure-specific deployments:

```typescript
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  endpoint: process.env.AZURE_OPENAI_ENDPOINT,
  apiVersion: "2024-02-01",
  deployment: "gpt-4", // Your deployment name
});
```

## Type Definitions

### Core Response Types

```typescript { .api }
interface APIPromise<T> extends Promise<T> {
  withResponse(): Promise<{ data: T; response: Response }>;
  asResponse(): Promise<Response>;
}

class PagePromise<PageClass, Item> extends APIPromise<PageClass> implements AsyncIterable<Item> {
  // Enables async iteration over paginated items
  [Symbol.asyncIterator](): AsyncIterator<Item>;
}

interface Stream<T> extends AsyncIterable<T> {
  abort(): void;
  done(): boolean;
  tee(): [Stream<T>, Stream<T>];
}
```

### Common Parameter Types

```typescript { .api }
interface RequestOptions {
  headers?: Record<string, string>;
  maxRetries?: number;
  timeout?: number;
  query?: Record<string, unknown>;
  signal?: AbortSignal;
}

type Uploadable = File | Response | FsReadStream | BunFile;

interface Metadata {
  [key: string]: string;
}
```

### Shared Types

The SDK exports several utility types from the `shared` module for use across different API resources:

```typescript { .api }
// Model type definitions
type AllModels = ChatModel | string;

type ChatModel =
  | 'gpt-5.1'
  | 'gpt-5'
  | 'gpt-4.1'
  | 'gpt-4o'
  | 'gpt-4o-mini'
  | 'gpt-4-turbo'
  | 'gpt-4'
  | 'gpt-3.5-turbo'
  | 'o4-mini'
  | 'o3'
  | 'o3-mini'
  | 'o1'
  | 'o1-preview'
  | 'o1-mini'
  // ... and many more model identifiers
  ;

type ResponsesModel = string; // Model identifier for Responses API

// Filter types for vector store search and other operations
interface ComparisonFilter {
  key: string;
  op: 'eq' | 'ne' | 'gt' | 'gte' | 'lt' | 'lte' | 'in' | 'nin';
  value: string | number | boolean | Array<string | number>;
}

interface CompoundFilter {
  and?: Array<ComparisonFilter | CompoundFilter>;
  or?: Array<ComparisonFilter | CompoundFilter>;
}

// Tool and function definitions
type CustomToolInputFormat = CustomToolInputFormat.Text | CustomToolInputFormat.Grammar;

interface FunctionDefinition {
  name: string;
  description?: string;
  parameters?: FunctionParameters;
  strict?: boolean;
}

type FunctionParameters = { [key: string]: unknown };

// Response format types
interface ResponseFormatJSONObject {
  type: 'json_object';
}

interface ResponseFormatJSONSchema {
  json_schema: ResponseFormatJSONSchema.JSONSchema;
  type: 'json_schema';
}

interface ResponseFormatText {
  type: 'text';
}

interface ResponseFormatTextGrammar {
  grammar: string;
  type: 'text_grammar';
}

interface ResponseFormatTextPython {
  type: 'text_python';
}

// Reasoning configuration
interface Reasoning {
  type: 'default' | 'extended' | 'internal';
  effort?: ReasoningEffort;
  content?: 'enabled' | 'disabled';
}

type ReasoningEffort = 'none' | 'minimal' | 'low' | 'medium' | 'high' | null;

// Error structure
interface ErrorObject {
  code: string | null;
  message: string;
  param: string | null;
  type: string;
}
```

**Common use cases:**

- **Model types**: Use `ChatModel` or `AllModels` for type-safe model selection
- **Filters**: Use `ComparisonFilter` and `CompoundFilter` for vector store search queries (see the sketch below)
- **Functions**: Use `FunctionDefinition` and `FunctionParameters` when defining custom tools
- **Response formats**: Use `ResponseFormat*` types to specify desired output formats
- **Reasoning**: Use `Reasoning` and `ReasoningEffort` to configure reasoning models
- **Error handling**: Use `ErrorObject` type for API error responses
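A small sketch of composing the filter shapes above for a vector store search, assuming a `client` initialized as in Basic Usage; the store ID is hypothetical, and the `filters` search parameter is an assumption to verify against [Vector Stores](./vector-stores.md).

```typescript
// A CompoundFilter combining two ComparisonFilters (shapes as defined above)
const page = await client.vectorStores.search("vs_123", {
  query: "refund policy",
  filters: {
    and: [
      { key: "category", op: "eq", value: "billing" },
      { key: "year", op: "gte", value: 2023 },
    ],
  },
});
console.log(page.data);
```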

## Webhooks

Utilities for verifying webhook signatures from OpenAI:

```typescript { .api }
function webhooks.verifySignature(
  payload: string,
  headers: Record<string, string>,
  secret?: string,
  tolerance?: number
): void;

function webhooks.unwrap(
  payload: string,
  headers: Record<string, string>,
  secret?: string,
  tolerance?: number
): WebhookEvent;
```
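A minimal verification sketch using Express purely for illustration; the route, raw-body handling, and header cast are assumptions of the sketch, and `webhookSecret` comes from the client options above.

```typescript
import express from "express";
import OpenAI from "openai";

const app = express();
const client = new OpenAI({ webhookSecret: process.env.OPENAI_WEBHOOK_SECRET });

// unwrap() verifies the signature and returns the typed event,
// throwing InvalidWebhookSignatureError if verification fails
app.post("/openai-webhook", express.text({ type: "application/json" }), (req, res) => {
  try {
    const event = client.webhooks.unwrap(req.body, req.headers as Record<string, string>);
    console.log(event.type);
    res.sendStatus(200);
  } catch {
    res.sendStatus(400);
  }
});
```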

Webhook events include: response completed/failed/cancelled, batch completed/failed, fine-tuning job status updates, eval run results, and realtime call events.

## File Upload Utilities

### toFile

Converts various data types into File objects suitable for upload operations. This is a top-level export from the main package.

```typescript { .api }
function toFile(
  value: ToFileInput | PromiseLike<ToFileInput>,
  name?: string | null | undefined,
  options?: FilePropertyBag | undefined,
): Promise<File>;

type ToFileInput =
  | Uploadable
  | Exclude<BlobPart, string>
  | AsyncIterable<BlobPart>
  | Iterable<BlobPart>;
```
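A minimal sketch wrapping an in-memory buffer as an uploadable `File`; the JSONL content is illustrative.

```typescript
import OpenAI, { toFile } from "openai";

const client = new OpenAI();

// Wrap a Buffer as a named File and upload it
const file = await client.files.create({
  file: await toFile(Buffer.from('{"prompt": "hi"}\n'), "training.jsonl"),
  purpose: "fine-tune",
});
console.log(file.id);
```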

The `toFile` function handles conversion from various input types (streams, buffers, paths, etc.) into File objects that can be uploaded via the API. It automatically handles file reading, buffering, and proper MIME type detection.

For detailed documentation, usage examples, and platform-specific behavior, see [Files and Uploads - toFile Helper](./files-uploads.md#tofile-helper).

## Helper Functions

The SDK provides specialized helper modules that are imported via subpaths rather than from the main `openai` package.

### Zod Integration Helpers

Type-safe parsing and validation using Zod schemas. Imported from `openai/helpers/zod`.

```typescript { .api }
// Convert Zod schemas to response formats with auto-parsing
function zodResponseFormat<T>(
  zodObject: z.ZodType<T>,
  name: string,
  props?: object
): AutoParseableResponseFormat<T>;

// Create function tools with Zod validation
function zodFunction<P>(options: {
  name: string;
  parameters: z.ZodType<P>;
  function?: (args: P) => unknown | Promise<unknown>;
  description?: string;
}): AutoParseableTool;

// Text format variant for Responses API
function zodTextFormat<T>(
  zodObject: z.ZodType<T>,
  name: string,
  props?: object
): AutoParseableTextFormat<T>;

// Function tool variant for Responses API
function zodResponsesFunction<P>(options: {
  name: string;
  parameters: z.ZodType<P>;
  function?: (args: P) => unknown | Promise<unknown>;
  description?: string;
}): AutoParseableResponseTool;
```

[Zod Helpers Documentation](./helpers-zod.md)

**Quick Example:**

```typescript
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";

const schema = z.object({
  name: z.string(),
  age: z.number(),
});

const completion = await client.chat.completions.parse({
  model: "gpt-4o-2024-08-06",
  messages: [{ role: "user", content: "Generate user data" }],
  response_format: zodResponseFormat(schema, "UserSchema"),
});

// Type-safe access to parsed data (parsed is null if the model refuses or output is incomplete)
const data = completion.choices[0].message.parsed;
console.log(data?.name, data?.age); // Fully typed
```

### Audio Helpers (Node.js Only)

Play and record audio using ffmpeg/ffplay. Imported from `openai/helpers/audio`.

```typescript { .api }
/**
 * Play audio from a stream, Response, or File
 * Requires ffplay to be installed
 * Node.js only - throws error in browser
 */
function playAudio(
  input: NodeJS.ReadableStream | Response | File
): Promise<void>;

/**
 * Record audio from system input device
 * Requires ffmpeg to be installed
 * Node.js only - throws error in browser
 */
function recordAudio(options?: {
  signal?: AbortSignal;
  device?: number;
  timeout?: number;
}): Promise<File>;
```

[Audio Helpers Documentation](./helpers-audio.md)

**Quick Example:**

```typescript
import { playAudio, recordAudio } from "openai/helpers/audio";

// Record 5 seconds of audio
const audioFile = await recordAudio({ timeout: 5000 });

// Transcribe it
const transcription = await client.audio.transcriptions.create({
  file: audioFile,
  model: "whisper-1",
});

// Generate a response
const speech = await client.audio.speech.create({
  model: "tts-1",
  voice: "alloy",
  input: transcription.text,
});

// Play the response
await playAudio(speech);
```

## Advanced Features

### Polling Helpers

Several resources provide polling utilities for waiting on async operations:

```typescript
// Wait for file processing
const file = await client.files.waitForProcessing(fileId);

// Create run and poll until completion
const run = await client.beta.threads.runs.createAndPoll(threadId, {
  assistant_id: assistantId,
});
```

1005

1006

### Multiple Runtime Support

1007

1008

The library works across multiple JavaScript runtimes:

1009

1010

- Node.js (v14+)

1011

- Deno

1012

- Bun

1013

- Cloudflare Workers

1014

- Vercel Edge Functions

1015

- Browser environments (with `dangerouslyAllowBrowser: true`)

1016

1017

## Additional Resources

1018

1019

### Videos

1020

1021

Generate and manipulate videos using OpenAI's video models with Sora.

1022

1023

```typescript { .api }

1024

function videos.create(params: VideoCreateParams): Promise<Video>;

1025

function videos.retrieve(videoID: string): Promise<Video>;

1026

function videos.list(params?: VideoListParams): Promise<VideosPage>;

1027

function videos.delete(videoID: string): Promise<VideoDeleteResponse>;

1028

function videos.downloadContent(

1029

videoID: string,

1030

params?: VideoDownloadContentParams

1031

): Promise<Response>;

1032

function videos.remix(

1033

videoID: string,

1034

params: VideoRemixParams

1035

): Promise<Video>;

1036

```

[Videos](./videos.md)

### Containers

Manage isolated execution containers for code interpreter and other tools.

```typescript { .api }
function containers.create(
  params: ContainerCreateParams
): Promise<ContainerCreateResponse>;
function containers.retrieve(
  containerID: string
): Promise<ContainerRetrieveResponse>;
function containers.list(
  params?: ContainerListParams
): Promise<ContainerListResponsesPage>;
function containers.delete(containerID: string): Promise<void>;

// Container file operations
function containers.files.create(
  containerID: string,
  params: FileCreateParams
): Promise<FileCreateResponse>;
function containers.files.retrieve(
  fileID: string,
  params: FileRetrieveParams
): Promise<FileRetrieveResponse>;
function containers.files.list(
  containerID: string,
  params?: FileListParams
): Promise<FileListResponsesPage>;
function containers.files.delete(
  fileID: string,
  params: FileDeleteParams
): Promise<void>;
function containers.files.content.retrieve(
  fileID: string,
  params: ContentRetrieveParams
): Promise<Response>;
```

[Containers](./containers.md)

### Conversations

Manage persistent conversation state independently of threads.

```typescript { .api }
function conversations.create(
  params: ConversationCreateParams
): Promise<Conversation>;
function conversations.retrieve(conversationID: string): Promise<Conversation>;
function conversations.update(
  conversationID: string,
  params: ConversationUpdateParams
): Promise<Conversation>;
function conversations.delete(
  conversationID: string
): Promise<ConversationDeletedResource>;

// Conversation item operations
function conversations.items.create(
  conversationID: string,
  params: ConversationItemCreateParams
): Promise<ConversationItemList>;
function conversations.items.retrieve(
  itemID: string,
  params: ItemRetrieveParams
): Promise<ConversationItem>;
function conversations.items.list(
  conversationID: string,
  params?: ItemListParams
): Promise<ConversationItemsPage>;
function conversations.items.delete(
  itemID: string,
  params: ItemDeleteParams
): Promise<Conversation>;
```

[Conversations](./conversations.md)