
# Chat Completions

The Chat Completions API provides conversational AI capabilities for generating model responses in chat format. It supports text, image, and audio inputs with extensive configuration options including streaming, function calling, vision, and audio generation.

## Package Information

- **Package Name**: openai
- **Version**: 6.9.1
- **Language**: TypeScript
- **Access Path**: `client.chat.completions`

## Core Imports

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});
```

## Basic Usage

```typescript
// Simple chat completion
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "developer", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
});

console.log(completion.choices[0].message.content);
```

## Architecture

The Chat Completions API is organized around several key components:

- **Main Resource**: `client.chat.completions` - Primary interface for creating and managing chat completions
- **Sub-resource**: `client.chat.completions.messages` - Access to stored completion messages
- **Streaming Support**: Real-time response streaming with Server-Sent Events
- **Helper Methods**: Convenience functions for parsing, tool execution, and streaming
- **Type System**: Comprehensive TypeScript types for all API signatures

## Capabilities

### Create Chat Completion

Creates a model response for a chat conversation with support for text, images, audio, and tool calling.

```typescript { .api }
/**
 * Creates a model response for the given chat conversation
 * @param body - Completion creation parameters
 * @param options - Request options (headers, timeout, etc.)
 * @returns Promise resolving to ChatCompletion or Stream<ChatCompletionChunk>
 */
create(
  body: ChatCompletionCreateParamsNonStreaming,
  options?: RequestOptions
): APIPromise<ChatCompletion>;

create(
  body: ChatCompletionCreateParamsStreaming,
  options?: RequestOptions
): APIPromise<Stream<ChatCompletionChunk>>;
```

**Basic Example:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What is the capital of France?" },
  ],
});

console.log(completion.choices[0].message.content);
// Output: "The capital of France is Paris."
```

**With Multiple Messages:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "developer", content: "You are a helpful coding assistant." },
    { role: "user", content: "How do I reverse a string in JavaScript?" },
    {
      role: "assistant",
      content: "You can use the reverse() method on an array...",
    },
    { role: "user", content: "Can you show me an example?" },
  ],
});
```

### Retrieve Stored Completion

Get a stored chat completion by ID. Only completions created with `store: true` are available.

```typescript { .api }
/**
 * Get a stored chat completion
 * @param completionID - The ID of the completion to retrieve
 * @param options - Request options
 * @returns Promise resolving to ChatCompletion
 */
retrieve(
  completionID: string,
  options?: RequestOptions
): APIPromise<ChatCompletion>;
```

**Example:**

```typescript
const completion = await client.chat.completions.retrieve("chatcmpl_abc123");

console.log(completion.choices[0].message.content);
```

### Update Stored Completion

Modify metadata for a stored chat completion.

```typescript { .api }
/**
 * Modify a stored chat completion
 * @param completionID - The ID of the completion to update
 * @param body - Update parameters (currently only metadata)
 * @param options - Request options
 * @returns Promise resolving to updated ChatCompletion
 */
update(
  completionID: string,
  body: ChatCompletionUpdateParams,
  options?: RequestOptions
): APIPromise<ChatCompletion>;
```

**Example:**

```typescript
const updated = await client.chat.completions.update("chatcmpl_abc123", {
  metadata: {
    user_id: "user_123",
    session_id: "session_456",
  },
});
```

### List Stored Completions

List all stored chat completions with pagination support.

```typescript { .api }
/**
 * List stored Chat Completions
 * @param query - List filtering parameters
 * @param options - Request options
 * @returns PagePromise for iterating through results
 */
list(
  query?: ChatCompletionListParams,
  options?: RequestOptions
): PagePromise<ChatCompletionsPage, ChatCompletion>;
```

**Example:**

```typescript
// Iterate through all stored completions
for await (const completion of client.chat.completions.list()) {
  console.log(completion.id, completion.created);
}

// Filter by model and metadata
for await (const completion of client.chat.completions.list({
  model: "gpt-4o",
  metadata: { user_id: "user_123" },
  order: "desc",
})) {
  console.log(completion.id);
}
```

### Delete Stored Completion

Delete a stored chat completion.

```typescript { .api }
/**
 * Delete a stored chat completion
 * @param completionID - The ID of the completion to delete
 * @param options - Request options
 * @returns Promise resolving to deletion confirmation
 */
delete(
  completionID: string,
  options?: RequestOptions
): APIPromise<ChatCompletionDeleted>;
```

**Example:**

```typescript
const deleted = await client.chat.completions.delete("chatcmpl_abc123");

console.log(deleted.deleted); // true
```

### List Stored Messages

Get messages from a stored completion.

```typescript { .api }
/**
 * Get messages in a stored chat completion
 * @param completionID - The ID of the completion
 * @param query - List parameters (pagination, ordering)
 * @param options - Request options
 * @returns PagePromise for iterating through messages
 */
client.chat.completions.messages.list(
  completionID: string,
  query?: MessageListParams,
  options?: RequestOptions
): PagePromise<ChatCompletionStoreMessagesPage, ChatCompletionStoreMessage>;
```

**Example:**

```typescript
// Iterate through all messages
for await (const message of client.chat.completions.messages.list(
  "chatcmpl_abc123"
)) {
  console.log(message.role, message.content);
}
```

### Parse with Auto-validation

Create a chat completion with automatic response parsing and validation.

```typescript { .api }
/**
 * Create a completion with automatic parsing
 * @param body - Completion parameters with response_format
 * @param options - Request options
 * @returns Promise resolving to parsed completion
 */
parse<Params extends ChatCompletionParseParams, ParsedT>(
  body: Params,
  options?: RequestOptions
): APIPromise<ParsedChatCompletion<ParsedT>>;
```

**Example with Zod:**

```typescript
import { zodResponseFormat } from "openai/helpers/zod";
import { z } from "zod";

const CalendarEventSchema = z.object({
  name: z.string(),
  date: z.string(),
  participants: z.array(z.string()),
});

const completion = await client.chat.completions.parse({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Create a team meeting event for next Monday with Alice and Bob",
    },
  ],
  response_format: zodResponseFormat(CalendarEventSchema, "event"),
});

const event = completion.choices[0].message.parsed;
// Fully typed and validated
console.log(event.name, event.date, event.participants);
```

### Run Tools (Automated Function Calling)

Convenience helper that automatically calls JavaScript functions and sends results back to the model.

```typescript { .api }
/**
 * Automated function calling loop
 * @param body - Completion parameters with tools
 * @param options - Runner options
 * @returns ChatCompletionRunner or ChatCompletionStreamingRunner
 */
runTools<Params extends ChatCompletionToolRunnerParams<any>>(
  body: Params,
  options?: RunnerOptions
): ChatCompletionRunner<ParsedT>;

runTools<Params extends ChatCompletionStreamingToolRunnerParams<any>>(
  body: Params,
  options?: RunnerOptions
): ChatCompletionStreamingRunner<ParsedT>;
```

**Example:**

```typescript
const runner = client.chat.completions.runTools({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What's the weather in Boston and San Francisco?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string" },
          },
          required: ["location"],
        },
        parse: JSON.parse,
        function: async ({ location }) => {
          // Your implementation
          return { temperature: 72, condition: "sunny" };
        },
      },
    },
  ],
});

const finalContent = await runner.finalContent();
console.log(finalContent);
```

### Stream Chat Completion

Create a streaming chat completion with helper methods for easy consumption.

```typescript { .api }
/**
 * Create a chat completion stream
 * @param body - Streaming completion parameters
 * @param options - Request options
 * @returns ChatCompletionStream with event handlers
 */
stream<Params extends ChatCompletionStreamParams>(
  body: Params,
  options?: RequestOptions
): ChatCompletionStream<ParsedT>;
```

**Example:**

```typescript
const stream = client.chat.completions.stream({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about coding" }],
});

// Listen to specific events
stream.on("content", (delta, snapshot) => {
  process.stdout.write(delta);
});

// Or iterate chunks
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

// Get final completion
const finalCompletion = await stream.finalChatCompletion();
```

---

## Message Parameter Types

All message types used in the `messages` array parameter.

### Developer Message

System-level instructions for the model (replaces the `system` role in o1 and newer models).

```typescript { .api }
/**
 * Developer-provided instructions that the model should follow
 */
interface ChatCompletionDeveloperMessageParam {
  /** The role of the message author */
  role: "developer";

  /** The contents of the developer message */
  content: string | Array<ChatCompletionContentPartText>;

  /** Optional name for the participant */
  name?: string;
}
```

**Example:**

```typescript
{
  role: "developer",
  content: "You are an expert software architect. Provide detailed technical explanations."
}
```

### System Message

System-level instructions (for models before o1).

```typescript { .api }
/**
 * System instructions for the model
 */
interface ChatCompletionSystemMessageParam {
  /** The role of the message author */
  role: "system";

  /** The contents of the system message */
  content: string | Array<ChatCompletionContentPartText>;

  /** Optional name for the participant */
  name?: string;
}
```

**Example:**

```typescript
{
  role: "system",
  content: "You are a helpful assistant that speaks like Shakespeare."
}
```

### User Message

Messages from the end user, supporting text, images, audio, and files.

```typescript { .api }
/**
 * Messages sent by an end user
 */
interface ChatCompletionUserMessageParam {
  /** The role of the message author */
  role: "user";

  /** The contents of the user message */
  content: string | Array<ChatCompletionContentPart>;

  /** Optional name for the participant */
  name?: string;
}
```

**Text Example:**

```typescript
{
  role: "user",
  content: "Hello, how are you?"
}
```

**Multi-modal Example:**

```typescript
{
  role: "user",
  content: [
    { type: "text", text: "What's in this image?" },
    {
      type: "image_url",
      image_url: {
        url: "https://example.com/image.jpg",
        detail: "high"
      }
    }
  ]
}
```

### Assistant Message

Messages sent by the model in response to user messages.

```typescript { .api }
/**
 * Messages sent by the model in response to user messages
 */
interface ChatCompletionAssistantMessageParam {
  /** The role of the message author */
  role: "assistant";

  /** The contents of the assistant message */
  content?: string | Array<ChatCompletionContentPartText | ChatCompletionContentPartRefusal> | null;

  /** Optional name for the participant */
  name?: string;

  /** The refusal message by the assistant */
  refusal?: string | null;

  /** The tool calls generated by the model */
  tool_calls?: Array<ChatCompletionMessageToolCall>;

  /** Data about a previous audio response */
  audio?: { id: string } | null;

  /** @deprecated Use tool_calls instead */
  function_call?: { name: string; arguments: string } | null;
}
```

**Example:**

```typescript
{
  role: "assistant",
  content: "I'd be happy to help you with that!"
}
```

**With Tool Calls:**

```typescript
{
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123",
      type: "function",
      function: {
        name: "get_weather",
        arguments: '{"location": "Boston"}'
      }
    }
  ]
}
```

### Tool Message

Response to a tool call from the assistant.

```typescript { .api }
/**
 * Response to a tool call
 */
interface ChatCompletionToolMessageParam {
  /** The role of the message author */
  role: "tool";

  /** The contents of the tool message */
  content: string | Array<ChatCompletionContentPartText>;

  /** Tool call that this message is responding to */
  tool_call_id: string;
}
```

**Example:**

```typescript
{
  role: "tool",
  tool_call_id: "call_abc123",
  content: '{"temperature": 72, "condition": "sunny"}'
}
```

### Function Message (Deprecated)

Legacy function response message.

```typescript { .api }
/**
 * @deprecated Use tool messages instead
 */
interface ChatCompletionFunctionMessageParam {
  /** The role of the message author */
  role: "function";

  /** The contents of the function message */
  content: string | null;

  /** The name of the function */
  name: string;
}
```
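Migrating off the deprecated shape is mostly mechanical: a `function` message becomes a `tool` message whose `tool_call_id` matches the tool call the assistant emitted. A minimal sketch with local objects only (the `call_abc123` ID is a placeholder, not a real API value):

```typescript
// Legacy shape: a deprecated "function" role message.
const legacyMessage = {
  role: "function" as const,
  name: "get_weather",
  content: '{"temperature": 72}',
};

// Modern equivalent: a "tool" role message. The tool_call_id must match
// the `id` of the tool_call the assistant generated in the prior turn.
const toolMessage = {
  role: "tool" as const,
  tool_call_id: "call_abc123",
  content: legacyMessage.content,
};
```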

---

## Content Part Types

Content parts for multi-modal messages.

### Text Content

Plain text content.

```typescript { .api }
/**
 * Text content part
 */
interface ChatCompletionContentPartText {
  /** The type of the content part */
  type: "text";

  /** The text content */
  text: string;
}
```

**Example:**

```typescript
{ type: "text", text: "What's in this image?" }
```

### Image Content

Image input via URL or base64 data.

```typescript { .api }
/**
 * Image content part for vision
 */
interface ChatCompletionContentPartImage {
  /** The type of the content part */
  type: "image_url";

  /** Image URL configuration */
  image_url: {
    /** URL or base64 encoded image data */
    url: string;

    /** Detail level: auto, low, or high */
    detail?: "auto" | "low" | "high";
  };
}
```

**URL Example:**

```typescript
{
  type: "image_url",
  image_url: {
    url: "https://example.com/image.jpg",
    detail: "high"
  }
}
```

**Base64 Example:**

```typescript
{
  type: "image_url",
  image_url: {
    url: "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
  }
}
```

### Audio Content

Audio input in base64 format.

```typescript { .api }
/**
 * Audio input content part
 */
interface ChatCompletionContentPartInputAudio {
  /** The type of the content part */
  type: "input_audio";

  /** Audio data configuration */
  input_audio: {
    /** Base64 encoded audio data */
    data: string;

    /** Audio format: wav or mp3 */
    format: "wav" | "mp3";
  };
}
```

**Example:**

```typescript
{
  type: "input_audio",
  input_audio: {
    data: "UklGRiQAAABXQVZFZm10...",
    format: "wav"
  }
}
```

### Refusal Content

Model refusal content.

```typescript { .api }
/**
 * Refusal content part
 */
interface ChatCompletionContentPartRefusal {
  /** The type of the content part */
  type: "refusal";

  /** The refusal message */
  refusal: string;
}
```
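Refusals surface alongside regular content on assistant messages, so a defensive reader checks both fields. A sketch with a hand-built message standing in for a real API response (the refusal text is illustrative):

```typescript
// Sample assistant message as the API might return it. When the model
// refuses, `content` is typically null and `refusal` carries the text.
const message = {
  role: "assistant" as const,
  content: null as string | null,
  refusal: "I can't help with that request." as string | null,
};

// Prefer content when present, otherwise fall back to the refusal text.
const displayText = message.content ?? message.refusal ?? "(empty response)";
```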

### File Content

File input for text generation using uploaded files or base64 encoded file data.

```typescript { .api }
/**
 * File content part for file inputs
 * Learn about file inputs: https://platform.openai.com/docs/guides/text
 */
interface ChatCompletionContentPartFile {
  /** The type of the content part */
  type: "file";

  /** File data configuration */
  file: {
    /** The base64 encoded file data, used when passing the file to the model as a string */
    file_data?: string;

    /** The ID of an uploaded file to use as input */
    file_id?: string;

    /** The name of the file, used when passing the file to the model as a string */
    filename?: string;
  };
}
```

**Using an uploaded file:**

```typescript
{
  type: "file",
  file: {
    file_id: "file-abc123",
    filename: "document.pdf"
  }
}
```

**Using base64 encoded file data:**

```typescript
{
  type: "file",
  file: {
    file_data: "JVBERi0xLjQKJeLjz9MKMSAwIG9iago8PC...",
    filename: "document.pdf"
  }
}
```

---

## Tool Types

Tool definitions for function calling and custom tools.

### Function Tool

Standard function calling tool.

```typescript { .api }
/**
 * A function tool that can be used to generate a response
 */
interface ChatCompletionFunctionTool {
  /** The type of the tool */
  type: "function";

  /** Function definition */
  function: FunctionDefinition;
}

/**
 * Function definition
 */
interface FunctionDefinition {
  /** Function name (a-z, A-Z, 0-9, underscores, dashes, max 64 chars) */
  name: string;

  /** Description of what the function does */
  description?: string;

  /** JSON Schema object describing parameters */
  parameters?: FunctionParameters;

  /** Enable strict schema adherence */
  strict?: boolean | null;
}

/** JSON Schema for function parameters */
type FunctionParameters = { [key: string]: unknown };
```

**Example:**

```typescript
const tool: ChatCompletionFunctionTool = {
  type: "function",
  function: {
    name: "get_current_weather",
    description: "Get the current weather in a given location",
    parameters: {
      type: "object",
      properties: {
        location: {
          type: "string",
          description: "The city and state, e.g. San Francisco, CA",
        },
        unit: {
          type: "string",
          enum: ["celsius", "fahrenheit"],
        },
      },
      required: ["location"],
    },
    strict: true,
  },
};
```

### Custom Tool

Custom tool with configurable input format.

```typescript { .api }
/**
 * A custom tool that processes input using a specified format
 */
interface ChatCompletionCustomTool {
  /** The type of the tool */
  type: "custom";

  /** Custom tool properties */
  custom: {
    /** Tool name */
    name: string;

    /** Optional description */
    description?: string;

    /** Input format configuration */
    format?: {
      type: "text";
    } | {
      type: "grammar";
      grammar: {
        definition: string;
        syntax: "lark" | "regex";
      };
    };
  };
}
```

**Example:**

```typescript
const customTool: ChatCompletionCustomTool = {
  type: "custom",
  custom: {
    name: "data_extractor",
    description: "Extract structured data from text",
    format: {
      type: "text",
    },
  },
};
```

### Tool Choice Options

905

906

Control which tools the model can use.

907

908

```typescript { .api }

909

/**

910

* Controls which (if any) tool is called by the model

911

*/

912

type ChatCompletionToolChoiceOption =

913

| "none" // Model will not call any tool

914

| "auto" // Model can pick between message or tools

915

| "required" // Model must call one or more tools

916

| ChatCompletionAllowedToolChoice // Constrain to allowed tools

917

| ChatCompletionNamedToolChoice // Force specific function

918

| ChatCompletionNamedToolChoiceCustom; // Force specific custom tool

919

920

/**

921

* Constrain to allowed tools

922

*/

923

interface ChatCompletionAllowedToolChoice {

924

type: "allowed_tools";

925

allowed_tools: {

926

mode: "auto" | "required";

927

tools: Array<{ [key: string]: unknown }>;

928

};

929

}

930

931

/**

932

* Force specific function tool

933

*/

934

interface ChatCompletionNamedToolChoice {

935

type: "function";

936

function: {

937

name: string;

938

};

939

}

940

941

/**

942

* Force specific custom tool

943

*/

944

interface ChatCompletionNamedToolChoiceCustom {

945

type: "custom";

946

custom: {

947

name: string;

948

};

949

}

950

```

951

952

**Examples:**

953

954

```typescript

955

// Let model decide

956

tool_choice: "auto"

957

958

// Require tool use

959

tool_choice: "required"

960

961

// Force specific function

962

tool_choice: {

963

type: "function",

964

function: { name: "get_weather" }

965

}

966

967

// Constrain to allowed tools

968

tool_choice: {

969

type: "allowed_tools",

970

allowed_tools: {

971

mode: "required",

972

tools: [

973

{ type: "function", function: { name: "get_weather" } },

974

{ type: "function", function: { name: "get_time" } }

975

]

976

}

977

}

978

```

979

980

### Tool Call Types

981

982

Tool calls generated by the model.

983

984

```typescript { .api }

985

/**

986

* Union of all tool call types

987

*/

988

type ChatCompletionMessageToolCall =

989

| ChatCompletionMessageFunctionToolCall

990

| ChatCompletionMessageCustomToolCall;

991

992

/**

993

* Function tool call

994

*/

995

interface ChatCompletionMessageFunctionToolCall {

996

/** Tool call ID */

997

id: string;

998

999

/** Tool type */

1000

type: "function";

1001

1002

/** Function that was called */

1003

function: {

1004

/** Function name */

1005

name: string;

1006

1007

/** Arguments in JSON format */

1008

arguments: string;

1009

};

1010

}

1011

1012

/**

1013

* Custom tool call

1014

*/

1015

interface ChatCompletionMessageCustomToolCall {

1016

/** Tool call ID */

1017

id: string;

1018

1019

/** Tool type */

1020

type: "custom";

1021

1022

/** Custom tool that was called */

1023

custom: {

1024

/** Tool name */

1025

name: string;

1026

1027

/** Input for the tool */

1028

input: string;

1029

};

1030

}

1031

```

1032

1033
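Note that function tool call `arguments` arrive as a JSON string, not a parsed object, so handlers typically `JSON.parse` before dispatch. A sketch with a hand-built tool call (the ID and arguments are illustrative):

```typescript
// Hand-built function tool call in the shape described above.
const toolCall = {
  id: "call_abc123",
  type: "function" as const,
  function: {
    name: "get_weather",
    arguments: '{"location": "Boston"}',
  },
};

// Arguments are a JSON string; parse them before calling your handler.
// The model can emit malformed JSON, so real code should wrap this in try/catch.
const args = JSON.parse(toolCall.function.arguments) as { location: string };
```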

---

## Response Types

### Chat Completion

Complete response from the model.

```typescript { .api }
/**
 * Represents a chat completion response
 */
interface ChatCompletion {
  /** Unique identifier */
  id: string;

  /** Object type: always "chat.completion" */
  object: "chat.completion";

  /** Unix timestamp of creation */
  created: number;

  /** Model used for completion */
  model: string;

  /** List of completion choices */
  choices: Array<ChatCompletion.Choice>;

  /** Usage statistics */
  usage?: CompletionUsage;

  /** Service tier used */
  service_tier?: "auto" | "default" | "flex" | "scale" | "priority" | null;

  /** @deprecated Backend configuration fingerprint */
  system_fingerprint?: string;
}

namespace ChatCompletion {
  interface Choice {
    /** Index of the choice */
    index: number;

    /** Message generated by the model */
    message: ChatCompletionMessage;

    /** Why the model stopped */
    finish_reason: "stop" | "length" | "tool_calls" | "content_filter" | "function_call";

    /** Log probability information */
    logprobs: Logprobs | null;
  }

  interface Logprobs {
    /** Content token log probabilities */
    content: Array<ChatCompletionTokenLogprob> | null;

    /** Refusal token log probabilities */
    refusal: Array<ChatCompletionTokenLogprob> | null;
  }
}
```
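In practice, consumers read the first choice and guard on `finish_reason` (for example, `"length"` means the output was cut off by the token limit). A sketch over a minimal hand-built response object (field values illustrative, not real API output):

```typescript
// Minimal object mirroring the ChatCompletion shape above.
const completion = {
  id: "chatcmpl_abc123",
  object: "chat.completion" as const,
  created: 1700000000,
  model: "gpt-4o",
  choices: [
    {
      index: 0,
      message: { role: "assistant" as const, content: "Hello!", refusal: null },
      finish_reason: "stop" as const,
      logprobs: null,
    },
  ],
};

const choice = completion.choices[0];
// "length" would mean the response was truncated by the token limit.
const truncated = (choice.finish_reason as string) === "length";
const text = choice.message.content ?? "";
```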

### Chat Completion Message

Message returned by the model.

```typescript { .api }
/**
 * A message generated by the model
 */
interface ChatCompletionMessage {
  /** Message role */
  role: "assistant";

  /** Message content */
  content: string | null;

  /** Refusal message */
  refusal: string | null;

  /** Tool calls generated by the model */
  tool_calls?: Array<ChatCompletionMessageToolCall>;

  /** Audio response data */
  audio?: ChatCompletionAudio | null;

  /** Annotations (e.g., web search citations) */
  annotations?: Array<Annotation>;

  /** @deprecated Use tool_calls instead */
  function_call?: { name: string; arguments: string } | null;
}

/**
 * Audio response data
 */
interface ChatCompletionAudio {
  /** Audio response ID */
  id: string;

  /** Base64 encoded audio bytes */
  data: string;

  /** Unix timestamp of expiration */
  expires_at: number;

  /** Transcript of the audio */
  transcript: string;
}

/**
 * URL citation annotation
 */
interface Annotation {
  type: "url_citation";
  url_citation: {
    /** Start character index */
    start_index: number;

    /** End character index */
    end_index: number;

    /** Web resource URL */
    url: string;

    /** Web resource title */
    title: string;
  };
}
```

### Chat Completion Chunk

1166

1167

Streaming chunk from the model.

1168

1169

```typescript { .api }

1170

/**

1171

* Streamed chunk of a chat completion response

1172

*/

1173

interface ChatCompletionChunk {

1174

/** Unique identifier (same for all chunks) */

1175

id: string;

1176

1177

/** Object type: always "chat.completion.chunk" */

1178

object: "chat.completion.chunk";

1179

  /** Unix timestamp of creation */
  created: number;

  /** Model used */
  model: string;

  /** List of chunk choices */
  choices: Array<ChatCompletionChunk.Choice>;

  /** Service tier used */
  service_tier?: "auto" | "default" | "flex" | "scale" | "priority" | null;

  /** @deprecated Backend configuration fingerprint */
  system_fingerprint?: string;

  /** Usage statistics (only in last chunk with stream_options) */
  usage?: CompletionUsage | null;
}

namespace ChatCompletionChunk {
  interface Choice {
    /** Choice index */
    index: number;

    /** Delta containing incremental changes */
    delta: Delta;

    /** Why the model stopped (only in final chunk) */
    finish_reason: "stop" | "length" | "tool_calls" | "content_filter" | "function_call" | null;

    /** Log probability information */
    logprobs?: Logprobs | null;
  }

  interface Delta {
    /** Role (only in first chunk) */
    role?: "developer" | "system" | "user" | "assistant" | "tool";

    /** Content delta */
    content?: string | null;

    /** Refusal delta */
    refusal?: string | null;

    /** Tool call deltas */
    tool_calls?: Array<ToolCall>;

    /** @deprecated Function call delta */
    function_call?: { name?: string; arguments?: string };
  }

  interface ToolCall {
    /** Index of the tool call */
    index: number;

    /** Tool call ID */
    id?: string;

    /** Tool type */
    type?: "function";

    /** Function details */
    function?: {
      name?: string;
      arguments?: string;
    };
  }

  interface Logprobs {
    /** Content token log probabilities */
    content: Array<ChatCompletionTokenLogprob> | null;

    /** Refusal token log probabilities */
    refusal: Array<ChatCompletionTokenLogprob> | null;
  }
}
```

### Token Log Probability

Log probability information for tokens.

```typescript { .api }
/**
 * Token with log probability information
 */
interface ChatCompletionTokenLogprob {
  /** The token */
  token: string;

  /** UTF-8 bytes representation */
  bytes: Array<number> | null;

  /** Log probability of this token */
  logprob: number;

  /** Most likely alternative tokens */
  top_logprobs: Array<TopLogprob>;
}

interface TopLogprob {
  /** The token */
  token: string;

  /** UTF-8 bytes representation */
  bytes: Array<number> | null;

  /** Log probability */
  logprob: number;
}
```
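
The `logprob` values are natural logarithms, so they can be converted back to plain probabilities with `Math.exp`. The helpers below are a sketch (not part of the SDK) showing how token confidence might be inspected; the `TokenLogprob` shape is redeclared locally so the example is self-contained:

```typescript
// Minimal local shape mirroring ChatCompletionTokenLogprob,
// redeclared here so this sketch compiles on its own.
interface TokenLogprob {
  token: string;
  logprob: number;
  top_logprobs: Array<{ token: string; logprob: number }>;
}

// Convert a natural-log probability to a plain probability in [0, 1].
function toProbability(logprob: number): number {
  return Math.exp(logprob);
}

// Return alternative tokens whose probability meets a threshold.
function likelyAlternatives(entry: TokenLogprob, threshold = 0.01): string[] {
  return entry.top_logprobs
    .filter((alt) => toProbability(alt.logprob) >= threshold)
    .map((alt) => alt.token);
}
```

This kind of filtering is useful when deciding whether the model was "sure" about a token or was choosing between several close alternatives.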

### Completion Usage

Token usage statistics.

```typescript { .api }
/**
 * Usage statistics for the completion request
 */
interface CompletionUsage {
  /** Number of tokens in the prompt */
  prompt_tokens: number;

  /** Number of tokens in the completion */
  completion_tokens: number;

  /** Total tokens used */
  total_tokens: number;

  /** Detailed completion token breakdown */
  completion_tokens_details?: {
    /** Tokens used for reasoning */
    reasoning_tokens?: number;

    /** Tokens used for audio */
    audio_tokens?: number;

    /** Accepted prediction tokens */
    accepted_prediction_tokens?: number;

    /** Rejected prediction tokens */
    rejected_prediction_tokens?: number;
  };

  /** Detailed prompt token breakdown */
  prompt_tokens_details?: {
    /** Cached tokens */
    cached_tokens?: number;

    /** Audio tokens */
    audio_tokens?: number;
  };
}
```
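
Because cached prompt tokens are typically billed differently from uncached ones, a usage dashboard usually wants to split them apart. The helper below is a sketch operating on a locally redeclared subset of `CompletionUsage`:

```typescript
// Minimal local shape mirroring the parts of CompletionUsage we need.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

// Split prompt tokens into cached vs. uncached for cost accounting.
// cached_tokens is optional, so default it to zero.
function summarizeUsage(usage: Usage) {
  const cached = usage.prompt_tokens_details?.cached_tokens ?? 0;
  return {
    uncachedPromptTokens: usage.prompt_tokens - cached,
    cachedPromptTokens: cached,
    completionTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
  };
}
```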

### Stored Completion Types

Types for stored completions.

```typescript { .api }
/**
 * Message in a stored completion
 */
interface ChatCompletionStoreMessage extends ChatCompletionMessage {
  /** Message identifier */
  id: string;

  /** Content parts if provided */
  content_parts?: Array<ChatCompletionContentPartText | ChatCompletionContentPartImage> | null;
}

/**
 * Deletion confirmation
 */
interface ChatCompletionDeleted {
  /** ID of deleted completion */
  id: string;

  /** Whether deletion succeeded */
  deleted: boolean;

  /** Object type */
  object: "chat.completion.deleted";
}
```
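
When handling responses of mixed or unknown type, the `object` discriminator makes it easy to narrow a value to a deletion confirmation. The type guard below is a sketch against a locally redeclared shape, not part of the SDK:

```typescript
// Minimal local shape mirroring ChatCompletionDeleted.
interface Deleted {
  id: string;
  deleted: boolean;
  object: "chat.completion.deleted";
}

// Narrow an unknown value to a deletion confirmation using the
// "object" discriminator field.
function isCompletionDeleted(value: unknown): value is Deleted {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as Deleted).object === "chat.completion.deleted" &&
    typeof (value as Deleted).deleted === "boolean"
  );
}
```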

---

## Parameter Types

### Chat Completion Create Parameters

Main parameters for creating a chat completion.

```typescript { .api }
/**
 * Parameters for creating a chat completion
 */
interface ChatCompletionCreateParamsBase {
  /** Array of conversation messages */
  messages: Array<ChatCompletionMessageParam>;

  /** Model ID (e.g., "gpt-4o", "gpt-4o-mini") */
  model: (string & {}) | ChatModel;

  /** Enable streaming */
  stream?: boolean | null;

  /** Output modalities (text, audio) */
  modalities?: Array<"text" | "audio"> | null;

  /** Audio output parameters (required for audio modality) */
  audio?: ChatCompletionAudioParam | null;

  /** Maximum completion tokens */
  max_completion_tokens?: number | null;

  /** @deprecated Use max_completion_tokens */
  max_tokens?: number | null;

  /** Sampling temperature (0-2) */
  temperature?: number | null;

  /** Nucleus sampling parameter */
  top_p?: number | null;

  /** How many completions to generate */
  n?: number | null;

  /** Stop sequences */
  stop?: string | null | Array<string>;

  /** Frequency penalty (-2.0 to 2.0) */
  frequency_penalty?: number | null;

  /** Presence penalty (-2.0 to 2.0) */
  presence_penalty?: number | null;

  /** Token bias adjustments */
  logit_bias?: { [key: string]: number } | null;

  /** Return log probabilities */
  logprobs?: boolean | null;

  /** Number of top log probs (0-20) */
  top_logprobs?: number | null;

  /** List of available tools */
  tools?: Array<ChatCompletionTool>;

  /** Tool choice configuration */
  tool_choice?: ChatCompletionToolChoiceOption;

  /** Enable parallel function calling */
  parallel_tool_calls?: boolean;

  /** Response format configuration */
  response_format?:
    | ResponseFormatText
    | ResponseFormatJSONObject
    | ResponseFormatJSONSchema;

  /** Reasoning effort (for reasoning models) */
  reasoning_effort?: ReasoningEffort | null;

  /** Service tier selection */
  service_tier?: "auto" | "default" | "flex" | "scale" | "priority" | null;

  /** Store completion for later retrieval */
  store?: boolean | null;

  /** Metadata (16 key-value pairs max) */
  metadata?: Metadata | null;

  /** Streaming options */
  stream_options?: ChatCompletionStreamOptions | null;

  /** Prediction content for faster generation */
  prediction?: ChatCompletionPredictionContent | null;

  /** Verbosity level */
  verbosity?: "low" | "medium" | "high" | null;

  /** Web search configuration */
  web_search_options?: WebSearchOptions;

  /** @deprecated Deterministic sampling seed */
  seed?: number | null;

  /** @deprecated Use safety_identifier */
  user?: string;

  /** Safety/abuse detection identifier */
  safety_identifier?: string;

  /** Prompt cache key */
  prompt_cache_key?: string;

  /** Prompt cache retention */
  prompt_cache_retention?: "in-memory" | "24h" | null;

  /** @deprecated Function definitions (use tools) */
  functions?: Array<FunctionDefinition>;

  /** @deprecated Function call control (use tool_choice) */
  function_call?: "none" | "auto" | { name: string };
}

/**
 * Non-streaming parameters
 */
interface ChatCompletionCreateParamsNonStreaming
  extends ChatCompletionCreateParamsBase {
  stream?: false | null;
}

/**
 * Streaming parameters
 */
interface ChatCompletionCreateParamsStreaming
  extends ChatCompletionCreateParamsBase {
  stream: true;
}
```

### Audio Parameters

Audio output configuration.

```typescript { .api }
/**
 * Parameters for audio output
 */
interface ChatCompletionAudioParam {
  /** Audio format */
  format: "wav" | "aac" | "mp3" | "flac" | "opus" | "pcm16";

  /** Voice selection */
  voice:
    | (string & {})
    | "alloy"
    | "ash"
    | "ballad"
    | "coral"
    | "echo"
    | "sage"
    | "shimmer"
    | "verse"
    | "marin"
    | "cedar";
}
```

**Example:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o-audio-preview",
  modalities: ["text", "audio"],
  audio: {
    voice: "alloy",
    format: "mp3",
  },
  messages: [
    { role: "user", content: "Tell me a short story" },
  ],
});

// Access audio data
const audioData = completion.choices[0].message.audio?.data;
const transcript = completion.choices[0].message.audio?.transcript;
```

### Stream Options

Configuration for streaming responses.

```typescript { .api }
/**
 * Options for streaming response
 */
interface ChatCompletionStreamOptions {
  /** Include usage statistics in final chunk */
  include_usage?: boolean;

  /** Enable stream obfuscation for security */
  include_obfuscation?: boolean;
}
```

**Example:**

```typescript
const stream = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
  stream_options: {
    include_usage: true,
  },
});

for await (const chunk of stream) {
  // Last chunk will include usage statistics
  if (chunk.usage) {
    console.log("Total tokens:", chunk.usage.total_tokens);
  }
}
```

### Prediction Content

Predicted output for faster generation.

```typescript { .api }
/**
 * Static predicted output content
 */
interface ChatCompletionPredictionContent {
  /** Prediction type */
  type: "content";

  /** Content to match */
  content: string | Array<ChatCompletionContentPartText>;
}
```

**Example:**

```typescript
// When regenerating a file
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Rewrite this file with better comments",
    },
  ],
  prediction: {
    type: "content",
    content: existingFileContent, // The content being regenerated
  },
});
```

### Web Search Options

Configuration for web search tool.

```typescript { .api }
/**
 * Web search tool configuration
 */
interface WebSearchOptions {
  /** Context window size for search results */
  search_context_size?: "low" | "medium" | "high";

  /** Approximate user location */
  user_location?: {
    type: "approximate";
    approximate: {
      /** City name */
      city?: string;

      /** Two-letter ISO country code */
      country?: string;

      /** Region/state name */
      region?: string;

      /** IANA timezone */
      timezone?: string;
    };
  } | null;
}
```

**Example:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "user", content: "What are the best restaurants near me?" },
  ],
  web_search_options: {
    search_context_size: "high",
    user_location: {
      type: "approximate",
      approximate: {
        city: "San Francisco",
        region: "California",
        country: "US",
        timezone: "America/Los_Angeles",
      },
    },
  },
});
```

### Update Parameters

Parameters for updating stored completions.

```typescript { .api }
/**
 * Parameters for updating a stored completion
 */
interface ChatCompletionUpdateParams {
  /** Metadata to update */
  metadata: Metadata | null;
}
```

### List Parameters

Parameters for listing stored completions.

```typescript { .api }
/**
 * Parameters for listing stored completions
 */
interface ChatCompletionListParams extends CursorPageParams {
  /** Filter by model */
  model?: string;

  /** Filter by metadata */
  metadata?: Metadata | null;

  /** Sort order (asc or desc) */
  order?: "asc" | "desc";

  /** Cursor for pagination */
  after?: string;

  /** Page size limit */
  limit?: number;
}
```

### Message List Parameters

Parameters for listing stored messages.

```typescript { .api }
/**
 * Parameters for listing messages in a stored completion
 */
interface MessageListParams extends CursorPageParams {
  /** Sort order (asc or desc) */
  order?: "asc" | "desc";

  /** Cursor for pagination */
  after?: string;

  /** Page size limit */
  limit?: number;
}
```

---

## Advanced Features

### Streaming with Event Handlers

Fine-grained control over streaming events.

```typescript { .api }
/**
 * Stream events
 */
interface ChatCompletionStreamEvents {
  /** Content delta and snapshot */
  content: (delta: string, snapshot: string) => void;

  /** Raw chunk events */
  chunk: (chunk: ChatCompletionChunk, snapshot: ChatCompletionSnapshot) => void;

  /** Content delta event */
  "content.delta": (props: ContentDeltaEvent) => void;

  /** Content completed */
  "content.done": (props: ContentDoneEvent) => void;

  /** Refusal delta */
  "refusal.delta": (props: RefusalDeltaEvent) => void;

  /** Refusal completed */
  "refusal.done": (props: RefusalDoneEvent) => void;

  /** Tool call arguments delta */
  "tool_calls.function.arguments.delta": (props: FunctionToolCallArgumentsDeltaEvent) => void;

  /** Tool call arguments completed */
  "tool_calls.function.arguments.done": (props: FunctionToolCallArgumentsDoneEvent) => void;

  /** Log probs content delta */
  "logprobs.content.delta": (props: LogProbsContentDeltaEvent) => void;

  /** Log probs content completed */
  "logprobs.content.done": (props: LogProbsContentDoneEvent) => void;

  /** Log probs refusal delta */
  "logprobs.refusal.delta": (props: LogProbsRefusalDeltaEvent) => void;

  /** Log probs refusal completed */
  "logprobs.refusal.done": (props: LogProbsRefusalDoneEvent) => void;
}
```

**Example:**

```typescript
const stream = await client.chat.completions.stream({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a poem" }],
});

// Listen to specific events
stream.on("content.delta", ({ delta, snapshot }) => {
  console.log("Delta:", delta);
  console.log("Full content so far:", snapshot);
});

stream.on("tool_calls.function.arguments.delta", ({ name, arguments_delta }) => {
  console.log(`Tool ${name} args:`, arguments_delta);
});

// Wait for completion
const finalCompletion = await stream.finalChatCompletion();
```

### Vision (Image Inputs)

Process images with vision-capable models.

**Example:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "What's in this image? Please describe in detail.",
        },
        {
          type: "image_url",
          image_url: {
            url: "https://example.com/image.jpg",
            detail: "high", // or "low" for faster processing
          },
        },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);
```

**Multiple Images:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Compare these two images:" },
        { type: "image_url", image_url: { url: imageUrl1 } },
        { type: "image_url", image_url: { url: imageUrl2 } },
      ],
    },
  ],
});
```

### Function Calling

Let the model call functions to get information.

**Example:**

```typescript
const tools = [
  {
    type: "function" as const,
    function: {
      name: "get_current_weather",
      description: "Get the current weather in a given location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "The city and state, e.g. San Francisco, CA",
          },
          unit: {
            type: "string",
            enum: ["celsius", "fahrenheit"],
          },
        },
        required: ["location"],
      },
    },
  },
  {
    type: "function" as const,
    function: {
      name: "get_n_day_weather_forecast",
      description: "Get an N-day weather forecast",
      parameters: {
        type: "object",
        properties: {
          location: { type: "string" },
          format: {
            type: "string",
            enum: ["celsius", "fahrenheit"],
          },
          num_days: {
            type: "integer",
            description: "Number of days to forecast",
          },
        },
        required: ["location", "num_days"],
      },
    },
  },
];

const messages: ChatCompletionMessageParam[] = [
  {
    role: "user",
    content: "What's the weather like in Boston today and the 5 day forecast?",
  },
];

// First API call
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: messages,
  tools: tools,
  tool_choice: "auto",
});

const responseMessage = response.choices[0].message;
messages.push(responseMessage);

// Check if the model wants to call a function
if (responseMessage.tool_calls) {
  // Call each function
  for (const toolCall of responseMessage.tool_calls) {
    if (toolCall.type !== "function") continue; // narrow to function tool calls
    const functionName = toolCall.function.name;
    const functionArgs = JSON.parse(toolCall.function.arguments);

    let functionResponse;
    if (functionName === "get_current_weather") {
      functionResponse = getCurrentWeather(functionArgs.location, functionArgs.unit);
    } else if (functionName === "get_n_day_weather_forecast") {
      functionResponse = getForecast(
        functionArgs.location,
        functionArgs.format,
        functionArgs.num_days
      );
    }

    // Add function response to messages
    messages.push({
      role: "tool",
      tool_call_id: toolCall.id,
      content: JSON.stringify(functionResponse),
    });
  }

  // Get final response from model
  const finalResponse = await client.chat.completions.create({
    model: "gpt-4o",
    messages: messages,
  });

  console.log(finalResponse.choices[0].message.content);
}

function getCurrentWeather(location: string, unit?: string) {
  return {
    location,
    temperature: "72",
    unit: unit || "fahrenheit",
    forecast: ["sunny", "windy"],
  };
}

function getForecast(location: string, format: string, numDays: number) {
  return {
    location,
    format,
    num_days: numDays,
    forecast: ["sunny", "cloudy", "rainy", "sunny", "partly cloudy"],
  };
}
```

### Structured Outputs with JSON Schema

Generate strictly validated JSON responses.

**Example:**

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Extract the event details: Team meeting on Monday at 2pm with Alice and Bob",
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "calendar_event",
      strict: true,
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          date: { type: "string" },
          time: { type: "string" },
          participants: {
            type: "array",
            items: { type: "string" },
          },
        },
        required: ["name", "date", "participants"],
        additionalProperties: false,
      },
    },
  },
});

// content is non-null for a successful structured-output response
const event = JSON.parse(completion.choices[0].message.content!);
console.log(event);
// { name: "Team meeting", date: "Monday", time: "2pm", participants: ["Alice", "Bob"] }
```

### Reasoning Models

Use reasoning models with configurable effort levels.

**Example:**

```typescript
// Using o3 model with reasoning
const completion = await client.chat.completions.create({
  model: "o3",
  messages: [
    {
      role: "user",
      content: "Solve this logic puzzle: If all roses are flowers and some flowers fade quickly, can we conclude that some roses fade quickly?",
    },
  ],
  reasoning_effort: "high", // or "low", "medium"
});

console.log(completion.choices[0].message.content);
console.log("Reasoning tokens:", completion.usage?.completion_tokens_details?.reasoning_tokens);
```

### Audio Generation

Generate audio responses.

**Example:**

```typescript
import fs from "node:fs";

const completion = await client.chat.completions.create({
  model: "gpt-4o-audio-preview",
  modalities: ["text", "audio"],
  audio: {
    voice: "alloy",
    format: "mp3",
  },
  messages: [
    {
      role: "user",
      content: "Tell me a bedtime story about a friendly dragon",
    },
  ],
});

const audioData = completion.choices[0].message.audio?.data;
const transcript = completion.choices[0].message.audio?.transcript;

// Save audio file (audio data is base64-encoded)
if (audioData) {
  const buffer = Buffer.from(audioData, "base64");
  fs.writeFileSync("story.mp3", buffer);
}
```

### Prompt Caching

Optimize costs by caching prompt prefixes.

**Example:**

```typescript
// First request establishes cache
const completion1 = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "developer",
      content: veryLongSystemPrompt, // This will be cached
    },
    { role: "user", content: "First question" },
  ],
  prompt_cache_key: "my-app-v1",
  prompt_cache_retention: "24h", // Keep cached for 24 hours
});

// Subsequent requests reuse cache
const completion2 = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "developer",
      content: veryLongSystemPrompt, // Cache hit!
    },
    { role: "user", content: "Second question" },
  ],
  prompt_cache_key: "my-app-v1",
});
```

---

## Helper Types

### Chat Model Union Type

All available chat models.

```typescript { .api }
/**
 * Chat-capable models
 */
type ChatModel =
  // GPT-5.1 series
  | "gpt-5.1"
  | "gpt-5.1-2025-11-13"
  | "gpt-5.1-codex"
  | "gpt-5.1-mini"
  | "gpt-5.1-chat-latest"
  // GPT-5 series
  | "gpt-5"
  | "gpt-5-mini"
  | "gpt-5-nano"
  | "gpt-5-2025-08-07"
  | "gpt-5-mini-2025-08-07"
  | "gpt-5-nano-2025-08-07"
  | "gpt-5-chat-latest"
  // GPT-4.1 series
  | "gpt-4.1"
  | "gpt-4.1-mini"
  | "gpt-4.1-nano"
  | "gpt-4.1-2025-04-14"
  | "gpt-4.1-mini-2025-04-14"
  | "gpt-4.1-nano-2025-04-14"
  // O-series models
  | "o4-mini"
  | "o4-mini-2025-04-16"
  | "o3"
  | "o3-2025-04-16"
  | "o3-mini"
  | "o3-mini-2025-01-31"
  | "o1"
  | "o1-2024-12-17"
  | "o1-preview"
  | "o1-preview-2024-09-12"
  | "o1-mini"
  | "o1-mini-2024-09-12"
  // GPT-4o series
  | "gpt-4o"
  | "gpt-4o-2024-11-20"
  | "gpt-4o-2024-08-06"
  | "gpt-4o-2024-05-13"
  | "gpt-4o-audio-preview"
  | "gpt-4o-audio-preview-2024-10-01"
  | "gpt-4o-audio-preview-2024-12-17"
  | "gpt-4o-audio-preview-2025-06-03"
  | "gpt-4o-mini-audio-preview"
  | "gpt-4o-mini-audio-preview-2024-12-17"
  | "gpt-4o-search-preview"
  | "gpt-4o-mini-search-preview"
  | "gpt-4o-search-preview-2025-03-11"
  | "gpt-4o-mini-search-preview-2025-03-11"
  | "chatgpt-4o-latest"
  | "codex-mini-latest"
  | "gpt-4o-mini"
  | "gpt-4o-mini-2024-07-18"
  // GPT-4 series
  | "gpt-4-turbo"
  | "gpt-4-turbo-2024-04-09"
  | "gpt-4-0125-preview"
  | "gpt-4-turbo-preview"
  | "gpt-4-1106-preview"
  | "gpt-4-vision-preview"
  | "gpt-4"
  | "gpt-4-0314"
  | "gpt-4-0613"
  | "gpt-4-32k"
  | "gpt-4-32k-0314"
  | "gpt-4-32k-0613"
  // GPT-3.5 series
  | "gpt-3.5-turbo"
  | "gpt-3.5-turbo-16k"
  | "gpt-3.5-turbo-0301"
  | "gpt-3.5-turbo-0613"
  | "gpt-3.5-turbo-1106"
  | "gpt-3.5-turbo-0125"
  | "gpt-3.5-turbo-16k-0613"
  // Custom/fine-tuned models
  | (string & {});
```

### Reasoning Effort

Effort levels for reasoning models.

```typescript { .api }
/**
 * Reasoning effort configuration
 */
type ReasoningEffort = "none" | "minimal" | "low" | "medium" | "high" | null;
```

### Chat Completion Role

All valid message roles.

```typescript { .api }
/**
 * Role of message author
 */
type ChatCompletionRole = "developer" | "system" | "user" | "assistant" | "tool" | "function";
```

### Modality

Output modalities.

```typescript { .api }
/**
 * Output modality types
 */
type ChatCompletionModality = "text" | "audio";
```

### Metadata

Key-value metadata storage.

```typescript { .api }
/**
 * Metadata for storing additional information
 * Maximum 16 key-value pairs
 * Keys: max 64 characters
 * Values: max 512 characters
 */
type Metadata = { [key: string]: string };
```
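
The limits above are enforced server-side, but validating them client-side gives clearer error messages before a request is sent. The validator below is a sketch, not part of the SDK:

```typescript
// Check the documented Metadata limits: at most 16 pairs,
// keys up to 64 characters, values up to 512 characters.
// Returns a list of human-readable problems (empty = valid).
function validateMetadata(metadata: { [key: string]: string }): string[] {
  const errors: string[] = [];
  const entries = Object.entries(metadata);
  if (entries.length > 16) {
    errors.push(`too many pairs: ${entries.length} (max 16)`);
  }
  for (const [key, value] of entries) {
    if (key.length > 64) errors.push(`key too long: "${key}" (max 64 chars)`);
    if (value.length > 512) errors.push(`value too long for key "${key}" (max 512 chars)`);
  }
  return errors;
}
```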

### Response Format Types

Response format options.

```typescript { .api }
/**
 * Text response format
 */
interface ResponseFormatText {
  type: "text";
}

/**
 * JSON object response format
 */
interface ResponseFormatJSONObject {
  type: "json_object";
}

/**
 * JSON Schema response format (Structured Outputs)
 */
interface ResponseFormatJSONSchema {
  type: "json_schema";
  json_schema: {
    name: string;
    description?: string;
    schema?: { [key: string]: unknown };
    strict?: boolean | null;
  };
}

/**
 * Grammar-constrained text
 */
interface ResponseFormatTextGrammar {
  type: "grammar";
  grammar: string;
}

/**
 * Python code generation
 */
interface ResponseFormatTextPython {
  type: "python";
}
```

---

## Pagination Types

### Chat Completions Page

Cursor-based pagination for stored completions.

```typescript { .api }
/**
 * Paginated list of chat completions
 */
type ChatCompletionsPage = CursorPage<ChatCompletion>;

/**
 * Cursor page parameters
 */
interface CursorPageParams {
  /** Cursor for next page */
  after?: string;

  /** Page size limit */
  limit?: number;
}
```

**Example:**

```typescript
// Manual pagination
let page = await client.chat.completions.list({ limit: 10 });

while (page.hasNextPage()) {
  for (const completion of page.data) {
    console.log(completion.id);
  }
  page = await page.getNextPage();
}

// Automatic pagination
for await (const completion of client.chat.completions.list()) {
  console.log(completion.id);
}
```

### Chat Completion Store Messages Page

Cursor-based pagination for stored messages.

```typescript { .api }
/**
 * Paginated list of stored messages
 */
type ChatCompletionStoreMessagesPage = CursorPage<ChatCompletionStoreMessage>;
```

---

## Parsed Completion Types

Types for parsed completions with automatic validation.

```typescript { .api }
/**
 * Parsed completion with validated content
 */
interface ParsedChatCompletion<ParsedT> extends ChatCompletion {
  choices: Array<ParsedChoice<ParsedT>>;
}

/**
 * Parsed choice
 */
interface ParsedChoice<ParsedT> extends ChatCompletion.Choice {
  message: ParsedChatCompletionMessage<ParsedT>;
}

/**
 * Parsed message
 */
interface ParsedChatCompletionMessage<ParsedT> extends ChatCompletionMessage {
  /** Parsed and validated content */
  parsed: ParsedT | null;

  /** Parsed tool calls */
  tool_calls?: Array<ParsedFunctionToolCall>;
}

/**
 * Parsed function tool call
 */
interface ParsedFunctionToolCall extends ChatCompletionMessageFunctionToolCall {
  function: ParsedFunction;
}

/**
 * Parsed function with arguments
 */
interface ParsedFunction extends ChatCompletionMessageFunctionToolCall.Function {
  /** Parsed arguments object */
  parsed_arguments?: unknown;
}
```
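
Since `parsed` is `null` when the model refuses or nothing validated, callers usually want a small accessor that either returns the payload or fails loudly. The helper below is a sketch over locally redeclared minimal shapes of these types:

```typescript
// Minimal local shapes mirroring the parsed-completion types above,
// redeclared so the sketch compiles on its own.
interface ParsedMessage<T> {
  parsed: T | null;
  refusal?: string | null;
}
interface ParsedCompletion<T> {
  choices: Array<{ message: ParsedMessage<T> }>;
}

// Extract the validated payload from the first choice, throwing if
// the model refused or no parsed content is present.
function firstParsed<T>(completion: ParsedCompletion<T>): T {
  const message = completion.choices[0]?.message;
  if (!message || message.parsed === null) {
    throw new Error(message?.refusal ?? "no parsed content");
  }
  return message.parsed;
}
```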

---

## Error Handling

Common errors and how to handle them.

**Example:**

```typescript
import {
  APIError,
  RateLimitError,
  APIConnectionError,
  AuthenticationError,
} from "openai";

try {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  });
} catch (error) {
  if (error instanceof RateLimitError) {
    console.error("Rate limit exceeded:", error.message);
    // Wait and retry
  } else if (error instanceof AuthenticationError) {
    console.error("Invalid API key:", error.message);
  } else if (error instanceof APIConnectionError) {
    console.error("Connection failed:", error.message);
    // Retry logic
  } else if (error instanceof APIError) {
    console.error("API error:", error.status, error.message);
  } else {
    console.error("Unexpected error:", error);
  }
}
```

### Content Filter Errors

Handle content filtering during streaming.

**Example:**

```typescript
import { ContentFilterFinishReasonError } from "openai";

try {
  const stream = await client.chat.completions.stream({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello" }],
  });

  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
} catch (error) {
  if (error instanceof ContentFilterFinishReasonError) {
    console.error("Content was filtered:", error.message);
  }
}
```

---

## Request Options

All methods accept optional request configuration.

```typescript { .api }
/**
 * Request configuration options
 */
interface RequestOptions {
  /** Custom headers */
  headers?: HeadersLike;

  /** Maximum number of retries */
  maxRetries?: number;

  /** Request timeout in milliseconds */
  timeout?: number;

  /** Query parameters */
  query?: Record<string, unknown>;

  /** Abort signal for cancellation */
  signal?: AbortSignal;
}
```

2502

2503

**Example:**

2504

2505

```typescript

2506

const completion = await client.chat.completions.create(

2507

{

2508

model: "gpt-4o",

2509

messages: [{ role: "user", content: "Hello" }],

2510

},

2511

{

2512

timeout: 30000, // 30 second timeout

2513

maxRetries: 3,

2514

headers: {

2515

"X-Custom-Header": "value",

2516

},

2517

}

2518

);

2519

2520

// With abort signal

2521

const controller = new AbortController();

2522

setTimeout(() => controller.abort(), 5000); // Cancel after 5s

2523

2524

try {

2525

const completion = await client.chat.completions.create(

2526

{

2527

model: "gpt-4o",

2528

messages: [{ role: "user", content: "Hello" }],

2529

},

2530

{

2531

signal: controller.signal,

2532

}

2533

);

2534

} catch (error) {

2535

if (error.name === "AbortError") {

2536

console.log("Request cancelled");

2537

}

2538

}

2539

```

---

## Complete Usage Examples

### Basic Conversation

```typescript
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const messages: ChatCompletionMessageParam[] = [];

// Add developer (system) instructions
messages.push({
  role: "developer",
  content: "You are a helpful assistant that writes haikus.",
});

// Add user message
messages.push({
  role: "user",
  content: "Write a haiku about programming",
});

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: messages,
});

// Add assistant response to history
messages.push(completion.choices[0].message);

console.log(completion.choices[0].message.content);
```

### Streaming with Progress

```typescript
const stream = await client.chat.completions.stream({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Write a detailed explanation of recursion with examples",
    },
  ],
  stream_options: {
    include_usage: true,
  },
});

let fullContent = "";

stream.on("content", (delta, snapshot) => {
  fullContent = snapshot;
  process.stdout.write(delta);
});

stream.on("chunk", (chunk) => {
  if (chunk.usage) {
    console.log("\nToken usage:", chunk.usage);
  }
});

await stream.done();
console.log("\n\nFinal content length:", fullContent.length);
```

### Multi-turn Conversation with Context

```typescript
async function chat(userMessage: string, history: ChatCompletionMessageParam[]) {
  history.push({
    role: "user",
    content: userMessage,
  });

  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: history,
    max_completion_tokens: 500,
  });

  const assistantMessage = completion.choices[0].message;
  history.push(assistantMessage);

  return {
    response: assistantMessage.content,
    history,
    usage: completion.usage,
  };
}

// Usage
const history: ChatCompletionMessageParam[] = [
  {
    role: "developer",
    content: "You are a coding tutor helping students learn Python.",
  },
];

const result1 = await chat("What is a list comprehension?", history);
console.log(result1.response);

const result2 = await chat("Can you show me an example?", history);
console.log(result2.response);

console.log(
  "Total tokens used:",
  (result1.usage?.total_tokens ?? 0) + (result2.usage?.total_tokens ?? 0)
);
```

### Image Analysis

```typescript
import fs from "fs";

// Read image file
const imageBuffer = fs.readFileSync("./photo.jpg");
const base64Image = imageBuffer.toString("base64");

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: [
        {
          type: "text",
          text: "Analyze this image and describe what you see in detail. Also identify any text in the image.",
        },
        {
          type: "image_url",
          image_url: {
            url: `data:image/jpeg;base64,${base64Image}`,
            detail: "high",
          },
        },
      ],
    },
  ],
  max_completion_tokens: 1000,
});

console.log(completion.choices[0].message.content);
```

### Function Calling with Error Handling

```typescript
async function runWithTools() {
  const tools: ChatCompletionTool[] = [
    {
      type: "function",
      function: {
        name: "calculate",
        description: "Perform a mathematical calculation",
        parameters: {
          type: "object",
          properties: {
            expression: {
              type: "string",
              description: "Mathematical expression to evaluate",
            },
          },
          required: ["expression"],
        },
      },
    },
  ];

  const messages: ChatCompletionMessageParam[] = [
    {
      role: "user",
      content: "What is 15% of 280 plus 42?",
    },
  ];

  // First call
  let response = await client.chat.completions.create({
    model: "gpt-4o",
    messages,
    tools,
  });

  let responseMessage = response.choices[0].message;
  messages.push(responseMessage);

  // Process tool calls
  while (responseMessage.tool_calls) {
    for (const toolCall of responseMessage.tool_calls) {
      if (toolCall.type === "function") {
        const functionName = toolCall.function.name;
        const args = JSON.parse(toolCall.function.arguments);

        let result;
        try {
          if (functionName === "calculate") {
            // WARNING: eval is unsafe for untrusted input; use a real
            // expression parser in production
            result = eval(args.expression);
          }
        } catch (error) {
          result = { error: "Invalid expression" };
        }

        messages.push({
          role: "tool",
          tool_call_id: toolCall.id,
          content: JSON.stringify(result),
        });
      }
    }

    // Get next response
    response = await client.chat.completions.create({
      model: "gpt-4o",
      messages,
      tools,
    });

    responseMessage = response.choices[0].message;
    messages.push(responseMessage);
  }

  return responseMessage.content;
}

const answer = await runWithTools();
console.log(answer);
```
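The `eval` call in the example above is only a stand-in. For expressions produced by a model, a small parser avoids executing arbitrary code. Here is a sketch of a recursive-descent evaluator for `+`, `-`, `*`, `/`, parentheses, and unary minus (the `evaluate` name is illustrative):

```typescript
// Evaluate a basic arithmetic expression without eval().
function evaluate(expr: string): number {
  const input = expr.replace(/\s+/g, "");
  let pos = 0;

  function parseExpr(): number {
    let value = parseTerm();
    while (input[pos] === "+" || input[pos] === "-") {
      const op = input[pos++];
      const rhs = parseTerm();
      value = op === "+" ? value + rhs : value - rhs;
    }
    return value;
  }

  function parseTerm(): number {
    let value = parseFactor();
    while (input[pos] === "*" || input[pos] === "/") {
      const op = input[pos++];
      const rhs = parseFactor();
      value = op === "*" ? value * rhs : value / rhs;
    }
    return value;
  }

  function parseFactor(): number {
    if (input[pos] === "(") {
      pos++; // consume "("
      const value = parseExpr();
      if (input[pos++] !== ")") throw new Error("Expected closing parenthesis");
      return value;
    }
    if (input[pos] === "-") {
      pos++; // unary minus
      return -parseFactor();
    }
    const match = /^\d+(\.\d+)?/.exec(input.slice(pos));
    if (!match) throw new Error("Invalid expression");
    pos += match[0].length;
    return parseFloat(match[0]);
  }

  const result = parseExpr();
  if (pos !== input.length) throw new Error("Unexpected trailing input");
  return result;
}
```

Inside the tool loop, `result = evaluate(args.expression)` would then replace the `eval` call.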

### Structured Output for Data Extraction

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: `Extract the following information:

John Smith, age 32, works as a Software Engineer at TechCorp.
He has 8 years of experience and specializes in backend development.
Contact: john.smith@example.com, +1-555-0123`,
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "person_info",
      strict: true,
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          age: { type: "number" },
          job_title: { type: "string" },
          company: { type: "string" },
          years_experience: { type: "number" },
          // strict mode requires every property to be listed in `required`;
          // optional fields are expressed as nullable types instead
          specialization: { type: ["string", "null"] },
          email: { type: "string" },
          phone: { type: ["string", "null"] },
        },
        required: [
          "name",
          "age",
          "job_title",
          "company",
          "years_experience",
          "specialization",
          "email",
          "phone",
        ],
        additionalProperties: false,
      },
    },
  },
});

const personInfo = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(personInfo);
```
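`JSON.parse` returns an untyped value, so even with `strict: true` it can help to validate the shape at runtime before using it. A minimal hand-written guard (the `PersonInfo` name and field list mirror the schema above and are illustrative; a schema library like Zod scales better, see the Zod helpers doc):

```typescript
interface PersonInfo {
  name: string;
  age: number;
  job_title: string;
  company: string;
  years_experience: number;
  email: string;
}

// Runtime type guard for the extracted object.
function isPersonInfo(value: unknown): value is PersonInfo {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === "string" &&
    typeof v.age === "number" &&
    typeof v.job_title === "string" &&
    typeof v.company === "string" &&
    typeof v.years_experience === "number" &&
    typeof v.email === "string"
  );
}

// Usage:
// if (isPersonInfo(personInfo)) {
//   console.log(`${personInfo.name} (${personInfo.email})`);
// }
```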

---

## Best Practices

### Token Management

```typescript
// Estimate costs before making a request
function estimateTokens(text: string): number {
  // Rough estimate: 1 token ≈ 4 characters
  return Math.ceil(text.length / 4);
}

const userMessage = "Long user message...";
const estimatedInputTokens = estimateTokens(userMessage);
console.log(`Estimated input tokens: ${estimatedInputTokens}`);

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userMessage }],
  max_completion_tokens: 500, // Limit output
});

console.log("Actual usage:", completion.usage);
```
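The same heuristic can cap conversation length before each request. A sketch that keeps a leading developer message plus as many recent turns as fit the budget (`trimHistory` is an illustrative helper, not an SDK function):

```typescript
interface SimpleMessage {
  role: "developer" | "user" | "assistant";
  content: string;
}

// Rough estimate: 1 token ≈ 4 characters.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep the developer message (if first), then the newest turns that fit.
function trimHistory(messages: SimpleMessage[], maxTokens: number): SimpleMessage[] {
  const keepHead = messages.length > 0 && messages[0].role === "developer";
  const head = keepHead ? [messages[0]] : [];
  const rest = keepHead ? messages.slice(1) : messages.slice();

  let budget = maxTokens - head.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: SimpleMessage[] = [];
  // Walk from newest to oldest, keeping turns while they fit.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(rest[i]);
  }
  return [...head, ...kept];
}
```

Real token counts vary by model; a tokenizer such as `tiktoken` gives exact numbers when the budget matters.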

### Safety and Content Filtering

```typescript
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: userInput }],
  safety_identifier: hashUserId(userId), // Track abuse
});

// Check for refusal
if (completion.choices[0].message.refusal) {
  console.log("Model refused:", completion.choices[0].message.refusal);
  // Handle refusal appropriately
}

// Check finish reason
if (completion.choices[0].finish_reason === "content_filter") {
  console.log("Content was filtered");
  // Handle content filter
}
```

### Caching for Cost Optimization

```typescript
// Cache long system prompts
const systemPrompt = fs.readFileSync("./long-prompt.txt", "utf-8");

const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "developer", content: systemPrompt }, // Cached
    { role: "user", content: userQuestion },
  ],
  prompt_cache_key: `system-v${PROMPT_VERSION}`,
  prompt_cache_retention: "24h",
});

// Check cache hit
if (completion.usage?.prompt_tokens_details?.cached_tokens) {
  console.log("Cache hit! Saved tokens:", completion.usage.prompt_tokens_details.cached_tokens);
}
```

### Streaming with Timeout

```typescript
async function streamWithTimeout(timeoutMs: number) {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const stream = await client.chat.completions.stream(
      {
        model: "gpt-4o",
        messages: [{ role: "user", content: "Write a long essay" }],
      },
      {
        signal: controller.signal,
      }
    );

    for await (const chunk of stream) {
      process.stdout.write(chunk.choices[0]?.delta?.content || "");
    }
  } finally {
    clearTimeout(timeout);
  }
}
```

---

## Migration Notes

### From Legacy Completions API

```typescript
// OLD: Completions API
const completion = await client.completions.create({
  model: "text-davinci-003",
  prompt: "Say hello",
  max_tokens: 100,
});

// NEW: Chat Completions API
const chatCompletion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Say hello" }],
  max_completion_tokens: 100,
});
```

### From Function Calling to Tools

```typescript
// OLD: function_call parameter
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather?" }],
  functions: [
    {
      name: "get_weather",
      description: "Get weather",
      parameters: { type: "object", properties: {} },
    },
  ],
  function_call: "auto",
});

// NEW: tools parameter
const toolCompletion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What's the weather?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get weather",
        parameters: { type: "object", properties: {} },
      },
    },
  ],
  tool_choice: "auto",
});
```

### System to Developer Role

```typescript
// OLD: system role (still works for older models)
const completion = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello" },
  ],
});

// NEW: developer role (recommended for o1 and newer)
const devCompletion = await client.chat.completions.create({
  model: "gpt-5.1",
  messages: [
    { role: "developer", content: "You are a helpful assistant" },
    { role: "user", content: "Hello" },
  ],
});
```

---

## Related APIs

- **Responses API**: Recommended for new projects with additional features
- **Assistants API**: For stateful conversations with built-in memory
- **Completions API**: Legacy text completion endpoint (deprecated)
- **Embeddings API**: Generate text embeddings for semantic search
- **Fine-tuning API**: Customize models with your own data