# OpenAI Python Library

The official Python client library for the OpenAI API, providing comprehensive access to OpenAI's suite of AI models including GPT-4, DALL-E, Whisper, and Embeddings. The library features both synchronous and asynchronous implementations, complete type definitions, streaming support, structured output parsing, and integration with OpenAI's latest features including the Assistants API, Realtime API, and advanced capabilities like function calling and vision inputs.

## Package Information

- **Package Name**: openai
- **Language**: Python
- **Installation**: `pip install openai`
- **Python Version**: 3.9+
- **Official Documentation**: https://platform.openai.com/docs/api-reference

## Core Imports

```python
from openai import OpenAI, AsyncOpenAI
```

For Azure OpenAI:

```python
from openai import AzureOpenAI, AsyncAzureOpenAI
```

Common types and utilities:

```python
from typing import Callable, Iterable, Awaitable, Mapping, Literal, AsyncGenerator
import httpx
from openai import (
    Stream, AsyncStream, Client, AsyncClient,
    NOT_GIVEN, NotGiven, not_given, FileTypes, Omit, omit,
    AssistantEventHandler, AsyncAssistantEventHandler,
    HttpxBinaryResponseContent, RequestOptions, Timeout,
    APIResponse, AsyncAPIResponse, WebsocketConnectionOptions
)
from openai.types.chat import (
    ChatCompletion,
    ChatCompletionMessage,
    ChatCompletionMessageParam,
    ChatCompletionToolUnionParam,
    ChatCompletionToolChoiceOptionParam,
    completion_create_params,
)
from openai.pagination import (
    SyncPage, AsyncPage,
    SyncCursorPage, AsyncCursorPage,
    SyncConversationCursorPage, AsyncConversationCursorPage
)

# Helpers for audio recording and playback
from openai.helpers import Microphone, LocalAudioPlayer

# Access all type definitions
from openai import types
# Examples: types.ChatCompletion, types.Embedding, types.FileObject
# Use types.chat.ChatCompletionMessageParam for nested types
```

## Basic Usage

```python
from openai import OpenAI

# Initialize the client
client = OpenAI(api_key="your-api-key")

# Create a chat completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)

# Async usage
import asyncio
from openai import AsyncOpenAI

async def main():
    client = AsyncOpenAI(api_key="your-api-key")
    response = await client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```

## Module-Level Configuration

The library supports a legacy module-level configuration pattern for backward compatibility. You can configure the default client by setting module-level variables:

```python
import openai

# API Configuration
openai.api_key = "your-api-key"
openai.organization = "org-xxx"
openai.project = "proj-xxx"
openai.webhook_secret = "whsec_xxx"

# Network Configuration
openai.base_url = "https://api.openai.com/v1"
openai.timeout = 60.0  # seconds
openai.max_retries = 3
openai.default_headers = {"Custom-Header": "value"}
openai.default_query = {"custom_param": "value"}
openai.http_client = custom_httpx_client

# Azure OpenAI Configuration
openai.api_type = "azure"  # "openai" or "azure"
openai.api_version = "2024-02-01"
openai.azure_endpoint = "https://your-resource.openai.azure.com"
openai.azure_ad_token = "your-ad-token"
openai.azure_ad_token_provider = lambda: get_token()

# After setting these, you can use module-level methods:
# (This pattern is deprecated; prefer explicit client instantiation)
```

**Note**: The module-level configuration pattern is legacy. For new code, prefer explicit client instantiation:

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    organization="org-xxx",
    timeout=60.0
)
```

## Architecture

The OpenAI Python library is structured around a client-resource architecture:

- **Clients**: `OpenAI`, `AsyncOpenAI`, `AzureOpenAI`, `AsyncAzureOpenAI` - Entry points for API access with configuration
- **Resources**: Organized API endpoints (chat, audio, images, files, etc.) accessible as client attributes
- **Types**: Comprehensive Pydantic models for all requests and responses with full type safety
- **Streaming**: First-class streaming support via `Stream` and `AsyncStream` classes
- **Error Handling**: Structured exception hierarchy for different error types

The library provides both synchronous and asynchronous implementations for all operations, enabling integration into any Python application architecture.

## Capabilities

### Client Initialization

Initialize OpenAI clients with API credentials and configuration options for both OpenAI and Azure OpenAI services.

```python { .api }
class OpenAI:
    """Synchronous client for OpenAI API."""
    def __init__(
        self,
        *,
        api_key: str | None | Callable[[], str] = None,
        organization: str | None = None,
        project: str | None = None,
        webhook_secret: str | None = None,
        base_url: str | None = None,
        websocket_base_url: str | httpx.URL | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
        max_retries: int = 2,
        default_headers: Mapping[str, str] | None = None,
        default_query: Mapping[str, object] | None = None,
        http_client: httpx.Client | None = None,
    ): ...

class AsyncOpenAI:
    """Asynchronous client for OpenAI API."""
    def __init__(
        self,
        *,
        api_key: str | None | Callable[[], Awaitable[str]] = None,
        organization: str | None = None,
        project: str | None = None,
        webhook_secret: str | None = None,
        base_url: str | None = None,
        websocket_base_url: str | httpx.URL | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
        max_retries: int = 2,
        default_headers: Mapping[str, str] | None = None,
        default_query: Mapping[str, object] | None = None,
        http_client: httpx.AsyncClient | None = None,
    ): ...

class AzureOpenAI:
    """Synchronous client for Azure OpenAI Service."""
    def __init__(
        self,
        *,
        api_version: str | None = None,
        azure_endpoint: str | None = None,
        azure_deployment: str | None = None,
        api_key: str | None | Callable[[], str] = None,
        azure_ad_token: str | None = None,
        azure_ad_token_provider: Callable[[], str] | None = None,
        organization: str | None = None,
        project: str | None = None,
        webhook_secret: str | None = None,
        base_url: str | None = None,
        websocket_base_url: str | httpx.URL | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
        max_retries: int = 2,
        default_headers: Mapping[str, str] | None = None,
        default_query: Mapping[str, object] | None = None,
        http_client: httpx.Client | None = None,
    ): ...

class AsyncAzureOpenAI:
    """Asynchronous client for Azure OpenAI Service."""
    def __init__(
        self,
        *,
        api_version: str | None = None,
        azure_endpoint: str | None = None,
        azure_deployment: str | None = None,
        api_key: str | None | Callable[[], Awaitable[str]] = None,
        azure_ad_token: str | None = None,
        azure_ad_token_provider: Callable[[], Awaitable[str] | str] | None = None,
        organization: str | None = None,
        project: str | None = None,
        webhook_secret: str | None = None,
        base_url: str | None = None,
        websocket_base_url: str | httpx.URL | None = None,
        timeout: float | httpx.Timeout | None | NotGiven = not_given,
        max_retries: int = 2,
        default_headers: Mapping[str, str] | None = None,
        default_query: Mapping[str, object] | None = None,
        http_client: httpx.AsyncClient | None = None,
    ): ...

# Aliases for convenience
Client = OpenAI
AsyncClient = AsyncOpenAI
```

[Client Initialization](./client-initialization.md)

### Chat Completions

Create conversational responses using OpenAI's language models with support for text, function calling, vision inputs, audio, and structured output parsing.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: str | ChatModel,
    audio: dict | Omit = omit,
    frequency_penalty: float | Omit = omit,
    function_call: str | dict | Omit = omit,
    functions: Iterable[dict] | Omit = omit,
    logit_bias: dict[str, int] | Omit = omit,
    logprobs: bool | Omit = omit,
    top_logprobs: int | Omit = omit,
    max_completion_tokens: int | Omit = omit,
    max_tokens: int | Omit = omit,
    metadata: dict[str, str] | Omit = omit,
    modalities: list[Literal["text", "audio"]] | Omit = omit,
    n: int | Omit = omit,
    parallel_tool_calls: bool | Omit = omit,
    prediction: dict | Omit = omit,
    presence_penalty: float | Omit = omit,
    prompt_cache_key: str | Omit = omit,
    prompt_cache_retention: Literal["in-memory", "24h"] | Omit = omit,
    reasoning_effort: str | Omit = omit,
    response_format: completion_create_params.ResponseFormat | Omit = omit,
    safety_identifier: str | Omit = omit,
    seed: int | Omit = omit,
    service_tier: Literal["auto", "default", "flex", "scale", "priority"] | Omit = omit,
    stop: str | list[str] | Omit = omit,
    store: bool | Omit = omit,
    stream: bool | Omit = omit,
    stream_options: dict | Omit = omit,
    temperature: float | Omit = omit,
    tool_choice: ChatCompletionToolChoiceOptionParam | Omit = omit,
    tools: Iterable[ChatCompletionToolUnionParam] | Omit = omit,
    top_p: float | Omit = omit,
    user: str | Omit = omit,
    verbosity: Literal["low", "medium", "high"] | Omit = omit,
    web_search_options: dict | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> ChatCompletion | Stream[ChatCompletionChunk]: ...

def parse(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: str | ChatModel,
    response_format: Type[BaseModel],
    **kwargs
) -> ParsedChatCompletion: ...
```

[Chat Completions](./chat-completions.md)
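
A sketch of structured output parsing with `parse`, where a Pydantic model doubles as the response schema. The model name is a placeholder, and actually running `extract_event` requires a valid `OPENAI_API_KEY`:

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    """Target schema for structured output parsing."""
    name: str
    date: str

def extract_event(text: str) -> CalendarEvent:
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    completion = client.chat.completions.parse(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": text}],
        response_format=CalendarEvent,
    )
    return completion.choices[0].message.parsed
```

`extract_event("Science fair on Friday")` would return a validated `CalendarEvent` instance via `message.parsed`, rather than raw JSON text.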

305

306

### Text Completions

307

308

Generate text completions using legacy completion models.

309

310

```python { .api }

311

def create(

312

self,

313

*,

314

model: str,

315

prompt: str | list[str] | list[int] | list[list[int]],

316

best_of: int | None = None,

317

echo: bool | None = None,

318

frequency_penalty: float | None = None,

319

logit_bias: dict[str, int] | None = None,

320

logprobs: int | None = None,

321

max_tokens: int | None = None,

322

n: int | None = None,

323

presence_penalty: float | None = None,

324

seed: int | None = None,

325

stop: str | list[str] | None = None,

326

stream_options: Optional[ChatCompletionStreamOptionsParam] | Omit = omit,

327

stream: bool | None = None,

328

suffix: str | None = None,

329

temperature: float | None = None,

330

top_p: float | None = None,

331

user: str | None = None,

332

extra_headers: dict[str, str] | None = None,

333

extra_query: dict[str, object] | None = None,

334

extra_body: dict[str, object] | None = None,

335

timeout: float | None = None,

336

) -> Completion | Stream[Completion]: ...

337

```

338

339

[Text Completions](./completions.md)

### Embeddings

Create vector embeddings for text inputs to use in semantic search, clustering, and other ML applications.

```python { .api }
def create(
    self,
    *,
    input: str | list[str] | list[int] | list[list[int]],
    model: str | EmbeddingModel,
    dimensions: int | Omit = omit,
    encoding_format: Literal["float", "base64"] | Omit = omit,
    user: str | Omit = omit,
    extra_headers: Mapping[str, str] | None = None,
    extra_query: Mapping[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = not_given,
) -> CreateEmbeddingResponse: ...
```

[Embeddings](./embeddings.md)
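
A semantic-search sketch built on `create`. The model name and `dimensions` value are placeholder choices, and `embed` assumes a valid `OPENAI_API_KEY`; `cosine_similarity` is a plain-Python helper showing the comparison step:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def embed(texts: list[str]) -> list[list[float]]:
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    response = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder model name
        input=texts,
        dimensions=256,
    )
    return [item.embedding for item in response.data]
```

With credentials in place, `cosine_similarity(*embed(["cat", "kitten"]))` would score the semantic closeness of the two inputs.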

### Audio

Convert audio to text (transcription and translation) and text to speech using Whisper and TTS models.

```python { .api }
# Transcription
def create(
    self,
    *,
    file: FileTypes,
    model: str | AudioModel,
    chunking_strategy: Optional[dict] | Omit = omit,
    include: list[str] | Omit = omit,
    language: str | None = None,
    prompt: str | None = None,
    response_format: str | None = None,
    stream: bool | Omit = omit,
    temperature: float | None = None,
    timestamp_granularities: list[str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Transcription | TranscriptionVerbose | TranscriptionDiarized: ...

# Translation
def create(
    self,
    *,
    file: FileTypes,
    model: str | AudioModel,
    prompt: str | None = None,
    response_format: str | None = None,
    temperature: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Translation | TranslationVerbose: ...

# Text-to-Speech
def create(
    self,
    *,
    input: str,
    model: str | SpeechModel,
    voice: str,
    instructions: str | Omit = omit,
    response_format: str | None = None,
    speed: float | None = None,
    stream_format: Literal["sse", "audio"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> HttpxBinaryResponseContent: ...
```

[Audio](./audio.md)
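
A round-trip sketch of the two directions. Both helpers assume a valid `OPENAI_API_KEY`; the model and voice names are placeholder choices:

```python
def transcribe(path: str) -> str:
    """Speech-to-text using a transcription model (whisper-1 here)."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    with open(path, "rb") as audio_file:
        transcription = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
        )
    return transcription.text

def synthesize(text: str, out_path: str = "speech.mp3") -> None:
    """Text-to-speech; model and voice are placeholder choices."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    response = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    with open(out_path, "wb") as f:
        f.write(response.content)  # HttpxBinaryResponseContent exposes raw bytes
```

Note the transcription endpoint takes an open binary file handle (any `FileTypes` value), while speech synthesis returns binary content you write to disk yourself.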

### Images

Generate, edit, and create variations of images using DALL-E models.

```python { .api }
def generate(
    self,
    *,
    prompt: str,
    background: Literal["transparent", "opaque", "auto"] | None | Omit = omit,
    model: str | ImageModel | None = None,
    moderation: Literal["low", "auto"] | None | Omit = omit,
    n: int | None = None,
    output_compression: int | None | Omit = omit,
    output_format: Literal["png", "jpeg", "webp"] | None | Omit = omit,
    partial_images: int | None | Omit = omit,
    quality: str | None = None,
    response_format: str | None = None,
    size: str | None = None,
    stream: bool | None | Omit = omit,
    style: str | None = None,
    user: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ImagesResponse: ...

def edit(
    self,
    *,
    image: FileTypes | list[FileTypes],
    prompt: str,
    background: Literal["transparent", "opaque", "auto"] | None | Omit = omit,
    input_fidelity: Literal["high", "low"] | None | Omit = omit,
    mask: FileTypes | None = None,
    model: str | ImageModel | None = None,
    n: int | None = None,
    output_compression: int | None | Omit = omit,
    output_format: Literal["png", "jpeg", "webp"] | None | Omit = omit,
    partial_images: int | None | Omit = omit,
    quality: Literal["standard", "low", "medium", "high", "auto"] | None | Omit = omit,
    response_format: str | None = None,
    size: str | None = None,
    stream: bool | None | Omit = omit,
    user: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ImagesResponse: ...

def create_variation(
    self,
    *,
    image: FileTypes,
    model: str | ImageModel | None = None,
    n: int | None = None,
    response_format: str | None = None,
    size: str | None = None,
    user: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ImagesResponse: ...
```

[Images](./images.md)
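
A generation sketch saving the result locally. The model name and size are placeholder choices, and `generate_png` assumes a valid `OPENAI_API_KEY`; `decode_b64_image` shows the decoding step used when `response_format="b64_json"`:

```python
import base64

def decode_b64_image(b64_json: str) -> bytes:
    """Decode the base64 payload returned when response_format='b64_json'."""
    return base64.b64decode(b64_json)

def generate_png(prompt: str, out_path: str = "image.png") -> None:
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    result = client.images.generate(
        model="dall-e-3",  # placeholder model name
        prompt=prompt,
        size="1024x1024",
        response_format="b64_json",
        n=1,
    )
    with open(out_path, "wb") as f:
        f.write(decode_b64_image(result.data[0].b64_json))
```

The alternative `response_format="url"` returns hosted URLs in `result.data[0].url` instead of inline base64 data.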

### Videos

Generate and manipulate videos using video generation models.

```python { .api }
def create(
    self,
    *,
    prompt: str,
    input_reference: FileTypes | Omit = omit,
    model: VideoModel | Omit = omit,
    seconds: VideoSeconds | Omit = omit,
    size: VideoSize | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Video: ...

def create_and_poll(
    self,
    *,
    prompt: str,
    input_reference: FileTypes | Omit = omit,
    model: VideoModel | Omit = omit,
    seconds: VideoSeconds | Omit = omit,
    size: VideoSize | Omit = omit,
    poll_interval_ms: int | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Video: ...

def poll(
    self,
    video_id: str,
    *,
    poll_interval_ms: int | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Video: ...

def retrieve(
    self,
    video_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Video: ...

def list(
    self,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[Video]: ...

def delete(
    self,
    video_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VideoDeleteResponse: ...

def download_content(
    self,
    video_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> HttpxBinaryResponseContent: ...

def remix(
    self,
    video_id: str,
    *,
    prompt: str,
    seconds: VideoSeconds | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Video: ...
```

[Videos](./videos.md)
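
A generate-and-download sketch using the blocking `create_and_poll` helper. The model name and duration below are placeholder values (check the current `VideoModel` and `VideoSeconds` types for accepted options), and a valid `OPENAI_API_KEY` is assumed:

```python
def make_clip(prompt: str, out_path: str = "clip.mp4") -> str:
    """Generate a short clip, save it locally, and return the final status."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    # create_and_poll blocks until the job leaves the in-progress state.
    video = client.videos.create_and_poll(
        model="sora-2",  # placeholder model name
        prompt=prompt,
        seconds="4",     # placeholder duration
    )
    if video.status == "completed":
        content = client.videos.download_content(video.id)
        with open(out_path, "wb") as f:
            f.write(content.content)
    return video.status
```

For non-blocking workflows, use `create` to start the job and `poll` or `retrieve` later with the returned `video.id`.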

### Files

Upload and manage files for use with various OpenAI features like Assistants, Fine-tuning, and Batch processing.

```python { .api }
def create(
    self,
    *,
    file: FileTypes,
    purpose: str | FilePurpose,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> FileObject: ...

def retrieve(
    self,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> FileObject: ...

def list(
    self,
    *,
    purpose: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    after: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncPage[FileObject]: ...

def delete(
    self,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> FileDeleted: ...

def content(
    self,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> HttpxBinaryResponseContent: ...

def retrieve_content(
    self,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> HttpxBinaryResponseContent: ...

def wait_for_processing(
    self,
    file_id: str,
    *,
    poll_interval: float = 5.0,
    max_wait: float = 3600.0,
) -> FileObject: ...
```

[Files](./files.md)
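
Combining `create` with `wait_for_processing` gives an upload helper that only returns once the file is usable. A sketch assuming a valid `OPENAI_API_KEY` (the `purpose` value must match the feature the file is intended for):

```python
def upload_and_wait(path: str, purpose: str = "fine-tune"):
    """Upload a file, then block until server-side processing finishes."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    with open(path, "rb") as f:
        file_obj = client.files.create(file=f, purpose=purpose)
    # Polls every 5 seconds (the default poll_interval) up to max_wait.
    return client.files.wait_for_processing(file_obj.id)
```

The returned `FileObject.id` is what downstream endpoints (fine-tuning, batches, assistants) expect as a file reference.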

### Uploads

Upload large files in chunks for use with Assistants and other features. Upload parts are managed through the `.parts` subresource.

```python { .api }
def create(
    self,
    *,
    bytes: int,
    filename: str,
    mime_type: str,
    purpose: FilePurpose,
    expires_after: dict | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Upload: ...

def complete(
    self,
    upload_id: str,
    *,
    part_ids: list[str],
    md5: str | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Upload: ...

def cancel(
    self,
    upload_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Upload: ...

def upload_file_chunked(
    self,
    *,
    file: str | os.PathLike | bytes,
    mime_type: str,
    purpose: FilePurpose,
    filename: str | None = None,
    bytes: int | None = None,
    part_size: int | None = None,
    md5: str | Omit = omit,
) -> Upload: ...

# Access via client.uploads.parts
def create(
    self,
    upload_id: str,
    *,
    data: FileTypes,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Upload: ...
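
For most callers the convenience method `upload_file_chunked` is the simplest path: it splits the file into parts, creates them through `client.uploads.parts`, and completes the upload in one call. A sketch assuming a valid `OPENAI_API_KEY` (the MIME type and purpose below are placeholder choices):

```python
def upload_large_file(path: str):
    """Chunked upload; part splitting and completion are handled internally."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    return client.uploads.upload_file_chunked(
        file=path,
        mime_type="application/jsonl",  # placeholder MIME type
        purpose="batch",
    )
```

The manual path (`create`, then `parts.create` per chunk, then `complete` with the collected `part_ids`) is only needed when you want custom part sizing or parallel part uploads.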

### Models

Retrieve information about available models and manage fine-tuned models.

```python { .api }
def retrieve(
    self,
    model: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Model: ...

def list(
    self,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncPage[Model]: ...

def delete(
    self,
    model: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ModelDeleted: ...
```

[Models](./models.md)
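
Since `list` returns a `SyncPage`, the result can be iterated directly without manual pagination. A sketch assuming a valid `OPENAI_API_KEY`:

```python
def list_model_ids(prefix: str = "") -> list[str]:
    """Return available model IDs, optionally filtered by prefix."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    # SyncPage[Model] is directly iterable; pagination is handled for you.
    return sorted(m.id for m in client.models.list() if m.id.startswith(prefix))
```

`delete` applies only to fine-tuned models owned by your organization, not to base models.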

779

780

### Fine-tuning

781

782

Create and manage fine-tuning jobs to customize models on your own data. Fine-tuning operations are accessed through the `.jobs` and `.checkpoints` subresources.

783

784

```python { .api }

785

# Access via client.fine_tuning.jobs

786

def create(

787

self,

788

*,

789

model: str,

790

training_file: str,

791

hyperparameters: dict | None = None,

792

method: dict | None = None,

793

integrations: list[dict] | None = None,

794

seed: int | None = None,

795

suffix: str | None = None,

796

validation_file: str | None = None,

797

extra_headers: dict[str, str] | None = None,

798

extra_query: dict[str, object] | None = None,

799

extra_body: dict[str, object] | None = None,

800

timeout: float | None = None,

801

) -> FineTuningJob: ...

802

803

def retrieve(

804

self,

805

fine_tuning_job_id: str,

806

*,

807

extra_headers: dict[str, str] | None = None,

808

extra_query: dict[str, object] | None = None,

809

extra_body: dict[str, object] | None = None,

810

timeout: float | None = None,

811

) -> FineTuningJob: ...

812

813

def list(

814

self,

815

*,

816

after: str | None = None,

817

limit: int | None = None,

818

extra_headers: dict[str, str] | None = None,

819

extra_query: dict[str, object] | None = None,

820

extra_body: dict[str, object] | None = None,

821

timeout: float | None = None,

822

) -> SyncCursorPage[FineTuningJob]: ...

823

824

def cancel(

825

self,

826

fine_tuning_job_id: str,

827

*,

828

extra_headers: dict[str, str] | None = None,

829

extra_query: dict[str, object] | None = None,

830

extra_body: dict[str, object] | None = None,

831

timeout: float | None = None,

832

) -> FineTuningJob: ...

833

834

def pause(

835

self,

836

fine_tuning_job_id: str,

837

*,

838

extra_headers: dict[str, str] | None = None,

839

extra_query: dict[str, object] | None = None,

840

extra_body: dict[str, object] | None = None,

841

timeout: float | None = None,

842

) -> FineTuningJob: ...

843

844

def resume(

845

self,

846

fine_tuning_job_id: str,

847

*,

848

extra_headers: dict[str, str] | None = None,

849

extra_query: dict[str, object] | None = None,

850

extra_body: dict[str, object] | None = None,

851

timeout: float | None = None,

852

) -> FineTuningJob: ...

853

854

def list_events(

855

self,

856

fine_tuning_job_id: str,

857

*,

858

after: str | None = None,

859

limit: int | None = None,

860

extra_headers: dict[str, str] | None = None,

861

extra_query: dict[str, object] | None = None,

862

extra_body: dict[str, object] | None = None,

863

timeout: float | None = None,

864

) -> SyncCursorPage[FineTuningJobEvent]: ...

865

866

# Access via client.fine_tuning.checkpoints

867

def list(

868

self,

869

fine_tuning_job_id: str,

870

*,

871

after: str | None = None,

872

limit: int | None = None,

873

extra_headers: dict[str, str] | None = None,

874

extra_query: dict[str, object] | None = None,

875

extra_body: dict[str, object] | None = None,

876

timeout: float | None = None,

877

) -> SyncCursorPage[FineTuningJobCheckpoint]: ...

878

```

879

880

[Fine-tuning](./fine-tuning.md)
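
A job-creation sketch tying the subresources together. It assumes a valid `OPENAI_API_KEY` and an already-uploaded training file; the base model, epoch count, and suffix are placeholder choices:

```python
def start_fine_tune(training_file_id: str):
    """Kick off a supervised fine-tuning job on an uploaded JSONL file."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    return client.fine_tuning.jobs.create(
        model="gpt-4o-mini-2024-07-18",       # placeholder base model
        training_file=training_file_id,        # FileObject.id from files.create
        hyperparameters={"n_epochs": 3},       # placeholder hyperparameters
        suffix="my-experiment",                # appended to the resulting model name
    )
```

The returned `FineTuningJob.id` can then be passed to `jobs.retrieve`, `jobs.list_events`, or `checkpoints.list` to follow progress.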

881

882

### Moderations

883

884

Check content against OpenAI's usage policies to detect potentially harmful content.

885

886

```python { .api }

887

def create(

888

self,

889

*,

890

input: str | list[str],

891

model: str | ModerationModel | None = None,

892

extra_headers: dict[str, str] | None = None,

893

extra_query: dict[str, object] | None = None,

894

extra_body: dict[str, object] | None = None,

895

timeout: float | None = None,

896

) -> ModerationCreateResponse: ...

897

```

898

899

[Moderations](./moderations.md)
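
A minimal gatekeeping sketch assuming a valid `OPENAI_API_KEY`; omitting `model` uses the server-side default moderation model:

```python
def is_flagged(text: str) -> bool:
    """True if the input is flagged against OpenAI's usage policies."""
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    result = client.moderations.create(input=text)
    # One result per input; per-category scores live on results[0].categories.
    return result.results[0].flagged
```

Passing a `list[str]` as `input` moderates multiple texts in one call, with one entry in `result.results` per input.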

### Batch Processing

Submit batch requests for asynchronous processing of multiple API calls.

```python { .api }
def create(
    self,
    *,
    completion_window: str,
    endpoint: str,
    input_file_id: str,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Batch: ...

def retrieve(
    self,
    batch_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Batch: ...

def list(
    self,
    *,
    after: str | None = None,
    limit: int | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[Batch]: ...

def cancel(
    self,
    batch_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Batch: ...
```

[Batch Processing](./batches.md)
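
A batch is driven by a JSONL input file where each line is one API request. A sketch of both halves: building an input line (pure), and submitting the batch (requires a valid `OPENAI_API_KEY`; the model name is a placeholder):

```python
import json

def batch_request_line(custom_id: str, prompt: str, model: str = "gpt-4o-mini") -> str:
    """One line of the JSONL input file expected by the Batch API."""
    return json.dumps({
        "custom_id": custom_id,       # echoed back so you can match results
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {"model": model, "messages": [{"role": "user", "content": prompt}]},
    })

def submit_batch(input_file_id: str):
    from openai import OpenAI
    client = OpenAI()  # requires OPENAI_API_KEY
    return client.batches.create(
        input_file_id=input_file_id,      # FileObject.id uploaded with purpose="batch"
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
```

Poll the returned `Batch` with `batches.retrieve(batch.id)` until its status completes, then download `output_file_id` through the Files API.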

### Vector Stores

Create and manage vector stores for semantic search and retrieval with the Assistants API. Vector store files are managed through the `.files` subresource, and file batches through the `.file_batches` subresource.

```python { .api }
def create(
    self,
    *,
    file_ids: list[str] | None = None,
    name: str | None = None,
    expires_after: dict | None = None,
    chunking_strategy: dict | None = None,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStore: ...

def retrieve(
    self,
    vector_store_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStore: ...

def update(
    self,
    vector_store_id: str,
    *,
    name: str | None = None,
    expires_after: dict | None = None,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStore: ...

def list(
    self,
    *,
    after: str | None = None,
    before: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[VectorStore]: ...

def delete(
    self,
    vector_store_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreDeleted: ...

def search(
    self,
    vector_store_id: str,
    *,
    query: str,
    limit: int | None = None,
    filter: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreSearchResponse: ...

# Access via client.vector_stores.files
def create(
    self,
    vector_store_id: str,
    *,
    file_id: str,
    chunking_strategy: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFile: ...

def retrieve(
    self,
    vector_store_id: str,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFile: ...

def list(
    self,
    vector_store_id: str,
    *,
    after: str | None = None,
    before: str | None = None,
    filter: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[VectorStoreFile]: ...

def delete(
    self,
    vector_store_id: str,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFileDeleted: ...

# Access via client.vector_stores.file_batches
def create(
    self,
    vector_store_id: str,
    *,
    file_ids: list[str],
    chunking_strategy: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFileBatch: ...

def retrieve(
    self,
    vector_store_id: str,
    batch_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFileBatch: ...

def cancel(
    self,
    vector_store_id: str,
    batch_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> VectorStoreFileBatch: ...

def list_files(
    self,
    vector_store_id: str,
    batch_id: str,
    *,
    after: str | None = None,
    before: str | None = None,
    filter: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[VectorStoreFile]: ...
```

[Vector Stores](./vector-stores.md)
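
When attaching many files, it helps to split the IDs into groups before calling `file_batches.create`. The helper below does the splitting (the group size of 100 is an assumed limit, not one documented here), and the comments show the wiring against the signatures above.

```python
def chunk_file_ids(file_ids: list[str], size: int = 100) -> list[list[str]]:
    """Split file IDs into groups so each file batch stays under a chosen size."""
    return [file_ids[i:i + size] for i in range(0, len(file_ids), size)]

# Sketch against the signatures above (requires an API key):
#   store = client.vector_stores.create(name="support-docs")
#   for group in chunk_file_ids(uploaded_ids):
#       client.vector_stores.file_batches.create(vector_store_id=store.id, file_ids=group)
#   hits = client.vector_stores.search(vector_store_id=store.id, query="refund policy")
```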

### Assistants API (Beta)

Build AI assistants with advanced capabilities including code interpreter, file search, and function calling.

```python { .api }
def create(
    self,
    *,
    model: str,
    description: str | None = None,
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    name: str | None = None,
    response_format: dict | None = None,
    temperature: float | None = None,
    tool_resources: dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Assistant: ...

def retrieve(
    self,
    assistant_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Assistant: ...

def update(
    self,
    assistant_id: str,
    *,
    description: str | None = None,
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    model: str | None = None,
    name: str | None = None,
    response_format: dict | None = None,
    temperature: float | None = None,
    tool_resources: dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Assistant: ...

def list(
    self,
    *,
    after: str | Omit = omit,
    before: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[Assistant]: ...

def delete(
    self,
    assistant_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> AssistantDeleted: ...
```

[Assistants API](./assistants.md)
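
The `tools` parameter takes a list of tool dicts; for function calling, each entry wraps a name, description, and JSON Schema for the arguments. The helper below builds one such entry, and the comments sketch `assistants.create` (the model name is an assumption).

```python
def function_tool(name: str, description: str, parameters: dict) -> dict:
    """Build a function-tool entry for the `tools` parameter (standard JSON Schema shape)."""
    return {
        "type": "function",
        "function": {"name": name, "description": description, "parameters": parameters},
    }

# Sketch (requires an API key; model name is an assumption):
#   assistant = client.beta.assistants.create(
#       model="gpt-4o",
#       name="Weather bot",
#       tools=[function_tool(
#           "get_weather",
#           "Look up current weather for a city",
#           {"type": "object", "properties": {"city": {"type": "string"}},
#        "required": ["city"]},
#       )],
#   )
```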

### Threads and Messages (Beta)

Create conversational threads and manage messages within the Assistants API.

```python { .api }
# Threads
def create(
    self,
    *,
    messages: list[dict] | None = None,
    metadata: dict[str, str] | None = None,
    tool_resources: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Thread: ...

def retrieve(
    self,
    thread_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Thread: ...

def update(
    self,
    thread_id: str,
    *,
    metadata: dict[str, str] | None = None,
    tool_resources: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Thread: ...

def delete(
    self,
    thread_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ThreadDeleted: ...

def create_and_run(
    self,
    *,
    assistant_id: str,
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    model: str | None = None,
    thread: dict | None = None,
    tools: list[dict] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def create_and_run_poll(
    self,
    *,
    assistant_id: str,
    poll_interval_ms: int | None = None,
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    model: str | None = None,
    thread: dict | None = None,
    tools: list[dict] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def create_and_run_stream(
    self,
    *,
    assistant_id: str,
    event_handler: AssistantEventHandler | None = None,
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    model: str | None = None,
    thread: dict | None = None,
    tools: list[dict] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> AssistantStreamManager: ...

# Messages
def create(
    self,
    thread_id: str,
    *,
    role: str,
    content: str | list[dict],
    attachments: list[dict] | None = None,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Message: ...

def retrieve(
    self,
    thread_id: str,
    message_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Message: ...

def update(
    self,
    thread_id: str,
    message_id: str,
    *,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Message: ...

def list(
    self,
    thread_id: str,
    *,
    after: str | None = None,
    before: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    run_id: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[Message]: ...

def delete(
    self,
    thread_id: str,
    message_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> MessageDeleted: ...
```

[Threads and Messages](./threads-messages.md)
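
A small sketch of seeding a thread: the helper converts `(role, text)` pairs into the message dicts the `messages` parameter accepts, and the comments show the create/list round trip.

```python
def as_messages(turns: list[tuple[str, str]]) -> list[dict]:
    """Convert (role, text) pairs into the message dicts the Threads API accepts."""
    return [{"role": role, "content": text} for role, text in turns]

# Sketch (requires an API key):
#   thread = client.beta.threads.create(messages=as_messages([("user", "Hello!")]))
#   client.beta.threads.messages.create(thread.id, role="user", content="Follow-up question")
#   page = client.beta.threads.messages.list(thread.id, order="asc")
```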

### Runs (Beta)

Execute assistants on threads and handle tool calls. Run steps are managed through the `.steps` subresource.

```python { .api }
def create(
    self,
    thread_id: str,
    *,
    assistant_id: str,
    additional_instructions: str | None = None,
    additional_messages: list[dict] | None = None,
    instructions: str | None = None,
    max_completion_tokens: int | None = None,
    max_prompt_tokens: int | None = None,
    metadata: dict[str, str] | None = None,
    model: str | None = None,
    parallel_tool_calls: bool | None = None,
    response_format: dict | None = None,
    stream: bool | None = None,
    temperature: float | None = None,
    tool_choice: str | dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    truncation_strategy: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def retrieve(
    self,
    thread_id: str,
    run_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def update(
    self,
    thread_id: str,
    run_id: str,
    *,
    metadata: dict[str, str] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def list(
    self,
    thread_id: str,
    *,
    after: str | None = None,
    before: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[Run]: ...

def cancel(
    self,
    thread_id: str,
    run_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def submit_tool_outputs(
    self,
    thread_id: str,
    run_id: str,
    *,
    tool_outputs: list[dict],
    stream: bool | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Run: ...

def stream(
    self,
    thread_id: str,
    *,
    assistant_id: str,
    event_handler: AssistantEventHandler | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> AssistantStreamManager: ...

# Access via client.beta.threads.runs.steps
def retrieve(
    self,
    thread_id: str,
    run_id: str,
    step_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> RunStep: ...

def list(
    self,
    thread_id: str,
    run_id: str,
    *,
    after: str | None = None,
    before: str | None = None,
    limit: int | None = None,
    order: str | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> SyncCursorPage[RunStep]: ...

[Runs](./runs.md)
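
When a run reaches `requires_action`, each required tool call must be answered via `submit_tool_outputs`. The helper below maps simplified tool-call dicts (assumed to carry `id`, `name`, and JSON `arguments`, a flattened view of the SDK's tool-call objects) to the `tool_outputs` payload; the comments sketch the surrounding loop.

```python
import json


def build_tool_outputs(tool_calls: list[dict], handlers: dict) -> list[dict]:
    """Run each requested tool via `handlers` and build the tool_outputs payload."""
    outputs = []
    for call in tool_calls:
        result = handlers[call["name"]](**json.loads(call["arguments"]))
        outputs.append({"tool_call_id": call["id"], "output": json.dumps(result)})
    return outputs

# Sketch of the poll-and-submit loop (requires an API key):
#   run = client.beta.threads.runs.create(thread_id, assistant_id=assistant_id)
#   if run.status == "requires_action":
#       client.beta.threads.runs.submit_tool_outputs(
#           thread_id, run.id, tool_outputs=build_tool_outputs(calls, handlers))
```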

### ChatKit (Beta)

Simplified, high-level interface for building chat applications with session and thread management.

```python { .api }
def create(
    self,
    *,
    user: str,
    workflow: ChatSessionWorkflowParam,
    chatkit_configuration: ChatSessionChatKitConfigurationParam | Omit = omit,
    expires_after: ChatSessionExpiresAfterParam | Omit = omit,
    rate_limits: ChatSessionRateLimitsParam | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ChatSession: ...
```

[ChatKit](./chatkit.md)

### Realtime API (Beta)

WebSocket-based realtime communication for low-latency conversational AI experiences. Realtime client secrets and calls are managed through the `.client_secrets` and `.calls` subresources.

```python { .api }
def connect(
    self,
    *,
    call_id: str | Omit = omit,
    model: str | Omit = omit,
    extra_query: dict[str, object] = {},
    extra_headers: dict[str, str] = {},
    websocket_connection_options: WebsocketConnectionOptions = {},
) -> RealtimeConnectionManager: ...

# Access via client.realtime.client_secrets
def create(
    self,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ClientSecret: ...

# Access via client.realtime.calls
def create(
    self,
    *,
    model: str,
    call_config: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Call: ...

def retrieve(
    self,
    call_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Call: ...

def update(
    self,
    call_id: str,
    *,
    call_config: dict | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Call: ...

def list(
    self,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[Call]: ...
```

[Realtime API](./realtime.md)

### Responses API

Create responses with advanced tool support including computer use, file search, and code patching. Response input items and tokens are managed through the `.input_items` and `.input_tokens` subresources.

```python { .api }
def create(
    self,
    *,
    model: str,
    input: dict | list[dict],
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    parallel_tool_calls: bool | None = None,
    reasoning_effort: str | None = None,
    store: bool | None = None,
    stream: bool | None = None,
    temperature: float | None = None,
    tool_choice: str | dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Response: ...

def stream(
    self,
    *,
    model: str,
    input: dict | list[dict],
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    parallel_tool_calls: bool | None = None,
    reasoning_effort: str | None = None,
    store: bool | None = None,
    temperature: float | None = None,
    tool_choice: str | dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Stream[ResponseStreamEvent]: ...

def parse(
    self,
    *,
    model: str,
    input: dict | list[dict],
    response_format: Type[BaseModel],
    instructions: str | None = None,
    metadata: dict[str, str] | None = None,
    parallel_tool_calls: bool | None = None,
    reasoning_effort: str | None = None,
    store: bool | None = None,
    temperature: float | None = None,
    tool_choice: str | dict | None = None,
    tools: list[dict] | None = None,
    top_p: float | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ParsedResponse: ...

def retrieve(
    self,
    response_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Response: ...

def delete(
    self,
    response_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ResponseDeleted: ...

def cancel(
    self,
    response_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> Response: ...

# Access via client.responses.input_items
def create(
    self,
    response_id: str,
    *,
    type: str,
    content: str | list[dict] | None = None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ResponseInputItem: ...

def delete(
    self,
    response_id: str,
    item_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ResponseInputItemDeleted: ...

# Access via client.responses.input_tokens
def create(
    self,
    response_id: str,
    *,
    token: str,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ResponseInputToken: ...

def delete(
    self,
    response_id: str,
    token_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | None = None,
) -> ResponseInputTokenDeleted: ...
```

[Responses](./responses.md)
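
The `input` parameter accepts a list of message dicts. The helper below builds the simplest case, a single user message, and the comments sketch `responses.create` (the model name and the `output_text` convenience accessor are assumptions).

```python
def user_input(text: str) -> list[dict]:
    """Build the simplest `input` payload: a single user message."""
    return [{"role": "user", "content": text}]

# Sketch (requires an API key):
#   response = client.responses.create(
#       model="gpt-4o",            # model name is an assumption
#       input=user_input("Summarize the Responses API in one sentence."),
#   )
#   print(response.output_text)    # convenience accessor, assumed from the SDK docs
```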

### Evaluations

Create and manage evaluations to test model performance with custom testing criteria. Evaluation runs are managed through the `.runs` subresource.

```python { .api }
def create(
    self,
    *,
    data_source_config: dict,
    testing_criteria: Iterable[dict],
    metadata: dict[str, str] | None | Omit = omit,
    name: str | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Eval: ...

def retrieve(
    self,
    eval_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Eval: ...

def update(
    self,
    eval_id: str,
    *,
    metadata: dict[str, str] | None | Omit = omit,
    name: str | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Eval: ...

def list(
    self,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    order_by: Literal["created_at", "updated_at"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[Eval]: ...

def delete(
    self,
    eval_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> EvalDeleteResponse: ...

# Access via client.evals.runs
def create(
    self,
    eval_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> EvalRun: ...

def retrieve(
    self,
    eval_id: str,
    run_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> EvalRun: ...

def list(
    self,
    eval_id: str,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[EvalRun]: ...
```

[Evaluations](./evals.md)
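
A sketch of a `testing_criteria` entry. The string-check grader shape below is assumed from the Evals guide (field names may differ); the comments show how it would feed `evals.create` and `evals.runs.create`.

```python
def string_check_criterion(name: str, reference_field: str, operation: str = "eq") -> dict:
    """One `testing_criteria` entry using an assumed string-check grader shape."""
    return {
        "type": "string_check",
        "name": name,
        "input": "{{ sample.output_text }}",
        "reference": "{{ item." + reference_field + " }}",
        "operation": operation,
    }

# Sketch (requires an API key; data_source_config shape assumed):
#   ev = client.evals.create(
#       data_source_config={"type": "custom", "item_schema": {...}},
#       testing_criteria=[string_check_criterion("exact-match", "expected")],
#   )
#   run = client.evals.runs.create(ev.id)
```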

1856

1857

### Conversations

1858

1859

Create and manage conversations for structured multi-turn interactions. Conversation items are managed through the `.items` subresource.

1860

1861

```python { .api }

1862

def create(

1863

self,

1864

*,

1865

items: Iterable[dict] | None | Omit = omit,

1866

metadata: dict[str, str] | None | Omit = omit,

1867

extra_headers: dict[str, str] | None = None,

1868

extra_query: dict[str, object] | None = None,

1869

extra_body: dict[str, object] | None = None,

1870

timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

1871

) -> Conversation: ...

1872

1873

def retrieve(

1874

self,

1875

conversation_id: str,

1876

*,

1877

extra_headers: dict[str, str] | None = None,

1878

extra_query: dict[str, object] | None = None,

1879

extra_body: dict[str, object] | None = None,

1880

    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Conversation: ...

def update(
    self,
    conversation_id: str,
    *,
    metadata: dict[str, str] | None,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Conversation: ...

def delete(
    self,
    conversation_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ConversationDeleted: ...

# Access via client.conversations.items
def create(
    self,
    conversation_id: str,
    *,
    type: str,
    content: str | list[dict] | None = None,
    metadata: dict[str, str] | None | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ConversationItem: ...

def retrieve(
    self,
    conversation_id: str,
    item_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ConversationItem: ...

def update(
    self,
    conversation_id: str,
    item_id: str,
    *,
    content: str | list[dict] | None = None,
    metadata: dict[str, str] | None | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ConversationItem: ...

def list(
    self,
    conversation_id: str,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[ConversationItem]: ...

def delete(
    self,
    conversation_id: str,
    item_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ConversationItemDeleted: ...
```
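The `list` method above returns a `SyncCursorPage`, which paginates with an `after` cursor. The sketch below models how cursor auto-pagination works, using a hypothetical `fetch_page` callable and an in-memory backend in place of a real `items.list` call:

```python
from typing import Callable, Iterator

def iterate_all(fetch_page: Callable[..., dict], limit: int = 2) -> Iterator[dict]:
    """Walk a cursor-paged endpoint: pass the last item's id as `after` until has_more is False."""
    after = None
    while True:
        page = fetch_page(after=after, limit=limit)
        yield from page["data"]
        if not page["has_more"]:
            return
        after = page["data"][-1]["id"]

# Fake in-memory backend standing in for client.conversations.items.list
ITEMS = [{"id": f"item_{i}"} for i in range(5)]

def fake_fetch(after=None, limit=2):
    start = 0 if after is None else next(i for i, it in enumerate(ITEMS) if it["id"] == after) + 1
    data = ITEMS[start:start + limit]
    return {"data": data, "has_more": start + limit < len(ITEMS)}

ids = [it["id"] for it in iterate_all(fake_fetch)]  # item_0 .. item_4 across three pages
```

In practice you rarely write this loop yourself: iterating the returned page object (`for item in client.conversations.items.list(...)`) fetches subsequent pages automatically.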

[Conversations](./conversations.md)

### Containers

Create and manage isolated file storage containers for organizing files. Container files are managed through the `.files` subresource.

```python { .api }
def create(
    self,
    *,
    name: str,
    expires_after: dict | Omit = omit,
    file_ids: list[str] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Container: ...

def retrieve(
    self,
    container_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Container: ...

def list(
    self,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[Container]: ...

def delete(
    self,
    container_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> None: ...

# Access via client.containers.files
def create(
    self,
    container_id: str,
    *,
    file: FileTypes,
    purpose: str,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ContainerFile: ...

def retrieve(
    self,
    container_id: str,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ContainerFile: ...

def list(
    self,
    container_id: str,
    *,
    after: str | Omit = omit,
    limit: int | Omit = omit,
    order: Literal["asc", "desc"] | Omit = omit,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[ContainerFile]: ...

def delete(
    self,
    container_id: str,
    file_id: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ContainerFileDeleted: ...
```
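The `file: FileTypes` parameter above accepts raw content, a `(filename, content)` pair, or a `(filename, content, mime_type)` triple. A minimal sketch of assembling the three-element form from a path — this mirrors what the library's `file_from_path` helper produces, but is an illustrative stand-in, not the library's code:

```python
import mimetypes
import tempfile
from pathlib import Path

def file_tuple(path):
    """Build a (filename, content, mime_type) tuple matching the FileTypes shape."""
    p = Path(path)
    mime = mimetypes.guess_type(p.name)[0] or "application/octet-stream"
    return (p.name, p.read_bytes(), mime)

# Demo with a temporary file in place of a real upload target
with tempfile.TemporaryDirectory() as d:
    demo = Path(d) / "notes.txt"
    demo.write_bytes(b"hello")
    t = file_tuple(demo)  # ("notes.txt", b"hello", "text/plain")
```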

[Containers](./containers.md)

### Webhooks

Verify and handle webhook events from OpenAI for asynchronous notifications.

```python { .api }
def verify_signature(
    payload: str | bytes,
    headers: dict[str, str] | list[tuple[str, str]],
    *,
    secret: str | None = None,
    tolerance: int = 300,
) -> None: ...

def unwrap(
    payload: str | bytes,
    headers: dict[str, str] | list[tuple[str, str]],
    *,
    secret: str | None = None,
) -> UnwrapWebhookEvent: ...
```
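Conceptually, `verify_signature` recomputes an HMAC over the raw payload and compares it to the signature header, rejecting events whose timestamp falls outside the tolerance window. The sketch below illustrates that check; the header names (`webhook-id`, `webhook-timestamp`, `webhook-signature`), the `v1,` signature prefix, and the `whsec_` secret prefix are assumptions borrowed from the Standard Webhooks convention, so treat this as a model of the scheme, not the library's implementation:

```python
import base64
import hashlib
import hmac
import time

def verify_signature_sketch(payload: bytes, headers: dict, secret: str, tolerance: int = 300) -> None:
    """Raise ValueError unless the payload matches its HMAC signature and is recent enough."""
    ts = int(headers["webhook-timestamp"])
    if abs(time.time() - ts) > tolerance:
        raise ValueError("timestamp outside tolerance window")
    # The signed content is "<id>.<timestamp>.<payload>"
    signed = f"{headers['webhook-id']}.{ts}.".encode() + payload
    key = base64.b64decode(secret.removeprefix("whsec_"))
    expected = base64.b64encode(hmac.new(key, signed, hashlib.sha256).digest()).decode()
    candidates = [s.split(",", 1)[1] for s in headers["webhook-signature"].split() if s.startswith("v1,")]
    if not any(hmac.compare_digest(expected, c) for c in candidates):
        raise ValueError("signature mismatch")

# Demo: sign a payload ourselves, then verify it round-trips
key = b"0" * 32
secret = "whsec_" + base64.b64encode(key).decode()
payload = b'{"id": "evt_1", "type": "response.completed"}'
ts = int(time.time())
sig = base64.b64encode(hmac.new(key, f"wh_1.{ts}.".encode() + payload, hashlib.sha256).digest()).decode()
headers = {"webhook-id": "wh_1", "webhook-timestamp": str(ts), "webhook-signature": f"v1,{sig}"}
verify_signature_sketch(payload, headers, secret)  # passes silently; raises on tampering
```

The constant-time `hmac.compare_digest` matters: a naive `==` comparison would leak timing information about how many signature bytes matched.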

[Webhooks](./webhooks.md)

## Response Wrappers

All resources support response wrapper patterns for accessing raw HTTP responses or streaming responses without loading them into memory.

### Raw Response Access

Access the underlying `httpx.Response` object for any API call:

```python
from openai import OpenAI

client = OpenAI()

# Use the .with_raw_response accessor on the resource
response = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Access the parsed response
completion = response.parse()

# Access raw HTTP response details
http_response = response.http_response
print(f"Status: {http_response.status_code}")
print(f"Headers: {http_response.headers}")
print(f"Raw content: {http_response.content}")
```

Available on all resources:

```python
client.chat.completions.with_raw_response.create(...)
client.audio.transcriptions.with_raw_response.create(...)
client.images.with_raw_response.generate(...)
# ... and all other resources
```

### Streaming Response Access

Stream responses without loading them into memory, useful for large responses:

```python
# .with_streaming_response returns a context manager; the body is not
# read until you consume it
with client.files.with_streaming_response.content("file-abc123") as response:
    # Stream chunks
    for chunk in response.iter_bytes():
        process_chunk(chunk)

# Or write directly to a file
with client.files.with_streaming_response.content("file-abc123") as response:
    with open("output.txt", "wb") as f:
        for chunk in response.iter_bytes():
            f.write(chunk)
```

Available on all resources:

```python
client.chat.completions.with_streaming_response.create(...)
client.files.with_streaming_response.content(...)
# ... and all other resources
```

Both patterns work with async clients:

```python
from openai import AsyncOpenAI

client = AsyncOpenAI()

# Raw response (async)
response = await client.chat.completions.with_raw_response.create(...)

# Streaming response (async)
async with client.files.with_streaming_response.content("file-abc123") as response:
    async for chunk in response.iter_bytes():
        process_chunk(chunk)
```

## Error Handling

The library provides a comprehensive exception hierarchy for handling different error scenarios:

```python { .api }
class OpenAIError(Exception):
    """Base exception for all OpenAI errors."""

class APIError(OpenAIError):
    """Base for API-related errors."""

class APIStatusError(APIError):
    """HTTP status code errors (4xx, 5xx)."""
    status_code: int
    response: httpx.Response
    body: object

class APIConnectionError(APIError):
    """Connection errors."""

class APITimeoutError(APIConnectionError):
    """Request timeout errors."""

class APIResponseValidationError(APIError):
    """Response validation errors."""

class BadRequestError(APIStatusError):
    """400 Bad Request."""

class AuthenticationError(APIStatusError):
    """401 Authentication error."""

class PermissionDeniedError(APIStatusError):
    """403 Permission denied."""

class NotFoundError(APIStatusError):
    """404 Not Found."""

class ConflictError(APIStatusError):
    """409 Conflict."""

class UnprocessableEntityError(APIStatusError):
    """422 Unprocessable Entity."""

class RateLimitError(APIStatusError):
    """429 Rate limit exceeded."""

class InternalServerError(APIStatusError):
    """500+ server errors."""

class LengthFinishReasonError(OpenAIError):
    """Raised when a completion stops due to reaching max tokens."""

class ContentFilterFinishReasonError(OpenAIError):
    """Raised when a completion stops due to content filtering."""

class InvalidWebhookSignatureError(OpenAIError):
    """Raised when webhook signature verification fails."""
```
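In application code, the hierarchy above mostly drives one decision: is this error retryable? The sketch below mirrors that dispatch with local stand-in classes so it runs without the package installed; with the real library you would catch `openai.RateLimitError` and friends directly:

```python
# Local stand-ins mirroring the hierarchy above, so the dispatch logic is runnable standalone
class OpenAIError(Exception): ...
class APIError(OpenAIError): ...
class APIConnectionError(APIError): ...

class APIStatusError(APIError):
    def __init__(self, status_code: int):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

class RateLimitError(APIStatusError): ...

def classify(exc: OpenAIError) -> str:
    """Map an error to a coarse handling strategy."""
    if isinstance(exc, RateLimitError):
        return "backoff-and-retry"  # honor any Retry-After header before retrying
    if isinstance(exc, APIConnectionError):
        return "retry"  # transient network failure (includes timeouts)
    if isinstance(exc, APIStatusError):
        return "retry" if exc.status_code >= 500 else "fail"  # a 4xx means fix the request
    return "fail"
```

For example, `classify(RateLimitError(429))` yields `"backoff-and-retry"` while `classify(APIStatusError(404))` yields `"fail"`; the order of the `isinstance` checks matters because `RateLimitError` is itself an `APIStatusError`.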

## Common Types

```python { .api }
# Sentinel values for omitted parameters
NOT_GIVEN: NotGiven  # Sentinel indicating a parameter was not provided
not_given: NotGiven  # Alias for NOT_GIVEN (lowercase convention)

# Omit type for optional API parameters
# Use the Omit type with the `omit` default instead of None for optional parameters;
# this distinguishes "parameter not provided" from "parameter explicitly set to null"
Omit: TypeAlias  # Type for omittable parameters
omit: Omit  # Sentinel value for omitted parameters

# Type utilities
NoneType: Type[None]  # Used for response casting when no response body is expected

# Common type aliases
# File upload types -- can be specified in multiple formats:
# - raw file content (bytes, file object, etc.)
# - (filename, content) tuple
# - (filename, content, mime_type) tuple
FileTypes = Union[
    FileContent,  # just the file bytes/buffer
    Tuple[Optional[str], FileContent],  # (filename, content)
    Tuple[Optional[str], FileContent, Optional[str]],  # (filename, content, mime_type)
]

# Timeout can be a float (seconds) or an httpx.Timeout object for fine-grained control
Timeout = Union[float, httpx.Timeout]

# Azure AD token provider
# Callable that returns Azure AD tokens for authentication with Azure OpenAI;
# can be a sync or async function
AzureADTokenProvider = Callable[[], str] | Callable[[], Awaitable[str]]

# Default configuration constants
DEFAULT_TIMEOUT: httpx.Timeout  # Default timeout: 600s total, 5s connect
DEFAULT_MAX_RETRIES: int  # Default maximum retries: 2
DEFAULT_CONNECTION_LIMITS: httpx.Limits  # Default limits: 1000 max connections, 100 keepalive

# Type aliases for specific domains
VideoModel = Literal["sora-2", "sora-2-pro"]
VideoSeconds = int  # Duration in seconds
VideoSize = str  # e.g. "720x1280"

# Request configuration
class RequestOptions(TypedDict, total=False):
    """Options for individual API requests."""
    extra_headers: dict[str, str]
    extra_query: dict[str, object]
    extra_body: dict[str, object]
    timeout: float | httpx.Timeout

# WebSocket configuration for the Realtime API
class WebsocketConnectionOptions(TypedDict, total=False):
    """WebSocket connection options for Realtime API connections."""
    extensions: Sequence[ClientExtensionFactory] | None  # List of supported extensions
    subprotocols: Sequence[Subprotocol] | None  # List of supported subprotocols
    compression: str | None  # "permessage-deflate" enabled by default; None to disable
    max_size: int | None  # Maximum size of incoming messages in bytes
    max_queue: int | None | tuple[int | None, int | None]  # High-water mark of the receive buffer
    write_limit: int | tuple[int, int | None]  # High-water mark of the write buffer in bytes

# Response wrappers
class HttpxBinaryResponseContent:
    """Binary response content for audio, images, etc."""
    content: bytes
    response: httpx.Response
    def read(self) -> bytes: ...
    def write_to_file(self, file: str | os.PathLike) -> None: ...

# Pagination types
class SyncPage[T]:
    """Standard pagination."""
    data: list[T]
    object: str
    def __iter__(self) -> Iterator[T]: ...
    def __next__(self) -> T: ...

class AsyncPage[T]:
    """Async pagination."""
    data: list[T]
    object: str
    def __aiter__(self) -> AsyncIterator[T]: ...
    async def __anext__(self) -> T: ...

class SyncCursorPage[T]:
    """Cursor-based pagination."""
    data: list[T]
    has_more: bool
    def __iter__(self) -> Iterator[T]: ...
    def __next__(self) -> T: ...

class AsyncCursorPage[T]:
    """Async cursor-based pagination."""
    data: list[T]
    has_more: bool
    def __aiter__(self) -> AsyncIterator[T]: ...
    async def __anext__(self) -> T: ...

class SyncConversationCursorPage[T]:
    """Conversation cursor-based pagination."""
    data: list[T]
    has_more: bool
    last_id: str | None
    def __iter__(self) -> Iterator[T]: ...
    def __next__(self) -> T: ...

class AsyncConversationCursorPage[T]:
    """Async conversation cursor-based pagination."""
    data: list[T]
    has_more: bool
    last_id: str | None
    def __aiter__(self) -> AsyncIterator[T]: ...
    async def __anext__(self) -> T: ...

# Streaming types
class Stream[T]:
    """Synchronous streaming."""
    def __iter__(self) -> Iterator[T]: ...
    def __next__(self) -> T: ...
    def __enter__(self) -> Stream[T]: ...
    def __exit__(self, *args) -> None: ...
    def close(self) -> None: ...

class AsyncStream[T]:
    """Asynchronous streaming."""
    def __aiter__(self) -> AsyncIterator[T]: ...
    async def __anext__(self) -> T: ...
    async def __aenter__(self) -> AsyncStream[T]: ...
    async def __aexit__(self, *args) -> None: ...
    async def close(self) -> None: ...
```
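The practical difference between `omit` and `None` shows up when the request body is built: an omitted parameter is dropped entirely, while an explicit `None` is serialized as JSON `null`. A toy sentinel makes the distinction concrete — this is a simplified model, not the library's internal serializer:

```python
class OmitType:
    """Sentinel type meaning 'this parameter was never supplied'."""
    def __repr__(self) -> str:
        return "omit"

omit = OmitType()

def build_body(**params) -> dict:
    # Drop omitted parameters entirely; keep explicit None so it serializes as null
    return {k: v for k, v in params.items() if not isinstance(v, OmitType)}

body = build_body(limit=20, after=omit, metadata=None)
# 'after' disappears from the payload, but metadata survives as an explicit null
```

This is why clearing a server-side field (e.g. `metadata=None`) behaves differently from simply not passing the argument.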

## Helper Utilities

```python { .api }
# Package information
__title__: str  # Package name ("openai")
VERSION: str  # Library version string
__version__: str  # Library version (same as VERSION)

# Pydantic base model
BaseModel  # Base class for creating custom Pydantic models (re-exported from pydantic)

# HTTP client classes
class DefaultHttpxClient:
    """Default synchronous httpx client with connection pooling."""

class DefaultAsyncHttpxClient:
    """Default asynchronous httpx client with connection pooling."""

class DefaultAioHttpClient:
    """Alternative async HTTP client using aiohttp."""

# Additional type exports
Transport: type  # HTTP transport type (httpx.BaseTransport)
ProxiesTypes: type  # Proxy configuration type

# File and function utilities
def file_from_path(path: str) -> FileTypes:
    """
    Load a file from a filesystem path for upload to the OpenAI API.

    Args:
        path: Filesystem path to the file

    Returns:
        FileTypes: A (filename, content, mime_type) tuple suitable for API upload
    """

def pydantic_function_tool(
    model: Type[BaseModel],
    *,
    name: str | None = None,
    description: str | None = None,
) -> ChatCompletionToolParam:
    """
    Create a function tool definition from a Pydantic model for use with chat completions.

    Args:
        model: Pydantic model class defining the function parameters schema
        name: Optional function name (defaults to the model class name in snake_case)
        description: Optional function description (defaults to the model docstring)

    Returns:
        ChatCompletionToolParam: A tool definition dict with 'type' and 'function' keys
        (type from the openai.types.chat module). Can be passed to
        chat.completions.create(tools=[...]).
    """

class AssistantEventHandler:
    """Base class for handling assistant streaming events."""
    def on_event(self, event) -> None: ...
    def on_run_step_created(self, run_step) -> None: ...
    def on_run_step_done(self, run_step) -> None: ...
    def on_tool_call_created(self, tool_call) -> None: ...
    def on_tool_call_done(self, tool_call) -> None: ...
    def on_message_created(self, message) -> None: ...
    def on_message_done(self, message) -> None: ...
    def on_text_created(self, text) -> None: ...
    def on_text_delta(self, delta, snapshot) -> None: ...
    def on_text_done(self, text) -> None: ...

class AsyncAssistantEventHandler:
    """Base class for handling assistant streaming events asynchronously."""
    async def on_event(self, event) -> None: ...
    async def on_run_step_created(self, run_step) -> None: ...
    async def on_run_step_done(self, run_step) -> None: ...
    async def on_tool_call_created(self, tool_call) -> None: ...
    async def on_tool_call_done(self, tool_call) -> None: ...
    async def on_message_created(self, message) -> None: ...
    async def on_message_done(self, message) -> None: ...
    async def on_text_created(self, text) -> None: ...
    async def on_text_delta(self, delta, snapshot) -> None: ...
    async def on_text_done(self, text) -> None: ...

# Audio recording helper
class Microphone(Generic[DType]):
    """
    Microphone helper for recording audio input from the default audio device.

    Requires optional dependencies: numpy, sounddevice.
    Install with: pip install openai[voice_helpers]

    Type Parameters:
        DType: numpy dtype for audio data (default: np.int16)
    """
    def __init__(
        self,
        channels: int = 1,
        dtype: Type[DType] = np.int16,
        should_record: Callable[[], bool] | None = None,
        timeout: float | None = None,
    ):
        """
        Initialize the microphone for recording.

        Args:
            channels: Number of audio channels (1 for mono, 2 for stereo)
            dtype: Numpy data type for audio samples
            should_record: Optional callback that returns True while recording should continue
            timeout: Maximum recording duration in seconds (None for unlimited)
        """

    async def record(
        self,
        return_ndarray: bool | None = False,
    ) -> npt.NDArray[DType] | FileTypes:
        """
        Record audio from the microphone.

        Args:
            return_ndarray: If True, return a numpy array; if False, return a WAV file tuple

        Returns:
            Either a numpy array of audio samples or a FileTypes tuple
            (filename, buffer, mime_type) suitable for passing to OpenAI API methods.
        """

# Audio playback helper
class LocalAudioPlayer:
    """
    Local audio player for playing audio content through the default audio device.

    Requires optional dependencies: numpy, sounddevice.
    Install with: pip install openai[voice_helpers]

    The player uses a fixed sample rate of 24000 Hz, 1 channel (mono), and float32 dtype.
    """
    def __init__(
        self,
        should_stop: Callable[[], bool] | None = None,
    ):
        """
        Initialize the audio player.

        Args:
            should_stop: Optional callback that returns True to stop playback
        """

    async def play(
        self,
        input: npt.NDArray[np.int16] | npt.NDArray[np.float32] | HttpxBinaryResponseContent | AsyncStreamedBinaryAPIResponse | StreamedBinaryAPIResponse,
    ) -> None:
        """
        Play audio data through the local audio device.

        Args:
            input: Audio data as a numpy array (int16 or float32) or response content from the TTS API
        """

    async def play_stream(
        self,
        buffer_stream: AsyncGenerator[npt.NDArray[np.float32] | npt.NDArray[np.int16] | None, None],
    ) -> None:
        """
        Stream and play audio data as it arrives.

        Useful for playing streaming audio responses from the Realtime API.

        Args:
            buffer_stream: Async generator yielding audio buffers (numpy arrays) or None to signal completion
        """
```
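`pydantic_function_tool` turns a model's fields into the JSON-schema `parameters` block of a tool definition. The stdlib-only analogue below uses a dataclass in place of a Pydantic model to sketch the shape of that transformation; the type mapping and field handling are simplified assumptions, not the library's algorithm:

```python
import dataclasses

# Minimal Python-type -> JSON-schema-type mapping (assumption for this sketch)
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

@dataclasses.dataclass
class GetWeather:
    """Get the current weather for a city."""
    city: str
    unit: str = "celsius"

def dataclass_function_tool(cls) -> dict:
    """Build a chat-completions tool definition dict from a dataclass's fields."""
    fields = dataclasses.fields(cls)
    properties = {f.name: {"type": _JSON_TYPES[f.type]} for f in fields}
    # Fields without a default become required parameters
    required = [f.name for f in fields if f.default is dataclasses.MISSING]
    return {
        "type": "function",
        "function": {
            "name": cls.__name__,
            "description": (cls.__doc__ or "").strip(),
            "parameters": {"type": "object", "properties": properties, "required": required},
        },
    }

tool = dataclass_function_tool(GetWeather)
```

The resulting dict has the `{"type": "function", "function": {...}}` shape expected by `chat.completions.create(tools=[...])`; with the real helper, Pydantic additionally handles nested models, enums, and validation of the arguments the model returns.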
