pypi-openai

Description
Official Python library for the OpenAI API providing chat completions, embeddings, audio, images, and more
Author
tessl
Last updated

How to use

npx @tessl/cli registry install tessl/pypi-openai@1.106.0


# OpenAI Python Library

The official Python library for the OpenAI API, providing access to OpenAI's models, including GPT-4, GPT-3.5, DALL·E, and Whisper. It gives applications a simple, typed interface to chat completions, embeddings, audio, images, and more.

## Package Information

- **Package Name**: openai
- **Package Type**: PyPI
- **Language**: Python
- **Version**: 1.106.0
- **Installation**: `pip install openai`

## Core Imports

```python
import openai
```

Standard client-based usage:

```python
from openai import OpenAI
```

Async client usage:

```python
from openai import AsyncOpenAI
```

Azure OpenAI usage:

```python
from openai import AzureOpenAI, AsyncAzureOpenAI
```

## Basic Usage

```python
from openai import OpenAI

# Initialize client with API key
client = OpenAI(api_key="your-api-key")

# Chat completions - most common use case
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    max_tokens=150,
    temperature=0.7
)

print(response.choices[0].message.content)

# Generate embeddings
embeddings = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["Text to embed", "Another text"]
)

# Generate images
image_response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic cityscape at sunset",
    size="1024x1024",
    quality="standard",
    n=1
)

# Text-to-speech
speech_response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This is a text-to-speech example."
)

# Save audio to file
speech_response.stream_to_file("output.mp3")
```

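When `stream=True` is passed, the chat completions call returns an iterator of `ChatCompletionChunk` objects whose `choices[0].delta.content` carries incremental text. A minimal sketch of the accumulation pattern, using stand-in chunk objects rather than a live API call:

```python
from types import SimpleNamespace

def accumulate_stream(stream):
    """Concatenate the incremental text deltas from a chat completion stream."""
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta
        # The final chunk's delta typically has no content
        if delta.content is not None:
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks mimicking the shape of ChatCompletionChunk
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    for text in ["Machine ", "learning ", "is...", None]
]

print(accumulate_stream(fake_stream))  # Machine learning is...
```

With a live client, the iterable would come from `client.chat.completions.create(..., stream=True)` instead of `fake_stream`.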
## Architecture

The OpenAI Python library follows a resource-based architecture with clear separation of concerns:

- **Client Classes**: `OpenAI`, `AsyncOpenAI`, `AzureOpenAI`, `AsyncAzureOpenAI` provide the main entry points
- **Resources**: Logical groupings of related API endpoints (chat, embeddings, images, etc.)
- **Sub-resources**: Nested functionality within resources (chat.completions, audio.speech, etc.)
- **Type System**: Comprehensive type definitions for all API parameters and responses
- **Streaming Support**: Built-in streaming for real-time responses
- **Error Handling**: Structured exception hierarchy for different error types

The library supports both instance-based usage (creating client objects) and module-level usage (a global default client) for convenience.

## Capabilities

### Client Setup and Configuration

Core client initialization, authentication, configuration options, and Azure integration for both synchronous and asynchronous usage patterns.

```python { .api }
class OpenAI:
    def __init__(
        self,
        *,
        api_key: str | None = None,
        organization: str | None = None,
        project: str | None = None,
        base_url: str | None = None,
        timeout: float | None = None,
        max_retries: int = 2,
        default_headers: dict[str, str] | None = None,
        http_client: httpx.Client | None = None
    ): ...

class AsyncOpenAI:
    def __init__(
        self,
        *,
        api_key: str | None = None,
        organization: str | None = None,
        project: str | None = None,
        base_url: str | None = None,
        timeout: float | None = None,
        max_retries: int = 2,
        default_headers: dict[str, str] | None = None,
        http_client: httpx.AsyncClient | None = None
    ): ...
```

[Client Setup](./client-setup.md)

### Chat Completions

Primary interface for conversational AI using GPT models. Supports streaming responses, function calling, structured outputs, and advanced features like reasoning models.

```python { .api }
def create(
    self,
    *,
    messages: list[ChatCompletionMessageParam],
    model: str,
    frequency_penalty: float | None = None,
    logit_bias: dict[str, int] | None = None,
    logprobs: bool | None = None,
    max_completion_tokens: int | None = None,
    n: int | None = None,
    presence_penalty: float | None = None,
    response_format: ResponseFormatParam | None = None,
    seed: int | None = None,
    stop: str | list[str] | None = None,
    stream: bool | None = None,
    temperature: float | None = None,
    tool_choice: ToolChoiceParam | None = None,
    tools: list[ChatCompletionToolParam] | None = None,
    top_p: float | None = None,
    user: str | None = None
) -> ChatCompletion | Stream[ChatCompletionChunk]: ...
```

[Chat Completions](./chat-completions.md)

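Chat completions are stateless: each `create` call must carry the full conversation so far. A minimal sketch of the history bookkeeping (plain Python; the live API call appears only as a comment):

```python
history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, user_text, assistant_text):
    """Record one user/assistant exchange so the next request has full context."""
    history.append({"role": "user", "content": user_text})
    # In live code, assistant_text would come from:
    #   client.chat.completions.create(model="gpt-4", messages=history)
    history.append({"role": "assistant", "content": assistant_text})
    return history

add_turn(history, "What is machine learning?", "A field of AI that learns from data.")
add_turn(history, "Give one example.", "Spam filtering.")

print(len(history))         # 5 messages: system + two user/assistant pairs
print(history[-1]["role"])  # assistant
```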
### Text Completions

Legacy text completion interface for older models like GPT-3.5 Turbo Instruct, providing direct text generation capabilities.

```python { .api }
def create(
    self,
    *,
    model: str,
    prompt: str | list[str] | None,
    best_of: int | None = None,
    echo: bool | None = None,
    frequency_penalty: float | None = None,
    logit_bias: dict[str, int] | None = None,
    logprobs: int | None = None,
    max_tokens: int | None = None,
    n: int | None = None,
    presence_penalty: float | None = None,
    seed: int | None = None,
    stop: str | list[str] | None = None,
    stream: bool | None = None,
    suffix: str | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    user: str | None = None
) -> Completion | Stream[Completion]: ...
```

[Text Completions](./text-completions.md)

### Embeddings

Convert text into high-dimensional vector representations for semantic similarity, search, clustering, and other NLP tasks using OpenAI's embedding models.

```python { .api }
def create(
    self,
    *,
    input: str | list[str],
    model: str,
    dimensions: int | None = None,
    encoding_format: Literal["float", "base64"] | None = None,
    user: str | None = None
) -> CreateEmbeddingResponse: ...
```

[Embeddings](./embeddings.md)

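The vectors come back as `response.data[i].embedding` (a list of floats), and similarity between two texts is usually scored with cosine similarity. A self-contained sketch using toy vectors in place of real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors standing in for response.data[i].embedding
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.0, 1.0]
v3 = [0.0, 1.0, 0.0]

print(cosine_similarity(v1, v2))  # 1.0 (identical direction)
print(cosine_similarity(v1, v3))  # 0.0 (orthogonal)
```

OpenAI embeddings are returned unit-normalized, so a plain dot product gives the same ranking in practice.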
### Audio APIs

Comprehensive audio processing including text-to-speech synthesis, speech-to-text transcription, and audio translation capabilities using Whisper and TTS models.

```python { .api }
# Speech synthesis
def create(
    self,
    *,
    input: str,
    model: Union[str, SpeechModel],
    voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse", "marin", "cedar"]],
    instructions: str | NotGiven = NOT_GIVEN,
    response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
    speed: float | NotGiven = NOT_GIVEN,
    stream_format: Literal["sse", "audio"] | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> HttpxBinaryResponseContent: ...

# Transcription
def create(
    self,
    *,
    file: FileTypes,
    model: Union[str, AudioModel],
    chunking_strategy: Optional[ChunkingStrategy] | NotGiven = NOT_GIVEN,
    include: List[TranscriptionInclude] | NotGiven = NOT_GIVEN,
    language: str | NotGiven = NOT_GIVEN,
    prompt: str | NotGiven = NOT_GIVEN,
    response_format: Union[AudioResponseFormat, NotGiven] = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    temperature: float | NotGiven = NOT_GIVEN,
    timestamp_granularities: List[Literal["word", "segment"]] | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> str | Transcription | TranscriptionVerbose | Stream[TranscriptionStreamEvent]: ...
```

[Audio](./audio.md)

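With `response_format="verbose_json"` and `timestamp_granularities=["segment"]`, the transcription carries timed segments that can be turned into subtitles. A sketch of the conversion, assuming segments expose `start`, `end`, and `text` (the field names used by Whisper's verbose output), demonstrated on toy data:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render (start, end, text) segments as a list of SRT cues."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(
            f"{i}\n{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n{seg['text'].strip()}"
        )
    return "\n\n".join(cues)

# Toy segments standing in for a verbose transcription's segment list
segments = [
    {"start": 0.0, "end": 2.5, "text": "Hello there."},
    {"start": 2.5, "end": 5.0, "text": "Welcome to the demo."},
]
print(segments_to_srt(segments))
```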
### Images

Generate, edit, and create variations of images using DALL·E models with support for different sizes, quality levels, and style options.

```python { .api }
def generate(
    self,
    *,
    prompt: str,
    background: Optional[Literal["transparent", "opaque", "auto"]] | NotGiven = NOT_GIVEN,
    model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
    moderation: Optional[Literal["low", "auto"]] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    output_compression: Optional[int] | NotGiven = NOT_GIVEN,
    output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
    partial_images: Optional[int] | NotGiven = NOT_GIVEN,
    quality: Optional[Literal["standard", "hd", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
    response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
    size: Optional[Literal["auto", "1024x1024", "1536x1024", "1024x1536", "256x256", "512x512", "1792x1024", "1024x1792"]] | NotGiven = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    style: Optional[Literal["vivid", "natural"]] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ImagesResponse | Stream[ImageGenStreamEvent]: ...
```

[Images](./images.md)

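With `response_format="b64_json"`, each returned `Image` carries its payload in the `b64_json` field rather than a URL. A small decoding sketch, using a toy base64 payload in place of `response.data[0].b64_json`:

```python
import base64

def save_b64_image(b64_data, path):
    """Decode a base64 image payload (image.b64_json) and write it to disk."""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Toy payload standing in for response.data[0].b64_json
payload = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii")
n = save_b64_image(payload, "out.png")
print(n)  # 8 bytes written
```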
### Files

Upload, manage, and retrieve files for use with various OpenAI services including fine-tuning, assistants, and batch operations.

```python { .api }
def create(
    self,
    *,
    file: FileTypes,
    purpose: FilePurpose,
    expires_after: ExpiresAfter | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject: ...

def list(
    self,
    *,
    after: str | NotGiven = NOT_GIVEN,
    limit: int | NotGiven = NOT_GIVEN,
    order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
    purpose: str | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[FileObject]: ...

def retrieve(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject: ...

def delete(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileDeleted: ...

def content(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> HttpxBinaryResponseContent: ...

def wait_for_processing(
    self,
    id: str,
    *,
    poll_interval: float = 5.0,
    max_wait_seconds: float = 30 * 60,
) -> FileObject: ...
```

[Files](./files.md)

### Fine-tuning

Create and manage custom model training jobs to adapt OpenAI models to specific use cases and domains with your own data.

```python { .api }
def create(
    self,
    *,
    model: str,
    training_file: str,
    hyperparameters: HyperparametersParam | None = None,
    suffix: str | None = None,
    validation_file: str | None = None,
    integrations: list[IntegrationParam] | None = None,
    seed: int | None = None
) -> FineTuningJob: ...
```

[Fine-tuning](./fine-tuning.md)

### Assistants API

Build AI assistants with persistent conversations, file access, function calling, and code interpretation capabilities using the beta assistants framework.

```python { .api }
def create(
    self,
    *,
    model: str,
    description: str | None = None,
    instructions: str | None = None,
    name: str | None = None,
    tools: list[ToolParam] | None = None,
    tool_resources: ToolResourcesParam | None = None,
    metadata: dict | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    response_format: AssistantResponseFormatParam | None = None
) -> Assistant: ...
```

[Assistants](./assistants.md)

### Batch Operations

Process large volumes of requests efficiently using the batch API for cost-effective bulk operations with 24-hour processing windows.

```python { .api }
def create(
    self,
    *,
    completion_window: Literal["24h"],
    endpoint: Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
    input_file_id: str,
    metadata: dict | None = None
) -> Batch: ...
```

[Batch Operations](./batch-operations.md)

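A batch input file is JSONL: one JSON object per line with `custom_id`, `method`, `url`, and `body` fields. A sketch that builds such a payload with the standard library:

```python
import json

def batch_line(custom_id, model, user_text):
    """One JSONL line for a /v1/chat/completions batch request."""
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": user_text}],
        },
    })

lines = [batch_line(f"req-{i}", "gpt-4", text) for i, text in enumerate(["Hi", "Bye"])]
jsonl = "\n".join(lines)
print(jsonl.splitlines()[0])

# The resulting file would be uploaded with purpose="batch" and its id
# passed as input_file_id to client.batches.create(...)
```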
### Other APIs

Additional functionality including model management, content moderation, vector stores, webhooks, and experimental features.

```python { .api }
# Models
def list(self) -> SyncPage[Model]: ...
def retrieve(self, model: str) -> Model: ...

# Moderations
def create(
    self,
    *,
    input: str | list[str],
    model: str | None = None
) -> ModerationCreateResponse: ...

# Vector Stores
def create(
    self,
    *,
    file_ids: list[str] | None = None,
    name: str | None = None,
    expires_after: ExpiresAfterParam | None = None,
    chunking_strategy: ChunkingStrategyParam | None = None,
    metadata: dict | None = None
) -> VectorStore: ...
```

[Other APIs](./other-apis.md)

## Types

### Core Response Types

```python { .api }
class ChatCompletion(BaseModel):
    id: str
    choices: list[ChatCompletionChoice]
    created: int
    model: str
    object: Literal["chat.completion"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class ChatCompletionChunk(BaseModel):
    id: str
    choices: list[ChatCompletionChunkChoice]
    created: int
    model: str
    object: Literal["chat.completion.chunk"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class CreateEmbeddingResponse(BaseModel):
    data: list[Embedding]
    model: str
    object: Literal["list"]
    usage: EmbeddingUsage

class ImagesResponse(BaseModel):
    created: int
    data: list[Image]

class FileObject(BaseModel):
    id: str
    bytes: int
    created_at: int
    filename: str
    object: Literal["file"]
    purpose: FilePurpose
    status: Literal["uploaded", "processed", "error"]
    status_details: Optional[str]
```

### Parameter Types

```python { .api }
ChatCompletionMessageParam = Union[
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
    ChatCompletionAssistantMessageParam,
    ChatCompletionToolMessageParam,
    ChatCompletionFunctionMessageParam
]

class ChatCompletionUserMessageParam(TypedDict, total=False):
    content: Required[Union[str, list[ChatCompletionContentPartParam]]]
    role: Required[Literal["user"]]
    name: str

class ChatCompletionToolParam(TypedDict, total=False):
    function: Required[FunctionDefinition]
    type: Required[Literal["function"]]

FileTypes = Union[
    # File contents
    bytes,
    # File-like objects
    IO[bytes],
    # Paths
    str,
    os.PathLike[str],
]
```

### Exception Types

```python { .api }
class OpenAIError(Exception):
    """Base exception for all OpenAI errors"""

class APIError(OpenAIError):
    """API-related errors"""
    message: str
    request: httpx.Request
    body: object | None = None

class APIStatusError(APIError):
    """HTTP status code errors"""
    response: httpx.Response
    status_code: int

class RateLimitError(APIStatusError):
    """HTTP 429 rate limit errors"""

class AuthenticationError(APIStatusError):
    """HTTP 401 authentication errors"""

class BadRequestError(APIStatusError):
    """HTTP 400 bad request errors"""

class NotFoundError(APIStatusError):
    """HTTP 404 not found errors"""

class APIConnectionError(APIError):
    """Connection-related errors"""

class APITimeoutError(APIConnectionError):
    """Request timeout errors"""
```
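
This hierarchy suggests a standard handling pattern: retry on transient failures like `RateLimitError` or `APITimeoutError`, and surface other `APIStatusError`s immediately. A minimal backoff sketch using a stand-in exception (`TransientError` is hypothetical; in live code you would catch the openai exception classes above):

```python
import time

class TransientError(Exception):
    """Stand-in for retryable errors such as RateLimitError or APITimeoutError."""

def with_retries(fn, max_attempts=3, base_delay=0.01):
    """Call fn, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds, to exercise the retry path
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("rate limited")
    return "ok"

result = with_retries(flaky)
print(result)  # ok (after two retries)
```

Note that the client already retries transient failures itself (`max_retries=2` by default), so application-level retries like this are a supplement, not a requirement.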