# Models

Retrieve information about available models and manage fine-tuned models. List models, get model details, and delete custom models.

## Capabilities

### Retrieve Model

Get details about a specific model.

```python { .api }
def retrieve(
    self,
    model: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Model:
    """
    Retrieve model information.

    Args:
        model: Model ID (e.g., "gpt-4", "gpt-3.5-turbo", or a custom fine-tuned model).
        extra_headers: Additional HTTP headers.
        extra_query: Additional query parameters.
        extra_body: Additional JSON fields.
        timeout: Request timeout in seconds.

    Returns:
        Model: Model details including ID, owner, and creation timestamp.

    Raises:
        NotFoundError: Model not found.
    """
```

Usage example:

```python
from openai import OpenAI

client = OpenAI()

# Get model details
model = client.models.retrieve("gpt-4")

print(f"ID: {model.id}")
print(f"Owner: {model.owned_by}")
print(f"Created: {model.created}")
```

### List Models

List all available models.

```python { .api }
def list(
    self,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncPage[Model]:
    """
    List all available models.

    Args:
        extra_headers: Additional HTTP headers.
        extra_query: Additional query parameters.
        extra_body: Additional JSON fields.
        timeout: Request timeout in seconds.

    Returns:
        SyncPage[Model]: Paginated list of models.
    """
```

Usage examples:

```python
# List all models
models = client.models.list()

for model in models.data:
    print(f"{model.id} - {model.owned_by}")

# Filter for specific models
gpt4_models = [m for m in client.models.list() if "gpt-4" in m.id]

# Find fine-tuned models (their IDs start with "ft:")
fine_tuned = [m for m in client.models.list() if m.id.startswith("ft:")]

for model in fine_tuned:
    print(f"Fine-tuned model: {model.id}")
```

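The list results above can also be grouped rather than filtered. A minimal sketch of grouping model IDs by their `owned_by` field, using illustrative sample data instead of a live API call (`ModelInfo` and the sample IDs are our own stand-ins, not part of the SDK):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ModelInfo:
    """Stand-in for the Model objects returned by client.models.list()."""
    id: str
    owned_by: str

def group_by_owner(models) -> dict[str, list[str]]:
    """Map each owner to the list of model IDs it owns."""
    groups: dict[str, list[str]] = defaultdict(list)
    for m in models:
        groups[m.owned_by].append(m.id)
    return dict(groups)

sample = [
    ModelInfo("gpt-4", "openai"),
    ModelInfo("ft:gpt-3.5-turbo:custom:abc123", "user-org"),
    ModelInfo("gpt-3.5-turbo", "openai"),
]
by_owner = group_by_owner(sample)
# by_owner["openai"] → ["gpt-4", "gpt-3.5-turbo"]
```

With real data, pass `client.models.list()` directly: the page iterates over `Model` objects carrying the same two fields.
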
### Delete Model

Delete a fine-tuned model.

```python { .api }
def delete(
    self,
    model: str,
    *,
    extra_headers: dict[str, str] | None = None,
    extra_query: dict[str, object] | None = None,
    extra_body: dict[str, object] | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ModelDeleted:
    """
    Delete a fine-tuned model. Only works with custom fine-tuned models.

    Args:
        model: The model ID to delete (must be a fine-tuned model you own).
        extra_headers: Additional HTTP headers.
        extra_query: Additional query parameters.
        extra_body: Additional JSON fields.
        timeout: Request timeout in seconds.

    Returns:
        ModelDeleted: Deletion confirmation.

    Raises:
        NotFoundError: Model not found.
        BadRequestError: Cannot delete base models.
    """
```

Usage example:

```python
# Delete a fine-tuned model
result = client.models.delete("ft:gpt-3.5-turbo:custom:abc123")

print(f"Deleted: {result.id}")
print(f"Success: {result.deleted}")

# Note: base models cannot be deleted
try:
    client.models.delete("gpt-4")  # Will fail
except Exception as e:
    print(f"Error: {e}")
```

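Fine-tuned model IDs like the one deleted above embed the base model after the `ft:` prefix. A small helper for pulling that base model back out (`split_ft_id` is our own name, and it assumes the `ft:<base>:<...>` shape shown in the example):

```python
def split_ft_id(model_id: str) -> tuple[str, str]:
    """Split a fine-tuned model ID into (base_model, remainder)."""
    if not model_id.startswith("ft:"):
        raise ValueError(f"not a fine-tuned model ID: {model_id!r}")
    # Drop the "ft:" prefix, then split off the base model name.
    base, _, rest = model_id[3:].partition(":")
    return base, rest

base, rest = split_ft_id("ft:gpt-3.5-turbo:custom:abc123")
# base → "gpt-3.5-turbo", rest → "custom:abc123"
```
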
## Types

```python { .api }
from collections.abc import Iterator
from typing import Literal
from pydantic import BaseModel

class Model(BaseModel):
    """Model information."""
    id: str
    created: int
    object: Literal["model"]
    owned_by: str

class ModelDeleted(BaseModel):
    """Model deletion confirmation."""
    id: str
    deleted: bool
    object: Literal["model"]

# Pagination
class SyncPage[T](BaseModel):
    data: list[T]
    object: str

    def __iter__(self) -> Iterator[T]: ...
```

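The `created` field above is a Unix timestamp. A minimal sketch of turning it into a readable datetime, using an illustrative payload rather than a live API response:

```python
from datetime import datetime, timezone

# A model payload as returned by the API (values illustrative).
payload = {"id": "gpt-4", "created": 1687882411, "object": "model", "owned_by": "openai"}

# `created` is seconds since the Unix epoch; interpret it as an aware UTC datetime.
created_at = datetime.fromtimestamp(payload["created"], tz=timezone.utc)
print(created_at.isoformat())
```

The same conversion works on `model.created` after `client.models.retrieve(...)`.
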
## Common Models

### Chat Models
- `gpt-4o`: Most capable GPT-4 Omni model
- `gpt-4-turbo`: Fast GPT-4 with 128K context
- `gpt-4`: Original GPT-4 model
- `gpt-3.5-turbo`: Fast and cost-effective

### Reasoning Models
- `o1`: Advanced reasoning model
- `o3`: Latest reasoning model

### Embedding Models
- `text-embedding-3-large`: Largest embedding model
- `text-embedding-3-small`: Efficient embedding model
- `text-embedding-ada-002`: Legacy embedding model

### Other Models
- `whisper-1`: Audio transcription/translation
- `tts-1`, `tts-1-hd`: Text-to-speech
- `dall-e-3`, `dall-e-2`: Image generation
- `text-moderation-latest`: Content moderation

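When more than one of the models above would do, a common pattern is to walk a preference list and take the first ID that is actually available. A minimal sketch (the helper name and fallback order are our own illustration):

```python
def pick_model(available: set[str], preferred: list[str]) -> str:
    """Return the first preferred model ID that is actually available."""
    for model_id in preferred:
        if model_id in available:
            return model_id
    raise ValueError("none of the preferred models are available")

# e.g. with an account that only has gpt-3.5-turbo and whisper-1:
choice = pick_model({"gpt-3.5-turbo", "whisper-1"},
                    ["gpt-4o", "gpt-4-turbo", "gpt-3.5-turbo"])
# choice → "gpt-3.5-turbo"
```

In practice, build `available` from `{m.id for m in client.models.list()}`.
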
## Best Practices

```python
from functools import lru_cache

from openai import NotFoundError, OpenAI

client = OpenAI()

# 1. Cache the model list to reduce API calls
@lru_cache(maxsize=1)
def get_available_models() -> tuple[str, ...]:
    return tuple(m.id for m in client.models.list())

# 2. Verify model availability before use
def is_model_available(model_id: str) -> bool:
    try:
        client.models.retrieve(model_id)
        return True
    except NotFoundError:
        return False

# 3. List your fine-tuned models (their IDs start with "ft:")
def get_my_fine_tuned_models():
    return [m for m in client.models.list() if m.id.startswith("ft:")]

# 4. Clean up old fine-tuned models
# (should_delete is a placeholder for your own retention policy)
old_models = get_my_fine_tuned_models()
for model in old_models:
    if should_delete(model):
        client.models.delete(model.id)
```

## Async Usage

```python
import asyncio

from openai import AsyncOpenAI

async def list_models():
    client = AsyncOpenAI()
    models = await client.models.list()
    return [m.id for m in models.data]

model_ids = asyncio.run(list_models())
```
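
The async client also makes it easy to fetch several models concurrently. A sketch of fanning out `retrieve` calls with `asyncio.gather` (`retrieve_many` is our own helper; with a real `AsyncOpenAI` client it resolves to `Model` objects):

```python
import asyncio

async def retrieve_many(client, model_ids: list[str]):
    """Issue one models.retrieve call per ID concurrently instead of sequentially."""
    return await asyncio.gather(*(client.models.retrieve(m) for m in model_ids))
```

For example, `await retrieve_many(client, ["gpt-4", "gpt-3.5-turbo"])` overlaps the two HTTP round trips rather than waiting on each in turn.
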