pypi-openai

Description
Official Python library for the OpenAI API, providing chat completions, embeddings, audio, images, and more
Author
tessl
Last updated

How to use

```
npx @tessl/cli registry install tessl/pypi-openai@1.106.0
```

docs/images.md

# Images

Generate, edit, and create variations of images using DALL·E models, with support for different sizes, quality levels, and style options.

## Capabilities

### Image Generation

Create images from text descriptions using DALL·E models with various customization options.

```python { .api }
def generate(
    self,
    *,
    prompt: str,
    background: Optional[Literal["transparent", "opaque", "auto"]] | NotGiven = NOT_GIVEN,
    model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
    moderation: Optional[Literal["low", "auto"]] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    output_compression: Optional[int] | NotGiven = NOT_GIVEN,
    output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
    partial_images: Optional[int] | NotGiven = NOT_GIVEN,
    quality: Optional[Literal["standard", "hd", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
    response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
    size: Optional[Literal["auto", "1024x1024", "1536x1024", "1024x1536", "256x256", "512x512", "1792x1024", "1024x1792"]] | NotGiven = NOT_GIVEN,
    stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
    style: Optional[Literal["vivid", "natural"]] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
    # The extra values given here take precedence over values defined on the client or passed to this method.
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ImagesResponse | Stream[ImageGenStreamEvent]: ...
```

Usage examples:

```python
import requests

from openai import OpenAI

client = OpenAI()

# Basic image generation
response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic cityscape with flying cars and neon lights",
    size="1024x1024",
    quality="standard",
    n=1
)

image_url = response.data[0].url
print(f"Generated image URL: {image_url}")

# High-definition image
response = client.images.generate(
    model="dall-e-3",
    prompt="A photorealistic portrait of a golden retriever wearing sunglasses",
    size="1024x1024",
    quality="hd",
    style="natural"
)

# Save image
image_url = response.data[0].url
image_response = requests.get(image_url)

with open("golden_retriever.png", "wb") as f:
    f.write(image_response.content)

print("Image saved as golden_retriever.png")

# Multiple images with DALL·E 2
response = client.images.generate(
    model="dall-e-2",
    prompt="Abstract geometric patterns in bright colors",
    size="512x512",
    n=4  # Generate 4 variations
)

for i, image_data in enumerate(response.data):
    print(f"Image {i+1} URL: {image_data.url}")

# Different sizes and styles
sizes = ["1024x1024", "1792x1024", "1024x1792"]
styles = ["vivid", "natural"]

for size in sizes:
    for style in styles:
        response = client.images.generate(
            model="dall-e-3",
            prompt="A serene mountain landscape at sunset",
            size=size,
            style=style,
            quality="standard"
        )

        filename = f"mountain_{size}_{style}.png"

        # Download and save
        image_url = response.data[0].url
        image_response = requests.get(image_url)
        with open(filename, "wb") as f:
            f.write(image_response.content)

        print(f"Saved {filename}")
```

### Base64 Image Handling

Work with images in base64 format for direct integration and processing without external URLs.

```python { .api }
def generate(
    self,
    *,
    prompt: str,
    response_format: Literal["b64_json"],
    # ... other parameters
) -> ImagesResponse: ...
```

Usage examples:

```python
import base64
from io import BytesIO

from PIL import Image

# Generate image as base64
response = client.images.generate(
    model="dall-e-3",
    prompt="A magical forest with glowing mushrooms",
    size="1024x1024",
    response_format="b64_json"
)

# Get base64 data
base64_image = response.data[0].b64_json

# Decode to raw image bytes
image_data = base64.b64decode(base64_image)

# Save directly from base64
with open("magical_forest.png", "wb") as f:
    f.write(image_data)

# Open with PIL for processing
image_buffer = BytesIO(image_data)
pil_image = Image.open(image_buffer)

# Process image
resized_image = pil_image.resize((512, 512))
resized_image.save("magical_forest_resized.png")

print(f"Original size: {pil_image.size}")
print(f"Resized to: {resized_image.size}")

# Generate multiple images as base64
response = client.images.generate(
    model="dall-e-2",
    prompt="Minimalist logo designs for a tech company",
    size="256x256",
    n=3,
    response_format="b64_json"
)

for i, image_data in enumerate(response.data):
    # Decode and save each image
    image_bytes = base64.b64decode(image_data.b64_json)

    with open(f"logo_design_{i+1}.png", "wb") as f:
        f.write(image_bytes)

    print(f"Saved logo_design_{i+1}.png")
```

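Because `b64_json` is already base64-encoded image data, it can be embedded directly in an HTML `<img>` tag as a data URI with no intermediate file. A minimal sketch; the `to_data_uri` helper is illustrative, not part of the SDK, and the stand-in bytes below stand in for `response.data[0].b64_json`:

```python
import base64


def to_data_uri(b64_json: str, mime: str = "image/png") -> str:
    """Wrap base64 image data from the API in a data URI for direct embedding."""
    return f"data:{mime};base64,{b64_json}"


# Stand-in data; a real call would use response.data[0].b64_json
fake_b64 = base64.b64encode(b"\x89PNG...").decode("ascii")
uri = to_data_uri(fake_b64)
html = f'<img src="{uri}" alt="generated image">'
print(html.startswith('<img src="data:image/png;base64,'))  # True
```

This avoids a second network round-trip when the image is only needed for immediate display.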
### Image Editing

Edit existing images by providing a mask to specify which areas to modify.

```python { .api }
def edit(
    self,
    *,
    image: Union[FileTypes, SequenceNotStr[FileTypes]],
    prompt: str,
    background: Optional[Literal["transparent", "opaque", "auto"]] | NotGiven = NOT_GIVEN,
    input_fidelity: Optional[Literal["high", "low"]] | NotGiven = NOT_GIVEN,
    mask: FileTypes | NotGiven = NOT_GIVEN,
    model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    output_compression: Optional[int] | NotGiven = NOT_GIVEN,
    output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
    partial_images: Optional[int] | NotGiven = NOT_GIVEN,
    quality: Optional[Literal["standard", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
    response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
    size: Optional[Literal["256x256", "512x512", "1024x1024", "1536x1024", "1024x1536", "auto"]] | NotGiven = NOT_GIVEN,
    stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
    # The extra values given here take precedence over values defined on the client or passed to this method.
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ImagesResponse | Stream[ImageEditStreamEvent]: ...
```

Usage examples:

```python
import requests

from PIL import Image, ImageDraw

# Edit image with mask
with open("original_image.png", "rb") as image_file, \
        open("edit_mask.png", "rb") as mask_file:

    response = client.images.edit(
        image=image_file,
        mask=mask_file,
        prompt="Replace the background with a beautiful sunset",
        size="1024x1024",
        n=2
    )

# Save edited images
for i, image_data in enumerate(response.data):
    image_url = image_data.url
    image_response = requests.get(image_url)

    with open(f"edited_image_{i+1}.png", "wb") as f:
        f.write(image_response.content)

# Edit without explicit mask (transparent areas will be edited)
with open("image_with_transparency.png", "rb") as image_file:
    response = client.images.edit(
        image=image_file,
        prompt="Fill transparent areas with a starry night sky",
        size="1024x1024"
    )

# Create mask programmatically and edit
# Open original image
original = Image.open("photo.jpg")

# Create mask (fully transparent = edit area, opaque = keep original)
mask = Image.new("RGBA", original.size, (0, 0, 0, 255))  # Opaque background
draw = ImageDraw.Draw(mask)

# Cut a transparent circular edit region in the center
center_x, center_y = original.size[0] // 2, original.size[1] // 2
radius = min(original.size) // 4

draw.ellipse(
    [center_x - radius, center_y - radius, center_x + radius, center_y + radius],
    fill=(0, 0, 0, 0)  # Fully transparent circle
)

# Save mask
mask.save("circular_mask.png")

# Edit using programmatic mask
with open("photo.jpg", "rb") as image_file, \
        open("circular_mask.png", "rb") as mask_file:

    response = client.images.edit(
        image=image_file,
        mask=mask_file,
        prompt="Replace the center area with a beautiful flower",
        size="1024x1024"
    )
```

### Image Variations

Create variations of existing images while maintaining similar style and composition.

```python { .api }
def create_variation(
    self,
    *,
    image: FileTypes,
    model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
    size: Optional[Literal["256x256", "512x512", "1024x1024"]] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
    # The extra values given here take precedence over values defined on the client or passed to this method.
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ImagesResponse: ...
```

Usage examples:

```python
import base64

import requests

# Create variations of an existing image
with open("original_artwork.png", "rb") as image_file:
    response = client.images.create_variation(
        image=image_file,
        n=4,  # Create 4 variations
        size="1024x1024"
    )

# Save all variations
for i, image_data in enumerate(response.data):
    image_url = image_data.url
    image_response = requests.get(image_url)

    with open(f"variation_{i+1}.png", "wb") as f:
        f.write(image_response.content)

    print(f"Saved variation_{i+1}.png")

# Create variations with different models
models = ["dall-e-2"]  # Only DALL·E 2 supports variations currently

for model in models:
    with open("source_image.png", "rb") as image_file:
        response = client.images.create_variation(
            image=image_file,
            model=model,
            n=2,
            size="512x512"
        )

    for i, image_data in enumerate(response.data):
        filename = f"{model}_variation_{i+1}.png"

        image_url = image_data.url
        image_response = requests.get(image_url)

        with open(filename, "wb") as f:
            f.write(image_response.content)

# Generate variations as base64
with open("logo.png", "rb") as image_file:
    response = client.images.create_variation(
        image=image_file,
        n=3,
        size="256x256",
        response_format="b64_json"
    )

for i, image_data in enumerate(response.data):
    # Decode base64 and save
    image_bytes = base64.b64decode(image_data.b64_json)

    with open(f"logo_variation_{i+1}.png", "wb") as f:
        f.write(image_bytes)
```

### Advanced Image Processing

Combine image generation with advanced processing techniques and batch operations.

Usage examples:

```python
import base64
import concurrent.futures
import json
from io import BytesIO
from pathlib import Path
from typing import Dict, List

from PIL import Image

# Batch image generation
def generate_image_batch(prompts: List[str], **kwargs) -> List[Dict]:
    """Generate multiple images concurrently"""

    def generate_single(prompt):
        try:
            response = client.images.generate(
                prompt=prompt,
                **kwargs
            )
            return {
                "prompt": prompt,
                "success": True,
                "url": response.data[0].url,
                "revised_prompt": getattr(response.data[0], "revised_prompt", None)
            }
        except Exception as e:
            return {
                "prompt": prompt,
                "success": False,
                "error": str(e)
            }

    # Use a thread pool for concurrent requests
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        results = list(executor.map(generate_single, prompts))

    return results

# Example batch generation
prompts = [
    "A red sports car in the desert",
    "A blue bird flying over mountains",
    "A green forest with morning mist",
    "A purple sunset over the ocean",
    "A yellow sunflower field"
]

batch_results = generate_image_batch(
    prompts,
    model="dall-e-3",
    size="1024x1024",
    quality="standard"
)

# Process results
successful_generations = [r for r in batch_results if r["success"]]
failed_generations = [r for r in batch_results if not r["success"]]

print(f"Successful: {len(successful_generations)}")
print(f"Failed: {len(failed_generations)}")

# Save batch results metadata
with open("batch_results.json", "w") as f:
    json.dump(batch_results, f, indent=2)

# Image processing pipeline
def process_image_pipeline(prompt: str, output_dir: str = "output/"):
    """Complete image generation and processing pipeline"""

    # Create output directory
    Path(output_dir).mkdir(parents=True, exist_ok=True)

    # Generate image
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        quality="hd",
        response_format="b64_json"
    )

    # Get image data
    base64_image = response.data[0].b64_json
    revised_prompt = getattr(response.data[0], "revised_prompt", prompt)

    # Decode image
    image_data = base64.b64decode(base64_image)

    # Save original
    original_path = Path(output_dir) / "original.png"
    with open(original_path, "wb") as f:
        f.write(image_data)

    # Create thumbnails
    image = Image.open(BytesIO(image_data))

    sizes = [(512, 512), (256, 256), (128, 128)]
    for size in sizes:
        thumbnail = image.copy()
        thumbnail.thumbnail(size, Image.Resampling.LANCZOS)

        thumb_path = Path(output_dir) / f"thumbnail_{size[0]}x{size[1]}.png"
        thumbnail.save(thumb_path)

    # Save metadata
    metadata = {
        "original_prompt": prompt,
        "revised_prompt": revised_prompt,
        "model": "dall-e-3",
        "size": "1024x1024",
        "quality": "hd",
        "files": {
            "original": str(original_path),
            "thumbnails": [f"thumbnail_{s[0]}x{s[1]}.png" for s in sizes]
        }
    }

    metadata_path = Path(output_dir) / "metadata.json"
    with open(metadata_path, "w") as f:
        json.dump(metadata, f, indent=2)

    return metadata

# Run pipeline
result = process_image_pipeline(
    "A cyberpunk warrior in a neon-lit alley",
    "cyberpunk_warrior/"
)

print("Pipeline completed:")
print(json.dumps(result, indent=2))

# Image style transfer using editing
def style_transfer_edit(source_image_path: str, style_prompt: str):
    """Use image editing for style transfer effects"""

    # Open the source image to get its dimensions
    source_image = Image.open(source_image_path)

    # Create a mask whose alpha channel marks the editable area
    # (here the whole image, at 50% alpha for a subtler edit)
    mask = Image.new("RGBA", source_image.size, (0, 0, 0, 128))
    mask.save("style_mask.png")

    # Perform style edit
    with open(source_image_path, "rb") as image_file, \
            open("style_mask.png", "rb") as mask_file:

        response = client.images.edit(
            image=image_file,
            mask=mask_file,
            prompt=f"Transform the image with {style_prompt} style",
            size="1024x1024"
        )

    return response.data[0].url

# Example style transfer
styled_url = style_transfer_edit(
    "portrait.jpg",
    "impressionist painting"
)

print(f"Styled image: {styled_url}")
```

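Batch calls like the ones above fail transiently under rate limits or timeouts, so each request is usually wrapped in a retry loop. A minimal sketch with exponential backoff; the helper name and the stubbed flaky function are illustrative, and a production version would catch the SDK's specific exceptions (e.g. `openai.RateLimitError`) rather than bare `Exception`:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def with_retries(fn: Callable[[], T], max_attempts: int = 4, base_delay: float = 1.0) -> T:
    """Call fn, retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # Out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("unreachable")


# Demonstrate with a stub that fails twice, then succeeds
calls = {"n": 0}

def flaky_generate():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient failure")
    return {"url": "https://example.com/image.png"}

result = with_retries(flaky_generate, base_delay=0.01)
print(result["url"], "after", calls["n"], "attempts")
```

In the batch example, `generate_single` would call `with_retries(lambda: client.images.generate(...))` instead of calling the client directly.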
## Types

### Core Response Types

```python { .api }
class ImagesResponse(BaseModel):
    created: int
    data: List[Image]

class Image(BaseModel):
    b64_json: Optional[str]
    revised_prompt: Optional[str]
    url: Optional[str]
```

### Parameter Types

```python { .api }
# Image generation parameters
ImageGenerateParams = TypedDict('ImageGenerateParams', {
    'prompt': Required[str],
    'background': NotRequired[Optional[Literal["transparent", "opaque", "auto"]]],
    'model': NotRequired[Union[str, ImageModel, None]],
    'moderation': NotRequired[Optional[Literal["low", "auto"]]],
    'n': NotRequired[Optional[int]],
    'output_compression': NotRequired[Optional[int]],
    'output_format': NotRequired[Optional[Literal["png", "jpeg", "webp"]]],
    'partial_images': NotRequired[Optional[int]],
    'quality': NotRequired[Optional[Literal["standard", "hd", "low", "medium", "high", "auto"]]],
    'response_format': NotRequired[Optional[Literal["url", "b64_json"]]],
    'size': NotRequired[Optional[Literal["auto", "1024x1024", "1536x1024", "1024x1536", "256x256", "512x512", "1792x1024", "1024x1792"]]],
    'stream': NotRequired[Optional[bool]],
    'style': NotRequired[Optional[Literal["vivid", "natural"]]],
    'user': NotRequired[str],
    'extra_headers': NotRequired[Headers],
    'extra_query': NotRequired[Query],
    'extra_body': NotRequired[Body],
    'timeout': NotRequired[float],
}, total=False)

# Image editing parameters
ImageEditParams = TypedDict('ImageEditParams', {
    'image': Required[Union[FileTypes, SequenceNotStr[FileTypes]]],
    'prompt': Required[str],
    'background': NotRequired[Optional[Literal["transparent", "opaque", "auto"]]],
    'input_fidelity': NotRequired[Optional[Literal["high", "low"]]],
    'mask': NotRequired[FileTypes],
    'model': NotRequired[Union[str, ImageModel, None]],
    'n': NotRequired[Optional[int]],
    'output_compression': NotRequired[Optional[int]],
    'output_format': NotRequired[Optional[Literal["png", "jpeg", "webp"]]],
    'partial_images': NotRequired[Optional[int]],
    'quality': NotRequired[Optional[Literal["standard", "low", "medium", "high", "auto"]]],
    'response_format': NotRequired[Optional[Literal["url", "b64_json"]]],
    'size': NotRequired[Optional[Literal["256x256", "512x512", "1024x1024", "1536x1024", "1024x1536", "auto"]]],
    'stream': NotRequired[Optional[bool]],
    'user': NotRequired[str],
    'extra_headers': NotRequired[Headers],
    'extra_query': NotRequired[Query],
    'extra_body': NotRequired[Body],
    'timeout': NotRequired[float],
}, total=False)

# Image variation parameters
ImageCreateVariationParams = TypedDict('ImageCreateVariationParams', {
    'image': Required[FileTypes],
    'model': NotRequired[Union[str, ImageModel, None]],
    'n': NotRequired[Optional[int]],
    'response_format': NotRequired[Optional[Literal["url", "b64_json"]]],
    'size': NotRequired[Optional[Literal["256x256", "512x512", "1024x1024"]]],
    'user': NotRequired[str],
    'extra_headers': NotRequired[Headers],
    'extra_query': NotRequired[Query],
    'extra_body': NotRequired[Body],
    'timeout': NotRequired[float],
}, total=False)
```

### Model and Size Types

```python { .api }
# Supported models
ImageModel = Literal["dall-e-2", "dall-e-3", "gpt-image-1"]

# Image sizes
ImageSize = Literal[
    "auto",        # GPT-Image-1 default
    "256x256",     # DALL·E 2 only
    "512x512",     # DALL·E 2 only
    "1024x1024",   # All models
    "1536x1024",   # GPT-Image-1 landscape
    "1024x1536",   # GPT-Image-1 portrait
    "1792x1024",   # DALL·E 3 only
    "1024x1792"    # DALL·E 3 only
]

# Quality options
ImageQuality = Literal[
    "auto",        # Default - automatic selection
    "standard",    # DALL·E 2/3
    "hd",          # DALL·E 3 only
    "low",         # GPT-Image-1 only
    "medium",      # GPT-Image-1 only
    "high"         # GPT-Image-1 only
]

# Style options (DALL·E 3 only)
ImageStyle = Literal["vivid", "natural"]

# Response formats
ResponseFormat = Literal["url", "b64_json"]

# File types for input
FileTypes = Union[
    bytes,             # Raw image bytes
    IO[bytes],         # File-like object
    str,               # File path
    os.PathLike[str]   # Path object
]

# Streaming types
ImageGenStreamEvent = Dict[str, Any]
ImageEditStreamEvent = Dict[str, Any]
Stream = Iterator[ImageGenStreamEvent]
SequenceNotStr = List[FileTypes]
```

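When `stream=True`, `generate` returns an iterator of events rather than a single `ImagesResponse`. The loop below shows the consumption pattern against stub dict events matching the `ImageGenStreamEvent` shape above; the `type` strings and `b64_json` field are assumptions to verify against the installed SDK version:

```python
from typing import Any, Dict, Iterator, List


def collect_partials(stream: Iterator[Dict[str, Any]]) -> List[str]:
    """Gather base64 payloads from partial-image and completion events."""
    payloads = []
    for event in stream:
        if event["type"] in ("image_generation.partial_image", "image_generation.completed"):
            payloads.append(event["b64_json"])
    return payloads


# Stub stream standing in for client.images.generate(..., stream=True, partial_images=2)
fake_stream = iter([
    {"type": "image_generation.partial_image", "b64_json": "cGFydGlhbDE="},
    {"type": "image_generation.partial_image", "b64_json": "cGFydGlhbDI="},
    {"type": "image_generation.completed", "b64_json": "ZmluYWw="},
])

frames = collect_partials(fake_stream)
print(len(frames))  # 3
```

Each partial frame can be decoded and displayed immediately to show generation progress.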
### Configuration and Limits

```python { .api }
# Model capabilities and limits
class ImageModelLimits:
    dall_e_2 = {
        "sizes": ["256x256", "512x512", "1024x1024"],
        "max_images": 10,
        "supports_hd": False,
        "supports_style": False,
        "supports_variations": True,
        "supports_editing": True,
        "supports_streaming": False
    }

    dall_e_3 = {
        "sizes": ["1024x1024", "1792x1024", "1024x1792"],
        "max_images": 1,  # Only 1 image per request
        "supports_hd": True,
        "supports_style": True,
        "supports_variations": False,  # Not supported
        "supports_editing": False,  # Not supported
        "supports_streaming": False
    }

    gpt_image_1 = {
        "sizes": ["auto", "1024x1024", "1536x1024", "1024x1536"],
        "max_images": 10,
        "supports_hd": False,
        "supports_style": False,
        "supports_variations": False,
        "supports_editing": True,
        "supports_streaming": True,
        "supports_transparency": True,
        "supports_compression": True,
        "max_file_size": 50 * 1024 * 1024,  # 50MB
        "max_files": 16
    }

# Parameter constraints
class ImageConstraints:
    prompt_max_length_dalle2: int = 1000  # characters
    prompt_max_length_dalle3: int = 4000  # characters
    prompt_max_length_gpt_image: int = 32000  # characters

    n_range_dalle2: Tuple[int, int] = (1, 10)
    n_range_dalle3: Tuple[int, int] = (1, 1)
    n_range_gpt_image: Tuple[int, int] = (1, 10)

    # Supported file formats for input
    supported_formats_dalle2: List[str] = ["png"]
    supported_formats_gpt_image: List[str] = ["png", "webp", "jpg"]

    max_file_size_dalle2: int = 4 * 1024 * 1024  # 4MB
    max_file_size_gpt_image: int = 50 * 1024 * 1024  # 50MB

    # Compression range for GPT-Image-1
    compression_range: Tuple[int, int] = (0, 100)

    # Partial images range for streaming
    partial_images_range: Tuple[int, int] = (0, 3)
```

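The per-model limits above can be enforced locally before a request is sent, turning an API error into an immediate, descriptive one. A minimal sketch; the helper and its tables are illustrative, derived from the limits listed above:

```python
# Size and count limits per model, condensed from the tables above
MODEL_SIZES = {
    "dall-e-2": {"256x256", "512x512", "1024x1024"},
    "dall-e-3": {"1024x1024", "1792x1024", "1024x1792"},
    "gpt-image-1": {"auto", "1024x1024", "1536x1024", "1024x1536"},
}
MODEL_MAX_N = {"dall-e-2": 10, "dall-e-3": 1, "gpt-image-1": 10}


def validate_request(model: str, size: str, n: int = 1) -> None:
    """Raise ValueError if size or n is out of range for the chosen model."""
    if size not in MODEL_SIZES[model]:
        raise ValueError(f"{model} does not support size {size}")
    if not 1 <= n <= MODEL_MAX_N[model]:
        raise ValueError(f"{model} allows n between 1 and {MODEL_MAX_N[model]}, got {n}")


validate_request("dall-e-3", "1792x1024", n=1)  # OK, returns None
try:
    validate_request("dall-e-3", "512x512")
except ValueError as e:
    print(e)  # dall-e-3 does not support size 512x512
```

Calling this before `client.images.generate(...)` saves a round-trip for requests that are guaranteed to fail.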
## Best Practices

### Prompt Engineering

- Be specific and descriptive in prompts for better results
- Include style, mood, and technical details (lighting, composition, etc.)
- Use artistic style references (e.g., "in the style of Van Gogh")
- Specify image properties (photorealistic, illustration, sketch, etc.)
- Consider aspect ratio when choosing size

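These guidelines can be applied mechanically by assembling the prompt from labeled parts. A small illustrative helper (not part of the SDK):

```python
def build_prompt(subject: str, medium: str = "", style: str = "",
                 lighting: str = "", composition: str = "") -> str:
    """Join the non-empty descriptive parts into one comma-separated prompt."""
    parts = [subject, medium, style, lighting, composition]
    return ", ".join(p for p in parts if p)


prompt = build_prompt(
    subject="a lighthouse on a rocky coast",
    medium="photorealistic",
    style="in the style of golden-hour landscape photography",
    lighting="soft warm backlight",
    composition="wide shot, rule of thirds",
)
print(prompt)
```

Keeping the parts separate also makes it easy to vary one dimension (say, lighting) while holding the rest of the prompt fixed.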
### Model Selection

- Use **GPT-Image-1** for advanced editing, streaming, transparency, and multiple input images
- Use **DALL·E 3** for the highest-quality single-image generation with style control
- Use **DALL·E 2** for variations and basic editing capabilities
- Choose a quality setting supported by the selected model
- Consider cost and feature trade-offs between models

### Image Processing

- Use base64 format for immediate processing without external downloads
- Implement proper error handling for generation failures
- Cache generated images to avoid regeneration costs
- Use appropriate image formats for your use case

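Since identical requests cost the same to rerun, a deterministic cache key over the generation parameters lets you skip regeneration entirely. A sketch using a SHA-256 of the canonicalized parameters; the key scheme is illustrative:

```python
import hashlib
import json


def cache_key(**params) -> str:
    """Stable hash over generation parameters, usable as a cache filename."""
    canonical = json.dumps(params, sort_keys=True)  # Order-independent
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


key_a = cache_key(model="dall-e-3", prompt="a red fox", size="1024x1024", quality="standard")
key_b = cache_key(quality="standard", size="1024x1024", prompt="a red fox", model="dall-e-3")
print(key_a == key_b)  # True: argument order does not matter

# Typical use: check for f"cache/{key_a}.png" on disk before calling
# client.images.generate(...) with the same parameters
```

Note that DALL·E 3 may still return different images for identical parameters, so caching trades freshness for cost.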
### Performance and Cost

- Batch requests when possible to reduce API calls
- Use thumbnails and resizing for different display contexts
- Monitor usage and implement rate limiting in production
- Consider user experience with loading states for image generation

### Safety and Content

- Review generated content for appropriateness
- Implement content filtering based on your application's needs
- Be aware of OpenAI's usage policies for generated images
- Consider watermarking or attribution for generated content