# OpenAI Python Library

The official Python library for the OpenAI API, providing comprehensive access to OpenAI's powerful AI models including GPT-4, GPT-3.5, DALL·E, Whisper, and more. This library enables developers to integrate cutting-edge AI capabilities into their applications with a simple, intuitive interface.

## Package Information

- **Package Name**: openai
- **Package Type**: PyPI
- **Language**: Python
- **Version**: 1.106.0
- **Installation**: `pip install openai`

## Core Imports

```python
import openai
```

Standard client-based usage:

```python
from openai import OpenAI
```

Async client usage:

```python
from openai import AsyncOpenAI
```

Azure OpenAI usage:

```python
from openai import AzureOpenAI, AsyncAzureOpenAI
```

## Basic Usage

```python
from openai import OpenAI

# Initialize client with API key
client = OpenAI(api_key="your-api-key")

# Chat completions - most common use case
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    max_tokens=150,
    temperature=0.7
)

print(response.choices[0].message.content)

# Generate embeddings
embeddings = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["Text to embed", "Another text"]
)

# Generate images
image_response = client.images.generate(
    model="dall-e-3",
    prompt="A futuristic cityscape at sunset",
    size="1024x1024",
    quality="standard",
    n=1
)

# Text-to-speech
speech_response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This is a text-to-speech example."
)

# Save audio to file
speech_response.stream_to_file("output.mp3")
```

## Architecture

The OpenAI Python library follows a resource-based architecture with clear separation of concerns:

- **Client Classes**: `OpenAI`, `AsyncOpenAI`, `AzureOpenAI`, and `AsyncAzureOpenAI` provide the main entry points
- **Resources**: Logical groupings of related API endpoints (chat, embeddings, images, etc.)
- **Sub-resources**: Nested functionality within resources (`chat.completions`, `audio.speech`, etc.)
- **Type System**: Comprehensive type definitions for all API parameters and responses
- **Streaming Support**: Built-in streaming for real-time responses
- **Error Handling**: Structured exception hierarchy for different error types

The library supports both instance-based usage (creating client objects) and module-level usage (direct imports) for convenience.

## Capabilities

### Client Setup and Configuration

Core client initialization, authentication, configuration options, and Azure integration for both synchronous and asynchronous usage patterns.

```python { .api }
class OpenAI:
    def __init__(
        self,
        *,
        api_key: str | None = None,
        organization: str | None = None,
        project: str | None = None,
        base_url: str | None = None,
        timeout: float | None = None,
        max_retries: int = 2,
        default_headers: dict[str, str] | None = None,
        http_client: httpx.Client | None = None
    ): ...

class AsyncOpenAI:
    def __init__(
        self,
        *,
        api_key: str | None = None,
        organization: str | None = None,
        project: str | None = None,
        base_url: str | None = None,
        timeout: float | None = None,
        max_retries: int = 2,
        default_headers: dict[str, str] | None = None,
        http_client: httpx.AsyncClient | None = None
    ): ...
```

[Client Setup](./client-setup.md)
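As a usage sketch rather than part of the spec: when `api_key` is omitted, the clients above read the `OPENAI_API_KEY` environment variable, and the `timeout` and `max_retries` options from the signatures can be tuned per client. The async client mirrors the sync resource layout.

```python
import asyncio

from openai import OpenAI, AsyncOpenAI

# api_key omitted: the client reads OPENAI_API_KEY from the environment
client = OpenAI(
    timeout=30.0,    # per-request timeout, in seconds
    max_retries=3,   # transparent retries on transient failures (default: 2)
)

async_client = AsyncOpenAI()

async def main() -> None:
    # Same resource layout as the sync client; calls are awaited
    response = await async_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```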

### Chat Completions

Primary interface for conversational AI using GPT models. Supports streaming responses, function calling, structured outputs, and advanced features like reasoning models.

```python { .api }
def create(
    self,
    *,
    messages: list[ChatCompletionMessageParam],
    model: str,
    frequency_penalty: float | None = None,
    logit_bias: dict[str, int] | None = None,
    logprobs: bool | None = None,
    max_completion_tokens: int | None = None,
    n: int | None = None,
    presence_penalty: float | None = None,
    response_format: ResponseFormatParam | None = None,
    seed: int | None = None,
    stop: str | list[str] | None = None,
    stream: bool | None = None,
    temperature: float | None = None,
    tool_choice: ToolChoiceParam | None = None,
    tools: list[ChatCompletionToolParam] | None = None,
    top_p: float | None = None,
    user: str | None = None
) -> ChatCompletion | Stream[ChatCompletionChunk]: ...
```

[Chat Completions](./chat-completions.md)
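Since the signature above returns `Stream[ChatCompletionChunk]` when `stream=True`, a minimal streaming sketch looks like this; chunk handling follows the `ChatCompletionChunk` type listed under Types below.

```python
from openai import OpenAI

client = OpenAI()

# With stream=True the call returns Stream[ChatCompletionChunk]
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    stream=True,
)

# Each chunk carries an incremental delta; content can be None on the
# role-only first chunk and the final chunk, so guard before printing
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
print()
```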

### Text Completions

Legacy text completion interface for older models like GPT-3.5 Turbo Instruct, providing direct text generation capabilities.

```python { .api }
def create(
    self,
    *,
    model: str,
    prompt: str | list[str] | None,
    best_of: int | None = None,
    echo: bool | None = None,
    frequency_penalty: float | None = None,
    logit_bias: dict[str, int] | None = None,
    logprobs: int | None = None,
    max_tokens: int | None = None,
    n: int | None = None,
    presence_penalty: float | None = None,
    seed: int | None = None,
    stop: str | list[str] | None = None,
    stream: bool | None = None,
    suffix: str | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    user: str | None = None
) -> Completion | Stream[Completion]: ...
```

[Text Completions](./text-completions.md)

### Embeddings

Convert text into high-dimensional vector representations for semantic similarity, search, clustering, and other NLP tasks using OpenAI's embedding models.

```python { .api }
def create(
    self,
    *,
    input: str | list[str],
    model: str,
    dimensions: int | None = None,
    encoding_format: Literal["float", "base64"] | None = None,
    user: str | None = None
) -> CreateEmbeddingResponse: ...
```

[Embeddings](./embeddings.md)
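To make the vector representation concrete, here is a hedged sketch that compares embeddings by cosine similarity; the `cosine` helper is illustrative and not part of the library.

```python
import math

from openai import OpenAI

client = OpenAI()

def cosine(a: list[float], b: list[float]) -> float:
    # Illustrative helper: cosine similarity of two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input=[
        "the cat sat on the mat",
        "stock prices fell sharply",
        "a kitten naps on a rug",
    ],
)

vectors = [item.embedding for item in response.data]

# The two pet-related sentences should score higher than the finance one
print(cosine(vectors[0], vectors[2]))  # high similarity
print(cosine(vectors[0], vectors[1]))  # low similarity
```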

### Audio APIs

Comprehensive audio processing including text-to-speech synthesis, speech-to-text transcription, and audio translation capabilities using Whisper and TTS models.

```python { .api }
# Speech synthesis
def create(
    self,
    *,
    input: str,
    model: Union[str, SpeechModel],
    voice: Union[str, Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse", "marin", "cedar"]],
    instructions: str | NotGiven = NOT_GIVEN,
    response_format: Literal["mp3", "opus", "aac", "flac", "wav", "pcm"] | NotGiven = NOT_GIVEN,
    speed: float | NotGiven = NOT_GIVEN,
    stream_format: Literal["sse", "audio"] | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> HttpxBinaryResponseContent: ...

# Transcription
def create(
    self,
    *,
    file: FileTypes,
    model: Union[str, AudioModel],
    chunking_strategy: Optional[ChunkingStrategy] | NotGiven = NOT_GIVEN,
    include: List[TranscriptionInclude] | NotGiven = NOT_GIVEN,
    language: str | NotGiven = NOT_GIVEN,
    prompt: str | NotGiven = NOT_GIVEN,
    response_format: Union[AudioResponseFormat, NotGiven] = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    temperature: float | NotGiven = NOT_GIVEN,
    timestamp_granularities: List[Literal["word", "segment"]] | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> str | Transcription | TranscriptionVerbose | Stream[TranscriptionStreamEvent]: ...
```

[Audio](./audio.md)

### Images

Generate, edit, and create variations of images using DALL·E models with support for different sizes, quality levels, and style options.

```python { .api }
def generate(
    self,
    *,
    prompt: str,
    background: Optional[Literal["transparent", "opaque", "auto"]] | NotGiven = NOT_GIVEN,
    model: Union[str, ImageModel, None] | NotGiven = NOT_GIVEN,
    moderation: Optional[Literal["low", "auto"]] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    output_compression: Optional[int] | NotGiven = NOT_GIVEN,
    output_format: Optional[Literal["png", "jpeg", "webp"]] | NotGiven = NOT_GIVEN,
    partial_images: Optional[int] | NotGiven = NOT_GIVEN,
    quality: Optional[Literal["standard", "hd", "low", "medium", "high", "auto"]] | NotGiven = NOT_GIVEN,
    response_format: Optional[Literal["url", "b64_json"]] | NotGiven = NOT_GIVEN,
    size: Optional[Literal["auto", "1024x1024", "1536x1024", "1024x1536", "256x256", "512x512", "1792x1024", "1024x1792"]] | NotGiven = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    style: Optional[Literal["vivid", "natural"]] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> ImagesResponse | Stream[ImageGenStreamEvent]: ...
```

[Images](./images.md)

### Files

Upload, manage, and retrieve files for use with various OpenAI services including fine-tuning, assistants, and batch operations.

```python { .api }
def create(
    self,
    *,
    file: FileTypes,
    purpose: FilePurpose,
    expires_after: ExpiresAfter | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject: ...

def list(
    self,
    *,
    after: str | NotGiven = NOT_GIVEN,
    limit: int | NotGiven = NOT_GIVEN,
    order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
    purpose: str | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> SyncCursorPage[FileObject]: ...

def retrieve(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileObject: ...

def delete(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> FileDeleted: ...

def content(
    self,
    file_id: str,
    *,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> HttpxBinaryResponseContent: ...

def wait_for_processing(
    self,
    id: str,
    *,
    poll_interval: float = 5.0,
    max_wait_seconds: float = 30 * 60,
) -> FileObject: ...
```

[Files](./files.md)

### Fine-tuning

Create and manage custom model training jobs to adapt OpenAI models to specific use cases and domains with your own data.

```python { .api }
def create(
    self,
    *,
    model: str,
    training_file: str,
    hyperparameters: HyperparametersParam | None = None,
    suffix: str | None = None,
    validation_file: str | None = None,
    integrations: list[IntegrationParam] | None = None,
    seed: int | None = None
) -> FineTuningJob: ...
```

[Fine-tuning](./fine-tuning.md)

### Assistants API

Build AI assistants with persistent conversations, file access, function calling, and code interpretation capabilities using the beta assistants framework.

```python { .api }
def create(
    self,
    *,
    model: str,
    description: str | None = None,
    instructions: str | None = None,
    name: str | None = None,
    tools: list[ToolParam] | None = None,
    tool_resources: ToolResourcesParam | None = None,
    metadata: dict | None = None,
    temperature: float | None = None,
    top_p: float | None = None,
    response_format: AssistantResponseFormatParam | None = None
) -> Assistant: ...
```

[Assistants](./assistants.md)

### Batch Operations

Process large volumes of requests efficiently using the batch API for cost-effective bulk operations with 24-hour processing windows.

```python { .api }
def create(
    self,
    *,
    completion_window: Literal["24h"],
    endpoint: Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
    input_file_id: str,
    metadata: dict | None = None
) -> Batch: ...
```

[Batch Operations](./batch-operations.md)
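A sketch of how Files and Batch Operations compose. The input file `requests.jsonl` and its contents are hypothetical; `files.create` with `purpose="batch"` and `files.content` are documented above, and `batches.retrieve` is part of the library even though only `create` is excerpted in this spec.

```python
import time

from openai import OpenAI

client = OpenAI()

# Hypothetical input file: one JSON request per line, e.g.
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-4", "messages": [{"role": "user", "content": "Hi"}]}}
batch_file = client.files.create(
    file=open("requests.jsonl", "rb"),
    purpose="batch",
)

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)

# Poll until the batch reaches a terminal state
while batch.status not in ("completed", "failed", "expired", "cancelled"):
    time.sleep(60)
    batch = client.batches.retrieve(batch.id)

if batch.status == "completed" and batch.output_file_id:
    results = client.files.content(batch.output_file_id)  # JSONL of responses
    print(results.text)
```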

### Other APIs

Additional functionality, including model management, content moderation, vector stores, webhooks, and experimental features.

```python { .api }
# Models
def list(self) -> SyncPage[Model]: ...
def retrieve(self, model: str) -> Model: ...

# Moderations
def create(
    self,
    *,
    input: str | list[str],
    model: str | None = None
) -> ModerationCreateResponse: ...

# Vector Stores
def create(
    self,
    *,
    file_ids: list[str] | None = None,
    name: str | None = None,
    expires_after: ExpiresAfterParam | None = None,
    chunking_strategy: ChunkingStrategyParam | None = None,
    metadata: dict | None = None
) -> VectorStore: ...
```

[Other APIs](./other-apis.md)

## Types

### Core Response Types

```python { .api }
class ChatCompletion(BaseModel):
    id: str
    choices: list[ChatCompletionChoice]
    created: int
    model: str
    object: Literal["chat.completion"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class ChatCompletionChunk(BaseModel):
    id: str
    choices: list[ChatCompletionChunkChoice]
    created: int
    model: str
    object: Literal["chat.completion.chunk"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class CreateEmbeddingResponse(BaseModel):
    data: list[Embedding]
    model: str
    object: Literal["list"]
    usage: EmbeddingUsage

class ImagesResponse(BaseModel):
    created: int
    data: list[Image]

class FileObject(BaseModel):
    id: str
    bytes: int
    created_at: int
    filename: str
    object: Literal["file"]
    purpose: FilePurpose
    status: Literal["uploaded", "processed", "error"]
    status_details: Optional[str]
```

### Parameter Types

```python { .api }
ChatCompletionMessageParam = Union[
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
    ChatCompletionAssistantMessageParam,
    ChatCompletionToolMessageParam,
    ChatCompletionFunctionMessageParam
]

class ChatCompletionUserMessageParam(TypedDict, total=False):
    content: Required[Union[str, list[ChatCompletionContentPartParam]]]
    role: Required[Literal["user"]]
    name: str

class ChatCompletionToolParam(TypedDict, total=False):
    function: Required[FunctionDefinition]
    type: Required[Literal["function"]]

FileTypes = Union[
    # File contents
    bytes,
    # File-like objects
    IO[bytes],
    # Paths
    str,
    os.PathLike[str],
]
```

### Exception Types

```python { .api }
class OpenAIError(Exception):
    """Base exception for all OpenAI errors"""

class APIError(OpenAIError):
    """API-related errors"""
    message: str
    request: httpx.Request
    body: object | None = None

class APIStatusError(APIError):
    """HTTP status code errors"""
    response: httpx.Response
    status_code: int

class RateLimitError(APIStatusError):
    """HTTP 429 rate limit errors"""

class AuthenticationError(APIStatusError):
    """HTTP 401 authentication errors"""

class BadRequestError(APIStatusError):
    """HTTP 400 bad request errors"""

class NotFoundError(APIStatusError):
    """HTTP 404 not found errors"""

class APIConnectionError(APIError):
    """Connection-related errors"""

class APITimeoutError(APIConnectionError):
    """Request timeout errors"""
```
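A sketch of typical error handling with the exception hierarchy above: catch the most specific status errors first, then fall back to the broader classes. All of these exceptions are exported at the top level of the `openai` package.

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello"}],
    )
except openai.AuthenticationError as e:
    # HTTP 401: bad or missing API key
    print(f"Check your API key: {e}")
except openai.RateLimitError:
    # HTTP 429: back off and retry (the client already retries max_retries times)
    print("Rate limited; retry later")
except openai.APIStatusError as e:
    # Any other non-success HTTP status
    print(f"API returned status {e.status_code}: {e.message}")
except openai.APIConnectionError:
    # Network-level failure, including timeouts (APITimeoutError)
    print("Could not reach the API")
```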