- Spec file: pypi-openai
- Describes: pkg:pypi/openai@1.106.x
- Description: Official Python library for the OpenAI API providing chat completions, embeddings, audio, images, and more
- Author: tessl
- Last updated
- docs/chat-completions.md
# Chat Completions

Primary interface for conversational AI using GPT models. Supports streaming responses, function calling, structured outputs, and advanced features like reasoning models.

## Capabilities

### Basic Chat Completions

Generate conversational responses using GPT models with message-based interaction patterns.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    audio: Optional[ChatCompletionAudioParam] | NotGiven = NOT_GIVEN,
    frequency_penalty: Optional[float] | NotGiven = NOT_GIVEN,
    function_call: completion_create_params.FunctionCall | NotGiven = NOT_GIVEN,
    functions: Iterable[completion_create_params.Function] | NotGiven = NOT_GIVEN,
    logit_bias: Optional[Dict[str, int]] | NotGiven = NOT_GIVEN,
    logprobs: Optional[bool] | NotGiven = NOT_GIVEN,
    max_completion_tokens: Optional[int] | NotGiven = NOT_GIVEN,
    max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
    metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
    modalities: Optional[List[Literal["text", "audio"]]] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
    prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
    presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
    prompt_cache_key: str | NotGiven = NOT_GIVEN,
    reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
    response_format: completion_create_params.ResponseFormat | NotGiven = NOT_GIVEN,
    safety_identifier: str | NotGiven = NOT_GIVEN,
    seed: Optional[int] | NotGiven = NOT_GIVEN,
    service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
    stop: Union[Optional[str], SequenceNotStr[str], None] | NotGiven = NOT_GIVEN,
    store: Optional[bool] | NotGiven = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
    temperature: Optional[float] | NotGiven = NOT_GIVEN,
    tool_choice: ChatCompletionToolChoiceOptionParam | NotGiven = NOT_GIVEN,
    tools: Iterable[ChatCompletionToolUnionParam] | NotGiven = NOT_GIVEN,
    top_logprobs: Optional[int] | NotGiven = NOT_GIVEN,
    top_p: Optional[float] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
    verbosity: Optional[Literal["low", "medium", "high"]] | NotGiven = NOT_GIVEN,
    web_search_options: completion_create_params.WebSearchOptions | NotGiven = NOT_GIVEN,
    extra_headers: Headers | None = None,
    extra_query: Query | None = None,
    extra_body: Body | None = None,
    timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN
) -> ChatCompletion | Stream[ChatCompletionChunk]: ...
```

Usage example:

```python
from openai import OpenAI

client = OpenAI()

# Simple chat completion
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

print(response.choices[0].message.content)

# With additional parameters
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a creative writer."},
        {"role": "user", "content": "Write a short story about a robot."}
    ],
    max_tokens=150,
    temperature=0.8,
    presence_penalty=0.1,
    frequency_penalty=0.1
)
```

### Streaming Chat Completions

Stream responses in real-time for a better user experience with longer generations.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    stream: Literal[True],
    stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
    # ... other parameters
) -> Stream[ChatCompletionChunk]: ...
```

Usage example:

```python
# Streaming response
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tell me a long story"}],
    stream=True
)

print("Response: ", end="")
for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
print()

# With stream options for usage tracking
stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
    stream_options={"include_usage": True}
)

for chunk in stream:
    if chunk.usage:  # Final chunk contains usage info
        print(f"Tokens used: {chunk.usage.total_tokens}")
```

### Function Calling

Enable models to call external functions and tools for enhanced capabilities and structured interactions.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    tools: Iterable[ChatCompletionToolUnionParam] | NotGiven = NOT_GIVEN,
    tool_choice: ChatCompletionToolChoiceOptionParam | NotGiven = NOT_GIVEN,
    parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
    # ... other parameters
) -> ChatCompletion: ...
```

Usage example:

```python
import json

# Define available functions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

# Function calling conversation
messages = [
    {"role": "user", "content": "What's the weather like in Paris?"}
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

# Check if the model wants to call a function
message = response.choices[0].message
if message.tool_calls:
    # Add the assistant's response to messages
    messages.append(message)

    # Call the function and add the result
    for tool_call in message.tool_calls:
        function_args = json.loads(tool_call.function.arguments)

        # Your function implementation
        weather_result = get_weather(
            location=function_args["location"],
            unit=function_args.get("unit", "celsius")
        )

        messages.append({
            "tool_call_id": tool_call.id,
            "role": "tool",
            "content": json.dumps(weather_result)
        })

    # Get final response
    final_response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=tools
    )

    print(final_response.choices[0].message.content)
```

### Structured Outputs

Generate responses in specific JSON formats using response format specification for reliable data extraction.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    response_format: ResponseFormatParam | NotGiven = NOT_GIVEN,
    # ... other parameters
) -> ChatCompletion: ...

def parse(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    response_format: type[ResponseFormatT],
    # ... other parameters
) -> ParsedChatCompletion[ResponseFormatT]: ...
```

Usage examples:

```python
from pydantic import BaseModel
from typing import List

# JSON Schema response format (requires a structured-outputs-capable model)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "List 3 colors and their hex codes"}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "colors",
            "schema": {
                "type": "object",
                "properties": {
                    "colors": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "name": {"type": "string"},
                                "hex": {"type": "string"}
                            },
                            "required": ["name", "hex"],
                            "additionalProperties": False
                        }
                    }
                },
                "required": ["colors"],
                "additionalProperties": False
            }
        }
    }
)

# Parse with a Pydantic model
class Color(BaseModel):
    name: str
    hex: str

class ColorList(BaseModel):
    colors: List[Color]

parsed_response = client.chat.completions.parse(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "List 3 colors and their hex codes"}
    ],
    response_format=ColorList
)

colors = parsed_response.choices[0].message.parsed
print(f"First color: {colors.colors[0].name} - {colors.colors[0].hex}")
```

### Advanced Model Features

Access advanced capabilities like reasoning, audio modalities, and prediction optimization.

```python { .api }
def create(
    self,
    *,
    messages: Iterable[ChatCompletionMessageParam],
    model: Union[str, ChatModel],
    reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
    audio: Optional[ChatCompletionAudioParam] | NotGiven = NOT_GIVEN,
    modalities: Optional[List[Literal["text", "audio"]]] | NotGiven = NOT_GIVEN,
    prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
    # ... other parameters
) -> ChatCompletion: ...
```

Usage examples:

```python
# Reasoning models with effort control
response = client.chat.completions.create(
    model="o1",
    messages=[
        {"role": "user", "content": "Solve this complex math problem step by step: ..."}
    ],
    reasoning_effort="high"
)

# Audio input and output
response = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Please respond with audio"},
                {
                    "type": "input_audio",
                    "input_audio": {
                        "data": audio_data_base64,  # base64-encoded audio prepared earlier
                        "format": "wav"
                    }
                }
            ]
        }
    ]
)

# Prediction for faster responses
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Complete this code: def fibonacci("}],
    prediction={
        "type": "content",
        "content": "n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)"
    }
)
```

### Message Management

Access and manage conversation history for multi-turn conversations via the thread-based Messages sub-resource.

```python { .api }
# Messages sub-resource
class Messages:
    def list(
        self,
        thread_id: str,
        *,
        after: str | NotGiven = NOT_GIVEN,
        before: str | NotGiven = NOT_GIVEN,
        limit: int | NotGiven = NOT_GIVEN,
        order: Literal["asc", "desc"] | NotGiven = NOT_GIVEN,
        run_id: str | NotGiven = NOT_GIVEN
    ) -> SyncCursorPage[Message]: ...

    def create(
        self,
        thread_id: str,
        *,
        content: Union[str, Iterable[MessageContentPartParam]],
        role: Literal["user", "assistant"],
        attachments: Optional[Iterable[AttachmentParam]] | NotGiven = NOT_GIVEN,
        metadata: Optional[object] | NotGiven = NOT_GIVEN
    ) -> Message: ...
```

## Types

### Core Response Types

```python { .api }
class ChatCompletion(BaseModel):
    id: str
    choices: List[ChatCompletionChoice]
    created: int
    model: str
    object: Literal["chat.completion"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class ChatCompletionChoice(BaseModel):
    finish_reason: Literal["stop", "length", "tool_calls", "content_filter", "function_call"]
    index: int
    logprobs: Optional[ChoiceLogprobs]
    message: ChatCompletionMessage

class ChatCompletionMessage(BaseModel):
    content: Optional[str]
    role: Literal["assistant"]
    function_call: Optional[FunctionCall]
    tool_calls: Optional[List[ChatCompletionMessageToolCall]]
    audio: Optional[ChatCompletionMessageAudio]

class ChatCompletionChunk(BaseModel):
    id: str
    choices: List[ChatCompletionChunkChoice]
    created: int
    model: str
    object: Literal["chat.completion.chunk"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class ParsedChatCompletion(BaseModel, Generic[ResponseFormatT]):
    choices: List[ParsedChoice[ResponseFormatT]]
    created: int
    id: str
    model: str
    object: Literal["chat.completion"]
    service_tier: Optional[Literal["scale", "default"]]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]
```

### Message Parameter Types

```python { .api }
ChatCompletionMessageParam = Union[
    ChatCompletionSystemMessageParam,
    ChatCompletionUserMessageParam,
    ChatCompletionAssistantMessageParam,
    ChatCompletionToolMessageParam,
    ChatCompletionFunctionMessageParam
]

class ChatCompletionSystemMessageParam(TypedDict, total=False):
    content: Required[str]
    role: Required[Literal["system"]]
    name: str

class ChatCompletionUserMessageParam(TypedDict, total=False):
    content: Required[Union[str, List[ChatCompletionContentPartParam]]]
    role: Required[Literal["user"]]
    name: str

class ChatCompletionAssistantMessageParam(TypedDict, total=False):
    role: Required[Literal["assistant"]]
    content: Optional[str]
    function_call: FunctionCall
    name: str
    tool_calls: Iterable[ChatCompletionMessageToolCallParam]
    audio: ChatCompletionMessageAudioParam

class ChatCompletionToolMessageParam(TypedDict, total=False):
    content: Required[Union[str, List[ChatCompletionContentPartParam]]]
    role: Required[Literal["tool"]]
    tool_call_id: Required[str]

ChatCompletionContentPartParam = Union[
    ChatCompletionContentPartTextParam,
    ChatCompletionContentPartImageParam,
    ChatCompletionContentPartAudioParam,
    ChatCompletionContentPartRefusalParam
]
```

### Tool and Function Types

```python { .api }
ChatCompletionToolUnionParam = Union[
    ChatCompletionToolParam,
    ChatCompletionNamedToolChoiceParam
]

class ChatCompletionToolParam(TypedDict, total=False):
    function: Required[FunctionDefinition]
    type: Required[Literal["function"]]

class FunctionDefinition(TypedDict, total=False):
    name: Required[str]
    description: str
    parameters: FunctionParameters
    strict: Optional[bool]

class ChatCompletionMessageToolCall(BaseModel):
    id: str
    function: Function
    type: Literal["function"]

ChatCompletionToolChoiceOptionParam = Union[
    Literal["none", "auto", "required"],
    ChatCompletionNamedToolChoiceParam
]
```

### Response Format Types

```python { .api }
ResponseFormatParam = Union[
    ResponseFormatText,
    ResponseFormatJSONObject,
    ResponseFormatJSONSchema
]

class ResponseFormatText(TypedDict, total=False):
    type: Required[Literal["text"]]

class ResponseFormatJSONObject(TypedDict, total=False):
    type: Required[Literal["json_object"]]

class ResponseFormatJSONSchema(TypedDict, total=False):
    json_schema: Required[JSONSchema]
    type: Required[Literal["json_schema"]]

class JSONSchema(TypedDict, total=False):
    description: str
    name: Required[str]
    schema: Optional[Dict[str, object]]
    strict: Optional[bool]
```

### Audio and Multimodal Types

```python { .api }
class ChatCompletionAudioParam(TypedDict, total=False):
    voice: Required[AudioVoice]
    format: Required[AudioFormat]

AudioVoice = Literal["alloy", "ash", "ballad", "coral", "echo", "sage", "shimmer", "verse"]
AudioFormat = Literal["wav", "mp3", "flac", "opus"]

class ChatCompletionContentPartAudioParam(TypedDict, total=False):
    input_audio: Required[InputAudio]
    type: Required[Literal["input_audio"]]

class InputAudio(TypedDict, total=False):
    data: Required[str]  # Base64 encoded audio
    format: Required[Literal["wav", "mp3"]]
```

### Usage and Metadata Types

```python { .api }
class CompletionUsage(BaseModel):
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int
    completion_tokens_details: Optional[CompletionTokensDetails]
    prompt_tokens_details: Optional[PromptTokensDetails]

class ChatCompletionStreamOptionsParam(TypedDict, total=False):
    include_usage: bool

ReasoningEffort = Literal["low", "medium", "high"]

class ChatCompletionPredictionContentParam(TypedDict, total=False):
    type: Required[Literal["content"]]
    content: Required[Union[str, List[ChatCompletionContentPartParam]]]
```
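One detail worth internalizing from the function-calling flow above: `tool_call.function.arguments` is a JSON-encoded *string*, not a dict, so it must be decoded before dispatching to a handler, and the tool message `content` you send back must again be a string. A minimal, library-free sketch of that dispatch step (the handler registry and the `get_weather` stub are illustrative, not part of the SDK):

```python
import json

# Illustrative local stub standing in for a real weather lookup
def get_weather(location: str, unit: str = "celsius") -> dict:
    return {"location": location, "unit": unit, "temperature": 21}

# Hypothetical registry mapping tool names to handlers
HANDLERS = {"get_weather": get_weather}

def dispatch(name: str, arguments: str) -> str:
    """Decode the JSON argument string and run the matching handler."""
    args = json.loads(arguments)   # arguments arrives as a str
    result = HANDLERS[name](**args)
    return json.dumps(result)      # tool message content must be a string

# Simulated payload, shaped like tool_call.function
content = dispatch("get_weather", '{"location": "Paris"}')
print(content)
```

The same decode-call-encode shape works for any number of parallel tool calls; only the registry grows.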
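When you use the raw `create` path with a `json_schema` response format (rather than `parse`), the structured result still arrives as a JSON string in `message.content`, and decoding plus sanity checks are on you. A stdlib-only sketch that checks a payload against the required keys of the `colors` schema shown earlier (the `sample` string stands in for a real model response):

```python
import json

# Stand-in for message.content from a structured-outputs response
sample = '{"colors": [{"name": "red", "hex": "#FF0000"}]}'

data = json.loads(sample)
assert "colors" in data                      # required by the schema
for color in data["colors"]:
    assert {"name", "hex"} <= color.keys()   # each item's required fields
print(data["colors"][0]["hex"])
```

For anything beyond a smoke check, a real JSON Schema validator is the better tool; this only mirrors the `required` clauses by hand.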
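Finally, note that the `*MessageParam` TypedDicts above are plain dicts at runtime, so conversation state is just a list of dicts you manage yourself. A common pattern before each `create` call is trimming history to the leading system message plus the most recent turns; the sketch below is one illustrative policy, not an SDK feature:

```python
def trim_history(messages, keep_last=4):
    """Keep the leading system message (if any) plus the last `keep_last` turns."""
    system = [m for m in messages if m["role"] == "system"][:1]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    *[{"role": "user", "content": f"question {i}"} for i in range(10)],
]
trimmed = trim_history(history)
print(len(trimmed))  # system message + last 4 turns
```

Token-based budgeting (counting with a tokenizer against `max_completion_tokens`) is the more precise variant of the same idea.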