Spec: pypi-openai
Describes: pkg:pypi/openai@1.106.x
Description: Official Python library for the OpenAI API, providing chat completions, embeddings, audio, images, and more
Author: tessl
File: docs/text-completions.md

---
# Text Completions

Legacy text completion interface for older models like GPT-3.5 Turbo Instruct, providing direct text generation capabilities.

## Capabilities

### Basic Text Completions

Generate text completions using legacy completion models with direct prompt-based interaction.

```python { .api }
def create(
    self,
    *,
    model: Union[str, Literal["gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"]],
    prompt: Union[str, List[str], List[int], List[List[int]], None],
    best_of: Optional[int] | NotGiven = NOT_GIVEN,
    echo: Optional[bool] | NotGiven = NOT_GIVEN,
    frequency_penalty: Optional[float] | NotGiven = NOT_GIVEN,
    logit_bias: Optional[Dict[str, int]] | NotGiven = NOT_GIVEN,
    logprobs: Optional[int] | NotGiven = NOT_GIVEN,
    max_tokens: Optional[int] | NotGiven = NOT_GIVEN,
    n: Optional[int] | NotGiven = NOT_GIVEN,
    presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
    seed: Optional[int] | NotGiven = NOT_GIVEN,
    stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
    stream: Optional[bool] | NotGiven = NOT_GIVEN,
    stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
    suffix: Optional[str] | NotGiven = NOT_GIVEN,
    temperature: Optional[float] | NotGiven = NOT_GIVEN,
    top_p: Optional[float] | NotGiven = NOT_GIVEN,
    user: str | NotGiven = NOT_GIVEN,
) -> Completion | Stream[Completion]: ...
```

Usage examples:

```python
from openai import OpenAI

client = OpenAI()

# Simple text completion
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Once upon a time, in a land far away,",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text)

# Multiple completions
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The benefits of renewable energy include:",
    max_tokens=150,
    n=3,  # Generate 3 different completions
    temperature=0.8,
)

for i, choice in enumerate(response.choices):
    print(f"Completion {i+1}: {choice.text}")

# Code completion
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="def fibonacci(n):",
    max_tokens=100,
    temperature=0.1,  # Lower temperature for code
    stop=["\n\n"],  # Stop at double newline
)

print("Generated code:")
print("def fibonacci(n):" + response.choices[0].text)
```
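These calls raise typed exceptions on failure. Below is a minimal defensive sketch, using the exception classes the openai 1.x client exports; the prompt and messages are illustrative only:

```python
import openai
from openai import OpenAI

client = OpenAI()

try:
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt="Summarize the water cycle in one sentence:",
        max_tokens=60,
    )
    print(response.choices[0].text.strip())
except openai.RateLimitError:
    # 429 from the API; note the client already retries some failures automatically
    print("Rate limited; try again later.")
except openai.APIConnectionError:
    # Network-level failure before any response was received
    print("Could not reach the API.")
except openai.APIStatusError as e:
    # Any other non-2xx response (RateLimitError is a subclass, so it is caught above)
    print(f"API returned status {e.status_code}")
```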
### Text Completion with Context

Use prompts with context and examples for better completion quality and specific formatting.

Usage examples:

```python
# Few-shot learning with examples
prompt = """Translate English to French:

English: Hello, how are you?
French: Salut, comment allez-vous?

English: What time is it?
French: Quelle heure est-il?

English: I love programming.
French:"""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=50,
    temperature=0.3,
    stop=["\n"],
)

print(f"Translation: {response.choices[0].text.strip()}")

# Text classification
prompt = """Classify the sentiment of these reviews as positive, negative, or neutral:

Review: "This product is amazing! I love it."
Sentiment: positive

Review: "It's okay, nothing special."
Sentiment: neutral

Review: "Terrible quality, waste of money."
Sentiment: negative

Review: "Best purchase I've made this year!"
Sentiment:"""

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=10,
    temperature=0.0,
)

print(f"Sentiment: {response.choices[0].text.strip()}")
```

### Streaming Text Completions

Stream text completions in real-time for responsive applications and long-form content generation.

```python { .api }
def create(
    self,
    *,
    model: Union[str, Literal["gpt-3.5-turbo-instruct", "davinci-002", "babbage-002"]],
    prompt: Union[str, List[str], List[int], List[List[int]], None],
    stream: Literal[True],
    stream_options: Optional[ChatCompletionStreamOptionsParam] | NotGiven = NOT_GIVEN,
    # ... other parameters
) -> Stream[Completion]: ...
```

Usage examples:

```python
# Streaming response
prompt = "Write a short story about a space explorer:"

stream = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=200,
    temperature=0.8,
    stream=True,
)

print("Story: ", end="")
for chunk in stream:
    if chunk.choices[0].text:
        print(chunk.choices[0].text, end="", flush=True)
print()

# Stream with usage tracking
stream = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Explain quantum computing:",
    max_tokens=150,
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    # The final usage chunk has an empty choices list, so guard before indexing
    if chunk.choices and chunk.choices[0].text:
        print(chunk.choices[0].text, end="")
    if chunk.usage:  # Final chunk
        print(f"\n\nTokens used: {chunk.usage.total_tokens}")
```

### Advanced Text Completion Parameters

Fine-tune completion behavior with advanced parameters for specific use cases and output control.

Usage examples:

```python
# Logprobs for token probability analysis
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",
    max_tokens=10,
    logprobs=5,  # Return top 5 token probabilities
    echo=True,  # Include the prompt in the response
)

print("Full text:", response.choices[0].text)
print("Tokens:", response.choices[0].logprobs.tokens)

# Best of multiple generations
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Write a creative product name for a new smartphone:",
    max_tokens=20,
    temperature=0.9,
    best_of=5,  # Generate 5, return the best 1
    n=1,
)

# Suffix completion (fill in the middle)
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The weather today is",
    suffix="and tomorrow will be even better.",
    max_tokens=10,
    temperature=0.5,
)

print("Complete text:", response.choices[0].text)

# Frequency and presence penalties
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="List the benefits of exercise:",
    max_tokens=150,
    frequency_penalty=0.5,  # Reduce repetition
    presence_penalty=0.3,  # Encourage new topics
    temperature=0.7,
)
```
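The values in `token_logprobs` are natural-log probabilities. A short sketch, assuming the `CompletionLogprobs` fields documented in the Types section below, that converts them to plain probabilities:

```python
import math

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",
    max_tokens=5,
    logprobs=2,
    temperature=0.0,
)

lp = response.choices[0].logprobs
for token, token_lp in zip(lp.tokens, lp.token_logprobs):
    if token_lp is not None:
        # token_logprobs holds natural logs; exp() recovers the probability
        print(f"{token!r}: p = {math.exp(token_lp):.3f}")
```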
### Token-based Input

Use tokenized input for precise control over model input and fine-grained prompt engineering.

Usage examples:

```python
import tiktoken

# Get tokenizer for the model
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")

# Tokenize input
text = "Hello, world! How are you today?"
tokens = encoding.encode(text)
print(f"Tokens: {tokens}")

# Use tokens as input
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=tokens,  # List of token IDs
    max_tokens=50,
    temperature=0.5,
)

print("Response:", response.choices[0].text)

# Multiple prompts with different tokenizations
prompts = [
    encoding.encode("Complete this sentence: The future of AI is"),
    encoding.encode("Write a haiku about technology:"),
]

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompts,  # List of token lists
    max_tokens=50,
    n=1,
    temperature=0.7,
)

for i, choice in enumerate(response.choices):
    print(f"Prompt {i+1} completion: {choice.text}")
```

## Types

### Core Response Types

```python { .api }
class Completion(BaseModel):
    id: str
    choices: List[CompletionChoice]
    created: int
    model: str
    object: Literal["text_completion"]
    system_fingerprint: Optional[str]
    usage: Optional[CompletionUsage]

class CompletionChoice(BaseModel):
    finish_reason: Literal["stop", "length", "content_filter"]
    index: int
    logprobs: Optional[CompletionLogprobs]
    text: str

class CompletionLogprobs(BaseModel):
    text_offset: List[int]
    token_logprobs: List[Optional[float]]
    tokens: List[str]
    top_logprobs: List[Optional[Dict[str, float]]]
```

### Parameter Types

```python { .api }
CompletionCreateParams = TypedDict('CompletionCreateParams', {
    'model': Required[Union[str, Literal['gpt-3.5-turbo-instruct', 'davinci-002', 'babbage-002']]],
    'prompt': Required[Union[str, List[str], List[int], List[List[int]], None]],
    'best_of': NotRequired[Optional[int]],
    'echo': NotRequired[Optional[bool]],
    'frequency_penalty': NotRequired[Optional[float]],
    'logit_bias': NotRequired[Optional[Dict[str, int]]],
    'logprobs': NotRequired[Optional[int]],
    'max_tokens': NotRequired[Optional[int]],
    'n': NotRequired[Optional[int]],
    'presence_penalty': NotRequired[Optional[float]],
    'seed': NotRequired[Optional[int]],
    'stop': NotRequired[Union[Optional[str], List[str], None]],
    'stream': NotRequired[Optional[bool]],
    'stream_options': NotRequired[Optional[ChatCompletionStreamOptionsParam]],
    'suffix': NotRequired[Optional[str]],
    'temperature': NotRequired[Optional[float]],
    'top_p': NotRequired[Optional[float]],
    'user': NotRequired[str],
})

# Prompt can be various formats
PromptParam = Union[
    str,              # Simple text prompt
    List[str],        # Multiple text prompts
    List[int],        # Token IDs
    List[List[int]],  # Multiple token sequences
    None,             # No prompt (for suffix-only completion)
]
```

### Model Types

```python { .api }
# Supported legacy completion models
CompletionModel = Literal[
    "gpt-3.5-turbo-instruct",
    "davinci-002",
    "babbage-002",
]
```

### Usage and Configuration

```python { .api }
class CompletionUsage(BaseModel):
    completion_tokens: int
    prompt_tokens: int
    total_tokens: int

# Parameter ranges and defaults
class CompletionParams:
    temperature: float = 1.0        # 0.0 to 2.0
    top_p: float = 1.0              # 0.0 to 1.0
    max_tokens: int = 16            # 1 to model limit
    frequency_penalty: float = 0.0  # -2.0 to 2.0
    presence_penalty: float = 0.0   # -2.0 to 2.0
    n: int = 1                      # 1 to 128
    best_of: int = 1                # 1 to 20
    logprobs: Optional[int] = None  # 0 to 5
```
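These ranges matter in practice because the prompt and the completion share the model's context window. A sketch of budgeting `max_tokens` with tiktoken; `budget_max_tokens` is a hypothetical helper, and the 4096-token window assumed for gpt-3.5-turbo-instruct should be verified against current model documentation:

```python
import tiktoken

def budget_max_tokens(prompt: str, context_window: int = 4096) -> int:
    """Hypothetical helper: how many completion tokens fit after the prompt."""
    encoding = tiktoken.encoding_for_model("gpt-3.5-turbo-instruct")
    prompt_tokens = len(encoding.encode(prompt))
    # Prompt and completion share the context window (4096 is an assumption)
    return max(1, context_window - prompt_tokens)

prompt = "Summarize the plot of Hamlet in two sentences:"
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt=prompt,
    max_tokens=min(256, budget_max_tokens(prompt)),
)
print(response.choices[0].text.strip())
```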
## Migration Notes

The completions API is a legacy interface. For new applications, consider using the chat completions API, which offers:

- Better model performance (GPT-4, GPT-3.5 Turbo)
- Structured conversation format
- Function calling capabilities
- Improved safety and reliability

To migrate from completions to chat completions:

```python
# Old completions approach
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Translate 'Hello' to French:",
    max_tokens=50,
)

# New chat completions approach
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Translate 'Hello' to French:"}
    ],
    max_tokens=50,
)
```
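One detail that commonly trips up migrations: the generated text lives in a different field on each response type, as the openai 1.x response models define it.

```python
# Legacy completions response: text is directly on the choice
legacy_text = response.choices[0].text

# Chat completions response: text is nested under the message
chat_text = response.choices[0].message.content
```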