Spec file: docs/completions-api.md
Describes: pkg:npm/@anthropic-ai/sdk@0.61.x
Description: The official TypeScript library for the Anthropic API, providing comprehensive client functionality for Claude AI models.
Author: tessl
# Completions API (Legacy)

The Text Completions API is a legacy interface for generating text completions. **This API is deprecated and will not be compatible with future models.** New applications should use the [Messages API](./messages-api.md) instead.

See the [migration guide](https://docs.anthropic.com/en/api/migrating-from-text-completions-to-messages) for guidance on migrating from Text Completions to Messages.

## Capabilities

### Create Text Completion

Generate text completions using the legacy prompt format with explicit human/assistant markers.

```typescript { .api }
/**
 * Create a text completion (legacy interface)
 * @param params - Completion parameters
 * @returns Promise resolving to Completion, or to a Stream when streaming
 */
create(params: CompletionCreateParamsNonStreaming): APIPromise<Completion>;
create(params: CompletionCreateParamsStreaming): APIPromise<Stream<Completion>>;

interface CompletionCreateParams {
  /** The model to use for completion */
  model: Model;
  /** The prompt to complete, including explicit markers */
  prompt: string;
  /** Maximum tokens to generate */
  max_tokens_to_sample: number;
  /** Sampling temperature (0.0 to 1.0) */
  temperature?: number;
  /** Top-p sampling parameter */
  top_p?: number;
  /** Top-k sampling parameter */
  top_k?: number;
  /** Stop sequences to halt generation */
  stop_sequences?: string[];
  /** Whether to stream the response */
  stream?: boolean;
  /** Metadata for the request */
  metadata?: CompletionMetadata;
  /** Beta features to enable */
  betas?: AnthropicBeta[];
}

interface CompletionCreateParamsNonStreaming extends CompletionCreateParams {
  stream?: false;
}

interface CompletionCreateParamsStreaming extends CompletionCreateParams {
  stream: true;
}
```

**Usage Examples:**

```typescript
import Anthropic, { HUMAN_PROMPT, AI_PROMPT } from "@anthropic-ai/sdk";

const client = new Anthropic();

// Basic completion
const completion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} What is the capital of France?${AI_PROMPT}`,
  max_tokens_to_sample: 100,
});

console.log(completion.completion);

// Multi-turn conversation
const conversationPrompt = `${HUMAN_PROMPT} Hello, I'm learning about astronomy.${AI_PROMPT} That's wonderful! I'd be happy to help you learn about astronomy. What specific topics interest you?${HUMAN_PROMPT} Tell me about black holes.${AI_PROMPT}`;

const conversationCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: conversationPrompt,
  max_tokens_to_sample: 500,
  temperature: 0.7,
});

// Streaming completion
const stream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Write a short story about a robot.${AI_PROMPT}`,
  max_tokens_to_sample: 1000,
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.completion);
}
```

## Legacy Prompt Format

The Completions API requires explicit prompt markers to distinguish between human and AI text:

```typescript { .api }
/** Legacy human prompt marker ("\n\nHuman:") */
const HUMAN_PROMPT: string;

/** Legacy AI prompt marker ("\n\nAssistant:") */
const AI_PROMPT: string;
```

**Prompt Structure:**
```
{HUMAN_PROMPT} Human message here{AI_PROMPT} Assistant response here{HUMAN_PROMPT} Next human message{AI_PROMPT}
```

**Usage Examples:**

```typescript
import { HUMAN_PROMPT, AI_PROMPT } from "@anthropic-ai/sdk";

// Single turn
const prompt = `${HUMAN_PROMPT} What is 2+2?${AI_PROMPT}`;

// Multi-turn
const multiTurnPrompt = `${HUMAN_PROMPT} Hi there!${AI_PROMPT} Hello! How can I help you today?${HUMAN_PROMPT} Tell me about the weather.${AI_PROMPT}`;

// System-like instructions (place at the beginning)
const instructedPrompt = `${HUMAN_PROMPT} You are a helpful math tutor. Please explain concepts clearly.

What is calculus?${AI_PROMPT}`;
```

## Response Types

```typescript { .api }
interface Completion {
  /** The generated completion text */
  completion: string;
  /** Reason why generation stopped */
  stop_reason: "stop_sequence" | "max_tokens" | null;
  /** The stop sequence that ended generation (if any) */
  stop?: string;
  /** Unique completion identifier */
  id: string;
  /** Model used for generation */
  model: string;
  /** Request type indicator */
  type: "completion";
  /** Unique log identifier for the request */
  log_id?: string;
}

interface CompletionMetadata {
  /** User identifier for tracking */
  user_id?: string;
}
```

## Sampling Parameters

```typescript { .api }
interface SamplingParameters {
  /** Temperature controls randomness (0.0-1.0) */
  temperature?: number;
  /** Top-p nucleus sampling (0.0-1.0) */
  top_p?: number;
  /** Top-k sampling (positive integer) */
  top_k?: number;
  /** Sequences that stop generation */
  stop_sequences?: string[];
}
```

**Parameter Guidelines:**

- **Temperature**: Lower values (0.1-0.3) for factual tasks, higher (0.7-0.9) for creative tasks
- **Top-p**: Usually 0.9-0.95 for good results
- **Top-k**: Typically 40-100, lower for more focused responses
- **Stop sequences**: Use to control response format and length

**Usage Examples:**

```typescript
// Factual, focused response
const factualCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} What year was the Declaration of Independence signed?${AI_PROMPT}`,
  max_tokens_to_sample: 50,
  temperature: 0.1,
  top_p: 0.9,
});

// Creative, varied response
const creativeCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Write a creative opening line for a mystery novel.${AI_PROMPT}`,
  max_tokens_to_sample: 100,
  temperature: 0.8,
  top_k: 50,
});

// Structured response with stop sequences
const structuredCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} List three benefits of exercise:${AI_PROMPT}`,
  max_tokens_to_sample: 200,
  stop_sequences: ["\n\n", "4."],
});
```

## Streaming Completions

```typescript { .api }
/**
 * Stream interface for completion responses
 */
interface Stream<T> extends AsyncIterable<T> {
  /** Iterate over stream chunks */
  [Symbol.asyncIterator](): AsyncIterableIterator<T>;
  /** Convert stream to array */
  toArray(): Promise<T[]>;
  /** Get the controller for manual stream handling */
  controller: AbortController;
}
```

**Usage Examples:**

```typescript
// Basic streaming
const stream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Tell me a story about space exploration.${AI_PROMPT}`,
  max_tokens_to_sample: 1000,
  stream: true,
});

let fullText = "";
for await (const chunk of stream) {
  const text = chunk.completion;
  process.stdout.write(text);
  fullText += text;
}

// Manual stream control
const controlledStream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Long explanation needed...${AI_PROMPT}`,
  max_tokens_to_sample: 2000,
  stream: true,
});

// Cancel the stream after 10 seconds
setTimeout(() => {
  controlledStream.controller.abort();
}, 10000);

try {
  for await (const chunk of controlledStream) {
    console.log(chunk.completion);
  }
} catch (error) {
  if (error.name === "AbortError") {
    console.log("Stream was cancelled");
  }
}
```

## Migration to Messages API

**Legacy Completions format:**
```typescript
const completion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Hello!${AI_PROMPT}`,
  max_tokens_to_sample: 100,
});
```

**Modern Messages format:**
```typescript
const message = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 100,
  messages: [
    { role: "user", content: "Hello!" }
  ],
});
```

**Key Differences:**
- Messages API uses structured message arrays instead of prompt strings
- No need for explicit `HUMAN_PROMPT`/`AI_PROMPT` markers
- Better support for multi-modal content (images, documents)
- More modern models available
- Better streaming interface
- Tool usage support

## Error Handling

```typescript
import Anthropic, {
  BadRequestError,
  RateLimitError,
  AuthenticationError,
  HUMAN_PROMPT,
  AI_PROMPT,
} from "@anthropic-ai/sdk";

const client = new Anthropic();

// Handle common completion errors
try {
  const completion = await client.completions.create({
    model: "claude-2.1",
    prompt: `${HUMAN_PROMPT} Question here${AI_PROMPT}`,
    max_tokens_to_sample: 1000,
  });
} catch (error) {
  if (error instanceof BadRequestError) {
    console.log("Invalid prompt format or parameters");
  } else if (error instanceof RateLimitError) {
    console.log("Rate limit exceeded, retry later");
  } else if (error instanceof AuthenticationError) {
    console.log("Invalid API key");
  }
}
```

## Supported Models

Legacy models that work with the Completions API:

```typescript { .api }
type CompletionModel =
  | "claude-2.1"
  | "claude-2.0"
  | "claude-instant-1.2";
```

**Note**: Claude 3 models (Haiku, Sonnet, Opus) are not available through the Completions API and require the Messages API.
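When migrating, an existing marker-delimited prompt can be parsed back into the structured `messages` array that the Messages API expects. A minimal sketch, assuming a strictly alternating `HUMAN_PROMPT`/`AI_PROMPT` prompt; the `legacyPromptToMessages` helper is illustrative and not part of the SDK:

```typescript
// Marker values match the SDK's HUMAN_PROMPT / AI_PROMPT exports.
const HUMAN_PROMPT = "\n\nHuman:";
const AI_PROMPT = "\n\nAssistant:";

type MessageParam = { role: "user" | "assistant"; content: string };

// Hypothetical helper: split a legacy prompt on its markers and rebuild
// the turns as a Messages API message array. A trailing AI_PROMPT with no
// text after it is dropped, since that turn is what the model generates.
function legacyPromptToMessages(prompt: string): MessageParam[] {
  const messages: MessageParam[] = [];
  // Split with a capturing group so the markers themselves are kept,
  // then drop the empty strings the split produces at the boundaries.
  const parts = prompt.split(/(\n\nHuman:|\n\nAssistant:)/).filter(Boolean);
  let role: "user" | "assistant" | null = null;
  for (const part of parts) {
    if (part === HUMAN_PROMPT) {
      role = "user";
    } else if (part === AI_PROMPT) {
      role = "assistant";
    } else if (role !== null) {
      messages.push({ role, content: part.trim() });
    }
  }
  return messages;
}

const legacy = `${HUMAN_PROMPT} Hi there!${AI_PROMPT} Hello! How can I help?${HUMAN_PROMPT} Tell me about the weather.${AI_PROMPT}`;
const messages = legacyPromptToMessages(legacy);
// messages:
// [
//   { role: "user", content: "Hi there!" },
//   { role: "assistant", content: "Hello! How can I help?" },
//   { role: "user", content: "Tell me about the weather." },
// ]
```

The resulting array can be passed directly as the `messages` parameter of `client.messages.create`, replacing the legacy `prompt` string.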