npm-anthropic-ai--sdk

Description
The official TypeScript library for the Anthropic API, providing client functionality for Claude models.
Author
tessl
Last updated

How to use

npx @tessl/cli registry install tessl/npm-anthropic-ai--sdk@0.61.0

docs/completions-api.md

# Completions API (Legacy)

The Text Completions API is a legacy interface for generating text completions. **This API is deprecated and will not be compatible with future models.** New applications should use the [Messages API](./messages-api.md) instead.
See the [migration guide](https://docs.anthropic.com/en/api/migrating-from-text-completions-to-messages) for guidance on migrating from Text Completions to Messages.

## Capabilities

### Create Text Completion

Generate text completions using the legacy prompt format with explicit human/assistant markers.
```typescript { .api }
/**
 * Create a text completion (legacy interface)
 * @param params - Completion parameters
 * @returns Promise resolving to a Completion, or to a Stream of Completions when streaming
 */
create(params: CompletionCreateParamsNonStreaming): APIPromise<Completion>;
create(params: CompletionCreateParamsStreaming): APIPromise<Stream<Completion>>;

interface CompletionCreateParams {
  /** The model to use for completion */
  model: Model;
  /** The prompt to complete, including explicit human/assistant markers */
  prompt: string;
  /** Maximum number of tokens to generate */
  max_tokens_to_sample: number;
  /** Sampling temperature (0.0 to 1.0) */
  temperature?: number;
  /** Top-p (nucleus) sampling parameter */
  top_p?: number;
  /** Top-k sampling parameter */
  top_k?: number;
  /** Stop sequences to halt generation */
  stop_sequences?: string[];
  /** Whether to stream the response */
  stream?: boolean;
  /** Metadata for the request */
  metadata?: CompletionMetadata;
  /** Beta features to enable */
  betas?: AnthropicBeta[];
}

interface CompletionCreateParamsNonStreaming extends CompletionCreateParams {
  stream?: false;
}

interface CompletionCreateParamsStreaming extends CompletionCreateParams {
  stream: true;
}
```
**Usage Examples:**

```typescript
import Anthropic, { HUMAN_PROMPT, AI_PROMPT } from "@anthropic-ai/sdk";

const client = new Anthropic();

// Basic completion
const completion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} What is the capital of France?${AI_PROMPT}`,
  max_tokens_to_sample: 100,
});

console.log(completion.completion);

// Multi-turn conversation
const conversationPrompt = `${HUMAN_PROMPT} Hello, I'm learning about astronomy.${AI_PROMPT} That's wonderful! I'd be happy to help you learn about astronomy. What specific topics interest you?${HUMAN_PROMPT} Tell me about black holes.${AI_PROMPT}`;

const conversationCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: conversationPrompt,
  max_tokens_to_sample: 500,
  temperature: 0.7,
});

// Streaming completion
const stream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Write a short story about a robot.${AI_PROMPT}`,
  max_tokens_to_sample: 1000,
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.completion);
}
```
## Legacy Prompt Format

The Completions API requires explicit prompt markers to distinguish between human and AI text:

```typescript { .api }
/** Legacy human prompt marker */
const HUMAN_PROMPT: string;

/** Legacy AI prompt marker */
const AI_PROMPT: string;
```

**Prompt Structure:**
```
{HUMAN_PROMPT} Human message here{AI_PROMPT} Assistant response here{HUMAN_PROMPT} Next human message{AI_PROMPT}
```
**Usage Examples:**

```typescript
import { HUMAN_PROMPT, AI_PROMPT } from "@anthropic-ai/sdk";

// Single turn
const prompt = `${HUMAN_PROMPT} What is 2+2?${AI_PROMPT}`;

// Multi-turn
const multiTurnPrompt = `${HUMAN_PROMPT} Hi there!${AI_PROMPT} Hello! How can I help you today?${HUMAN_PROMPT} Tell me about the weather.${AI_PROMPT}`;

// System-like instructions (place at beginning)
const instructedPrompt = `${HUMAN_PROMPT} You are a helpful math tutor. Please explain concepts clearly.

What is calculus?${AI_PROMPT}`;
```
## Response Types

```typescript { .api }
interface Completion {
  /** The generated completion text */
  completion: string;
  /** Reason why generation stopped */
  stop_reason: "stop_sequence" | "max_tokens" | null;
  /** The stop sequence that ended generation (if any) */
  stop?: string;
  /** Unique completion identifier */
  id: string;
  /** Model used for generation */
  model: string;
  /** Response type indicator */
  type: "completion";
  /** Log identifier for the request */
  log_id?: string;
}

interface CompletionMetadata {
  /** User identifier for tracking */
  user_id?: string;
}
```
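A response is typically branched on its `stop_reason`. The following is a minimal sketch of that pattern; `describeStop` is a hypothetical helper, not part of the SDK, and the local `Completion` type simply mirrors the interface documented above.

```typescript
// Hypothetical helper (not part of the SDK): explain why a completion ended.
// The Completion shape mirrors the interface documented above.
interface Completion {
  completion: string;
  stop_reason: "stop_sequence" | "max_tokens" | null;
  stop?: string;
  id: string;
  model: string;
  type: "completion";
}

function describeStop(result: Completion): string {
  switch (result.stop_reason) {
    case "stop_sequence":
      // Include the matched sequence, if the API reported one.
      return `stopped at sequence ${JSON.stringify(result.stop ?? "")}`;
    case "max_tokens":
      return "truncated at max_tokens_to_sample";
    default:
      return "no stop reason reported";
  }
}
```

A `max_tokens` result usually means `max_tokens_to_sample` was set too low for the expected answer length.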
## Sampling Parameters

```typescript { .api }
interface SamplingParameters {
  /** Temperature controls randomness (0.0-1.0) */
  temperature?: number;
  /** Top-p nucleus sampling (0.0-1.0) */
  top_p?: number;
  /** Top-k sampling (positive integer) */
  top_k?: number;
  /** Sequences that stop generation */
  stop_sequences?: string[];
}
```
**Parameter Guidelines:**

- **Temperature**: Lower values (0.1-0.3) for factual tasks, higher values (0.7-0.9) for creative tasks
- **Top-p**: Usually 0.9-0.95 for good results
- **Top-k**: Typically 40-100, lower for more focused responses
- **Stop sequences**: Use to control response format and length
**Usage Examples:**

```typescript
// Factual, focused response
const factualCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} What year was the Declaration of Independence signed?${AI_PROMPT}`,
  max_tokens_to_sample: 50,
  temperature: 0.1,
  top_p: 0.9,
});

// Creative, varied response
const creativeCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Write a creative opening line for a mystery novel.${AI_PROMPT}`,
  max_tokens_to_sample: 100,
  temperature: 0.8,
  top_k: 50,
});

// Structured response with stop sequences
const structuredCompletion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} List three benefits of exercise:${AI_PROMPT}`,
  max_tokens_to_sample: 200,
  stop_sequences: ["\n\n", "4."],
});
```
## Streaming Completions

```typescript { .api }
/**
 * Stream interface for completion responses
 */
interface Stream<T> extends AsyncIterable<T> {
  /** Iterate over stream chunks */
  [Symbol.asyncIterator](): AsyncIterableIterator<T>;
  /** Convert stream to array */
  toArray(): Promise<T[]>;
  /** Controller for manual stream handling (e.g. cancellation) */
  controller: AbortController;
}
```
**Usage Examples:**

```typescript
// Basic streaming
const stream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Tell me a story about space exploration.${AI_PROMPT}`,
  max_tokens_to_sample: 1000,
  stream: true,
});

let fullText = "";
for await (const chunk of stream) {
  const text = chunk.completion;
  process.stdout.write(text);
  fullText += text;
}

// Manual stream control
const controlledStream = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Long explanation needed...${AI_PROMPT}`,
  max_tokens_to_sample: 2000,
  stream: true,
});

// Cancel the stream after 10 seconds
setTimeout(() => {
  controlledStream.controller.abort();
}, 10000);

try {
  for await (const chunk of controlledStream) {
    console.log(chunk.completion);
  }
} catch (error) {
  if (error instanceof Error && error.name === "AbortError") {
    console.log("Stream was cancelled");
  }
}
```
## Migration to Messages API

**Legacy Completions format:**
```typescript
const completion = await client.completions.create({
  model: "claude-2.1",
  prompt: `${HUMAN_PROMPT} Hello!${AI_PROMPT}`,
  max_tokens_to_sample: 100,
});
```

**Modern Messages format:**
```typescript
const message = await client.messages.create({
  model: "claude-3-sonnet-20240229",
  max_tokens: 100,
  messages: [
    { role: "user", content: "Hello!" }
  ],
});
```

**Key Differences:**
- Messages API uses structured message arrays instead of prompt strings
- No need for explicit `HUMAN_PROMPT`/`AI_PROMPT` markers
- Better support for multi-modal content (images, documents)
- More modern models available
- Better streaming interface
- Tool usage support
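The mechanical part of the migration can be automated. The sketch below is a hypothetical helper (not part of the SDK) that converts a legacy marker-delimited prompt into a Messages-style array, assuming the marker values of the SDK's `HUMAN_PROMPT` (`"\n\nHuman:"`) and `AI_PROMPT` (`"\n\nAssistant:"`) constants.

```typescript
// Hypothetical migration helper (not part of the SDK): convert a legacy
// marker-delimited prompt into a Messages-style array. Assumes the marker
// values "\n\nHuman:" and "\n\nAssistant:".
type Role = "user" | "assistant";

interface MessageParam {
  role: Role;
  content: string;
}

function promptToMessages(prompt: string): MessageParam[] {
  const pattern = /\n\n(Human|Assistant):/g;
  const messages: MessageParam[] = [];
  let match = pattern.exec(prompt);
  while (match !== null) {
    const role: Role = match[1] === "Human" ? "user" : "assistant";
    const start = pattern.lastIndex;
    const next = pattern.exec(prompt);
    // Each turn's content runs up to the next marker (or end of string);
    // the trailing empty assistant turn that legacy prompts end with is dropped.
    const content = prompt.slice(start, next ? next.index : prompt.length).trim();
    if (content.length > 0) {
      messages.push({ role, content });
    }
    match = next;
  }
  return messages;
}
```

For example, `promptToMessages("\n\nHuman: Hello!\n\nAssistant:")` yields `[{ role: "user", content: "Hello!" }]`, which can be passed directly as `messages` to `client.messages.create`.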
## Error Handling

```typescript
// Handle common completion errors
try {
  const completion = await client.completions.create({
    model: "claude-2.1",
    prompt: `${HUMAN_PROMPT} Question here${AI_PROMPT}`,
    max_tokens_to_sample: 1000,
  });
} catch (error) {
  if (error instanceof Anthropic.BadRequestError) {
    console.log("Invalid prompt format or parameters");
  } else if (error instanceof Anthropic.RateLimitError) {
    console.log("Rate limit exceeded, retry later");
  } else if (error instanceof Anthropic.AuthenticationError) {
    console.log("Invalid API key");
  }
}
```
## Supported Models

Legacy models that work with the Completions API:

```typescript { .api }
type CompletionModel =
  | "claude-2.1"
  | "claude-2.0"
  | "claude-instant-1.2";
```

**Note**: Claude 3 models (Haiku, Sonnet, Opus) are not available through the Completions API and require the Messages API.
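Code that supports both endpoints can route on the model id. A minimal sketch, assuming the legacy model list above; `isLegacyCompletionModel` is a hypothetical helper, not an SDK export:

```typescript
// Hypothetical guard (not part of the SDK): check whether a model id is a
// legacy Completions model before deciding which endpoint to call.
type CompletionModel = "claude-2.1" | "claude-2.0" | "claude-instant-1.2";

const LEGACY_COMPLETION_MODELS: readonly string[] = [
  "claude-2.1",
  "claude-2.0",
  "claude-instant-1.2",
];

function isLegacyCompletionModel(model: string): model is CompletionModel {
  return LEGACY_COMPLETION_MODELS.includes(model);
}
```

A caller might use this to send legacy models through `client.completions.create` and everything else through `client.messages.create`.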