The official TypeScript library for the OpenAI API
The OpenAI Assistants API (v2) enables building AI assistants with persistent threads, tool use (code interpreter, file search, function calling), and streaming support. This beta feature provides a stateful approach to building AI applications.
Note: The Assistants API has been marked as deprecated in favor of the Responses API in recent SDK versions, but remains fully functional.
Entry points: client.beta.assistants, client.beta.threads

import OpenAI from "openai";
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
// Access Assistants API
const assistant = await client.beta.assistants.create({ model: "gpt-4o" });
const thread = await client.beta.threads.create();
const run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
});

The Assistants API is organized hierarchically:
- Assistants (client.beta.assistants) - AI assistants with instructions and tools
- Threads (client.beta.threads) - Conversation threads containing messages
- Messages (client.beta.threads.messages) - User and assistant messages in threads
- Runs (client.beta.threads.runs) - Execution of an assistant on a thread
- Run Steps (client.beta.threads.runs.steps) - Individual steps within a run

Key capabilities:
Assistants (client.beta.assistants)

Create and manage AI assistants with custom instructions, tools, and configuration.
/**
* Create an assistant with a model and instructions
* @param body - Assistant configuration
* @returns Promise<Assistant>
*/
create(
body: AssistantCreateParams,
options?: RequestOptions
): Promise<Assistant>;

Usage:
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
name: "Math Tutor",
instructions: "You are a helpful math tutor. Guide students step by step.",
tools: [
{ type: "code_interpreter" },
{ type: "file_search" }
],
tool_resources: {
code_interpreter: {
file_ids: ["file-123"]
},
file_search: {
vector_store_ids: ["vs-456"]
}
},
temperature: 0.7,
metadata: { department: "education" }
});

/**
* Retrieves an assistant by ID
* @param assistantID - The assistant ID
* @returns Promise<Assistant>
*/
retrieve(
assistantID: string,
options?: RequestOptions
): Promise<Assistant>;

/**
* Modifies an assistant
* @param assistantID - The assistant ID
* @param body - Fields to update
* @returns Promise<Assistant>
*/
update(
assistantID: string,
body: AssistantUpdateParams,
options?: RequestOptions
): Promise<Assistant>;

Usage:
const updated = await client.beta.assistants.update(assistant.id, {
instructions: "Updated instructions",
tools: [{ type: "code_interpreter" }, { type: "function", function: myFunc }],
metadata: { version: "2.0" }
});

/**
* Returns a list of assistants with pagination support
* @param query - Pagination parameters
* @returns PagePromise<AssistantsPage, Assistant>
*/
list(
query?: AssistantListParams,
options?: RequestOptions
): PagePromise<AssistantsPage, Assistant>;

Usage:
// Auto-pagination with async iteration
for await (const assistant of client.beta.assistants.list({ limit: 20 })) {
console.log(assistant.name);
}
// Manual pagination
const page = await client.beta.assistants.list({ limit: 10, order: "desc" });

/**
* Delete an assistant
* @param assistantID - The assistant ID
* @returns Promise<AssistantDeleted>
*/
delete(
assistantID: string,
options?: RequestOptions
): Promise<AssistantDeleted>;

/**
* Represents an assistant that can call the model and use tools
*/
interface Assistant {
/** The identifier, which can be referenced in API endpoints */
id: string;
/** The Unix timestamp (in seconds) for when the assistant was created */
created_at: number;
/** The description of the assistant (max 512 characters) */
description: string | null;
/** System instructions for the assistant (max 256,000 characters) */
instructions: string | null;
/** Key-value metadata pairs (max 16 pairs) */
metadata: Metadata | null;
/** ID of the model to use */
model: string;
/** The name of the assistant (max 256 characters) */
name: string | null;
/** Always 'assistant' */
object: "assistant";
/** List of tools enabled on the assistant (max 128 tools) */
tools: Array<AssistantTool>;
/** Response format specification (json_schema, json_object, or auto) */
response_format?: AssistantResponseFormatOption | null;
/** Sampling temperature (0-2) */
temperature?: number | null;
/** Tool-specific resources (files for code_interpreter, vector stores for file_search) */
tool_resources?: Assistant.ToolResources | null;
/** Nucleus sampling parameter (0-1) */
top_p?: number | null;
}

/**
* Tool definitions for assistants
*/
type AssistantTool = CodeInterpreterTool | FileSearchTool | FunctionTool;
interface CodeInterpreterTool {
/** Always 'code_interpreter' */
type: "code_interpreter";
}
interface FileSearchTool {
/** Always 'file_search' */
type: "file_search";
/** Configuration for file search */
file_search?: {
/** Max results to return (1-50, default 20 for gpt-4, 5 for gpt-3.5) */
max_num_results?: number;
/** Ranking configuration */
ranking_options?: {
/** Score threshold (0-1) */
score_threshold: number;
/** Ranker type */
ranker?: "auto" | "default_2024_08_21";
};
};
}
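The nested file_search options above can be tuned when defining the tool. A minimal sketch, using a local simplified mirror of the FileSearchTool shape (the `FileSearchToolParam` name and the option values here are illustrative, not from the SDK):

```typescript
// Simplified local mirror of the FileSearchTool shape documented above
// (assumption: field names match; the real type ships with the SDK).
interface FileSearchToolParam {
  type: "file_search";
  file_search?: {
    max_num_results?: number;
    ranking_options?: {
      score_threshold: number;
      ranker?: "auto" | "default_2024_08_21";
    };
  };
}

// A tuned file_search tool: fewer results, stricter relevance cutoff.
const fileSearchTool: FileSearchToolParam = {
  type: "file_search",
  file_search: {
    max_num_results: 10,
    ranking_options: { score_threshold: 0.5, ranker: "auto" },
  },
};

console.log(fileSearchTool.file_search?.max_num_results); // 10
```

An object of this shape goes into the `tools` array of AssistantCreateParams.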
interface FunctionTool {
/** Function definition */
function: FunctionDefinition;
/** Always 'function' */
type: "function";
}

/**
* Parameters for creating an assistant
*/
interface AssistantCreateParams {
/** Model ID to use */
model: string | ChatModel;
/** Description (max 512 chars) */
description?: string | null;
/** System instructions (max 256,000 chars) */
instructions?: string | null;
/** Metadata key-value pairs */
metadata?: Metadata | null;
/** Name (max 256 chars) */
name?: string | null;
/** Reasoning effort (none, minimal, low, medium, high) */
reasoning_effort?: ReasoningEffort | null;
/** Response format configuration */
response_format?: AssistantResponseFormatOption | null;
/** Temperature (0-2) */
temperature?: number | null;
/** Tool resources (files, vector stores) */
tool_resources?: AssistantCreateParams.ToolResources | null;
/** Tools to enable */
tools?: Array<AssistantTool>;
/** Top-p sampling (0-1) */
top_p?: number | null;
}

/**
* Confirmation of assistant deletion
*/
interface AssistantDeleted {
id: string;
deleted: boolean;
object: "assistant.deleted";
}

Threads (client.beta.threads)

Manage conversation threads that contain messages and maintain state.
/**
* Create a thread with optional initial messages
* @param body - Thread configuration
* @returns Promise<Thread>
*/
create(
body?: ThreadCreateParams,
options?: RequestOptions
): Promise<Thread>;

Usage:
// Empty thread
const thread = await client.beta.threads.create();
// Thread with initial messages
const thread = await client.beta.threads.create({
messages: [
{
role: "user",
content: "Hello! I need help with calculus.",
attachments: [
{
file_id: "file-123",
tools: [{ type: "file_search" }]
}
]
}
],
tool_resources: {
code_interpreter: { file_ids: ["file-456"] }
},
metadata: { session_id: "abc123" }
});

/**
* Retrieves a thread
* @param threadID - The thread ID
* @returns Promise<Thread>
*/
retrieve(
threadID: string,
options?: RequestOptions
): Promise<Thread>;

/**
* Modifies a thread's metadata or tool resources
* @param threadID - The thread ID
* @param body - Fields to update
* @returns Promise<Thread>
*/
update(
threadID: string,
body: ThreadUpdateParams,
options?: RequestOptions
): Promise<Thread>;

/**
* Delete a thread
* @param threadID - The thread ID
* @returns Promise<ThreadDeleted>
*/
delete(
threadID: string,
options?: RequestOptions
): Promise<ThreadDeleted>;

/**
* Create a thread and immediately start a run (supports streaming)
* @param body - Thread and run configuration
* @returns Promise<Run> | Promise<Stream<AssistantStreamEvent>>
*/
createAndRun(
body: ThreadCreateAndRunParamsNonStreaming,
options?: RequestOptions
): Promise<Run>;
createAndRun(
body: ThreadCreateAndRunParamsStreaming,
options?: RequestOptions
): Promise<Stream<AssistantStreamEvent>>;

Usage:
// Non-streaming
const run = await client.beta.threads.createAndRun({
assistant_id: assistant.id,
thread: {
messages: [{ role: "user", content: "Explain quantum physics" }]
}
});
// Streaming
const stream = await client.beta.threads.createAndRun({
assistant_id: assistant.id,
thread: {
messages: [{ role: "user", content: "Write a story" }]
},
stream: true
});
for await (const event of stream) {
if (event.event === "thread.message.delta") {
const part = event.data.delta.content?.[0];
if (part?.type === "text") process.stdout.write(part.text?.value || "");
}
}

/**
* Helper: Create thread, run, and poll until completion
* @param body - Thread and run configuration
* @param options - Options including pollIntervalMs
* @returns Promise<Run>
*/
createAndRunPoll(
body: ThreadCreateAndRunParamsNonStreaming,
options?: RequestOptions & { pollIntervalMs?: number }
): Promise<Run>;

Usage:
const run = await client.beta.threads.createAndRunPoll({
assistant_id: assistant.id,
thread: {
messages: [{ role: "user", content: "Calculate 234 * 567" }]
}
}, { pollIntervalMs: 1000 });
// Run is now in terminal state (completed, failed, etc.)
console.log(run.status); // "completed"

/**
* Helper: Create thread and stream the run with typed AssistantStream
* @param body - Thread and run configuration
* @returns AssistantStream
*/
createAndRunStream(
body: ThreadCreateAndRunParamsBaseStream,
options?: RequestOptions
): AssistantStream;

Usage:
const stream = client.beta.threads.createAndRunStream({
assistant_id: assistant.id,
thread: {
messages: [{ role: "user", content: "Help me debug this code" }]
}
});
stream
.on("textCreated", () => process.stdout.write("\nassistant > "))
.on("textDelta", (delta) => process.stdout.write(delta.value || ""))
.on("textDone", () => console.log("\n"))
.on("toolCallCreated", (toolCall) => console.log(`Tool: ${toolCall.type}`))
.on("toolCallDelta", (delta, snapshot) => {
if (delta.type === "code_interpreter" && delta.code_interpreter?.input) {
process.stdout.write(delta.code_interpreter.input);
}
});
const finalRun = await stream.finalRun();

/**
* Represents a thread that contains messages
*/
interface Thread {
/** The identifier */
id: string;
/** Creation timestamp (Unix seconds) */
created_at: number;
/** Metadata key-value pairs */
metadata: Metadata | null;
/** Always 'thread' */
object: "thread";
/** Tool resources available in this thread */
tool_resources: Thread.ToolResources | null;
}
namespace Thread {
interface ToolResources {
code_interpreter?: {
/** File IDs for code interpreter (max 20) */
file_ids?: Array<string>;
};
file_search?: {
/** Vector store IDs for file search (max 1) */
vector_store_ids?: Array<string>;
};
}
}

/**
* Confirmation of thread deletion
*/
interface ThreadDeleted {
id: string;
deleted: boolean;
object: "thread.deleted";
}

/**
* Parameters for creating a thread
*/
interface ThreadCreateParams {
/** Initial messages for the thread */
messages?: Array<ThreadCreateParams.Message>;
/** Metadata key-value pairs */
metadata?: Metadata | null;
/** Tool resources */
tool_resources?: ThreadCreateParams.ToolResources | null;
}
namespace ThreadCreateParams {
interface Message {
/** Message text or content parts */
content: string | Array<MessageContentPartParam>;
/** Message role */
role: "user" | "assistant";
/** File attachments */
attachments?: Array<Attachment> | null;
/** Metadata */
metadata?: Metadata | null;
}
interface Attachment {
/** File ID */
file_id?: string;
/** Tools to use with this file */
tools?: Array<CodeInterpreterTool | FileSearchTool>;
}
}

Messages (client.beta.threads.messages)

Create and manage messages within threads.
/**
* Create a message in a thread
* @param threadID - The thread ID
* @param body - Message configuration
* @returns Promise<Message>
*/
create(
threadID: string,
body: MessageCreateParams,
options?: RequestOptions
): Promise<Message>;

Usage:
const message = await client.beta.threads.messages.create(thread.id, {
role: "user",
content: "Analyze this data and create visualizations",
attachments: [
{
file_id: "file-789",
tools: [{ type: "code_interpreter" }]
}
]
});
// Multi-modal message
const message = await client.beta.threads.messages.create(thread.id, {
role: "user",
content: [
{ type: "text", text: "What's in this image?" },
{
type: "image_url",
image_url: { url: "https://example.com/image.jpg" }
}
]
});

/**
* Retrieve a message
* @param messageID - The message ID
* @param params - Thread ID parameter
* @returns Promise<Message>
*/
retrieve(
messageID: string,
params: MessageRetrieveParams,
options?: RequestOptions
): Promise<Message>;

/**
* Update a message's metadata
* @param messageID - The message ID
* @param params - Thread ID and metadata
* @returns Promise<Message>
*/
update(
messageID: string,
params: MessageUpdateParams,
options?: RequestOptions
): Promise<Message>;

/**
* List messages in a thread with pagination
* @param threadID - The thread ID
* @param query - Pagination and filter parameters
* @returns PagePromise<MessagesPage, Message>
*/
list(
threadID: string,
query?: MessageListParams,
options?: RequestOptions
): PagePromise<MessagesPage, Message>;

Usage:
// Iterate all messages (narrow the content union before reading text)
for await (const message of client.beta.threads.messages.list(thread.id)) {
const first = message.content[0];
if (first?.type === "text") console.log(`${message.role}: ${first.text.value}`);
}
// Filter by run and order
const messages = await client.beta.threads.messages.list(thread.id, {
run_id: run.id,
order: "asc",
limit: 100
});

/**
* Delete a message
* @param messageID - The message ID
* @param params - Thread ID parameter
* @returns Promise<MessageDeleted>
*/
delete(
messageID: string,
params: MessageDeleteParams,
options?: RequestOptions
): Promise<MessageDeleted>;

/**
* Represents a message within a thread
*/
interface Message {
/** The identifier */
id: string;
/** Assistant ID that authored this message (if applicable) */
assistant_id: string | null;
/** File attachments */
attachments: Array<Message.Attachment> | null;
/** Completion timestamp (Unix seconds) */
completed_at: number | null;
/** Message content (text, images, etc.) */
content: Array<MessageContent>;
/** Creation timestamp (Unix seconds) */
created_at: number;
/** When message was marked incomplete */
incomplete_at: number | null;
/** Details about why message is incomplete */
incomplete_details: Message.IncompleteDetails | null;
/** Metadata */
metadata: Metadata | null;
/** Always 'thread.message' */
object: "thread.message";
/** Message role */
role: "user" | "assistant";
/** Associated run ID */
run_id: string | null;
/** Message status */
status: "in_progress" | "incomplete" | "completed";
/** Parent thread ID */
thread_id: string;
}
namespace Message {
interface IncompleteDetails {
/** Reason for incompletion */
reason: "content_filter" | "max_tokens" | "run_cancelled" | "run_expired" | "run_failed";
}
}

/**
* Content types in messages
*/
type MessageContent =
| ImageFileContentBlock
| ImageURLContentBlock
| TextContentBlock
| RefusalContentBlock;
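Because MessageContent is a discriminated union, reading a block requires narrowing on its `type` field. A minimal sketch, using local simplified copies of the content shapes (annotations and optional fields omitted for brevity):

```typescript
// Simplified local copies of the content-block shapes documented here
// (assumption: field names match the SDK interfaces).
type Content =
  | { type: "text"; text: { value: string } }
  | { type: "image_file"; image_file: { file_id: string } }
  | { type: "image_url"; image_url: { url: string } }
  | { type: "refusal"; refusal: string };

// Render any content block as plain text by narrowing on `type`.
function renderContent(content: Content): string {
  switch (content.type) {
    case "text":
      return content.text.value;
    case "image_file":
      return `[image file: ${content.image_file.file_id}]`;
    case "image_url":
      return `[image: ${content.image_url.url}]`;
    case "refusal":
      return `[refused: ${content.refusal}]`;
  }
}

console.log(renderContent({ type: "text", text: { value: "hi" } })); // "hi"
```

The exhaustive switch means the compiler flags any content type added to the union that the function does not yet handle.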
interface TextContentBlock {
text: Text;
type: "text";
}
interface Text {
/** Annotations (file citations, file paths) */
annotations: Array<Annotation>;
/** The text data */
value: string;
}
interface ImageFileContentBlock {
image_file: ImageFile;
type: "image_file";
}
interface ImageFile {
/** File ID of the image */
file_id: string;
/** Detail level (auto, low, high) */
detail?: "auto" | "low" | "high";
}
interface ImageURLContentBlock {
image_url: ImageURL;
type: "image_url";
}
interface ImageURL {
/** External image URL */
url: string;
/** Detail level (auto, low, high) */
detail?: "auto" | "low" | "high";
}
interface RefusalContentBlock {
/** Refusal text */
refusal: string;
/** Always 'refusal' */
type: "refusal";
}

/**
* Annotations within message text
*/
type Annotation = FileCitationAnnotation | FilePathAnnotation;
/**
* Citation pointing to a specific quote from a file (file_search tool)
*/
interface FileCitationAnnotation {
/** End index in text */
end_index: number;
/** Citation details */
file_citation: {
/** File ID being cited */
file_id: string;
};
/** Start index in text */
start_index: number;
/** Text to be replaced */
text: string;
/** Always 'file_citation' */
type: "file_citation";
}
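Annotations carry start_index/end_index spans into the message text, so rendering them means splicing the string. A sketch that swaps each annotated span for a numbered marker, using a simplified local annotation shape (the `Ann` type is illustrative):

```typescript
// Simplified annotation shape (assumption: mirrors the index fields above).
interface Ann {
  start_index: number;
  end_index: number;
  text: string;
  file_id: string;
}

// Replace each annotated span with a numbered marker, splicing from the
// end of the string so earlier indices stay valid.
function applyAnnotations(value: string, annotations: Ann[]): string {
  const sorted = [...annotations].sort((a, b) => b.start_index - a.start_index);
  let out = value;
  for (const ann of sorted) {
    const n = annotations.indexOf(ann) + 1;
    out = out.slice(0, ann.start_index) + `[${n}]` + out.slice(ann.end_index);
  }
  return out;
}

console.log(
  applyAnnotations("Hello WORLD end", [
    { start_index: 6, end_index: 11, text: "WORLD", file_id: "f1" },
  ])
); // "Hello [1] end"
```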
/**
* URL for file generated by code_interpreter tool
*/
interface FilePathAnnotation {
/** End index in text */
end_index: number;
/** File path details */
file_path: {
/** Generated file ID */
file_id: string;
};
/** Start index in text */
start_index: number;
/** Text to be replaced */
text: string;
/** Always 'file_path' */
type: "file_path";
}

/**
* Annotation deltas during streaming
*/
type AnnotationDelta = FileCitationDeltaAnnotation | FilePathDeltaAnnotation;
/**
* Citation delta during streaming (file_search tool)
*/
interface FileCitationDeltaAnnotation {
/** The index of the annotation in the text content part */
index: number;
/** Always 'file_citation' */
type: "file_citation";
/** Citation details */
file_citation?: {
/** The ID of the specific File the citation is from */
file_id?: string;
/** The specific quote in the file */
quote?: string;
};
/** Start index in text */
start_index?: number;
/** End index in text */
end_index?: number;
/** The text in the message content that needs to be replaced */
text?: string;
}
/**
* File path delta during streaming (code_interpreter tool)
*/
interface FilePathDeltaAnnotation {
/** The index of the annotation in the text content part */
index: number;
/** Always 'file_path' */
type: "file_path";
/** File path details */
file_path?: {
/** Generated file ID */
file_id?: string;
};
/** Start index in text */
start_index?: number;
/** End index in text */
end_index?: number;
/** The text in the message content that needs to be replaced */
text?: string;
}

/**
* Delta containing changed fields during streaming
*/
interface MessageDelta {
/** Content deltas */
content?: Array<MessageContentDelta>;
/** Role */
role?: "user" | "assistant";
}
/**
* Message delta event during streaming
*/
interface MessageDeltaEvent {
/** Message ID */
id: string;
/** The delta */
delta: MessageDelta;
/** Always 'thread.message.delta' */
object: "thread.message.delta";
}
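During streaming, each delta carries an `index` identifying which content part it extends, so a client accumulates deltas per index. A sketch over a simplified text-delta shape (the `TextPartDelta` name is illustrative):

```typescript
// Simplified text delta (assumption: mirrors TextDeltaBlock above,
// with only `index` and `text.value` used here).
interface TextPartDelta {
  index: number;
  type: "text";
  text?: { value?: string };
}

// Accumulate streamed deltas into one string per content-part index.
function accumulate(deltas: TextPartDelta[]): string[] {
  const parts: string[] = [];
  for (const d of deltas) {
    parts[d.index] = (parts[d.index] ?? "") + (d.text?.value ?? "");
  }
  return parts;
}

console.log(
  accumulate([
    { index: 0, type: "text", text: { value: "Hel" } },
    { index: 0, type: "text", text: { value: "lo" } },
  ])
); // ["Hello"]
```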
/**
* Content delta types
*/
type MessageContentDelta =
| ImageFileDeltaBlock
| TextDeltaBlock
| RefusalDeltaBlock
| ImageURLDeltaBlock;
interface TextDeltaBlock {
/** Content index */
index: number;
/** Always 'text' */
type: "text";
/** Text delta */
text?: TextDelta;
}
interface TextDelta {
/** Annotation deltas */
annotations?: Array<AnnotationDelta>;
/** Text value delta */
value?: string;
}

Runs (client.beta.threads.runs)

Execute an assistant on a thread with optional streaming and tool calling.
/**
* Create a run on a thread (supports streaming)
* @param threadID - The thread ID
* @param params - Run configuration
* @returns Promise<Run> | Promise<Stream<AssistantStreamEvent>>
*/
create(
threadID: string,
params: RunCreateParamsNonStreaming,
options?: RequestOptions
): Promise<Run>;
create(
threadID: string,
params: RunCreateParamsStreaming,
options?: RequestOptions
): Promise<Stream<AssistantStreamEvent>>;

Usage:
// Basic run
const run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id
});
// Run with overrides
const run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
instructions: "Additional instructions for this run only",
additional_messages: [
{ role: "user", content: "Context for this run" }
],
model: "gpt-4o",
tools: [{ type: "code_interpreter" }],
temperature: 0.8,
max_prompt_tokens: 10000,
max_completion_tokens: 2000,
parallel_tool_calls: true,
tool_choice: "auto",
metadata: { session: "xyz" }
});
// Streaming run
const stream = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
stream: true
});
for await (const event of stream) {
console.log(event.event, event.data);
}

/**
* Retrieve a run
* @param runID - The run ID
* @param params - Thread ID parameter
* @returns Promise<Run>
*/
retrieve(
runID: string,
params: RunRetrieveParams,
options?: RequestOptions
): Promise<Run>;

/**
* Update a run's metadata
* @param runID - The run ID
* @param params - Thread ID and metadata
* @returns Promise<Run>
*/
update(
runID: string,
params: RunUpdateParams,
options?: RequestOptions
): Promise<Run>;

/**
* List runs for a thread
* @param threadID - The thread ID
* @param query - Pagination parameters
* @returns PagePromise<RunsPage, Run>
*/
list(
threadID: string,
query?: RunListParams,
options?: RequestOptions
): PagePromise<RunsPage, Run>;

/**
* Cancel an in-progress run
* @param runID - The run ID
* @param params - Thread ID parameter
* @returns Promise<Run>
*/
cancel(
runID: string,
params: RunCancelParams,
options?: RequestOptions
): Promise<Run>;

/**
* Submit tool outputs for a run requiring action (supports streaming)
* @param runID - The run ID
* @param params - Tool outputs and thread ID
* @returns Promise<Run> | Promise<Stream<AssistantStreamEvent>>
*/
submitToolOutputs(
runID: string,
params: RunSubmitToolOutputsParamsNonStreaming,
options?: RequestOptions
): Promise<Run>;
submitToolOutputs(
runID: string,
params: RunSubmitToolOutputsParamsStreaming,
options?: RequestOptions
): Promise<Stream<AssistantStreamEvent>>;

Usage:
// Create a run, then refresh it and check if action is required
let run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id
});
run = await client.beta.threads.runs.retrieve(run.id, {
thread_id: thread.id
});
if (run.status === "requires_action" &&
run.required_action?.type === "submit_tool_outputs") {
const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
const toolOutputs = [];
for (const toolCall of toolCalls) {
if (toolCall.type === "function") {
const args = JSON.parse(toolCall.function.arguments);
const result = await myFunction(args);
toolOutputs.push({
tool_call_id: toolCall.id,
output: JSON.stringify(result)
});
}
}
run = await client.beta.threads.runs.submitToolOutputs(run.id, {
thread_id: thread.id,
tool_outputs: toolOutputs
});
}

/**
* Helper: Create run and poll until terminal state
* @param threadId - The thread ID
* @param body - Run configuration
* @param options - Options including pollIntervalMs
* @returns Promise<Run>
*/
createAndPoll(
threadId: string,
body: RunCreateParamsNonStreaming,
options?: RequestOptions & { pollIntervalMs?: number }
): Promise<Run>;

Usage:
const run = await client.beta.threads.runs.createAndPoll(
thread.id,
{ assistant_id: assistant.id },
{ pollIntervalMs: 500 }
);
console.log(run.status); // Terminal state: completed, failed, etc.
if (run.status === "completed") {
const messages = await client.beta.threads.messages.list(thread.id);
const first = messages.data[0]?.content[0];
if (first?.type === "text") console.log(first.text.value);
}

/**
* Helper: Create and stream run with AssistantStream
* @param threadId - The thread ID
* @param body - Run configuration
* @returns AssistantStream
* @deprecated Use stream() instead
*/
createAndStream(
threadId: string,
body: RunCreateParamsBaseStream,
options?: RequestOptions
): AssistantStream;

/**
* Helper: Poll run until terminal state
* @param runId - The run ID
* @param params - Thread ID parameter
* @param options - Options including pollIntervalMs
* @returns Promise<Run>
*/
poll(
runId: string,
params: RunRetrieveParams,
options?: RequestOptions & { pollIntervalMs?: number }
): Promise<Run>;

/**
* Helper: Stream run with AssistantStream
* @param threadId - The thread ID
* @param body - Run configuration
* @returns AssistantStream
*/
stream(
threadId: string,
body: RunCreateParamsBaseStream,
options?: RequestOptions
): AssistantStream;

Usage:
const stream = client.beta.threads.runs.stream(thread.id, {
assistant_id: assistant.id
});
stream
.on("textCreated", () => console.log("Text created"))
.on("textDelta", (delta, snapshot) => {
process.stdout.write(delta.value || "");
})
.on("messageDone", async (message) => {
console.log("\nMessage done:", message.id);
});
await stream.finalRun();

/**
* Helper: Submit tool outputs and poll until terminal state
* @param runId - The run ID
* @param params - Tool outputs and thread ID
* @param options - Options including pollIntervalMs
* @returns Promise<Run>
*/
submitToolOutputsAndPoll(
runId: string,
params: RunSubmitToolOutputsParamsNonStreaming,
options?: RequestOptions & { pollIntervalMs?: number }
): Promise<Run>;

/**
* Helper: Submit tool outputs and stream with AssistantStream
* @param runId - The run ID
* @param params - Tool outputs and thread ID
* @returns AssistantStream
*/
submitToolOutputsStream(
runId: string,
params: RunSubmitToolOutputsParamsStream,
options?: RequestOptions
): AssistantStream;

/**
* Represents an execution run on a thread
*/
interface Run {
/** The identifier */
id: string;
/** Assistant ID used for this run */
assistant_id: string;
/** When run was cancelled (Unix seconds) */
cancelled_at: number | null;
/** When run was completed (Unix seconds) */
completed_at: number | null;
/** Creation timestamp (Unix seconds) */
created_at: number;
/** Expiration timestamp (Unix seconds) */
expires_at: number | null;
/** When run failed (Unix seconds) */
failed_at: number | null;
/** Incomplete details */
incomplete_details: Run.IncompleteDetails | null;
/** Instructions used for this run */
instructions: string;
/** Last error if any */
last_error: Run.LastError | null;
/** Max completion tokens */
max_completion_tokens: number | null;
/** Max prompt tokens */
max_prompt_tokens: number | null;
/** Metadata */
metadata: Metadata | null;
/** Model used */
model: string;
/** Always 'thread.run' */
object: "thread.run";
/** Whether parallel function calling is enabled */
parallel_tool_calls: boolean;
/** Required action details if status is requires_action */
required_action: Run.RequiredAction | null;
/** Response format */
response_format: AssistantResponseFormatOption | null;
/** When run started (Unix seconds) */
started_at: number | null;
/** Run status */
status: RunStatus;
/** Parent thread ID */
thread_id: string;
/** Tool choice configuration */
tool_choice: AssistantToolChoiceOption | null;
/** Tools used */
tools: Array<AssistantTool>;
/** Truncation strategy */
truncation_strategy: Run.TruncationStrategy | null;
/** Usage statistics (null until terminal state) */
usage: Run.Usage | null;
/** Sampling temperature */
temperature?: number | null;
/** Nucleus sampling */
top_p?: number | null;
}
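Run.status takes the RunStatus values listed later in this section, and only some of them are terminal; a manual polling loop should stop on exactly those. A small sketch with a locally copied status union:

```typescript
// RunStatus values copied locally (assumption: matches the union
// documented later in this section).
type RunStatus =
  | "queued" | "in_progress" | "requires_action" | "cancelling"
  | "cancelled" | "failed" | "completed" | "incomplete" | "expired";

// Only these statuses are terminal; queued/in_progress/requires_action/
// cancelling runs may still change state.
const TERMINAL: ReadonlySet<RunStatus> = new Set<RunStatus>([
  "cancelled", "failed", "completed", "incomplete", "expired",
]);

function isTerminal(status: RunStatus): boolean {
  return TERMINAL.has(status);
}

console.log(isTerminal("completed")); // true
console.log(isTerminal("queued")); // false
```

Note that "requires_action" is not terminal: the run resumes once tool outputs are submitted. The createAndPoll/poll helpers encapsulate this check for you.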
namespace Run {
interface IncompleteDetails {
/** Why run is incomplete */
reason?: "max_completion_tokens" | "max_prompt_tokens";
}
interface LastError {
/** Error code */
code: "server_error" | "rate_limit_exceeded" | "invalid_prompt";
/** Error message */
message: string;
}
interface RequiredAction {
/** Tool outputs required */
submit_tool_outputs: {
/** Tool calls needing outputs */
tool_calls: Array<RequiredActionFunctionToolCall>;
};
/** Always 'submit_tool_outputs' */
type: "submit_tool_outputs";
}
interface TruncationStrategy {
/** Truncation type */
type: "auto" | "last_messages";
/** Number of recent messages to keep */
last_messages?: number | null;
}
interface Usage {
/** Completion tokens used */
completion_tokens: number;
/** Prompt tokens used */
prompt_tokens: number;
/** Total tokens used */
total_tokens: number;
}
}

/**
* Run status values
*/
type RunStatus =
| "queued"
| "in_progress"
| "requires_action"
| "cancelling"
| "cancelled"
| "failed"
| "completed"
| "incomplete"
| "expired";/**
* Function tool call requiring output
*/
interface RequiredActionFunctionToolCall {
/** Tool call ID (reference when submitting outputs) */
id: string;
/** Function definition */
function: {
/** Arguments as JSON string */
arguments: string;
/** Function name */
name: string;
};
/** Always 'function' */
type: "function";
}

/**
* Parameters for creating a run
*/
interface RunCreateParamsBase {
/** Assistant ID to use */
assistant_id: string;
/** Additional fields to include in response */
include?: Array<RunStepInclude>;
/** Additional instructions appended to assistant instructions */
additional_instructions?: string | null;
/** Additional messages added before run */
additional_messages?: Array<RunCreateParams.AdditionalMessage> | null;
/** Override assistant instructions */
instructions?: string | null;
/** Max completion tokens */
max_completion_tokens?: number | null;
/** Max prompt tokens */
max_prompt_tokens?: number | null;
/** Metadata */
metadata?: Metadata | null;
/** Override model */
model?: string | ChatModel | null;
/** Enable parallel function calling */
parallel_tool_calls?: boolean;
/** Reasoning effort */
reasoning_effort?: ReasoningEffort | null;
/** Response format */
response_format?: AssistantResponseFormatOption | null;
/** Enable streaming */
stream?: boolean | null;
/** Temperature */
temperature?: number | null;
/** Tool choice */
tool_choice?: AssistantToolChoiceOption | null;
/** Override tools */
tools?: Array<AssistantTool> | null;
/** Top-p sampling */
top_p?: number | null;
/** Truncation strategy */
truncation_strategy?: RunCreateParams.TruncationStrategy | null;
}
type RunCreateParams =
| RunCreateParamsNonStreaming
| RunCreateParamsStreaming;
interface RunCreateParamsNonStreaming extends RunCreateParamsBase {
stream?: false | null;
}
interface RunCreateParamsStreaming extends RunCreateParamsBase {
stream: true;
}

/**
* Parameters for submitting tool outputs
*/
interface RunSubmitToolOutputsParamsBase {
/** Thread ID */
thread_id: string;
/** Tool outputs */
tool_outputs: Array<RunSubmitToolOutputsParams.ToolOutput>;
/** Enable streaming */
stream?: boolean | null;
}
namespace RunSubmitToolOutputsParams {
interface ToolOutput {
/** Output value as string */
output?: string;
/** Tool call ID */
tool_call_id?: string;
}
}
type RunSubmitToolOutputsParams =
| RunSubmitToolOutputsParamsNonStreaming
| RunSubmitToolOutputsParamsStreaming;

/**
* Controls which tool is called
*/
type AssistantToolChoiceOption =
| "none" // Model will not call tools
| "auto" // Model can choose (default)
| "required" // Model must call at least one tool
| AssistantToolChoice;
interface AssistantToolChoice {
/** Tool type */
type: "function" | "code_interpreter" | "file_search";
/** Function name (if type is function) */
function?: {
name: string;
};
}

Run Steps (client.beta.threads.runs.steps)

Access detailed step-by-step execution information for runs.
/**
* Retrieve a run step
* @param stepID - The step ID
* @param params - Thread ID and run ID
* @returns Promise<RunStep>
*/
retrieve(
stepID: string,
params: StepRetrieveParams,
options?: RequestOptions
): Promise<RunStep>;

Usage:
const step = await client.beta.threads.runs.steps.retrieve(
stepId,
{
thread_id: thread.id,
run_id: run.id,
include: ["step_details.tool_calls[*].file_search.results[*].content"]
}
);
console.log(step.type); // "message_creation" or "tool_calls"

/**
* List steps for a run
* @param runID - The run ID
* @param params - Thread ID and pagination
* @returns PagePromise<RunStepsPage, RunStep>
*/
list(
runID: string,
params: StepListParams,
options?: RequestOptions
): PagePromise<RunStepsPage, RunStep>;

Usage:
for await (const step of client.beta.threads.runs.steps.list(run.id, {
thread_id: thread.id,
order: "asc"
})) {
console.log(`Step ${step.id}: ${step.type} - ${step.status}`);
if (step.type === "tool_calls") {
for (const toolCall of step.step_details.tool_calls) {
if (toolCall.type === "code_interpreter") {
console.log("Code:", toolCall.code_interpreter.input);
console.log("Outputs:", toolCall.code_interpreter.outputs);
}
}
}
}

/**
* Represents a step in execution of a run
*/
interface RunStep {
/** The identifier */
id: string;
/** Associated assistant ID */
assistant_id: string;
/** When cancelled (Unix seconds) */
cancelled_at: number | null;
/** When completed (Unix seconds) */
completed_at: number | null;
/** Creation timestamp (Unix seconds) */
created_at: number;
/** When expired (Unix seconds) */
expired_at: number | null;
/** When failed (Unix seconds) */
failed_at: number | null;
/** Last error */
last_error: RunStep.LastError | null;
/** Metadata */
metadata: Metadata | null;
/** Always 'thread.run.step' */
object: "thread.run.step";
/** Parent run ID */
run_id: string;
/** Step status */
status: "in_progress" | "cancelled" | "failed" | "completed" | "expired";
/** Step details */
step_details: MessageCreationStepDetails | ToolCallsStepDetails;
/** Parent thread ID */
thread_id: string;
/** Step type */
type: "message_creation" | "tool_calls";
/** Usage statistics (null while in_progress) */
usage: RunStep.Usage | null;
}
namespace RunStep {
interface LastError {
/** Error code */
code: "server_error" | "rate_limit_exceeded";
/** Error message */
message: string;
}
interface Usage {
/** Completion tokens */
completion_tokens: number;
/** Prompt tokens */
prompt_tokens: number;
/** Total tokens */
total_tokens: number;
}
}

/**
* Message creation step details
*/
interface MessageCreationStepDetails {
message_creation: {
/** ID of created message */
message_id: string;
};
/** Always 'message_creation' */
type: "message_creation";
}

/**
* Tool calls step details
*/
interface ToolCallsStepDetails {
/** Tool calls in this step */
tool_calls: Array<ToolCall>;
/** Always 'tool_calls' */
type: "tool_calls";
}

/**
* Tool call types
*/
type ToolCall =
| CodeInterpreterToolCall
| FileSearchToolCall
| FunctionToolCall;
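The `type` field is the discriminant for this union, so a `switch` lets TypeScript narrow each branch to its variant-specific fields. A minimal sketch of that pattern (the `ToolCallLike` type and `describeToolCall` helper are local to this example, with only the fields needed here):

```typescript
// Minimal local mirrors of the three variants' discriminants and key fields.
type ToolCallLike =
  | { type: "code_interpreter"; code_interpreter: { input: string } }
  | { type: "file_search" }
  | { type: "function"; function: { name: string } };

// Illustrative dispatcher: the `type` discriminant narrows the union,
// so each branch can safely access its variant's fields.
function describeToolCall(call: ToolCallLike): string {
  switch (call.type) {
    case "code_interpreter":
      return `code: ${call.code_interpreter.input}`;
    case "file_search":
      return "file search";
    case "function":
      return `function: ${call.function.name}`;
  }
}
```

The same narrowing applies to the SDK's full `ToolCall` union when iterating `step_details.tool_calls`.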
/**
* Code interpreter tool call
*/
interface CodeInterpreterToolCall {
/** Tool call ID */
id: string;
/** Code interpreter details */
code_interpreter: {
/** Input code */
input: string;
/** Outputs (logs or images) */
outputs: Array<
| { type: "logs"; logs: string }
| { type: "image"; image: { file_id: string } }
>;
};
/** Always 'code_interpreter' */
type: "code_interpreter";
}
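Since `outputs` mixes log entries and image references, a consumer usually separates the two. A hedged sketch of walking a completed code-interpreter call (the `CodeInterpreterOutput` type and `collectInterpreterOutputs` helper are local to this example):

```typescript
// Minimal local mirror of the code_interpreter outputs shape shown above.
type CodeInterpreterOutput =
  | { type: "logs"; logs: string }
  | { type: "image"; image: { file_id: string } };

// Illustrative helper: split a tool call's outputs into log text
// and generated image file IDs.
function collectInterpreterOutputs(outputs: CodeInterpreterOutput[]): {
  logs: string[];
  imageFileIds: string[];
} {
  const logs: string[] = [];
  const imageFileIds: string[] = [];
  for (const output of outputs) {
    if (output.type === "logs") {
      logs.push(output.logs);
    } else {
      imageFileIds.push(output.image.file_id);
    }
  }
  return { logs, imageFileIds };
}
```

The collected file IDs can then be fetched via the Files API to retrieve generated images.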
/**
* File search tool call
*/
interface FileSearchToolCall {
/** Tool call ID */
id: string;
/** File search details */
file_search: {
/** Ranking options */
ranking_options?: {
/** Ranker type */
ranker: "auto" | "default_2024_08_21";
/** Score threshold */
score_threshold: number;
};
/** Search results */
results?: Array<{
/** File ID */
file_id: string;
/** File name */
file_name: string;
/** Relevance score */
score: number;
/** Content (if requested) */
content?: Array<{ text?: string; type?: "text" }>;
}>;
};
/** Always 'file_search' */
type: "file_search";
}
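Each result carries a relevance `score`, so callers often filter and rank before citing sources. A minimal sketch under that assumption (the `FileSearchResult` type and `topMatches` helper are local names, not SDK exports):

```typescript
// Minimal local mirror of the file_search results shape shown above.
interface FileSearchResult {
  file_id: string;
  file_name: string;
  score: number;
}

// Illustrative helper: keep only results at or above a relevance
// threshold, best matches first.
function topMatches(results: FileSearchResult[], minScore: number): string[] {
  return results
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .map((r) => r.file_name);
}
```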
/**
* Function tool call
*/
interface FunctionToolCall {
/** Tool call ID */
id: string;
/** Function details */
function: {
/** Arguments */
arguments: string;
/** Function name */
name: string;
/** Output (null until submitted) */
output: string | null;
};
/** Always 'function' */
type: "function";
}

/**
* Run step delta during streaming
*/
interface RunStepDelta {
/** Step details delta */
step_details?: RunStepDeltaMessageDelta | ToolCallDeltaObject;
}
/**
* Run step delta event
*/
interface RunStepDeltaEvent {
/** Step ID */
id: string;
/** The delta */
delta: RunStepDelta;
/** Always 'thread.run.step.delta' */
object: "thread.run.step.delta";
}
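Deltas arrive as fragments, and a consumer builds the full value by concatenating each fragment onto a running snapshot (the `AssistantStream` helpers do this for you). A minimal sketch of that accumulation for code-interpreter input, where `accumulateInput` is an illustrative local helper, not part of the SDK:

```typescript
// Illustrative accumulator: each delta carries a fragment of the code
// being written (possibly undefined); the snapshot is the concatenation
// of all fragments seen so far.
function accumulateInput(fragments: Array<string | undefined>): string {
  return fragments.reduce<string>(
    (snapshot, fragment) => (fragment ? snapshot + fragment : snapshot),
    ""
  );
}
```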
/**
* Tool call delta
*/
type ToolCallDelta =
| CodeInterpreterToolCallDelta
| FileSearchToolCallDelta
| FunctionToolCallDelta;
interface ToolCallDeltaObject {
/** Always 'tool_calls' */
type: "tool_calls";
/** Tool call deltas */
tool_calls?: Array<ToolCallDelta>;
}

The AssistantStream class provides a rich event-based interface for streaming runs.
/**
* Typed stream for assistant responses with event handlers
*/
class AssistantStream extends EventStream<AssistantStreamEvents>
implements AsyncIterable<AssistantStreamEvent> {
/**
* Current event being processed
*/
currentEvent(): AssistantStreamEvent | undefined;
/**
* Current run snapshot
*/
currentRun(): Run | undefined;
/**
* Current message snapshot
*/
currentMessageSnapshot(): Message | undefined;
/**
* Current run step snapshot
*/
currentRunStepSnapshot(): RunStep | undefined;
/**
* Get final run steps after stream completes
*/
finalRunSteps(): Promise<RunStep[]>;
/**
* Get final messages after stream completes
*/
finalMessages(): Promise<Message[]>;
/**
* Get final run after stream completes
*/
finalRun(): Promise<Run>;
/**
* Create stream from ReadableStream
*/
static fromReadableStream(stream: ReadableStream): AssistantStream;
/**
* Convert to ReadableStream
*/
toReadableStream(): ReadableStream;
}

/**
* Event handlers for AssistantStream
*/
interface AssistantStreamEvents {
/** Run state changes */
run: (run: Run) => void;
/** Message lifecycle */
messageCreated: (message: Message) => void;
messageDelta: (message: MessageDelta, snapshot: Message) => void;
messageDone: (message: Message) => void;
/** Run step lifecycle */
runStepCreated: (runStep: RunStep) => void;
runStepDelta: (delta: RunStepDelta, snapshot: RunStep) => void;
runStepDone: (runStep: RunStep, snapshot: RunStep) => void;
/** Tool call lifecycle */
toolCallCreated: (toolCall: ToolCall) => void;
toolCallDelta: (delta: ToolCallDelta, snapshot: ToolCall) => void;
toolCallDone: (toolCall: ToolCall) => void;
/** Text content lifecycle */
textCreated: (content: Text) => void;
textDelta: (delta: TextDelta, snapshot: Text) => void;
textDone: (content: Text, snapshot: Message) => void;
/** Image files (not streamed) */
imageFileDone: (content: ImageFile, snapshot: Message) => void;
/** Raw events */
event: (event: AssistantStreamEvent) => void;
/** Stream lifecycle */
connect: () => void;
end: () => void;
abort: (error: Error) => void;
error: (error: Error) => void;
}

/**
* Union of all streaming event types
*/
type AssistantStreamEvent =
// Thread events
| ThreadCreated
// Run events
| ThreadRunCreated
| ThreadRunQueued
| ThreadRunInProgress
| ThreadRunRequiresAction
| ThreadRunCompleted
| ThreadRunIncomplete
| ThreadRunFailed
| ThreadRunCancelling
| ThreadRunCancelled
| ThreadRunExpired
// Step events
| ThreadRunStepCreated
| ThreadRunStepInProgress
| ThreadRunStepDelta
| ThreadRunStepCompleted
| ThreadRunStepFailed
| ThreadRunStepCancelled
| ThreadRunStepExpired
// Message events
| ThreadMessageCreated
| ThreadMessageInProgress
| ThreadMessageDelta
| ThreadMessageCompleted
| ThreadMessageIncomplete
// Error
| ErrorEvent;
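When consuming the raw event stream, it helps to know which run-level events are final (per the union above: completed, incomplete, failed, cancelled, expired). A hedged sketch of such a check; `isTerminalRunEvent` is a local name, and note that `thread.run.requires_action` also pauses the stream until tool outputs are submitted:

```typescript
// Run-level events after which no further run updates arrive,
// taken from the AssistantStreamEvent union above.
const TERMINAL_RUN_EVENTS = new Set([
  "thread.run.completed",
  "thread.run.incomplete",
  "thread.run.failed",
  "thread.run.cancelled",
  "thread.run.expired",
]);

// Illustrative guard for deciding when to stop consuming run updates.
function isTerminalRunEvent(event: string): boolean {
  return TERMINAL_RUN_EVENTS.has(event);
}
```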
/**
* Thread created event
*/
interface ThreadCreated {
data: Thread;
event: "thread.created";
enabled?: boolean;
}
/**
* Run created event
*/
interface ThreadRunCreated {
data: Run;
event: "thread.run.created";
}
/**
* Run queued event
*/
interface ThreadRunQueued {
data: Run;
event: "thread.run.queued";
}
/**
* Run in progress event
*/
interface ThreadRunInProgress {
data: Run;
event: "thread.run.in_progress";
}
/**
* Run requires action event
*/
interface ThreadRunRequiresAction {
data: Run;
event: "thread.run.requires_action";
}
/**
* Run completed event
*/
interface ThreadRunCompleted {
data: Run;
event: "thread.run.completed";
}
/**
* Run incomplete event
*/
interface ThreadRunIncomplete {
data: Run;
event: "thread.run.incomplete";
}
/**
* Run failed event
*/
interface ThreadRunFailed {
data: Run;
event: "thread.run.failed";
}
/**
* Run cancelling event
*/
interface ThreadRunCancelling {
data: Run;
event: "thread.run.cancelling";
}
/**
* Run cancelled event
*/
interface ThreadRunCancelled {
data: Run;
event: "thread.run.cancelled";
}
/**
* Run expired event
*/
interface ThreadRunExpired {
data: Run;
event: "thread.run.expired";
}
/**
* Run step created event
*/
interface ThreadRunStepCreated {
data: RunStep;
event: "thread.run.step.created";
}
/**
* Run step in progress event
*/
interface ThreadRunStepInProgress {
data: RunStep;
event: "thread.run.step.in_progress";
}
/**
* Run step delta event (streaming changes)
*/
interface ThreadRunStepDelta {
data: RunStepDeltaEvent;
event: "thread.run.step.delta";
}
/**
* Run step completed event
*/
interface ThreadRunStepCompleted {
data: RunStep;
event: "thread.run.step.completed";
}
/**
* Run step failed event
*/
interface ThreadRunStepFailed {
data: RunStep;
event: "thread.run.step.failed";
}
/**
* Run step cancelled event
*/
interface ThreadRunStepCancelled {
data: RunStep;
event: "thread.run.step.cancelled";
}
/**
* Run step expired event
*/
interface ThreadRunStepExpired {
data: RunStep;
event: "thread.run.step.expired";
}
/**
* Message created event
*/
interface ThreadMessageCreated {
data: Message;
event: "thread.message.created";
}
/**
* Message in progress event
*/
interface ThreadMessageInProgress {
data: Message;
event: "thread.message.in_progress";
}
/**
* Message delta event (streaming changes)
*/
interface ThreadMessageDelta {
data: MessageDeltaEvent;
event: "thread.message.delta";
}
/**
* Message completed event
*/
interface ThreadMessageCompleted {
data: Message;
event: "thread.message.completed";
}
/**
* Message incomplete event
*/
interface ThreadMessageIncomplete {
data: Message;
event: "thread.message.incomplete";
}
/**
* Error event
*/
interface ErrorEvent {
data: ErrorObject;
event: "error";
}

import OpenAI from "openai";
const client = new OpenAI();
// 1. Create an assistant
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
name: "Code Helper",
instructions: "You help debug code and write tests.",
tools: [{ type: "code_interpreter" }]
});
// 2. Create a thread
const thread = await client.beta.threads.create();
// 3. Add a message
await client.beta.threads.messages.create(thread.id, {
role: "user",
content: "Write a function to calculate fibonacci numbers"
});
// 4. Run the assistant and poll
const run = await client.beta.threads.runs.createAndPoll(thread.id, {
assistant_id: assistant.id
});
// 5. Retrieve messages
if (run.status === "completed") {
const messages = await client.beta.threads.messages.list(thread.id);
for (const message of messages.data.reverse()) {
  const block = message.content[0];
  if (block.type === "text") {
    console.log(`${message.role}: ${block.text.value}`);
  }
}
}

const stream = client.beta.threads.runs.stream(thread.id, {
assistant_id: assistant.id
});
stream
.on("textCreated", () => console.log("\nassistant > "))
.on("textDelta", (delta, snapshot) => {
process.stdout.write(delta.value || "");
})
.on("textDone", (content, message) => {
console.log("\n");
})
.on("toolCallCreated", (toolCall) => {
console.log(`\nTool: ${toolCall.type}`);
})
.on("toolCallDelta", (delta, snapshot) => {
if (delta.type === "code_interpreter") {
if (delta.code_interpreter?.input) {
process.stdout.write(delta.code_interpreter.input);
}
if (delta.code_interpreter?.outputs) {
for (const output of delta.code_interpreter.outputs) {
if (output.type === "logs") {
console.log("\nLogs:", output.logs);
}
}
}
}
})
.on("messageDone", async (message) => {
// Check for citations
const textContent = message.content[0];
if (textContent.type === "text") {
for (const annotation of textContent.text.annotations) {
if (annotation.type === "file_citation") {
console.log(`\nCited file: ${annotation.file_citation.file_id}`);
}
}
}
});
const finalRun = await stream.finalRun();
console.log("\nRun completed:", finalRun.status);

// Define function
const functions = [
{
type: "function" as const,
function: {
name: "get_weather",
description: "Get current weather for a location",
parameters: {
type: "object",
properties: {
location: {
type: "string",
description: "City and state, e.g., San Francisco, CA"
},
unit: {
type: "string",
enum: ["celsius", "fahrenheit"]
}
},
required: ["location"]
}
}
}
];
// Create assistant with function
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
tools: functions
});
// Create thread and run
const thread = await client.beta.threads.create({
messages: [
{ role: "user", content: "What's the weather in Boston?" }
]
});
let run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id
});
// Poll until requires_action
while (run.status === "queued" || run.status === "in_progress") {
await new Promise(resolve => setTimeout(resolve, 1000));
run = await client.beta.threads.runs.retrieve(run.id, {
thread_id: thread.id
});
}
// Handle function calling
if (run.status === "requires_action" &&
run.required_action?.type === "submit_tool_outputs") {
const toolCalls = run.required_action.submit_tool_outputs.tool_calls;
const toolOutputs = [];
for (const toolCall of toolCalls) {
if (toolCall.function.name === "get_weather") {
const args = JSON.parse(toolCall.function.arguments);
// Call your actual weather API
const weatherData = await getWeather(args.location);
toolOutputs.push({
tool_call_id: toolCall.id,
output: JSON.stringify(weatherData)
});
}
}
// Submit outputs and poll
run = await client.beta.threads.runs.submitToolOutputsAndPoll(
run.id,
{
thread_id: thread.id,
tool_outputs: toolOutputs
}
);
}
// Get final response
if (run.status === "completed") {
const messages = await client.beta.threads.messages.list(thread.id);
  const content = messages.data[0].content[0];
  if (content.type === "text") {
    console.log(content.text.value);
  }
}

// Upload files
const file1 = await client.files.create({
file: fs.createReadStream("research-paper.pdf"),
purpose: "assistants"
});
const file2 = await client.files.create({
file: fs.createReadStream("documentation.md"),
purpose: "assistants"
});
// Create vector store
const vectorStore = await client.vectorStores.create({
name: "Research Documents",
file_ids: [file1.id, file2.id]
});
// Wait for file processing
await client.vectorStores.files.poll(vectorStore.id, file1.id);
await client.vectorStores.files.poll(vectorStore.id, file2.id);
// Create assistant with file search
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
name: "Research Assistant",
instructions: "You help answer questions based on provided documents.",
tools: [{ type: "file_search" }],
tool_resources: {
file_search: {
vector_store_ids: [vectorStore.id]
}
}
});
// Create thread and run
const thread = await client.beta.threads.create({
messages: [
{
role: "user",
content: "What are the main findings in the research paper?"
}
]
});
const run = await client.beta.threads.runs.createAndPoll(thread.id, {
assistant_id: assistant.id
});
// Get response with citations
if (run.status === "completed") {
const messages = await client.beta.threads.messages.list(thread.id);
const message = messages.data[0];
if (message.content[0].type === "text") {
console.log(message.content[0].text.value);
// Show citations
for (const annotation of message.content[0].text.annotations) {
if (annotation.type === "file_citation") {
console.log(`\nSource: File ${annotation.file_citation.file_id}`);
console.log(`Text: "${annotation.text}"`);
}
}
}
}

// Upload data file
const file = await client.files.create({
file: fs.createReadStream("sales-data.csv"),
purpose: "assistants"
});
// Create assistant
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
name: "Data Analyst",
instructions: "You analyze data and create visualizations.",
tools: [{ type: "code_interpreter" }],
tool_resources: {
code_interpreter: {
file_ids: [file.id]
}
}
});
// Create a thread with the analysis request
const thread = await client.beta.threads.create({
  messages: [{ role: "user", content: "Analyze the sales data and summarize trends" }]
});
// Stream the analysis
const stream = client.beta.threads.runs.stream(thread.id, {
assistant_id: assistant.id
});
stream
.on("toolCallCreated", (toolCall) => {
if (toolCall.type === "code_interpreter") {
console.log("\nRunning code...");
}
})
.on("toolCallDelta", (delta, snapshot) => {
if (delta.type === "code_interpreter") {
if (delta.code_interpreter?.input) {
console.log(delta.code_interpreter.input);
}
if (delta.code_interpreter?.outputs) {
for (const output of delta.code_interpreter.outputs) {
if (output.type === "logs") {
console.log("Output:", output.logs);
} else if (output.type === "image") {
console.log("Image generated:", output.image?.file_id);
}
}
}
}
});
await stream.finalRun();

// Create thread and run together
const run = await client.beta.threads.createAndRunPoll({
assistant_id: assistant.id,
thread: {
messages: [
{
role: "user",
content: "Explain machine learning in simple terms",
attachments: [
{
file_id: "file-123",
tools: [{ type: "file_search" }]
}
]
}
],
metadata: { conversation_id: "conv-456" }
},
instructions: "Use simple language suitable for beginners",
temperature: 0.7,
tool_choice: "auto"
});
console.log("Thread ID:", run.thread_id);
console.log("Status:", run.status);
// Retrieve messages
const messages = await client.beta.threads.messages.list(run.thread_id);

try {
const run = await client.beta.threads.runs.createAndPoll(thread.id, {
assistant_id: assistant.id
});
// Check run status
switch (run.status) {
case "completed":
console.log("Success!");
break;
case "failed":
console.error("Run failed:", run.last_error);
break;
case "incomplete":
console.warn("Run incomplete:", run.incomplete_details);
break;
case "expired":
console.error("Run expired");
break;
case "requires_action":
console.log("Action required");
// Handle function calling
break;
}
} catch (error) {
if (error instanceof OpenAI.APIError) {
console.error("API Error:", error.message);
console.error("Status:", error.status);
console.error("Type:", error.type);
} else {
console.error("Unexpected error:", error);
}
}

const stream = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
stream: true
});
for await (const event of stream) {
switch (event.event) {
case "thread.run.created":
console.log("Run created:", event.data.id);
break;
case "thread.message.delta":
const delta = event.data.delta.content?.[0];
if (delta?.type === "text" && delta.text?.value) {
process.stdout.write(delta.text.value);
}
break;
case "thread.run.step.delta":
const stepDelta = event.data.delta.step_details;
if (stepDelta?.type === "tool_calls") {
const toolCall = stepDelta.tool_calls?.[0];
if (toolCall?.type === "code_interpreter") {
console.log("\nCode:", toolCall.code_interpreter?.input);
}
}
break;
case "thread.run.completed":
console.log("\nRun completed");
break;
case "error":
console.error("Error:", event.data);
break;
}
}

// Reuse threads for conversations
const thread = await client.beta.threads.create({
metadata: { user_id: "user-123", session_start: Date.now() }
});
// Add messages over time
await client.beta.threads.messages.create(thread.id, {
role: "user",
content: "First question"
});
const run1 = await client.beta.threads.runs.createAndPoll(thread.id, {
assistant_id: assistant.id
});
// Continue conversation in same thread
await client.beta.threads.messages.create(thread.id, {
role: "user",
content: "Follow-up question"
});
const run2 = await client.beta.threads.runs.createAndPoll(thread.id, {
assistant_id: assistant.id
});
// Clean up when done
await client.beta.threads.delete(thread.id);

// Set token limits
const run = await client.beta.threads.runs.create(thread.id, {
assistant_id: assistant.id,
max_prompt_tokens: 5000,
max_completion_tokens: 1000,
truncation_strategy: {
type: "last_messages",
last_messages: 10
}
});
// Check usage
if (run.status === "completed" && run.usage) {
console.log(`Tokens used: ${run.usage.total_tokens}`);
console.log(` Prompt: ${run.usage.prompt_tokens}`);
console.log(` Completion: ${run.usage.completion_tokens}`);
}

// Use metadata for tracking
const assistant = await client.beta.assistants.create({
model: "gpt-4o",
metadata: {
version: "2.0",
department: "support",
created_by: "admin@example.com"
}
});
const thread = await client.beta.threads.create({
metadata: {
user_id: "user-456",
conversation_type: "support",
priority: "high"
}
});
// Query by metadata (when listing)
for await (const assistant of client.beta.assistants.list()) {
if (assistant.metadata?.department === "support") {
console.log("Support assistant:", assistant.name);
}
}

// Custom poll interval
const run = await client.beta.threads.runs.createAndPoll(
thread.id,
{ assistant_id: assistant.id },
{
pollIntervalMs: 500, // Poll every 500ms
timeout: 60000 // 60 second timeout
}
);
// The server may suggest poll intervals via openai-poll-after-ms header
// The SDK automatically respects these suggestions

client.beta.assistants
├── create(params)
├── retrieve(id)
├── update(id, params)
├── list(params?)
└── delete(id)
client.beta.threads
├── create(params?)
├── retrieve(id)
├── update(id, params)
├── delete(id)
├── createAndRun(params)
├── createAndRunPoll(params, options)
├── createAndRunStream(params, options)
│
├── messages
│ ├── create(threadId, params)
│ ├── retrieve(messageId, params)
│ ├── update(messageId, params)
│ ├── list(threadId, params?)
│ └── delete(messageId, params)
│
└── runs
├── create(threadId, params)
├── retrieve(runId, params)
├── update(runId, params)
├── list(threadId, params?)
├── cancel(runId, params)
├── submitToolOutputs(runId, params)
├── createAndPoll(threadId, params, options)
├── createAndStream(threadId, params, options)
├── poll(runId, params, options)
├── stream(threadId, params, options)
├── submitToolOutputsAndPoll(runId, params, options)
├── submitToolOutputsStream(runId, params, options)
│
└── steps
├── retrieve(stepId, params)
    └── list(runId, params)

Core types: Assistant, Thread, Message, Run, RunStep
Tool types: CodeInterpreterTool, FileSearchTool, FunctionTool
Content types: TextContentBlock, ImageFileContentBlock, ImageURLContentBlock, RefusalContentBlock
Annotation types: FileCitationAnnotation, FilePathAnnotation
Stream event types: 20+ event interfaces in AssistantStreamEvent union
Delta types: MessageDelta, RunStepDelta, TextDelta, ToolCallDelta
Status types: RunStatus, message status, step status
Streaming: AssistantStream, AssistantStreamEvents, event interfaces
The SDK adds the 'OpenAI-Beta': 'assistants=v2' header automatically
Helper methods (createAndPoll, stream, etc.) simplify common workflows
Function calling requires handling the requires_action status

Install with Tessl CLI
npx tessl i tessl/npm-openai