tessl/npm-openai--agents

A lightweight yet powerful framework for building multi-agent workflows with tool integration, handoffs, and guardrails.

Workspace: tessl
Visibility: Public
Describes: npm package pkg:npm/@openai/agents@0.3.x

To install, run

npx @tessl/cli install tessl/npm-openai--agents@0.3.0


OpenAI Agents SDK

The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows in JavaScript/TypeScript. It enables developers to create sophisticated AI systems in which multiple LLM-powered agents collaborate, hand off tasks to one another, and integrate external tools and functions.

Package Information

  • Package Name: @openai/agents
  • Package Type: npm
  • Language: TypeScript
  • Installation: npm install @openai/agents zod@3

Core Imports

import { Agent, run, tool } from '@openai/agents';

For CommonJS:

const { Agent, run, tool } = require('@openai/agents');

Additional exports:

// Realtime voice agents
import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';

// Utilities
import {
  isZodObject,
  toSmartString,
  encodeUint8ArrayToBase64,
  applyDiff,
  EventEmitterDelegate
} from '@openai/agents/utils';

Basic Usage

import { Agent, run } from '@openai/agents';

// Create a simple agent
const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant',
});

// Run the agent
const result = await run(
  agent,
  'Write a haiku about recursion in programming.',
);
console.log(result.finalOutput);

With tools:

import { z } from 'zod';
import { Agent, run, tool } from '@openai/agents';

const getWeatherTool = tool({
  name: 'get_weather',
  description: 'Get the weather for a given city',
  parameters: z.object({ city: z.string() }),
  execute: async (input) => {
    return `The weather in ${input.city} is sunny`;
  },
});

const agent = new Agent({
  name: 'Weather Agent',
  instructions: 'You are a weather assistant',
  tools: [getWeatherTool],
});

const result = await run(agent, 'What is the weather in Tokyo?');
console.log(result.finalOutput);

Architecture

The OpenAI Agents SDK is built around several key components:

  • Agent System: Core Agent class for defining AI agents with instructions, tools, and configuration. Agents can be orchestrated using the Runner class or the convenience run() function.
  • Tool Integration: Flexible tool system for integrating external functions, computer use, shell commands, and hosted OpenAI tools (web search, code interpreter, etc.)
  • Handoff System: Specialized mechanism for transferring control between agents during execution
  • Guardrails: Input and output validation system for safety and reliability
  • MCP Protocol: Model Context Protocol support for integrating local and remote tool servers
  • Session Management: Persistent conversation history with multiple backend options
  • Streaming: Real-time event streams for monitoring agent execution
  • Tracing: Built-in tracing system for debugging and visualization
  • Realtime Voice Agents: WebRTC/WebSocket-based voice agent capabilities
  • Provider Abstraction: Model provider interface supporting OpenAI and custom providers

Capabilities

Agents and Execution

Core agent orchestration for running LLM-powered agents with tools, handoffs, and structured outputs.

class Agent<TContext = any, TOutput = any> {
  constructor(config: AgentConfiguration<TContext, TOutput>);
  static create<TOutput, Handoffs>(
    config: AgentConfiguration
  ): Agent<any, ResolvedAgentOutput<TOutput, Handoffs>>;
  clone(config: Partial<AgentConfiguration<TContext, TOutput>>): Agent<TContext, TOutput>;
  asTool(options?: AgentAsToolOptions): FunctionTool<TContext, Agent>;
}

interface AgentConfiguration<TContext = any, TOutput = any> {
  name: string;
  instructions?: string | ((runContext: RunContext<TContext>, agent: Agent) => string | Promise<string>);
  prompt?: Prompt | ((runContext: RunContext<TContext>, agent: Agent) => Prompt | Promise<Prompt>);
  handoffDescription?: string;
  handoffs?: (Agent | Handoff)[];
  model?: string | Model;
  modelSettings?: ModelSettings;
  tools?: Tool[];
  mcpServers?: MCPServer[];
  inputGuardrails?: InputGuardrail[];
  outputGuardrails?: OutputGuardrail<TOutput>[];
  outputType?: AgentOutputType<TOutput>;
  toolUseBehavior?: ToolUseBehavior;
  resetToolChoice?: boolean;
}

function run<TContext = any, TOutput = any>(
  agent: Agent<TContext, TOutput>,
  input: string | AgentInputItem[] | RunState,
  options?: RunOptions<TContext>
): Promise<RunResult<TOutput>> | StreamedRunResult<TOutput>;

class Runner<TContext = any> {
  constructor(config?: RunnerConfig<TContext>);
  run<TOutput = any>(
    agent: Agent<TContext, TOutput>,
    input: string | AgentInputItem[] | RunState,
    options?: RunOptions<TContext>
  ): Promise<RunResult<TOutput>> | StreamedRunResult<TOutput>;
}

interface RunResult<TOutput = any> {
  input: string | AgentInputItem[];
  output: AgentOutputItem[];
  history: AgentInputItem[];
  finalOutput?: ResolvedAgentOutput<TOutput>;
  lastAgent?: Agent;
  state: RunState;
}

interface StreamedRunResult<TOutput = any> extends RunResult<TOutput> {
  completed: Promise<void>;
  toStream(): ReadableStream<RunStreamEvent>;
  toTextStream(options?: TextStreamOptions): ReadableStream<string>;
  [Symbol.asyncIterator](): AsyncIterator<RunStreamEvent>;
}
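
As a brief sketch, streaming lets you observe a run while it executes; selecting the streamed overload via a stream: true run option follows current SDK docs:

import { Agent, run } from '@openai/agents';

const agent = new Agent({
  name: 'Storyteller',
  instructions: 'Tell very short stories.',
});

// `stream: true` selects the StreamedRunResult overload.
const result = await run(agent, 'Tell me a two-sentence story.', {
  stream: true,
});

// Iterate run events as they arrive; inspect event.type to filter.
for await (const event of result) {
  console.log(event.type);
}

await result.completed; // resolves once the run has fully finished
console.log(result.finalOutput);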


Tool Integration

Create and integrate custom tools, built-in tools, and hosted tools for agent use.

function tool<TParameters, TContext, TResult>(
  config: ToolConfig<TParameters, TContext, TResult>
): FunctionTool<TContext, TResult>;

interface ToolConfig<TParameters, TContext, TResult> {
  name?: string;
  description: string;
  parameters: TParameters;
  strict?: boolean;
  execute: (
    input: z.infer<TParameters>,
    context?: RunContext<TContext>,
    details?: ToolCallDetails
  ) => TResult | Promise<TResult>;
  errorFunction?: (context: RunContext<TContext>, error: any) => string;
  needsApproval?: boolean | ((runContext: RunContext<TContext>, input: any, callId?: string) => Promise<boolean>);
  isEnabled?: boolean | ((options: { runContext: RunContext<TContext>; agent: Agent }) => boolean | Promise<boolean>);
}

function computerTool(computer: Computer): ComputerTool;
function shellTool(shell: Shell): ShellTool;
function applyPatchTool(editor: Editor): ApplyPatchTool;
function hostedMcpTool(config: HostedMcpToolConfig): HostedTool;

// OpenAI-specific hosted tools
function webSearchTool(options?: WebSearchOptions): HostedTool;
function fileSearchTool(vectorStoreIds: string[], options?: FileSearchOptions): HostedTool;
function codeInterpreterTool(options?: CodeInterpreterOptions): HostedTool;
function imageGenerationTool(options?: ImageGenerationOptions): HostedTool;
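
A sketch combining a hosted tool with a custom function tool on one agent; it assumes the hosted-tool helpers are exported from the package root alongside tool, and hosted tool availability depends on the model:

import { z } from 'zod';
import { Agent, run, tool, webSearchTool } from '@openai/agents';

// A custom function tool with a Zod-validated (empty) parameter schema.
const getTime = tool({
  name: 'get_time',
  description: 'Get the current time as an ISO string',
  parameters: z.object({}),
  execute: async () => new Date().toISOString(),
});

const agent = new Agent({
  name: 'Researcher',
  instructions: 'Answer questions; search the web when needed.',
  tools: [webSearchTool(), getTime],
});

const result = await run(agent, 'What happened in tech news today?');
console.log(result.finalOutput);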


Handoffs

Transfer control between specialized agents during execution.

function handoff<TContext, TOutput, TInputType>(
  agent: Agent<TContext, TOutput>,
  options?: HandoffOptions<TContext, TInputType>
): Handoff<TContext, TOutput>;

interface HandoffOptions<TContext, TInputType> {
  toolNameOverride?: string;
  toolDescriptionOverride?: string;
  onHandoff?: (context: RunContext<TContext>, input?: any) => void | Promise<void>;
  inputType?: TInputType;
  inputFilter?: (input: HandoffInputData) => HandoffInputData;
  isEnabled?: boolean | ((options: { runContext: RunContext<TContext>; agent: Agent }) => boolean | Promise<boolean>);
}

class Handoff<TContext = any, TOutput = any> {
  toolName: string;
  toolDescription: string;
  agent: Agent<TContext, TOutput>;
  onInvokeHandoff(
    context: RunContext<TContext>,
    args?: any
  ): Agent<TContext, TOutput> | Promise<Agent<TContext, TOutput>>;
}
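
A minimal triage sketch: the handoff() wrapper customizes the generated transfer tool, though per AgentConfiguration you can also pass agents directly in handoffs:

import { Agent, run, handoff } from '@openai/agents';

const refundAgent = new Agent({
  name: 'Refund Agent',
  instructions: 'Handle refund requests.',
  handoffDescription: 'Handles anything related to refunds',
});

const triageAgent = new Agent({
  name: 'Triage Agent',
  instructions: 'Route the user to the right specialist.',
  handoffs: [
    handoff(refundAgent, {
      // Runs when the model invokes the transfer tool.
      onHandoff: () => console.log('Transferring to refunds'),
    }),
  ],
});

const result = await run(triageAgent, 'I want my money back for order #123.');
console.log(result.lastAgent?.name, result.finalOutput);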


Guardrails

Validate inputs and outputs for safety and reliability.

interface InputGuardrail<TContext = any> {
  name: string;
  execute: (args: {
    agent: Agent;
    input: AgentInputItem[];
    context: RunContext<TContext>;
  }) => Promise<GuardrailFunctionOutput>;
  runInParallel?: boolean;
}

interface OutputGuardrail<TOutput = any, TContext = any> {
  name: string;
  execute: (args: {
    agent: Agent;
    agentOutput: TOutput;
    context: RunContext<TContext>;
    details?: { rawOutput: AgentOutputItem[] };
  }) => Promise<GuardrailFunctionOutput>;
}

interface GuardrailFunctionOutput {
  tripwireTriggered: boolean;
  outputInfo: any;
}

function defineOutputGuardrail<TOutput = any, TContext = any>(
  config: OutputGuardrailConfig<TOutput, TContext>
): OutputGuardrail<TOutput, TContext>;
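
A sketch of an input guardrail that trips on a keyword; a production guardrail would more likely run a small classifier agent. The thrown error type comes from the Error Handling section below:

import { Agent, run, InputGuardrailTripwireTriggered } from '@openai/agents';

const noMedicalAdvice = {
  name: 'no_medical_advice',
  execute: async ({ input }) => ({
    // Trip when the serialized input mentions a banned topic.
    tripwireTriggered: JSON.stringify(input).toLowerCase().includes('diagnose'),
    outputInfo: null,
  }),
};

const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  inputGuardrails: [noMedicalAdvice],
});

try {
  await run(agent, 'Please diagnose my symptoms.');
} catch (e) {
  if (e instanceof InputGuardrailTripwireTriggered) {
    console.log('Input guardrail tripped');
  } else {
    throw e;
  }
}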


Model Context Protocol (MCP)

Integrate local and remote MCP servers for dynamic tool provisioning.

class MCPServerStdio implements MCPServer {
  constructor(options: MCPServerStdioOptions);
}

class MCPServerStreamableHttp implements MCPServer {
  constructor(options: MCPServerHttpOptions);
}

class MCPServerSSE implements MCPServer {
  constructor(options: MCPServerSSEOptions);
}

interface MCPServer {
  name: string;
  connect(): Promise<void>;
  close(): Promise<void>;
  listTools(): Promise<MCPTool[]>;
  callTool(toolName: string, args: any): Promise<CallToolResultContent>;
  invalidateToolsCache(): Promise<void>;
}
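
A sketch wiring a stdio MCP server into an agent. The fullCommand option name and the filesystem server package are assumptions; substitute a real server and consult MCPServerStdioOptions:

import { Agent, run, MCPServerStdio } from '@openai/agents';

// Hypothetical local MCP server exposing filesystem tools over stdio.
const server = new MCPServerStdio({
  name: 'Filesystem server',
  fullCommand: 'npx -y @modelcontextprotocol/server-filesystem ./docs',
});

await server.connect();

const agent = new Agent({
  name: 'File Assistant',
  instructions: 'Use the filesystem tools to answer questions about ./docs.',
  mcpServers: [server],
});

const result = await run(agent, 'List the files you can see.');
console.log(result.finalOutput);

await server.close();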


Session Management

Persistent conversation history storage with multiple backend options.

interface Session {
  getSessionId(): Promise<string>;
  getItems(limit?: number): Promise<AgentInputItem[]>;
  addItems(items: AgentInputItem[]): Promise<void>;
  popItem(): Promise<AgentInputItem | undefined>;
  clearSession(): Promise<void>;
}

class MemorySession implements Session {
  constructor(options?: MemorySessionOptions);
}

class OpenAIConversationsSession implements Session {
  constructor(options?: OpenAIConversationsSessionOptions);
}

function startOpenAIConversationsSession(client?: OpenAI): Promise<string>;
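
A sketch reusing one in-memory session across two runs so the second turn sees the first; passing the session through the run options is an assumption about RunOptions:

import { Agent, run, MemorySession } from '@openai/agents';

const agent = new Agent({
  name: 'Assistant',
  instructions: 'Answer briefly.',
});

const session = new MemorySession();

// Both runs share the same conversation history via the session.
await run(agent, 'My name is Ada.', { session });
const result = await run(agent, 'What is my name?', { session });
console.log(result.finalOutput); // should mention "Ada"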


Realtime Voice Agents

Build realtime voice agents using WebRTC or WebSockets.

class RealtimeAgent<TContext = any> {
  constructor(config: RealtimeAgentConfig<TContext>);
}

class RealtimeSession<TBaseContext = any> {
  constructor(
    initialAgent: RealtimeAgent<TBaseContext>,
    options: RealtimeSessionOptions<TBaseContext>
  );
  connect(options: RealtimeConnectOptions): Promise<void>;
  close(): void;
  updateAgent(newAgent: RealtimeAgent<TBaseContext>): Promise<RealtimeAgent<TBaseContext>>;
  sendMessage(message: string, otherEventData?: any): void;
  sendAudio(audio: ArrayBuffer | Uint8Array, options?: { commit?: boolean }): void;
}

class OpenAIRealtimeWebRTC implements RealtimeTransportLayer {
  constructor(options?: RealtimeWebRTCOptions);
}

class OpenAIRealtimeWebSocket implements RealtimeTransportLayer {
  constructor(options?: RealtimeWebSocketOptions);
}
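
A browser-oriented sketch: WebRTC is the default transport there, and the SDK wires up microphone and speakers for you. The model name and connect-option shape follow current SDK docs and may change:

import { RealtimeAgent, RealtimeSession } from '@openai/agents/realtime';

const voiceAgent = new RealtimeAgent({
  name: 'Voice Assistant',
  instructions: 'Respond out loud, briefly and cheerfully.',
});

const session = new RealtimeSession(voiceAgent, {
  model: 'gpt-realtime', // assumed model name
});

// Use a short-lived client key minted by your backend, not a raw API key.
await session.connect({ apiKey: '<ephemeral-client-key>' });
session.sendMessage('Say hello to the user.');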


Tracing and Debugging

Built-in tracing for visualizing and debugging agent runs.

class Trace {
  traceId: string;
  groupId?: string;
  metadata?: Record<string, any>;
}

class Span {
  spanId: string;
  name: string;
  startTime: number;
  endTime?: number;
}

function getCurrentTrace(): Trace | undefined;
function getCurrentSpan(): Span | undefined;
function getOrCreateTrace<T>(
  fn: (trace: Trace) => T | Promise<T>,
  options?: TraceOptions
): Promise<T>;

class OpenAITracingExporter {
  constructor(options?: OpenAITracingExporterOptions);
}

function setDefaultOpenAITracingExporter(): void;
function setTracingDisabled(disabled: boolean): void;
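
A sketch grouping two runs under one trace; the workflowName option is an assumption about TraceOptions, and the imports assume these helpers are exported from the package root:

import { Agent, run, getOrCreateTrace, setTracingDisabled } from '@openai/agents';

const agent = new Agent({ name: 'Assistant', instructions: 'Be concise.' });

await getOrCreateTrace(
  async () => {
    const first = await run(agent, 'Pick a city.');
    await run(agent, `Name a landmark in ${first.finalOutput}.`);
  },
  { workflowName: 'city-landmark' }, // assumed TraceOptions field
);

// Tracing can also be switched off globally, e.g. in tests:
setTracingDisabled(true);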


Model Providers

Configure and use different model providers.

class OpenAIProvider implements ModelProvider {
  constructor(options?: OpenAIProviderOptions);
  getModel(modelName?: string): Model;
}

interface ModelProvider {
  getModel(modelName?: string): Promise<Model> | Model;
}

interface Model {
  getResponse(request: ModelRequest): Promise<ModelResponse>;
  getStreamedResponse(request: ModelRequest): AsyncIterable<StreamEvent>;
}

function setDefaultModelProvider(provider: ModelProvider): void;
function getDefaultModel(): string;
function setDefaultOpenAIClient(client: OpenAI): void;
function setDefaultOpenAIKey(apiKey: string): void;
function setOpenAIAPI(api: 'chat_completions' | 'responses'): void;
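
A sketch routing all agents through an OpenAI-compatible endpoint; the baseURL is a placeholder and the option names follow OpenAIProviderOptions:

import {
  OpenAIProvider,
  setDefaultModelProvider,
  setOpenAIAPI,
} from '@openai/agents';

setDefaultModelProvider(
  new OpenAIProvider({
    apiKey: process.env.MY_PROVIDER_KEY, // hypothetical env var
    baseURL: 'https://example.com/v1',   // placeholder endpoint
  }),
);

// Some compatible endpoints only implement Chat Completions:
setOpenAIAPI('chat_completions');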


Error Handling

Comprehensive error types for different failure modes.

class AgentsError extends Error {
  readonly type: string;
}

class UserError extends AgentsError {}
class SystemError extends AgentsError {}
class ModelBehaviorError extends AgentsError {}
class MaxTurnsExceededError extends AgentsError {}
class ToolCallError extends AgentsError {}
class GuardrailExecutionError extends AgentsError {}
class InputGuardrailTripwireTriggered extends AgentsError {}
class OutputGuardrailTripwireTriggered extends AgentsError {}
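
A sketch catching specific failure modes around a run; the maxTurns run option is an assumption used here to provoke MaxTurnsExceededError:

import {
  Agent,
  run,
  MaxTurnsExceededError,
  OutputGuardrailTripwireTriggered,
} from '@openai/agents';

const agent = new Agent({ name: 'Assistant', instructions: 'Be helpful.' });

try {
  await run(agent, 'Answer the question.', { maxTurns: 3 }); // assumed option
} catch (e) {
  if (e instanceof MaxTurnsExceededError) {
    console.error('Run exceeded the turn limit');
  } else if (e instanceof OutputGuardrailTripwireTriggered) {
    console.error('An output guardrail tripped');
  } else {
    throw e;
  }
}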


Types

RunContext

Context object passed to tools, guardrails, and callbacks.

class RunContext<TContext = any> {
  context: TContext;
  usage: Usage;
  isToolApproved(options: { toolName: string; callId: string }): boolean;
  approveTool(
    approvalItem: RunToolApprovalItem,
    options?: { alwaysApprove?: boolean }
  ): Promise<void>;
  rejectTool(
    approvalItem: RunToolApprovalItem,
    options?: { alwaysReject?: boolean }
  ): Promise<void>;
  toJSON(): any;
}
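
A sketch showing local context flowing from the run into a tool; passing context in the run options is an assumption based on the RunContext shape above:

import { z } from 'zod';
import { Agent, run, tool } from '@openai/agents';

interface AppContext {
  userId: string;
}

const whoAmI = tool({
  name: 'who_am_i',
  description: 'Return the id of the current user',
  parameters: z.object({}),
  // The second argument is the RunContext carrying your local context.
  execute: async (_input, runContext) => `User id: ${runContext?.context.userId}`,
});

const agent = new Agent<AppContext>({
  name: 'Assistant',
  instructions: 'Use tools to answer questions about the user.',
  tools: [whoAmI],
});

const result = await run(agent, 'Who am I?', {
  context: { userId: 'u_123' }, // assumed RunOptions field
});
console.log(result.finalOutput);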

Usage

Token usage statistics.

class Usage {
  inputTokens: number;
  outputTokens: number;
  totalTokens: number;
  inputTokenDetails?: {
    audio?: number;
    text?: number;
    cached?: number;
  };
  outputTokenDetails?: {
    audio?: number;
    text?: number;
    reasoning?: number;
  };
  add(usage: Usage): void;
  toJSON(): object;
}

ModelSettings

Model configuration parameters.

interface ModelSettings {
  temperature?: number;
  topP?: number;
  frequencyPenalty?: number;
  presencePenalty?: number;
  toolChoice?: 'auto' | 'required' | 'none' | string;
  parallelToolCalls?: boolean;
  truncation?: 'auto' | 'disabled';
  maxTokens?: number;
  store?: boolean;
  promptCacheRetention?: 'in-memory' | '24h' | null;
  reasoning?: {
    effort?: 'none' | 'minimal' | 'low' | 'medium' | 'high' | null;
    summary?: 'auto' | 'concise' | 'detailed' | null;
  };
  text?: {
    verbosity?: 'low' | 'medium' | 'high' | null;
  };
  providerData?: Record<string, any>;
}
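
For instance, sampling and output limits can be tuned per agent (the model name is a placeholder):

import { Agent } from '@openai/agents';

const summarizer = new Agent({
  name: 'Summarizer',
  instructions: 'Summarize the input in one sentence.',
  model: 'gpt-4.1-mini', // placeholder model name
  modelSettings: {
    temperature: 0.2,
    maxTokens: 200,
    toolChoice: 'none',
  },
});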

Message Helpers

Helper functions for creating message items.

function user(
  input: string | UserContent[],
  options?: Record<string, any>
): UserMessageItem;

function assistant(
  content: string | AssistantContent[],
  options?: Record<string, any>
): AssistantMessageItem;

function system(
  input: string,
  options?: Record<string, any>
): SystemMessageItem;
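
A sketch seeding a run with structured history instead of a plain string, using the helpers above:

import { Agent, run, user, assistant, system } from '@openai/agents';

const agent = new Agent({ name: 'Assistant', instructions: 'Be brief.' });

const result = await run(agent, [
  system('Answer in French.'),
  user('What is the capital of Japan?'),
  assistant('Tokyo.'),
  user('And of Italy?'),
]);
console.log(result.finalOutput);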

Run Items

Classes representing items in agent execution history.

class RunMessageOutputItem {
  agent: Agent;
  message: AssistantMessageItem;
}

class RunToolCallItem {
  agent: Agent;
  toolCall: FunctionCallItem | ComputerUseCallItem | ShellCallItem | ApplyPatchCallItem;
}

class RunToolCallOutputItem {
  agent: Agent;
  toolCallOutput: FunctionCallResultItem | ComputerCallResultItem | ShellCallResultItem | ApplyPatchCallResultItem;
}

class RunReasoningItem {
  agent: Agent;
  reasoning: ReasoningItem;
}

class RunHandoffCallItem {
  agent: Agent;
  handoffCall: FunctionCallItem;
}

class RunHandoffOutputItem {
  agent: Agent;
  handoffOutput: FunctionCallResultItem;
}

class RunToolApprovalItem {
  agent: Agent;
  toolCall: FunctionCallItem;
  approved: boolean | null;
}

function extractAllTextOutput(items: (AgentOutputItem | RunItem)[]): string;
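
A sketch pulling plain text out of the raw output items; the import assumes extractAllTextOutput is exported from the package root:

import { Agent, run, extractAllTextOutput } from '@openai/agents';

const agent = new Agent({ name: 'Assistant', instructions: 'Show your work.' });

const result = await run(agent, 'What is 12 * 12?');

// Concatenates the text content of all message output items.
console.log(extractAllTextOutput(result.output));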

Utility Functions

Helper utilities available from @openai/agents/utils.

/**
 * Check if a value is a Zod object schema
 * @param value - Value to check
 * @returns True if value is a Zod object schema
 */
function isZodObject(value: any): boolean;

/**
 * Convert various types to a smart string representation
 * @param value - Value to convert
 * @returns String representation
 */
function toSmartString(value: any): string;

/**
 * Encode Uint8Array to base64 string
 * @param data - Uint8Array to encode
 * @returns Base64 encoded string
 */
function encodeUint8ArrayToBase64(data: Uint8Array): string;

/**
 * Apply a unified diff patch to original text
 * @param original - Original text
 * @param diff - Unified diff string
 * @returns Patched text
 */
function applyDiff(original: string, diff: string): string;

/**
 * Base class for delegating event emitter functionality
 */
class EventEmitterDelegate<TEvents> {
  on<K extends keyof TEvents>(event: K, handler: (...args: TEvents[K]) => void): void;
  off<K extends keyof TEvents>(event: K, handler: (...args: TEvents[K]) => void): void;
  emit<K extends keyof TEvents>(event: K, ...args: TEvents[K]): void;
}
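
A few quick usage sketches for the helpers above; the diff string passed to applyDiff is a hypothetical example of the expected unified-diff input:

import {
  toSmartString,
  encodeUint8ArrayToBase64,
  applyDiff,
} from '@openai/agents/utils';

// Stringify arbitrary values for logs or tool results.
console.log(toSmartString({ city: 'Tokyo' }));

// Base64-encode binary data, e.g. audio chunks for a realtime transport.
console.log(encodeUint8ArrayToBase64(new TextEncoder().encode('hello')));

// Apply a unified diff to original text (hypothetical diff shown).
const patched = applyDiff(
  'hello world\n',
  '--- a\n+++ b\n@@ -1 +1 @@\n-hello world\n+hello agents\n',
);
console.log(patched);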

Supported Environments

  • Node.js 22 or later
  • Deno
  • Bun
  • Cloudflare Workers (experimental, requires nodejs_compat)

Configuration

Set up the OpenAI API key:

import { setDefaultOpenAIKey } from '@openai/agents';

setDefaultOpenAIKey('your-api-key');

Or use an environment variable:

export OPENAI_API_KEY='your-api-key'

Configure the default model provider:

import { setDefaultModelProvider, OpenAIProvider } from '@openai/agents';

setDefaultModelProvider(new OpenAIProvider({
  apiKey: 'your-api-key',
  baseURL: 'https://api.openai.com/v1',
}));