
# OpenAI Agents SDK

A lightweight yet powerful Python framework for building multi-agent workflows with LLMs. The SDK provides a provider-agnostic foundation supporting OpenAI's Responses and Chat Completions APIs, as well as 100+ other LLMs through provider integrations. Built with extensibility in mind, it enables sophisticated agent orchestration with handoffs, guardrails, tool use, conversation memory, and built-in tracing.

## Package Information

- **Package Name**: openai-agents
- **Package Type**: pypi
- **Language**: Python
- **Minimum Python Version**: 3.9
- **Installation**: `pip install openai-agents`
- **With Voice Support**: `pip install 'openai-agents[voice]'`
- **With Redis Support**: `pip install 'openai-agents[redis]'`

## Core Imports

```python
from agents import Agent, Runner
```

Common imports for specific functionality:

```python
# Tools
from agents import function_tool, FunctionTool, FileSearchTool, WebSearchTool, ComputerTool

# Handoffs
from agents import Handoff, handoff

# Guardrails
from agents import InputGuardrail, OutputGuardrail, input_guardrail, output_guardrail

# Memory/Sessions
from agents import Session, SQLiteSession, OpenAIConversationsSession

# Model Configuration
from agents import ModelSettings, RunConfig

# Results and Items
from agents import RunResult, RunResultStreaming, RunItem, ModelResponse

# Tracing
from agents.tracing import trace, Trace, Span

# MCP
from agents.mcp import MCPServer, MCPServerStdio

# Realtime
from agents.realtime import RealtimeAgent, RealtimeRunner

# Voice
from agents.voice import VoicePipeline, STTModel, TTSModel
```

## Basic Usage

```python
from agents import Agent, Runner

# Create a simple agent
agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant"
)

# Run synchronously
result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Run asynchronously
import asyncio

async def main():
    result = await Runner.run(agent, "What is the weather like today?")
    print(result.final_output)

asyncio.run(main())
```

Simple agent with a tool:

```python
from agents import Agent, Runner, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Get the weather for a city."""
    return f"The weather in {city} is sunny."

agent = Agent(
    name="Weather Agent",
    instructions="You help users check the weather.",
    tools=[get_weather]
)

result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)  # The weather in Tokyo is sunny.
```

Multi-agent handoff:

```python
from agents import Agent, Runner

spanish_agent = Agent(
    name="Spanish Agent",
    instructions="You only speak Spanish."
)

english_agent = Agent(
    name="English Agent",
    instructions="You only speak English."
)

triage_agent = Agent(
    name="Triage Agent",
    instructions="Handoff to the appropriate agent based on language.",
    handoffs=[spanish_agent, english_agent]
)

result = Runner.run_sync(triage_agent, "Hola, ¿cómo estás?")
print(result.final_output)  # ¡Hola! Estoy bien, gracias...
```

## Architecture

The OpenAI Agents SDK follows a modular architecture with several key design patterns:

### Agent Loop

When you call `Runner.run()`, the SDK executes a loop until reaching a final output:

1. **LLM Call**: Calls the LLM using the agent's model, settings, and message history
2. **Response Processing**: Receives a response, which may include tool calls or handoffs
3. **Final Output Check**: If the response contains a final output (based on `output_type`, or plain text without tool calls), return it and end the loop
4. **Handoff Handling**: If the response contains a handoff, switch to the target agent and restart the loop
5. **Tool Execution**: Process any tool calls, append the results to history, and restart the loop

A `max_turns` parameter limits loop iterations (default: 10).
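
The loop above can be sketched in plain Python. This is a simplified illustration of the control flow only; `FakeResponse`, the fake model, and the tool registry are hypothetical stand-ins, not SDK classes.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class FakeResponse:
    """Stand-in for a model response: final text or a single tool call."""
    text: str | None = None
    tool_call: tuple[str, str] | None = None


def fake_model(history: list[str]) -> FakeResponse:
    # Pretend-LLM: requests the weather tool once, then answers.
    if not any("sunny" in item for item in history):
        return FakeResponse(tool_call=("get_weather", "Tokyo"))
    return FakeResponse(text="It is sunny in Tokyo.")


def run_loop(user_input: str, tools: dict, max_turns: int = 10) -> str:
    history = [user_input]
    for _ in range(max_turns):
        response = fake_model(history)          # 1. LLM call
        if response.tool_call is not None:      # 5. tool execution
            name, arg = response.tool_call
            history.append(tools[name](arg))
            continue                            # restart the loop
        return response.text                    # 3. final output, end loop
    raise RuntimeError("MaxTurnsExceeded")      # max_turns guard


tools = {"get_weather": lambda city: f"The weather in {city} is sunny."}
answer = run_loop("What's the weather in Tokyo?", tools)
print(answer)  # It is sunny in Tokyo.
```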

### Final Output Determination

- **With `output_type`**: Loop continues until agent produces structured output matching the specified type (using structured outputs)
- **Without `output_type`**: Loop continues until agent produces a message without tool calls or handoffs

### Component Hierarchy

- **Runner**: Orchestrates agent execution, manages the agent loop, handles context and sessions
- **Agent**: Defines behavior through instructions, tools, handoffs, guardrails, and model settings
- **Tools**: Executable functions (Python functions, hosted tools, MCP tools)
- **Handoffs**: Specialized tool calls for transferring control between agents
- **Guardrails**: Input/output validation with tripwire mechanisms for safety
- **Sessions**: Conversation history management across multiple runs
- **Tracing**: Observability layer tracking spans for each operation

### Extensibility Points

- **ModelProvider**: Custom LLM providers (OpenAI, LiteLLM, custom)
- **Session**: Custom conversation storage backends
- **TracingProcessor**: Custom observability destinations
- **Hooks**: Lifecycle callbacks for fine-grained control

## Capabilities

### Core Agent System

Create and configure agents with instructions, tools, handoffs, guardrails, and model settings. Run agents synchronously or asynchronously with the Runner.

```python { .api }
class Agent[TContext]:
    name: str
    instructions: str | Callable | None
    prompt: Prompt | DynamicPromptFunction | None
    tools: list[Tool]
    handoffs: list[Agent | Handoff]
    model: str | Model | None
    model_settings: ModelSettings
    mcp_servers: list[MCPServer]
    mcp_config: MCPConfig
    input_guardrails: list[InputGuardrail]
    output_guardrails: list[OutputGuardrail]
    output_type: type[Any] | AgentOutputSchemaBase | None
    hooks: AgentHooks | None
    tool_use_behavior: Literal | StopAtTools | ToolsToFinalOutputFunction
    reset_tool_choice: bool
    handoff_description: str | None

    def clone(**kwargs) -> Agent: ...
    def as_tool(...) -> Tool: ...
    def get_system_prompt(context) -> str | None: ...
    def get_all_tools(context) -> list[Tool]: ...

class Runner:
    @classmethod
    async def run(starting_agent, input, *, context, max_turns, hooks,
                  run_config, previous_response_id, conversation_id,
                  session) -> RunResult: ...

    @classmethod
    def run_sync(...) -> RunResult: ...

    @classmethod
    def run_streamed(...) -> RunResultStreaming: ...

class RunConfig:
    model: str | Model | None
    model_provider: ModelProvider
    model_settings: ModelSettings | None
    handoff_input_filter: HandoffInputFilter | None
    nest_handoff_history: bool
    input_guardrails: list[InputGuardrail] | None
    output_guardrails: list[OutputGuardrail] | None
    tracing_disabled: bool
    workflow_name: str
    trace_id: str | None
    ...
```

[Core Agent System](./core-agents.md)

### Tools

Function tools, hosted tools (file search, web search, computer use, image generation, code interpreter), shell tools, MCP tools, and tool output types.

```python { .api }
@function_tool
def my_function(param: str) -> str:
    """Function description."""
    ...

class FunctionTool:
    name: str
    description: str
    params_json_schema: dict[str, Any]
    on_invoke_tool: Callable
    strict_json_schema: bool
    is_enabled: bool | Callable
    tool_input_guardrails: list[ToolInputGuardrail] | None
    tool_output_guardrails: list[ToolOutputGuardrail] | None

class FileSearchTool:
    vector_store_ids: list[str]
    max_num_results: int | None
    include_search_results: bool
    ranking_options: RankingOptions | None
    filters: Filters | None

class WebSearchTool:
    user_location: UserLocation | None
    filters: WebSearchToolFilters | None
    search_context_size: Literal["low", "medium", "high"]

class ComputerTool:
    computer: Computer | AsyncComputer
    on_safety_check: Callable | None

class ShellTool:
    executor: ShellExecutor
    name: str

class ApplyPatchTool:
    editor: ApplyPatchEditor
    name: str

class HostedMCPTool:
    tool_config: Mcp
    on_approval_request: MCPToolApprovalFunction | None

class CodeInterpreterTool:
    tool_config: CodeInterpreter

class ImageGenerationTool:
    tool_config: ImageGeneration

class LocalShellTool:
    executor: LocalShellExecutor
```

[Tools](./tools.md)

### Handoffs

Agent-to-agent delegation with input filtering, history management, and custom handoff configurations.

```python { .api }
class Handoff[TContext, TAgent]:
    tool_name: str
    tool_description: str
    input_json_schema: dict[str, Any]
    on_invoke_handoff: Callable
    agent_name: str
    input_filter: HandoffInputFilter | None
    nest_handoff_history: bool | None
    strict_json_schema: bool
    is_enabled: bool | Callable

    def get_transfer_message(agent) -> str: ...

def handoff(agent, tool_name_override, tool_description_override,
            on_handoff, input_type, input_filter, nest_handoff_history,
            is_enabled) -> Handoff: ...

class HandoffInputData:
    input_history: str | tuple[TResponseInputItem, ...]
    pre_handoff_items: tuple[RunItem, ...]
    new_items: tuple[RunItem, ...]
    run_context: RunContextWrapper | None

    def clone(**kwargs) -> HandoffInputData: ...

HandoffHistoryMapper = Callable[[list[TResponseInputItem]], list[TResponseInputItem]]

def nest_handoff_history() -> list[TResponseInputItem]: ...
def default_handoff_history_mapper() -> list[TResponseInputItem]: ...
```
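
An `input_filter` receives the conversation gathered so far and returns what the target agent should see. As a plain-Python sketch of the idea (dicts instead of the SDK's `HandoffInputData`; the item shapes here are hypothetical):

```python
# Hypothetical handoff input filter: strip tool traffic before handing the
# conversation to the target agent, keeping only plain messages.
def drop_tool_items(items: list[dict]) -> list[dict]:
    return [item for item in items
            if item["type"] not in ("tool_call", "tool_output")]


history = [
    {"type": "message", "content": "Weather in Tokyo?"},
    {"type": "tool_call", "content": "get_weather(Tokyo)"},
    {"type": "tool_output", "content": "sunny"},
    {"type": "message", "content": "It is sunny in Tokyo."},
]

filtered = drop_tool_items(history)
print([item["type"] for item in filtered])  # ['message', 'message']
```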

[Handoffs](./handoffs.md)

### Guardrails

Input and output validation with configurable safety checks, tool-specific guardrails, and tripwire mechanisms.

```python { .api }
class InputGuardrail[TContext]:
    guardrail_function: Callable
    name: str | None
    run_in_parallel: bool

    def get_name() -> str: ...
    async def run(agent, input, context) -> InputGuardrailResult: ...

class OutputGuardrail[TContext]:
    guardrail_function: Callable
    name: str | None

    def get_name() -> str: ...
    async def run(context, agent, agent_output) -> OutputGuardrailResult: ...

@input_guardrail
def my_input_check(input: str) -> GuardrailFunctionOutput:
    """Check input before agent processes it."""
    ...

@output_guardrail
def my_output_check(output: str) -> GuardrailFunctionOutput:
    """Check output before returning to user."""
    ...

class ToolInputGuardrail[TContext]:
    guardrail_function: Callable
    name: str | None

    async def run(data) -> ToolGuardrailFunctionOutput: ...

class ToolOutputGuardrail[TContext]:
    guardrail_function: Callable
    name: str | None

    async def run(data) -> ToolGuardrailFunctionOutput: ...

class GuardrailFunctionOutput:
    output_info: Any
    tripwire_triggered: bool

class ToolGuardrailFunctionOutput:
    output_info: Any
    behavior: RejectContentBehavior | RaiseExceptionBehavior | AllowBehavior

    @classmethod
    def allow(output_info) -> ToolGuardrailFunctionOutput: ...

    @classmethod
    def reject_content(message, output_info) -> ToolGuardrailFunctionOutput: ...

    @classmethod
    def raise_exception(output_info) -> ToolGuardrailFunctionOutput: ...
```
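
The tripwire pattern can be illustrated with a self-contained sketch. This is plain Python for the concept only, not the SDK's actual classes; the homework check is a made-up example.

```python
# Concept sketch of the guardrail tripwire: a check function returns output
# info plus a tripwire flag; the runner raises before an answer is produced.
class TripwireTriggered(Exception):
    pass


def homework_check(user_input: str) -> dict:
    flagged = "homework" in user_input.lower()
    return {"output_info": {"flagged": flagged}, "tripwire_triggered": flagged}


def run_with_guardrail(user_input: str) -> str:
    result = homework_check(user_input)       # run the input guardrail first
    if result["tripwire_triggered"]:
        raise TripwireTriggered("input guardrail tripped")
    return f"Answering: {user_input}"         # only reached if not tripped


print(run_with_guardrail("What is 2 + 2?"))   # Answering: What is 2 + 2?
try:
    run_with_guardrail("Do my homework for me")
except TripwireTriggered as exc:
    print(exc)                                # input guardrail tripped
```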

[Guardrails](./guardrails.md)

### Memory and Sessions

Conversation history management across agent runs with built-in session implementations and custom session support.

```python { .api }
class SessionABC:
    async def get_items() -> list[TResponseInputItem]: ...
    async def add_items(items) -> None: ...
    async def clear() -> None: ...

class SQLiteSession(SessionABC):
    def __init__(session_id, db_path): ...

class OpenAIConversationsSession(SessionABC):
    def __init__(conversation_id, client): ...
```

Advanced session implementations:

```python { .api }
# In agents.extensions.memory
class RedisSession(SessionABC): ...
class SQLAlchemySession(SessionABC): ...
class AdvancedSQLiteSession(SessionABC): ...
class DaprSession(SessionABC): ...
class EncryptSession: ...  # Wrapper for encrypted sessions
```
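
A custom storage backend only needs the three async methods shown above. A minimal in-memory sketch (hypothetical, for illustration; real backends would persist the items):

```python
import asyncio


class InMemorySession:
    """Minimal custom session: stores conversation items in a plain list.

    Implements the three methods the session interface above describes.
    """

    def __init__(self, session_id: str):
        self.session_id = session_id
        self._items: list[dict] = []

    async def get_items(self) -> list[dict]:
        return list(self._items)

    async def add_items(self, items: list[dict]) -> None:
        self._items.extend(items)

    async def clear(self) -> None:
        self._items.clear()


async def demo() -> list[dict]:
    session = InMemorySession("chat-1")
    await session.add_items([{"role": "user", "content": "hello"}])
    await session.add_items([{"role": "assistant", "content": "hi!"}])
    return await session.get_items()


items = asyncio.run(demo())
print(len(items))  # 2
```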

[Memory and Sessions](./memory-sessions.md)

### Model Providers

Support for OpenAI models and 100+ LLMs through provider abstraction and LiteLLM integration.

```python { .api }
class Model:
    async def get_response(...) -> ModelResponse: ...
    async def stream_response(...) -> AsyncIterator[TResponseStreamEvent]: ...

class ModelProvider:
    def get_model(model_name) -> Model: ...

class OpenAIProvider(ModelProvider):
    def get_model(model_name) -> Model: ...

class MultiProvider(ModelProvider):
    def get_model(model_name) -> Model: ...

class OpenAIChatCompletionsModel(Model): ...
class OpenAIResponsesModel(Model): ...

class ModelSettings:
    temperature: float | None
    top_p: float | None
    frequency_penalty: float | None
    presence_penalty: float | None
    tool_choice: ToolChoice | None
    parallel_tool_calls: bool | None
    max_tokens: int | None
    reasoning: Reasoning | None
    verbosity: Literal["low", "medium", "high"] | None
    ...

    def resolve(override) -> ModelSettings: ...
```

LiteLLM provider (in extensions):

```python { .api }
# In agents.extensions.models
class LiteLLMProvider(ModelProvider): ...
```
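
The provider abstraction boils down to `get_model(name) -> Model`. As a toy sketch of prefix-based routing in the spirit of `MultiProvider` (all classes here are illustrative stand-ins, not the SDK's code):

```python
# Toy sketch of provider routing: map a model-name prefix to a provider.
class FakeModel:
    def __init__(self, name: str):
        self.name = name


class PrefixProvider:
    """Illustrative provider that tags models with its own prefix."""

    def __init__(self, prefix: str):
        self.prefix = prefix

    def get_model(self, model_name: str) -> FakeModel:
        return FakeModel(f"{self.prefix}:{model_name}")


class ToyMultiProvider:
    """Routes "litellm/..." names to one provider, everything else to a default."""

    def __init__(self):
        self._providers = {"litellm": PrefixProvider("litellm")}
        self._default = PrefixProvider("openai")

    def get_model(self, model_name: str) -> FakeModel:
        head, _, rest = model_name.partition("/")
        provider = self._providers.get(head)
        if provider:
            return provider.get_model(rest)
        return self._default.get_model(model_name)


provider = ToyMultiProvider()
print(provider.get_model("gpt-4o").name)                    # openai:gpt-4o
print(provider.get_model("litellm/anthropic/claude").name)  # litellm:anthropic/claude
```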

[Model Providers](./model-providers.md)

### Tracing

Built-in distributed tracing with spans for all operations, custom trace processors, and integration with external observability platforms.

```python { .api }
class Trace:
    trace_id: str
    name: str
    group_id: str | None
    metadata: dict[str, Any] | None

    def start(mark_as_current) -> None: ...
    def finish(reset_current) -> None: ...

class Span[TSpanData]:
    span_id: str
    name: str
    span_data: TSpanData
    parent_span: Span | None

    def start(mark_as_current) -> None: ...
    def finish(reset_current) -> None: ...

def trace(workflow_name, *, trace_id, group_id, metadata, disabled) -> Trace: ...

def agent_span(name, *, handoffs, output_type) -> Span[AgentSpanData]: ...
def function_span(name, *, input, output) -> Span[FunctionSpanData]: ...
def generation_span(name, *, model, input, output, usage) -> Span[GenerationSpanData]: ...
def guardrail_span(name, *, guardrail_type, input, output) -> Span[GuardrailSpanData]: ...
def handoff_span(name, *, from_agent, to_agent) -> Span[HandoffSpanData]: ...
def response_span(name, *, response_data) -> Span[ResponseSpanData]: ...
def speech_span(name, *, speech_data) -> Span[SpeechSpanData]: ...
def speech_group_span(name, *, group_data) -> Span[SpeechGroupSpanData]: ...
def transcription_span(name, *, transcription_data) -> Span[TranscriptionSpanData]: ...
def mcp_tools_span(name, *, server_label, tools) -> Span[MCPListToolsSpanData]: ...
def custom_span(name, *, custom_data) -> Span[CustomSpanData]: ...

class SpanData(abc.ABC):
    """Base class for span data types."""
    def export() -> dict[str, Any]: ...

    @property
    def type() -> str: ...

class AgentSpanData(SpanData): ...
class CustomSpanData(SpanData): ...
class FunctionSpanData(SpanData): ...
class GenerationSpanData(SpanData): ...
class GuardrailSpanData(SpanData): ...
class HandoffSpanData(SpanData): ...
class MCPListToolsSpanData(SpanData): ...
class SpeechSpanData(SpanData): ...
class SpeechGroupSpanData(SpanData): ...
class TranscriptionSpanData(SpanData): ...

class SpanError:
    message: str
    data: dict | None

def get_current_trace() -> Trace | None: ...
def get_current_span() -> Span | None: ...

class TracingProcessor:
    async def on_trace_start(trace) -> None: ...
    async def on_trace_end(trace) -> None: ...
    async def on_span_start(span) -> None: ...
    async def on_span_end(span) -> None: ...

def add_trace_processor(processor: TracingProcessor) -> None: ...
def set_trace_processors(processors: list[TracingProcessor]) -> None: ...
def set_tracing_disabled(disabled: bool) -> None: ...
def set_tracing_export_api_key(api_key: str) -> None: ...

class TraceProvider:
    """Provider for trace creation."""
    def create_trace(...) -> Trace: ...
    def create_span(...) -> Span: ...
    def register_processor(processor) -> None: ...
    def set_processors(processors) -> None: ...
    def set_disabled(disabled) -> None: ...
    def shutdown() -> None: ...

class DefaultTraceProvider(TraceProvider):
    """Default trace provider implementation."""
    ...

def get_trace_provider() -> TraceProvider: ...
```
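
A custom observability destination implements the four processor callbacks. A toy sketch that just collects events in order (plain dicts stand in for trace and span objects; illustrative only):

```python
import asyncio


class CollectingProcessor:
    """Toy trace processor: records lifecycle events in arrival order."""

    def __init__(self):
        self.events: list[str] = []

    async def on_trace_start(self, trace) -> None:
        self.events.append(f"trace_start:{trace['name']}")

    async def on_span_start(self, span) -> None:
        self.events.append(f"span_start:{span['name']}")

    async def on_span_end(self, span) -> None:
        self.events.append(f"span_end:{span['name']}")

    async def on_trace_end(self, trace) -> None:
        self.events.append(f"trace_end:{trace['name']}")


async def demo() -> list[str]:
    # The SDK would invoke these callbacks; here we drive them by hand.
    processor = CollectingProcessor()
    workflow = {"name": "workflow"}
    span = {"name": "agent_run"}
    await processor.on_trace_start(workflow)
    await processor.on_span_start(span)
    await processor.on_span_end(span)
    await processor.on_trace_end(workflow)
    return processor.events


events = asyncio.run(demo())
print(events)
```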

[Tracing](./tracing.md)

### Realtime API

Real-time audio/voice agent functionality with event-driven architecture.

```python { .api }
class RealtimeAgent:
    """Agent for real-time audio interactions."""
    ...

class RealtimeRunner:
    """Runner for realtime agents."""
    @classmethod
    async def run(...): ...

class RealtimeSession:
    """Session for realtime interactions."""
    ...
```

[Realtime API](./realtime.md)

### Voice Pipeline

Voice processing with speech-to-text and text-to-speech capabilities.

```python { .api }
class VoicePipeline:
    """Pipeline for voice processing."""
    ...

class STTModel:
    """Speech-to-text model interface."""
    async def transcribe(...): ...

class TTSModel:
    """Text-to-speech model interface."""
    async def synthesize(...): ...
```

[Voice Pipeline](./voice-pipeline.md)

### Model Context Protocol (MCP)

Integration with MCP servers for extended tool capabilities.

```python { .api }
class MCPServerStdio:
    def __init__(params: MCPServerStdioParams): ...
    async def connect(): ...
    async def cleanup(): ...

class MCPServerSse:
    def __init__(params: MCPServerSseParams): ...

class MCPServerStreamableHttp:
    def __init__(params: MCPServerStreamableHttpParams): ...

class MCPUtil:
    @staticmethod
    async def get_all_function_tools(...) -> list[Tool]: ...

def create_static_tool_filter(allowed_tools, denied_tools) -> ToolFilterStatic: ...

class ToolFilterContext:
    tool_name: str
    server_label: str
```

[Model Context Protocol (MCP)](./mcp.md)

### Items and Streaming

Run items representing agent operations and streaming events for real-time updates.

```python { .api }
class MessageOutputItem:
    raw_item: ResponseOutputMessage
    agent: Agent
    type: Literal["message_output_item"]

class ToolCallItem:
    raw_item: ToolCallItemTypes
    agent: Agent
    type: Literal["tool_call_item"]

class ToolCallOutputItem:
    raw_item: ToolCallOutputTypes
    agent: Agent
    output: Any
    type: Literal["tool_call_output_item"]

class HandoffCallItem:
    raw_item: ResponseFunctionToolCall
    agent: Agent
    type: Literal["handoff_call_item"]

class HandoffOutputItem:
    raw_item: TResponseInputItem
    agent: Agent
    source_agent: Agent
    target_agent: Agent
    type: Literal["handoff_output_item"]

class ReasoningItem:
    raw_item: ResponseReasoningItem
    agent: Agent
    type: Literal["reasoning_item"]

class ModelResponse:
    output: list[TResponseOutputItem]
    usage: Usage
    response_id: str | None

    def to_input_items() -> list[TResponseInputItem]: ...

class RawResponsesStreamEvent:
    data: TResponseStreamEvent
    type: Literal["raw_response_event"]

class RunItemStreamEvent:
    name: Literal[...]
    item: RunItem
    type: Literal["run_item_stream_event"]

class AgentUpdatedStreamEvent:
    new_agent: Agent
    type: Literal["agent_updated_stream_event"]

class ItemHelpers:
    @classmethod
    def extract_last_content(message) -> str: ...

    @classmethod
    def extract_last_text(message) -> str | None: ...

    @classmethod
    def input_to_new_input_list(input) -> list[TResponseInputItem]: ...

    @classmethod
    def text_message_outputs(items) -> str: ...
```
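
Stream consumers typically switch on each event's `type` tag. A hypothetical sketch of that dispatch, using plain dicts instead of the SDK's event classes:

```python
def describe_event(event: dict) -> str:
    # Dispatch on the event's `type` tag, as a stream consumer would.
    if event["type"] == "raw_response_event":
        return "raw model delta"
    if event["type"] == "run_item_stream_event":
        return f"new item: {event['name']}"
    if event["type"] == "agent_updated_stream_event":
        return f"agent is now {event['new_agent']}"
    return "unknown event"


# A made-up sequence of events as they might arrive from a streamed run.
events = [
    {"type": "agent_updated_stream_event", "new_agent": "Triage Agent"},
    {"type": "run_item_stream_event", "name": "tool_called"},
    {"type": "raw_response_event"},
]
print([describe_event(e) for e in events])
```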

[Items and Streaming](./items-streaming.md)

### Results and Exceptions

Run results with output, usage tracking, and comprehensive exception hierarchy.

```python { .api }
class RunResult:
    input: str | list[TResponseInputItem]
    new_items: list[RunItem]
    raw_responses: list[ModelResponse]
    final_output: Any
    input_guardrail_results: list[InputGuardrailResult]
    output_guardrail_results: list[OutputGuardrailResult]
    tool_input_guardrail_results: list[ToolInputGuardrailResult]
    tool_output_guardrail_results: list[ToolOutputGuardrailResult]
    context_wrapper: RunContextWrapper

    @property
    def last_agent() -> Agent: ...

    @property
    def last_response_id() -> str | None: ...

    def final_output_as(cls, raise_if_incorrect_type) -> T: ...
    def to_input_list() -> list[TResponseInputItem]: ...

class RunResultStreaming:
    current_agent: Agent
    current_turn: int
    max_turns: int
    is_complete: bool
    trace: Trace | None

    async def stream_events() -> AsyncIterator[StreamEvent]: ...
    def cancel(mode) -> None: ...

class AgentsException(Exception):
    run_data: RunErrorDetails | None

class MaxTurnsExceeded(AgentsException):
    message: str

class ModelBehaviorError(AgentsException):
    message: str

class UserError(AgentsException):
    message: str

class InputGuardrailTripwireTriggered(AgentsException):
    guardrail_result: InputGuardrailResult

class OutputGuardrailTripwireTriggered(AgentsException):
    guardrail_result: OutputGuardrailResult

class ToolInputGuardrailTripwireTriggered(AgentsException):
    guardrail: ToolInputGuardrail
    output: ToolGuardrailFunctionOutput

class ToolOutputGuardrailTripwireTriggered(AgentsException):
    guardrail: ToolOutputGuardrail
    output: ToolGuardrailFunctionOutput

class Usage:
    requests: int
    input_tokens: int
    input_tokens_details: InputTokensDetails
    output_tokens: int
    output_tokens_details: OutputTokensDetails
    total_tokens: int
    request_usage_entries: list[RequestUsage]

    def add(other: Usage) -> None: ...
```
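
The `Usage.add` method accumulates per-request counters into a run-level total. A minimal self-contained sketch of that accumulation (a hypothetical `ToyUsage`, not the SDK's class):

```python
from dataclasses import dataclass


@dataclass
class ToyUsage:
    """Illustrative usage counters accumulated across model requests."""
    requests: int = 0
    input_tokens: int = 0
    output_tokens: int = 0
    total_tokens: int = 0

    def add(self, other: "ToyUsage") -> None:
        # Accumulate counters in place, one call per model request.
        self.requests += other.requests
        self.input_tokens += other.input_tokens
        self.output_tokens += other.output_tokens
        self.total_tokens += other.total_tokens


run_usage = ToyUsage()
run_usage.add(ToyUsage(requests=1, input_tokens=50, output_tokens=20, total_tokens=70))
run_usage.add(ToyUsage(requests=1, input_tokens=80, output_tokens=30, total_tokens=110))
print(run_usage.total_tokens)  # 180
```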

[Results and Exceptions](./results-exceptions.md)

### Lifecycle Hooks

Callbacks for observability and control at key points in agent execution.

```python { .api }
class RunHooks[TContext]:
    async def on_llm_start(context, agent, system_prompt, input_items): ...
    async def on_llm_end(context, agent, response): ...
    async def on_agent_start(context, agent): ...
    async def on_agent_end(context, agent, output): ...
    async def on_handoff(context, from_agent, to_agent): ...
    async def on_tool_start(context, agent, tool): ...
    async def on_tool_end(context, agent, tool, result): ...

class AgentHooks[TContext]:
    async def on_start(context, agent): ...
    async def on_end(context, agent, output): ...
    async def on_handoff(context, agent, source): ...
    async def on_tool_start(context, agent, tool): ...
    async def on_tool_end(context, agent, tool, result): ...
    async def on_llm_start(context, agent, system_prompt, input_items): ...
    async def on_llm_end(context, agent, response): ...
```
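
A hooks object is just a set of async callbacks the runner awaits at the matching points in the loop. A toy logging sketch (plain strings stand in for agent and tool objects; illustrative only):

```python
import asyncio


class LoggingHooks:
    """Toy hooks object: records which callbacks fired, in order."""

    def __init__(self):
        self.log: list[str] = []

    async def on_agent_start(self, context, agent) -> None:
        self.log.append(f"start:{agent}")

    async def on_tool_start(self, context, agent, tool) -> None:
        self.log.append(f"tool:{tool}")

    async def on_agent_end(self, context, agent, output) -> None:
        self.log.append(f"end:{agent}")


async def demo() -> list[str]:
    hooks = LoggingHooks()
    # A runner would invoke these at the matching points in the agent loop;
    # here we drive them by hand to show the callback order.
    await hooks.on_agent_start(None, "Assistant")
    await hooks.on_tool_start(None, "Assistant", "get_weather")
    await hooks.on_agent_end(None, "Assistant", "done")
    return hooks.log


log = asyncio.run(demo())
print(log)
```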

[Lifecycle Hooks](./lifecycle.md)

## Configuration Functions

Global configuration for the SDK:

```python { .api }
def set_default_openai_key(key: str, use_for_tracing: bool = True) -> None:
    """
    Set the default OpenAI API key for LLM requests and, optionally, tracing.

    Parameters:
    - key: OpenAI API key string
    - use_for_tracing: Whether to use this key for tracing (default: True)
    """

def set_default_openai_client(client: AsyncOpenAI, use_for_tracing: bool = True) -> None:
    """
    Set the default OpenAI client for LLM requests.

    Parameters:
    - client: AsyncOpenAI client instance
    - use_for_tracing: Whether to use this client for tracing (default: True)
    """

def set_default_openai_api(api: Literal["chat_completions", "responses"]) -> None:
    """
    Set the default API mode (responses or chat_completions).

    Parameters:
    - api: API mode to use
    """

def enable_verbose_stdout_logging() -> None:
    """Enable verbose logging to stdout for debugging."""
```

## Utility Functions

Additional utility functions:

```python { .api }
ApplyDiffMode = Literal["default", "create"]

def apply_diff(input: str, diff: str, mode: ApplyDiffMode) -> str:
    """
    Apply a V4A diff to text.

    Parameters:
    - input: Original text
    - diff: Diff to apply
    - mode: Application mode ("default" or "create")

    Returns:
    - Modified text
    """

async def run_demo_loop(agent, *, stream, context) -> None:
    """
    Run a simple REPL loop for testing agents.

    Parameters:
    - agent: Starting agent
    - stream: Whether to stream output
    - context: Context object
    """
```

## Computer Control

Abstract interfaces for computer control (used with ComputerTool):

```python { .api }
Environment = Literal["mac", "windows", "ubuntu", "browser"]
Button = Literal["left", "right", "wheel", "back", "forward"]

class Computer:
    """Synchronous computer control interface."""

    @property
    def environment(self) -> Environment: ...

    @property
    def dimensions(self) -> tuple[int, int]: ...

    def screenshot() -> str: ...
    def click(x, y, button) -> None: ...
    def double_click(x, y) -> None: ...
    def scroll(x, y, scroll_x, scroll_y) -> None: ...
    def type(text) -> None: ...
    def wait() -> None: ...
    def move(x, y) -> None: ...
    def keypress(keys) -> None: ...
    def drag(path) -> None: ...

class AsyncComputer:
    """Asynchronous computer control interface."""

    @property
    def environment(self) -> Environment: ...

    @property
    def dimensions(self) -> tuple[int, int]: ...

    async def screenshot() -> str: ...
    async def click(x, y, button) -> None: ...
    async def double_click(x, y) -> None: ...
    async def scroll(x, y, scroll_x, scroll_y) -> None: ...
    async def type(text) -> None: ...
    async def wait() -> None: ...
    async def move(x, y) -> None: ...
    async def keypress(keys) -> None: ...
    async def drag(path) -> None: ...
```
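
An implementation of this interface translates each method into real desktop or browser actions. As a minimal hypothetical sketch, a stub that records actions instead of performing them (useful for testing the shape of the interface):

```python
class LoggingComputer:
    """Toy stand-in for the synchronous Computer interface: records actions
    instead of driving a real desktop. Illustrative only."""

    def __init__(self):
        self.actions: list[str] = []

    @property
    def environment(self) -> str:
        return "browser"

    @property
    def dimensions(self) -> tuple:
        return (1280, 720)

    def screenshot(self) -> str:
        # A real implementation would return a base64-encoded screenshot.
        return ""

    def click(self, x: int, y: int, button: str = "left") -> None:
        self.actions.append(f"click({x},{y},{button})")

    def type(self, text: str) -> None:
        self.actions.append(f"type({text!r})")


computer = LoggingComputer()
computer.click(100, 200, "left")
computer.type("hello")
print(computer.actions)
```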

## Version

```python { .api }
__version__: str  # Package version string
```