# Core Agent System

The core agent system provides the fundamental building blocks for creating and running AI agents. This includes the Agent class for configuration, the Runner for execution, and supporting types for model settings, prompts, and agent output schemas.

## Capabilities

### Agent Class

The primary class for defining agent behavior, tools, handoffs, and configuration.

```python { .api }
class Agent[TContext]:
    """
    Main agent class with instructions, tools, guardrails, and handoffs.

    Type Parameters:
    - TContext: Type of the context object passed to the agent

    Attributes:
    - name: str - Agent name for identification
    - instructions: str | Callable | None - System prompt or dynamic function
    - prompt: Prompt | DynamicPromptFunction | None - Prompt configuration
    - tools: list[Tool] - Tools available to the agent
    - handoffs: list[Agent | Handoff] - Sub-agents for delegation
    - model: str | Model | None - Model identifier or instance
    - model_settings: ModelSettings - Model configuration
    - mcp_servers: list[MCPServer] - MCP servers providing extra tools
    - mcp_config: MCPConfig - MCP configuration
    - input_guardrails: list[InputGuardrail] - Input validation checks
    - output_guardrails: list[OutputGuardrail] - Output validation checks
    - output_type: type[Any] | AgentOutputSchemaBase | None - Structured output schema
    - hooks: AgentHooks | None - Lifecycle callbacks
    - tool_use_behavior: Literal | StopAtTools | ToolsToFinalOutputFunction - Tool handling
    - reset_tool_choice: bool - Reset tool choice after a tool call
    - handoff_description: str | None - Description shown to agents that hand off to this one
    """

    def clone(**kwargs) -> Agent:
        """
        Create a modified copy of the agent.

        Parameters:
        - **kwargs: Agent attributes to override

        Returns:
        - Agent: New agent instance with the specified changes
        """

    def as_tool(...) -> Tool:
        """
        Convert the agent into a tool usable by other agents.

        Returns:
        - Tool: Tool representation of this agent
        """

    def get_system_prompt(context) -> str | None:
        """
        Get the resolved system prompt for the agent.

        Parameters:
        - context: Context object

        Returns:
        - str | None: Resolved system prompt
        """

    def get_prompt(context) -> ResponsePromptParam | None:
        """
        Get the prompt configuration for the agent.

        Parameters:
        - context: Context object

        Returns:
        - ResponsePromptParam | None: Prompt configuration
        """

    def get_all_tools(context) -> list[Tool]:
        """
        Get all enabled tools, including MCP tools.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: All available tools
        """

    def get_mcp_tools(context) -> list[Tool]:
        """
        Get MCP tools for the agent.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: MCP tools
        """
```

Usage example:

```python
from agents import Agent, ModelSettings, function_tool

@function_tool
def search_knowledge_base(query: str) -> str:
    """Search the knowledge base."""
    return f"Results for: {query}"

agent = Agent(
    name="Research Assistant",
    instructions="You help users find information.",
    tools=[search_knowledge_base],
    model="gpt-4o",
    model_settings=ModelSettings(temperature=0.7)
)

# Clone with modifications
strict_agent = agent.clone(
    name="Strict Research Assistant",
    model_settings=ModelSettings(temperature=0.0)
)
```

### Agent Base Class

Base class for Agent and RealtimeAgent with shared functionality.

```python { .api }
class AgentBase[TContext]:
    """
    Base class for Agent and RealtimeAgent.

    Type Parameters:
    - TContext: Type of the context object

    Attributes:
    - name: str - Agent name
    - handoff_description: str | None - Description for handoffs
    - tools: list[Tool] - Available tools
    - mcp_servers: list[MCPServer] - MCP servers
    - mcp_config: MCPConfig - MCP configuration
    """

    def get_mcp_tools(context) -> list[Tool]:
        """
        Get MCP tools for the agent.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: MCP tools
        """

    def get_all_tools(context) -> list[Tool]:
        """
        Get all enabled tools.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: All enabled tools
        """
```

### Runner Class

Main class for running agent workflows in synchronous, asynchronous, and streaming modes.

```python { .api }
class Runner:
    """Main class for running agent workflows."""

    @classmethod
    async def run(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResult:
        """
        Run an agent workflow asynchronously.

        Parameters:
        - starting_agent: Agent to start with
        - input: User input as a string or message list
        - context: Optional context object
        - max_turns: Maximum turns in the agent loop (default: 10)
        - hooks: Lifecycle hooks for observability
        - run_config: Run-level configuration
        - previous_response_id: Response ID for continuation
        - conversation_id: Conversation ID for the OpenAI Conversations API
        - session: Session for conversation history

        Returns:
        - RunResult: Result containing output, items, and metadata
        """

    @classmethod
    def run_sync(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResult:
        """
        Run an agent workflow synchronously.

        Parameters:
        - Same as run()

        Returns:
        - RunResult: Result containing output, items, and metadata
        """

    @classmethod
    def run_streamed(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResultStreaming:
        """
        Run an agent workflow in streaming mode.

        Parameters:
        - Same as run()

        Returns:
        - RunResultStreaming: Streaming result with an event iterator
        """
```

Usage example:

```python
import asyncio

from agents import Agent, Runner

agent = Agent(
    name="Assistant",
    instructions="You are helpful."
)

# Asynchronous
async def main():
    result = await Runner.run(agent, "Hello!")
    print(result.final_output)

asyncio.run(main())

# Synchronous
result = Runner.run_sync(agent, "Hello!")
print(result.final_output)

# Streaming
async def stream_main():
    result = Runner.run_streamed(agent, "Tell me a story")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            print(event.data)

asyncio.run(stream_main())
```

### Run Configuration

Configuration for an entire agent run, including model overrides, guardrails, and tracing settings.

```python { .api }
class RunConfig:
    """
    Configuration for an entire agent run.

    Attributes:
    - model: str | Model | None - Override model for all agents
    - model_provider: ModelProvider - Model provider (default: MultiProvider)
    - model_settings: ModelSettings | None - Global model settings
    - handoff_input_filter: HandoffInputFilter | None - Global handoff filter
    - nest_handoff_history: bool - Wrap history in a single message
    - handoff_history_mapper: HandoffHistoryMapper | None - Custom history mapper
    - input_guardrails: list[InputGuardrail] | None - Run-level input guardrails
    - output_guardrails: list[OutputGuardrail] | None - Run-level output guardrails
    - tracing_disabled: bool - Disable tracing
    - trace_include_sensitive_data: bool - Include sensitive data in traces
    - workflow_name: str - Name for tracing
    - trace_id: str | None - Custom trace ID
    - group_id: str | None - Grouping identifier for traces
    - trace_metadata: dict[str, Any] | None - Additional trace metadata
    - session_input_callback: SessionInputCallback | None - Session history handler
    - call_model_input_filter: CallModelInputFilter | None - Pre-model filter
    """
```

Usage example:

```python
from agents import (
    Agent,
    GuardrailFunctionOutput,
    ModelSettings,
    RunConfig,
    Runner,
    input_guardrail,
)

@input_guardrail
def content_filter(context, agent, input) -> GuardrailFunctionOutput:
    """Filter inappropriate content."""
    # Check the content here; set tripwire_triggered=True to block the run
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=False)

config = RunConfig(
    model="gpt-4o-mini",
    model_settings=ModelSettings(temperature=0.5),
    input_guardrails=[content_filter],
    workflow_name="customer_service",
    trace_include_sensitive_data=False
)

agent = Agent(name="Assistant", instructions="You are helpful.")
result = Runner.run_sync(agent, "Hello", run_config=config)
```

### Model Settings

LLM configuration settings for temperature, token limits, and more.

```python { .api }
class MCPToolChoice:
    """MCP-specific tool choice configuration."""
    server_label: str
    name: str

ToolChoice = Literal["auto", "required", "none"] | str | MCPToolChoice | None

class ModelSettings:
    """
    LLM configuration settings.

    Attributes:
    - temperature: float | None - Sampling temperature (0-2)
    - top_p: float | None - Nucleus sampling parameter
    - frequency_penalty: float | None - Frequency penalty (-2 to 2)
    - presence_penalty: float | None - Presence penalty (-2 to 2)
    - tool_choice: ToolChoice | None - Tool selection mode
    - parallel_tool_calls: bool | None - Allow parallel tool calls
    - truncation: Literal["auto", "disabled"] | None - Truncation strategy
    - max_tokens: int | None - Max output tokens
    - reasoning: Reasoning | None - Reasoning configuration
    - verbosity: Literal["low", "medium", "high"] | None - Response verbosity
    - metadata: dict[str, str] | None - Request metadata
    - store: bool | None - Store response
    - prompt_cache_retention: Literal["in_memory", "24h"] | None - Cache retention
    - include_usage: bool | None - Include usage chunk
    - response_include: list[ResponseIncludable | str] | None - Additional output data
    - top_logprobs: int | None - Number of top logprobs
    - extra_query: Query | None - Additional query fields
    - extra_body: Body | None - Additional body fields
    - extra_headers: Headers | None - Additional headers
    - extra_args: dict[str, Any] | None - Arbitrary kwargs
    """

    def resolve(override: ModelSettings) -> ModelSettings:
        """
        Merge with override settings.

        Parameters:
        - override: Settings that take precedence over this instance

        Returns:
        - ModelSettings: Merged settings
        """

    def to_json_dict() -> dict[str, Any]:
        """
        Convert to a JSON-serializable dict.

        Returns:
        - dict: JSON-serializable dictionary
        """
```

Usage example:

```python
from agents import ModelSettings

settings = ModelSettings(
    temperature=0.7,
    max_tokens=1000,
    tool_choice="auto",
    parallel_tool_calls=True
)

# Override specific settings
strict_settings = settings.resolve(
    ModelSettings(temperature=0.0)
)
```

### Tool Use Behavior

Configuration for controlling how tool calls are handled.

```python { .api }
class StopAtTools:
    """
    Configuration to stop the agent at specific tool calls.

    Attributes:
    - stop_at_tool_names: list[str] - Tool names that trigger a stop
    """

class ToolsToFinalOutputResult:
    """
    Result of a tools-to-final-output function.

    Attributes:
    - is_final_output: bool - Whether this is the final output
    - final_output: Any | None - The final output value
    """
```

Type alias for custom tool-to-output conversion:

```python { .api }
ToolsToFinalOutputFunction = Callable[
    [RunContextWrapper, list[FunctionToolResult]],
    MaybeAwaitable[ToolsToFinalOutputResult]
]
```

Usage example:

```python
from agents import Agent, StopAtTools, ToolsToFinalOutputResult

# Stop when a specific tool is called
agent = Agent(
    name="Assistant",
    tools=[get_weather, book_flight],
    tool_use_behavior=StopAtTools(
        stop_at_tool_names=["book_flight"]
    )
)

# Custom tool result handler
async def tools_to_output(ctx, results):
    # Process tool results
    return ToolsToFinalOutputResult(
        is_final_output=True,
        final_output=results[0].output
    )

agent = Agent(
    name="Assistant",
    tools=[search],
    tool_use_behavior=tools_to_output
)
```

### MCP Configuration

Configuration for MCP server integration.

```python { .api }
class MCPConfig:
    """
    Configuration for MCP servers.

    Attributes:
    - convert_schemas_to_strict: NotRequired[bool] - Convert to strict schemas
    """
```
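
Since `MCPConfig` is declared with `NotRequired` fields (a TypedDict-style mapping), a plain dict satisfies it. The commented `Agent` construction below is a hedged sketch in which `my_mcp_server` is a hypothetical, already-configured `MCPServer` instance:

```python
# A plain mapping satisfies MCPConfig:
mcp_config = {"convert_schemas_to_strict": True}

# Hypothetical attachment to an agent (assumes the agents package and a
# configured MCPServer instance named my_mcp_server):
# agent = Agent(
#     name="Assistant",
#     mcp_servers=[my_mcp_server],
#     mcp_config=mcp_config,
# )
print(mcp_config["convert_schemas_to_strict"])  # → True
```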

### Prompts

Prompt configuration and dynamic prompt generation.

```python { .api }
class Prompt:
    """
    Prompt configuration for OpenAI models.

    Attributes:
    - id: str - Prompt ID
    - version: NotRequired[str] - Prompt version
    - variables: NotRequired[dict[str, ResponsesPromptVariables]] - Prompt variables
    """

class GenerateDynamicPromptData:
    """
    Input to a dynamic prompt function.

    Attributes:
    - context: RunContextWrapper - Run context
    - agent: Agent - Agent the prompt is for
    """
```

Type alias for dynamic prompt generation:

```python { .api }
DynamicPromptFunction = Callable[
    [GenerateDynamicPromptData],
    MaybeAwaitable[Prompt]
]
```

Utility class:

```python { .api }
class PromptUtil:
    """Utility for prompt handling."""

    @staticmethod
    def to_model_input(
        prompt: Prompt,
        context: RunContextWrapper,
        agent: Agent
    ) -> ResponsePromptParam | None:
        """
        Convert a prompt to model input format.

        Parameters:
        - prompt: Prompt configuration
        - context: Run context
        - agent: Agent instance

        Returns:
        - ResponsePromptParam | None: Model input format
        """
```

Usage example:

```python
from agents import Agent, Prompt

# Static prompt
agent = Agent(
    name="Assistant",
    prompt=Prompt(
        id="my-prompt-id",
        version="1.0",
        variables={"style": "professional"}
    )
)

# Dynamic prompt
async def generate_prompt(data):
    # Generate the prompt based on the run context and agent
    return Prompt(
        id="dynamic-prompt",
        variables={"agent_name": data.agent.name}
    )

agent = Agent(
    name="Dynamic Assistant",
    prompt=generate_prompt
)
```

### Agent Output Schema

JSON schema configuration for structured agent outputs.

```python { .api }
class AgentOutputSchemaBase:
    """Base class for output schemas."""

    def is_plain_text() -> bool:
        """Check if output is plain text."""

    def name() -> str:
        """Get the type name."""

    def json_schema() -> dict[str, Any]:
        """Get the JSON schema."""

    def is_strict_json_schema() -> bool:
        """Check if strict mode is enabled."""

    def validate_json(json_str: str) -> Any:
        """Validate and parse JSON."""

class AgentOutputSchema(AgentOutputSchemaBase):
    """
    JSON schema for agent output.

    Attributes:
    - output_type: type[Any] - Output type
    """

    def is_plain_text() -> bool:
        """
        Check if output is plain text.

        Returns:
        - bool: True if plain text
        """

    def json_schema() -> dict[str, Any]:
        """
        Get the JSON schema.

        Returns:
        - dict: JSON schema
        """

    def is_strict_json_schema() -> bool:
        """
        Check if strict mode is enabled.

        Returns:
        - bool: True if strict
        """

    def validate_json(json_str: str) -> Any:
        """
        Validate and parse JSON.

        Parameters:
        - json_str: JSON string

        Returns:
        - Any: Parsed and validated object
        """

    def name() -> str:
        """
        Get the type name.

        Returns:
        - str: Type name
        """
```

Usage example:

```python
from agents import Agent, Runner
from pydantic import BaseModel

class MovieRecommendation(BaseModel):
    title: str
    year: int
    rating: float
    reason: str

agent = Agent(
    name="Movie Recommender",
    instructions="Recommend movies based on user preferences.",
    output_type=MovieRecommendation
)

result = Runner.run_sync(agent, "Recommend a sci-fi movie")
recommendation = result.final_output_as(MovieRecommendation)
print(f"{recommendation.title} ({recommendation.year})")
```

### Run Context

Context wrapper providing access to agent state and utilities during execution.

```python { .api }
class RunContextWrapper[TContext]:
    """
    Context wrapper for agent execution.

    Type Parameters:
    - TContext: Type of the user context

    Provides access to:
    - User context object
    - Current agent
    - Run configuration
    - Trace information
    """
```
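
For illustration, the access pattern inside a context-aware tool can be sketched as follows. `UserInfo` and `fetch_profile` are hypothetical names, and a `SimpleNamespace` stands in for the wrapper; with the real SDK the function would be decorated with `@function_tool` and its first parameter typed as `RunContextWrapper[UserInfo]`:

```python
from dataclasses import dataclass
from types import SimpleNamespace

@dataclass
class UserInfo:
    user_id: str
    tier: str = "free"

def fetch_profile(wrapper, query: str) -> str:
    # wrapper.context is the object passed as `context=` to Runner.run;
    # it is visible to your code, not to the model.
    user = wrapper.context
    return f"[{user.tier}] profile lookup for {user.user_id}: {query}"

# Stand-in wrapper for demonstration purposes
wrapper = SimpleNamespace(context=UserInfo(user_id="u_42", tier="pro"))
print(fetch_profile(wrapper, "billing history"))  # → [pro] profile lookup for u_42: billing history
```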

### Call Model Data

Data structures for model input filtering.

```python { .api }
class CallModelData[TContext]:
    """
    Data passed to call_model_input_filter.

    Attributes:
    - model_data: ModelInputData - Model input data
    - agent: Agent[TContext] - Current agent
    - context: TContext | None - Context object
    """

class ModelInputData:
    """
    Container for model input.

    Attributes:
    - input: list[TResponseInputItem] - Input items
    - instructions: str | None - System instructions
    """
```

Type alias:

```python { .api }
CallModelInputFilter = Callable[
    [CallModelData],
    MaybeAwaitable[ModelInputData]
]
```

### Run Options

Type definition for run parameters.

```python { .api }
class RunOptions[TContext]:
    """
    Arguments for Runner methods.

    Attributes:
    - context: TContext | None - Context object
    - max_turns: int - Maximum turns
    - hooks: RunHooks | None - Lifecycle hooks
    - run_config: RunConfig | None - Run configuration
    - previous_response_id: str | None - Response ID for continuation
    - conversation_id: str | None - Conversation ID
    - session: Session | None - Session for history
    """
```
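
Because `RunOptions` mirrors the keyword arguments of the `Runner` methods, one options mapping can be reused across `run()`, `run_sync()`, and `run_streamed()`. A minimal sketch (the commented calls assume an existing `agent` and the agents package):

```python
# Shared options for repeated Runner calls; keys match RunOptions attributes.
options = {
    "max_turns": 5,
    "hooks": None,
    "run_config": None,
    "session": None,
}

# result = Runner.run_sync(agent, "Hello", **options)
# streamed = Runner.run_streamed(agent, "Hello again", **options)
print(sorted(options))  # → ['hooks', 'max_turns', 'run_config', 'session']
```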

### Constants

```python { .api }
DEFAULT_MAX_TURNS: int = 10
```

The default maximum number of turns for an agent run.

## Type Aliases

```python { .api }
TContext = TypeVar("TContext")  # User-defined context type
```
763