# Items and Streaming

Run items represent individual operations and outputs during agent execution, while streaming events provide real-time updates as agents process inputs. These enable fine-grained control over agent workflows and real-time UX.

## Capabilities

### Run Item Types

Union type encompassing all possible items during agent execution.

```python { .api }
RunItem = Union[
    MessageOutputItem,
    HandoffCallItem,
    HandoffOutputItem,
    ToolCallItem,
    ToolCallOutputItem,
    ReasoningItem,
    MCPListToolsItem,
    MCPApprovalRequestItem,
    MCPApprovalResponseItem
]
```
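As a quick illustration, the string `type` tag on each item makes it easy to tally what a run produced. The snippet below uses plain dicts as stand-ins for real `RunItem` objects (which expose `type` as an attribute, not a dict key):

```python
from collections import Counter

# Plain dicts stand in for RunItem objects here; real items expose a
# string `type` attribute with the literal values from the union above.
items = [
    {"type": "message_output_item"},
    {"type": "tool_call_item"},
    {"type": "tool_call_output_item"},
    {"type": "message_output_item"},
]

counts = Counter(i["type"] for i in items)
print(counts["message_output_item"])  # 2
```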
### Message Output Item

LLM text responses.

```python { .api }
class MessageOutputItem:
    """
    Message from LLM.

    Attributes:
    - raw_item: ResponseOutputMessage - Raw OpenAI message
    - agent: Agent - Agent that produced message
    - type: Literal["message_output_item"] - Item type
    """
```
### Tool Call Item

Tool invocations from the LLM.

```python { .api }
class ToolCallItem:
    """
    Tool call (function, computer, etc.).

    Attributes:
    - raw_item: ToolCallItemTypes - Raw tool call
    - agent: Agent - Agent making the call
    - type: Literal["tool_call_item"] - Item type
    """
```
### Tool Call Output Item

Results from tool executions.

```python { .api }
class ToolCallOutputItem:
    """
    Output of tool call.

    Attributes:
    - raw_item: ToolCallOutputTypes - Raw tool output
    - agent: Agent - Agent that executed tool
    - output: Any - Actual output value (parsed)
    - type: Literal["tool_call_output_item"] - Item type
    """
```
### Handoff Items

Items related to agent handoffs.

```python { .api }
class HandoffCallItem:
    """
    Tool call for handoff.

    Attributes:
    - raw_item: ResponseFunctionToolCall - Raw handoff call
    - agent: Agent - Agent initiating handoff
    - type: Literal["handoff_call_item"] - Item type
    """

class HandoffOutputItem:
    """
    Output of handoff.

    Attributes:
    - raw_item: TResponseInputItem - Raw handoff output
    - agent: Agent - Current agent after handoff
    - source_agent: Agent - Agent that initiated handoff
    - target_agent: Agent - Agent that received handoff
    - type: Literal["handoff_output_item"] - Item type
    """
```
### Reasoning Item

Reasoning content from models that support reasoning.

```python { .api }
class ReasoningItem:
    """
    Reasoning item from model.

    Attributes:
    - raw_item: ResponseReasoningItem - Raw reasoning content
    - agent: Agent - Agent producing reasoning
    - type: Literal["reasoning_item"] - Item type
    """
```
### MCP Items

Items related to MCP operations.

```python { .api }
class MCPListToolsItem:
    """
    MCP list tools call.

    Attributes:
    - raw_item: McpListTools - Raw MCP tools list
    - agent: Agent - Agent listing tools
    - type: Literal["mcp_list_tools_item"] - Item type
    """

class MCPApprovalRequestItem:
    """
    MCP approval request.

    Attributes:
    - raw_item: McpApprovalRequest - Raw approval request
    - agent: Agent - Agent requesting approval
    - type: Literal["mcp_approval_request_item"] - Item type
    """

class MCPApprovalResponseItem:
    """
    MCP approval response.

    Attributes:
    - raw_item: McpApprovalResponse - Raw approval response
    - agent: Agent - Agent receiving response
    - type: Literal["mcp_approval_response_item"] - Item type
    """
```
### Model Response

Container for LLM responses with usage tracking.

```python { .api }
class ModelResponse:
    """
    LLM response with usage.

    Attributes:
    - output: list[TResponseOutputItem] - Model outputs
    - usage: Usage - Usage information
    - response_id: str | None - Response ID for continuation
    """

    def to_input_items(self) -> list[TResponseInputItem]:
        """
        Convert to input format for next turn.

        Returns:
        - list[TResponseInputItem]: Items formatted as inputs
        """
```
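The continuation loop that `to_input_items()` supports can be sketched without the SDK. `FakeModelResponse` below is a hypothetical stand-in, not the real class; it only mirrors the pass-through shape of the method:

```python
from dataclasses import dataclass

# Hypothetical stand-in for ModelResponse, used only to illustrate how
# one turn's outputs flow back into the next turn's input list.
@dataclass
class FakeModelResponse:
    output: list

    def to_input_items(self) -> list:
        # The real method re-shapes raw outputs into the input item
        # schema; this sketch passes them through unchanged.
        return list(self.output)

history = [{"role": "user", "content": "Hello"}]
turn = FakeModelResponse(output=[{"role": "assistant", "content": "Hi there!"}])
history += turn.to_input_items()
history.append({"role": "user", "content": "Tell me more"})
print(len(history))  # 3
```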
### Item Helpers

Utility functions for working with items.

```python { .api }
class ItemHelpers:
    """Utility class for item manipulation."""

    @classmethod
    def extract_last_content(cls, message: MessageOutputItem) -> str:
        """
        Extract text or refusal from message.

        Parameters:
        - message: Message item

        Returns:
        - str: Text content or refusal
        """

    @classmethod
    def extract_last_text(cls, message: MessageOutputItem) -> str | None:
        """
        Extract text only from message.

        Parameters:
        - message: Message item

        Returns:
        - str | None: Text content or None
        """

    @classmethod
    def input_to_new_input_list(cls, input: str | list[TResponseInputItem]) -> list[TResponseInputItem]:
        """
        Convert to input list format.

        Parameters:
        - input: String or input items

        Returns:
        - list[TResponseInputItem]: Normalized input list
        """

    @classmethod
    def text_message_outputs(cls, items: list[RunItem]) -> str:
        """
        Concatenate text from items.

        Parameters:
        - items: Run items

        Returns:
        - str: Concatenated text
        """

    @classmethod
    def text_message_output(cls, message: MessageOutputItem) -> str:
        """
        Extract text from message.

        Parameters:
        - message: Message item

        Returns:
        - str: Text content
        """

    @classmethod
    def tool_call_output_item(cls, tool_call, output) -> FunctionCallOutput:
        """
        Create output item for tool call.

        Parameters:
        - tool_call: Tool call
        - output: Tool output

        Returns:
        - FunctionCallOutput: Output item
        """
```
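The normalization that `input_to_new_input_list` performs can be sketched as follows. `normalize_input` is an illustrative stand-in, not the SDK function:

```python
# Illustrative stand-in for ItemHelpers.input_to_new_input_list: a bare
# string becomes a single user message, a list is shallow-copied.
def normalize_input(user_input):
    if isinstance(user_input, str):
        return [{"role": "user", "content": user_input}]
    return list(user_input)

print(normalize_input("hi"))  # [{'role': 'user', 'content': 'hi'}]
```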
## Streaming

### Stream Events

Event types emitted during streaming execution.

```python { .api }
StreamEvent = Union[
    RawResponsesStreamEvent,
    RunItemStreamEvent,
    AgentUpdatedStreamEvent
]
```
### Raw Responses Stream Event

Raw streaming events from the LLM.

```python { .api }
class RawResponsesStreamEvent:
    """
    Raw streaming event from LLM.

    Attributes:
    - data: TResponseStreamEvent - Raw event data
    - type: Literal["raw_response_event"] - Event type
    """
```
### Run Item Stream Event

Events wrapping run items.

```python { .api }
class RunItemStreamEvent:
    """
    Event wrapping a RunItem.

    Attributes:
    - name: Literal[...] - Event name (e.g., "message_output_created", "tool_called")
    - item: RunItem - The created item
    - type: Literal["run_item_stream_event"] - Event type
    """
```

Event names:
- `"message_output_created"`
- `"tool_called"`
- `"tool_output_created"`
- `"handoff_called"`
- `"handoff_output_created"`
- `"reasoning_created"`
- `"mcp_list_tools_created"`
- `"mcp_approval_request_created"`
- `"mcp_approval_response_created"`
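Since `name` is a plain string, a dispatch table is a convenient way to route these events. The handler functions below are illustrative, not part of the SDK:

```python
# Map event names to handlers; unlisted names fall through to a no-op.
def on_message(item):
    return f"message: {item}"

def on_tool_call(item):
    return f"tool call: {item}"

HANDLERS = {
    "message_output_created": on_message,
    "tool_called": on_tool_call,
}

def handle(name, item):
    # Unknown event names are silently ignored, keeping the consumer
    # forward-compatible with names added in later SDK versions.
    return HANDLERS.get(name, lambda _item: None)(item)

print(handle("tool_called", "get_weather"))  # tool call: get_weather
```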
### Agent Updated Stream Event

Events for agent changes (handoffs).

```python { .api }
class AgentUpdatedStreamEvent:
    """
    Event for new agent.

    Attributes:
    - new_agent: Agent - The new agent
    - type: Literal["agent_updated_stream_event"] - Event type
    """
```
### Streaming Usage

Stream agent execution in real-time:

```python
from agents import Agent, Runner
import asyncio

agent = Agent(name="Assistant", instructions="Tell a story")

async def stream_example():
    result = Runner.run_streamed(agent, "Tell me a short story")

    async for event in result.stream_events():
        if event.type == "raw_response_event":
            # Raw LLM chunks
            if hasattr(event.data, "delta"):
                print(event.data.delta, end="", flush=True)

        elif event.type == "run_item_stream_event":
            # High-level items
            if event.name == "message_output_created":
                print(f"\nMessage: {event.item}")
            elif event.name == "tool_called":
                print(f"\nTool called: {event.item}")

        elif event.type == "agent_updated_stream_event":
            # Agent changed (handoff)
            print(f"\nNow running: {event.new_agent.name}")

asyncio.run(stream_example())
```
### Streaming with Progress

Show progress during long-running operations:

```python
async def stream_with_progress():
    result = Runner.run_streamed(agent, "Complex task")

    tool_calls = 0
    async for event in result.stream_events():
        if event.type == "run_item_stream_event":
            if event.name == "tool_called":
                tool_calls += 1
                print(f"Tool calls: {tool_calls}", end="\r")

    print(f"\nCompleted with {tool_calls} tool calls")
    print(f"Final output: {result.final_output}")
```
### Cancelling Streaming

Cancel streaming execution:

```python
async def cancellable_stream():
    result = Runner.run_streamed(agent, "Long task")

    try:
        count = 0
        async for event in result.stream_events():
            count += 1
            if count > 100:
                # Cancel after 100 events
                result.cancel(mode="immediate")  # or "after_turn"
                break
    except asyncio.CancelledError:
        print("Stream cancelled")
```
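Breaking out of the `async for` loop is itself a form of early exit: leaving the loop closes the async generator. The SDK-free sketch below, with a hypothetical `fake_events` source standing in for `result.stream_events()`, shows the pattern:

```python
import asyncio

# Hypothetical event source standing in for result.stream_events().
async def fake_events():
    for i in range(1000):
        yield i
        await asyncio.sleep(0)

async def consume(limit: int) -> int:
    count = 0
    async for _ in fake_events():
        count += 1
        if count >= limit:
            break  # exiting the loop closes the generator
    return count

print(asyncio.run(consume(100)))  # 100
```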
## Type Aliases

```python { .api }
TResponseInputItem = ...  # OpenAI SDK type
TResponseOutputItem = ...  # OpenAI SDK type
TResponseStreamEvent = ...  # OpenAI SDK type
TResponse = ...  # OpenAI SDK type

ToolCallItemTypes = Union[...]  # Union of tool call types
ToolCallOutputTypes = Union[...]  # Union of tool output types
```
## Usage Patterns
### Inspecting Run Items

```python
from agents import ItemHelpers, MessageOutputItem, Runner, ToolCallItem, ToolCallOutputItem

result = Runner.run_sync(agent, "What's 2+2?")

# All items from the run
for item in result.new_items:
    if isinstance(item, MessageOutputItem):
        print(f"Message: {ItemHelpers.extract_last_text(item)}")
    elif isinstance(item, ToolCallItem):
        print(f"Tool called: {item.raw_item}")
    elif isinstance(item, ToolCallOutputItem):
        print(f"Tool output: {item.output}")
```
### Building Conversation History

```python
# Use to_input_list() for manual history management
result1 = Runner.run_sync(agent, "Hello")
history = result1.to_input_list()

result2 = Runner.run_sync(agent, history + [{"role": "user", "content": "How are you?"}])
```
### Streaming UI Updates

```python
async def stream_to_ui(user_input):
    """Stream updates to UI in real-time."""
    result = Runner.run_streamed(agent, user_input)

    current_message = ""
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            if hasattr(event.data, "delta"):
                # Update UI with new text
                current_message += event.data.delta
                ui.update_message(current_message)

        elif event.type == "run_item_stream_event":
            if event.name == "tool_called":
                ui.show_tool_notification(event.item)

    ui.mark_complete(result.final_output)
```
## Best Practices

1. **Use Streaming**: Enable real-time UX for long-running agents
2. **Handle All Event Types**: Process every stream event type for robustness
3. **Progress Indicators**: Show progress during tool calls and handoffs
4. **Error Handling**: Handle cancellation and errors gracefully
5. **Item Inspection**: Use ItemHelpers for consistent item processing
6. **History Management**: Prefer sessions over manual history for most cases
7. **Type Checking**: Use isinstance() to safely handle different item types
8. **Performance**: stream_events() is an async generator; consume it promptly