tessl/pypi-openai-agents

Lightweight framework for building multi-agent workflows with LLMs, supporting handoffs, guardrails, tools, and 100+ LLM providers

docs/items-streaming.md

Items and Streaming

Run items represent individual operations and outputs during agent execution, while streaming events provide real-time updates as agents process inputs. These enable fine-grained control over agent workflows and real-time UX.

Capabilities

Run Item Types

Union type encompassing all possible items during agent execution.

RunItem = Union[
    MessageOutputItem,
    HandoffCallItem,
    HandoffOutputItem,
    ToolCallItem,
    ToolCallOutputItem,
    ReasoningItem,
    MCPListToolsItem,
    MCPApprovalRequestItem,
    MCPApprovalResponseItem
]
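
Every run item carries a string `type` literal alongside its class, so mixed lists of items can be dispatched on either. A minimal sketch using a plain dataclass as a stand-in for the SDK classes (`MockItem` is hypothetical, not an SDK type):

```python
from dataclasses import dataclass

@dataclass
class MockItem:
    # Stand-in for a RunItem; real items also carry raw_item and agent.
    type: str
    payload: str

def count_by_type(items: list[MockItem]) -> dict[str, int]:
    """Tally items by their `type` literal, mirroring RunItem dispatch."""
    counts: dict[str, int] = {}
    for item in items:
        counts[item.type] = counts.get(item.type, 0) + 1
    return counts

run = [
    MockItem("message_output_item", "Hello"),
    MockItem("tool_call_item", "get_weather"),
    MockItem("tool_call_output_item", "72F"),
    MockItem("message_output_item", "It is 72F"),
]
print(count_by_type(run))
```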

Message Output Item

LLM text responses.

class MessageOutputItem:
    """
    Message from LLM.

    Attributes:
    - raw_item: ResponseOutputMessage - Raw OpenAI message
    - agent: Agent - Agent that produced message
    - type: Literal["message_output_item"] - Item type
    """

Tool Call Item

Tool invocations from the LLM.

class ToolCallItem:
    """
    Tool call (function, computer, etc.).

    Attributes:
    - raw_item: ToolCallItemTypes - Raw tool call
    - agent: Agent - Agent making the call
    - type: Literal["tool_call_item"] - Item type
    """

Tool Call Output Item

Results from tool executions.

class ToolCallOutputItem:
    """
    Output of tool call.

    Attributes:
    - raw_item: ToolCallOutputTypes - Raw tool output
    - agent: Agent - Agent that executed tool
    - output: Any - Actual output value (parsed)
    - type: Literal["tool_call_output_item"] - Item type
    """

Handoff Items

Items related to agent handoffs.

class HandoffCallItem:
    """
    Tool call for handoff.

    Attributes:
    - raw_item: ResponseFunctionToolCall - Raw handoff call
    - agent: Agent - Agent initiating handoff
    - type: Literal["handoff_call_item"] - Item type
    """

class HandoffOutputItem:
    """
    Output of handoff.

    Attributes:
    - raw_item: TResponseInputItem - Raw handoff output
    - agent: Agent - Current agent after handoff
    - source_agent: Agent - Agent that initiated handoff
    - target_agent: Agent - Agent that received handoff
    - type: Literal["handoff_output_item"] - Item type
    """

Reasoning Item

Reasoning content from models that support reasoning.

class ReasoningItem:
    """
    Reasoning item from model.

    Attributes:
    - raw_item: ResponseReasoningItem - Raw reasoning content
    - agent: Agent - Agent producing reasoning
    - type: Literal["reasoning_item"] - Item type
    """

MCP Items

Items related to MCP operations.

class MCPListToolsItem:
    """
    MCP list tools call.

    Attributes:
    - raw_item: McpListTools - Raw MCP tools list
    - agent: Agent - Agent listing tools
    - type: Literal["mcp_list_tools_item"] - Item type
    """

class MCPApprovalRequestItem:
    """
    MCP approval request.

    Attributes:
    - raw_item: McpApprovalRequest - Raw approval request
    - agent: Agent - Agent requesting approval
    - type: Literal["mcp_approval_request_item"] - Item type
    """

class MCPApprovalResponseItem:
    """
    MCP approval response.

    Attributes:
    - raw_item: McpApprovalResponse - Raw approval response
    - agent: Agent - Agent receiving response
    - type: Literal["mcp_approval_response_item"] - Item type
    """

Model Response

Container for LLM responses with usage tracking.

class ModelResponse:
    """
    LLM response with usage.

    Attributes:
    - output: list[TResponseOutputItem] - Model outputs
    - usage: Usage - Usage information
    - response_id: str | None - Response ID for continuation
    """

    def to_input_items(self) -> list[TResponseInputItem]:
        """
        Convert to input format for next turn.

        Returns:
        - list[TResponseInputItem]: Items formatted as inputs
        """

Item Helpers

Utility functions for working with items.

class ItemHelpers:
    """Utility class for item manipulation."""

    @classmethod
    def extract_last_content(cls, message: MessageOutputItem) -> str:
        """
        Extract text or refusal from message.

        Parameters:
        - message: Message item

        Returns:
        - str: Text content or refusal
        """

    @classmethod
    def extract_last_text(cls, message: MessageOutputItem) -> str | None:
        """
        Extract text only from message.

        Parameters:
        - message: Message item

        Returns:
        - str | None: Text content or None
        """

    @classmethod
    def input_to_new_input_list(cls, input: str | list[TResponseInputItem]) -> list[TResponseInputItem]:
        """
        Convert to input list format.

        Parameters:
        - input: String or input items

        Returns:
        - list[TResponseInputItem]: Normalized input list
        """

    @classmethod
    def text_message_outputs(cls, items: list[RunItem]) -> str:
        """
        Concatenate text from items.

        Parameters:
        - items: Run items

        Returns:
        - str: Concatenated text
        """

    @classmethod
    def text_message_output(cls, message: MessageOutputItem) -> str:
        """
        Extract text from message.

        Parameters:
        - message: Message item

        Returns:
        - str: Text content
        """

    @classmethod
    def tool_call_output_item(cls, tool_call, output) -> FunctionCallOutput:
        """
        Create output item for tool call.

        Parameters:
        - tool_call: Tool call
        - output: Tool output

        Returns:
        - FunctionCallOutput: Output item
        """

Streaming

Stream Events

Event types emitted during streaming execution.

StreamEvent = Union[
    RawResponsesStreamEvent,
    RunItemStreamEvent,
    AgentUpdatedStreamEvent
]

Raw Responses Stream Event

Raw streaming events from LLM.

class RawResponsesStreamEvent:
    """
    Raw streaming event from LLM.

    Attributes:
    - data: TResponseStreamEvent - Raw event data
    - type: Literal["raw_response_event"] - Event type
    """

Run Item Stream Event

Events wrapping run items.

class RunItemStreamEvent:
    """
    Event wrapping a RunItem.

    Attributes:
    - name: Literal[...] - Event name (e.g., "message_output_created", "tool_called")
    - item: RunItem - The created item
    - type: Literal["run_item_stream_event"] - Event type
    """

Event names:

  • "message_output_created"
  • "tool_called"
  • "tool_output_created"
  • "handoff_called"
  • "handoff_output_created"
  • "reasoning_created"
  • "mcp_list_tools_created"
  • "mcp_approval_request_created"
  • "mcp_approval_response_created"

Agent Updated Stream Event

Events for agent changes (handoffs).

class AgentUpdatedStreamEvent:
    """
    Event for new agent.

    Attributes:
    - new_agent: Agent - The new agent
    - type: Literal["agent_updated_stream_event"] - Event type
    """

Streaming Usage

Stream agent execution in real-time:

from agents import Agent, Runner
import asyncio

agent = Agent(name="Assistant", instructions="Tell a story")

async def stream_example():
    result = Runner.run_streamed(agent, "Tell me a short story")

    async for event in result.stream_events():
        if event.type == "raw_response_event":
            # Raw LLM chunks
            if hasattr(event.data, 'delta'):
                print(event.data.delta, end='', flush=True)

        elif event.type == "run_item_stream_event":
            # High-level items
            if event.name == "message_output_created":
                print(f"\nMessage: {event.item}")
            elif event.name == "tool_called":
                print(f"\nTool called: {event.item}")

        elif event.type == "agent_updated_stream_event":
            # Agent changed (handoff)
            print(f"\nNow running: {event.new_agent.name}")

asyncio.run(stream_example())

Streaming with Progress

Show progress during long-running operations:

async def stream_with_progress():
    result = Runner.run_streamed(agent, "Complex task")

    tool_calls = 0
    async for event in result.stream_events():
        if event.type == "run_item_stream_event":
            if event.name == "tool_called":
                tool_calls += 1
                print(f"Tool calls: {tool_calls}", end='\r')

    print(f"\nCompleted with {tool_calls} tool calls")
    print(f"Final output: {result.final_output}")

Cancelling Streaming

Cancel streaming execution:

async def cancellable_stream():
    result = Runner.run_streamed(agent, "Long task")

    try:
        count = 0
        async for event in result.stream_events():
            count += 1
            if count > 100:
                # Cancel after 100 events
                result.cancel(mode="immediate")  # or "after_turn"
                break
    except asyncio.CancelledError:
        print("Stream cancelled")

Type Aliases

TResponseInputItem = ...  # OpenAI SDK type
TResponseOutputItem = ...  # OpenAI SDK type
TResponseStreamEvent = ...  # OpenAI SDK type
TResponse = ...  # OpenAI SDK type

ToolCallItemTypes = Union[...]  # Union of tool call types
ToolCallOutputTypes = Union[...]  # Union of tool output types

Usage Patterns

Inspecting Run Items

result = Runner.run_sync(agent, "What's 2+2?")

# All items from the run
for item in result.new_items:
    if isinstance(item, MessageOutputItem):
        print(f"Message: {ItemHelpers.extract_last_text(item)}")
    elif isinstance(item, ToolCallItem):
        print(f"Tool called: {item.raw_item}")
    elif isinstance(item, ToolCallOutputItem):
        print(f"Tool output: {item.output}")

Building Conversation History

# Use to_input_list() for manual history management
result1 = Runner.run_sync(agent, "Hello")
history = result1.to_input_list()

result2 = Runner.run_sync(agent, history + [{"role": "user", "content": "How are you?"}])

Streaming UI Updates

async def stream_to_ui(user_input):
    """Stream updates to UI in real-time."""
    result = Runner.run_streamed(agent, user_input)

    current_message = ""
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            if hasattr(event.data, 'delta'):
                # Update UI with new text
                current_message += event.data.delta
                ui.update_message(current_message)

        elif event.type == "run_item_stream_event":
            if event.name == "tool_called":
                ui.show_tool_notification(event.item)

    ui.mark_complete(result.final_output)

Best Practices

  1. Use Streaming: Enable real-time UX for long-running agents
  2. Handle All Event Types: Process all stream event types for robustness
  3. Progress Indicators: Show progress during tool calls and handoffs
  4. Error Handling: Handle cancellation and errors gracefully
  5. Item Inspection: Use ItemHelpers for consistent item processing
  6. History Management: Use sessions instead of manual history for most cases
  7. Type Checking: Use isinstance() to safely handle different item types
  8. Performance: stream_events() is an async generator; consume events promptly rather than buffering them

Install with Tessl CLI

npx tessl i tessl/pypi-openai-agents
