tessl/pypi-openai-agents

Lightweight framework for building multi-agent workflows with LLMs, supporting handoffs, guardrails, tools, and 100+ LLM providers


Core Agent System

The core agent system provides the fundamental building blocks for creating and running AI agents. This includes the Agent class for configuration, the Runner for execution, and supporting types for model settings, prompts, and agent output schemas.

Capabilities

Agent Class

The primary class for defining agent behavior, tools, handoffs, and configuration.

class Agent[TContext]:
    """
    Main agent class with instructions, tools, guardrails, and handoffs.

    Type Parameters:
    - TContext: Type of context object passed to agent

    Attributes:
    - name: str - Agent name for identification
    - instructions: str | Callable | None - System prompt or dynamic function
    - prompt: Prompt | DynamicPromptFunction | None - Prompt configuration
    - tools: list[Tool] - Available tools for agent to use
    - handoffs: list[Agent | Handoff] - Sub-agents for delegation
    - model: str | Model | None - Model identifier or instance
    - model_settings: ModelSettings - Model configuration
    - mcp_servers: list[MCPServer] - MCP servers for extended tools
    - mcp_config: MCPConfig - MCP configuration
    - input_guardrails: list[InputGuardrail] - Input validation checks
    - output_guardrails: list[OutputGuardrail] - Output validation checks
    - output_type: type[Any] | AgentOutputSchemaBase | None - Structured output schema
    - hooks: AgentHooks | None - Lifecycle callbacks
    - tool_use_behavior: Literal["run_llm_again", "stop_on_first_tool"] | StopAtTools | ToolsToFinalOutputFunction - Tool handling
    - reset_tool_choice: bool - Reset tool choice after call
    - handoff_description: str | None - Description for handoffs to this agent
    """

    def clone(**kwargs) -> Agent:
        """
        Create modified copy of agent.

        Parameters:
        - **kwargs: Agent attributes to override

        Returns:
        - Agent: New agent instance with specified changes
        """

    def as_tool(...) -> Tool:
        """
        Convert agent to tool for use by other agents.

        Returns:
        - Tool: Tool representation of this agent
        """

    def get_system_prompt(context) -> str | None:
        """
        Get resolved system prompt for agent.

        Parameters:
        - context: Context object

        Returns:
        - str | None: Resolved system prompt
        """

    def get_prompt(context) -> ResponsePromptParam | None:
        """
        Get prompt configuration for agent.

        Parameters:
        - context: Context object

        Returns:
        - ResponsePromptParam | None: Prompt configuration
        """

    def get_all_tools(context) -> list[Tool]:
        """
        Get all enabled tools including MCP tools.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: All available tools
        """

    def get_mcp_tools(context) -> list[Tool]:
        """
        Get MCP tools for agent.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: MCP tools
        """

Usage example:

from agents import Agent, ModelSettings, function_tool

@function_tool
def search_knowledge_base(query: str) -> str:
    """Search the knowledge base."""
    return f"Results for: {query}"

agent = Agent(
    name="Research Assistant",
    instructions="You help users find information.",
    tools=[search_knowledge_base],
    model="gpt-4o",
    model_settings=ModelSettings(temperature=0.7)
)

# Clone with modifications
strict_agent = agent.clone(
    name="Strict Research Assistant",
    model_settings=ModelSettings(temperature=0.0)
)

Agent Base Class

Base class for Agent and RealtimeAgent with shared functionality.

class AgentBase[TContext]:
    """
    Base class for Agent and RealtimeAgent.

    Type Parameters:
    - TContext: Type of context object

    Attributes:
    - name: str - Agent name
    - handoff_description: str | None - Description for handoffs
    - tools: list[Tool] - Available tools
    - mcp_servers: list[MCPServer] - MCP servers
    - mcp_config: MCPConfig - MCP configuration
    """

    def get_mcp_tools(context) -> list[Tool]:
        """
        Get MCP tools for agent.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: MCP tools
        """

    def get_all_tools(context) -> list[Tool]:
        """
        Get all enabled tools.

        Parameters:
        - context: Context object

        Returns:
        - list[Tool]: All enabled tools
        """

Runner Class

Main class for running agent workflows with synchronous, asynchronous, and streaming modes.

class Runner:
    """Main class for running agent workflows."""

    @classmethod
    async def run(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResult:
        """
        Run agent workflow asynchronously.

        Parameters:
        - starting_agent: Agent to start with
        - input: User input as string or message list
        - context: Optional context object
        - max_turns: Maximum turns in agent loop (default: 10)
        - hooks: Lifecycle hooks for observability
        - run_config: Run-level configuration
        - previous_response_id: Response ID for continuation
        - conversation_id: Conversation ID for OpenAI Conversations API
        - session: Session for conversation history

        Returns:
        - RunResult: Result containing output, items, and metadata
        """

    @classmethod
    def run_sync(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResult:
        """
        Run agent workflow synchronously.

        Parameters:
        - Same as run()

        Returns:
        - RunResult: Result containing output, items, and metadata
        """

    @classmethod
    def run_streamed(
        starting_agent: Agent,
        input: str | list[TResponseInputItem],
        *,
        context: TContext | None = None,
        max_turns: int = 10,
        hooks: RunHooks | None = None,
        run_config: RunConfig | None = None,
        previous_response_id: str | None = None,
        conversation_id: str | None = None,
        session: Session | None = None
    ) -> RunResultStreaming:
        """
        Run agent workflow in streaming mode.

        Parameters:
        - Same as run()

        Returns:
        - RunResultStreaming: Streaming result with event iterator
        """

Usage example:

from agents import Agent, Runner
import asyncio

agent = Agent(
    name="Assistant",
    instructions="You are helpful."
)

# Asynchronous
async def main():
    result = await Runner.run(agent, "Hello!")
    print(result.final_output)

asyncio.run(main())

# Synchronous
result = Runner.run_sync(agent, "Hello!")
print(result.final_output)

# Streaming
async def stream_main():
    result = Runner.run_streamed(agent, "Tell me a story")
    async for event in result.stream_events():
        if event.type == "raw_response_event":
            print(event.data)

asyncio.run(stream_main())

Run Configuration

Configuration for entire agent run with model overrides, guardrails, and tracing settings.

class RunConfig:
    """
    Configuration for entire agent run.

    Attributes:
    - model: str | Model | None - Override model for all agents
    - model_provider: ModelProvider - Model provider (default: MultiProvider)
    - model_settings: ModelSettings | None - Global model settings
    - handoff_input_filter: HandoffInputFilter | None - Global handoff filter
    - nest_handoff_history: bool - Wrap history in single message
    - handoff_history_mapper: HandoffHistoryMapper | None - Custom history mapper
    - input_guardrails: list[InputGuardrail] | None - Run-level input guardrails
    - output_guardrails: list[OutputGuardrail] | None - Run-level output guardrails
    - tracing_disabled: bool - Disable tracing
    - trace_include_sensitive_data: bool - Include sensitive data in traces
    - workflow_name: str - Name for tracing
    - trace_id: str | None - Custom trace ID
    - group_id: str | None - Grouping identifier for traces
    - trace_metadata: dict[str, Any] | None - Additional trace metadata
    - session_input_callback: SessionInputCallback | None - Session history handler
    - call_model_input_filter: CallModelInputFilter | None - Pre-model filter
    """

Usage example:

from agents import (
    Agent, Runner, RunConfig, ModelSettings,
    GuardrailFunctionOutput, input_guardrail,
)

@input_guardrail
def content_filter(ctx, agent, input) -> GuardrailFunctionOutput:
    """Filter inappropriate content."""
    # Inspect the input; set tripwire_triggered=True to block the run
    return GuardrailFunctionOutput(output_info=None, tripwire_triggered=False)

agent = Agent(name="Assistant", instructions="You are helpful.")

config = RunConfig(
    model="gpt-4o-mini",
    model_settings=ModelSettings(temperature=0.5),
    input_guardrails=[content_filter],
    workflow_name="customer_service",
    trace_include_sensitive_data=False
)

result = Runner.run_sync(agent, "Hello", run_config=config)

Model Settings

LLM configuration settings for temperature, token limits, and more.

class MCPToolChoice:
    """MCP-specific tool choice configuration."""
    server_label: str
    name: str

ToolChoice = Literal["auto", "required", "none"] | str | MCPToolChoice | None

class ModelSettings:
    """
    LLM configuration settings.

    Attributes:
    - temperature: float | None - Sampling temperature (0-2)
    - top_p: float | None - Nucleus sampling parameter
    - frequency_penalty: float | None - Frequency penalty (-2 to 2)
    - presence_penalty: float | None - Presence penalty (-2 to 2)
    - tool_choice: ToolChoice | None - Tool selection mode
    - parallel_tool_calls: bool | None - Allow parallel tool calls
    - truncation: Literal["auto", "disabled"] | None - Truncation strategy
    - max_tokens: int | None - Max output tokens
    - reasoning: Reasoning | None - Reasoning configuration
    - verbosity: Literal["low", "medium", "high"] | None - Response verbosity
    - metadata: dict[str, str] | None - Request metadata
    - store: bool | None - Store response
    - prompt_cache_retention: Literal["in_memory", "24h"] | None - Cache retention
    - include_usage: bool | None - Include usage chunk
    - response_include: list[ResponseIncludable | str] | None - Additional output data
    - top_logprobs: int | None - Number of top logprobs
    - extra_query: Query | None - Additional query fields
    - extra_body: Body | None - Additional body fields
    - extra_headers: Headers | None - Additional headers
    - extra_args: dict[str, Any] | None - Arbitrary kwargs
    """

    def resolve(override: ModelSettings) -> ModelSettings:
        """
        Merge with override settings.

        Parameters:
        - override: Settings to override with

        Returns:
        - ModelSettings: Merged settings
        """

    def to_json_dict() -> dict[str, Any]:
        """
        Convert to JSON dict.

        Returns:
        - dict: JSON-serializable dictionary
        """

Usage example:

from agents import ModelSettings

settings = ModelSettings(
    temperature=0.7,
    max_tokens=1000,
    tool_choice="auto",
    parallel_tool_calls=True
)

# Override specific settings
strict_settings = settings.resolve(
    ModelSettings(temperature=0.0)
)

Tool Use Behavior

Configuration for controlling tool use behavior.

class StopAtTools:
    """
    Configuration to stop agent at specific tool calls.

    Attributes:
    - stop_at_tool_names: list[str] - Tool names that trigger stop
    """

class ToolsToFinalOutputResult:
    """
    Result of tools-to-final-output function.

    Attributes:
    - is_final_output: bool - Whether this is final output
    - final_output: Any | None - The final output value
    """

Type alias for custom tool-to-output conversion:

ToolsToFinalOutputFunction = Callable[
    [RunContextWrapper, list[FunctionToolResult]],
    MaybeAwaitable[ToolsToFinalOutputResult]
]

Usage example:

from agents import Agent, StopAtTools, ToolsToFinalOutputResult

# Assumes get_weather, book_flight, and search are
# @function_tool-decorated tools defined elsewhere.

# Stop when a specific tool is called
agent = Agent(
    name="Assistant",
    tools=[get_weather, book_flight],
    tool_use_behavior=StopAtTools(
        stop_at_tool_names=["book_flight"]
    )
)

# Custom tool result handler
async def tools_to_output(ctx, results):
    # Return the first tool's output as the final answer
    return ToolsToFinalOutputResult(
        is_final_output=True,
        final_output=results[0].output
    )

agent = Agent(
    name="Assistant",
    tools=[search],
    tool_use_behavior=tools_to_output
)

MCP Configuration

Configuration for MCP server integration.

class MCPConfig:
    """
    Configuration for MCP servers.

    Attributes:
    - convert_schemas_to_strict: NotRequired[bool] - Convert to strict schemas
    """

Prompts

Prompt configuration and dynamic prompt generation.

class Prompt:
    """
    Prompt configuration for OpenAI models.

    Attributes:
    - id: str - Prompt ID
    - version: NotRequired[str] - Prompt version
    - variables: NotRequired[dict[str, ResponsesPromptVariables]] - Prompt variables
    """

class GenerateDynamicPromptData:
    """
    Input to dynamic prompt function.

    Attributes:
    - context: RunContextWrapper - Run context
    - agent: Agent - Agent for prompt
    """

Type alias for dynamic prompt generation:

DynamicPromptFunction = Callable[
    [GenerateDynamicPromptData],
    MaybeAwaitable[Prompt]
]

Utility class:

class PromptUtil:
    """Utility for prompt handling."""

    @staticmethod
    def to_model_input(
        prompt: Prompt,
        context: RunContextWrapper,
        agent: Agent
    ) -> ResponsePromptParam | None:
        """
        Convert to model input.

        Parameters:
        - prompt: Prompt configuration
        - context: Run context
        - agent: Agent instance

        Returns:
        - ResponsePromptParam | None: Model input format
        """

Usage example:

from agents import Agent, Prompt

# Static prompt
agent = Agent(
    name="Assistant",
    prompt=Prompt(
        id="my-prompt-id",
        version="1.0",
        variables={"style": "professional"}
    )
)

# Dynamic prompt
async def generate_prompt(data):
    # data.context is the RunContextWrapper; the user's context
    # object is available as data.context.context
    return Prompt(
        id="dynamic-prompt",
        variables={"user": str(data.context.context)}
    )

agent = Agent(
    name="Dynamic Assistant",
    prompt=generate_prompt
)

Agent Output Schema

JSON schema configuration for structured agent outputs.

class AgentOutputSchemaBase:
    """Base class for output schemas."""

    def is_plain_text() -> bool:
        """Check if plain text output."""

    def name() -> str:
        """Get type name."""

    def json_schema() -> dict[str, Any]:
        """Get JSON schema."""

    def is_strict_json_schema() -> bool:
        """Check if strict mode."""

    def validate_json(json_str: str) -> Any:
        """Validate and parse JSON."""

class AgentOutputSchema(AgentOutputSchemaBase):
    """
    JSON schema for agent output.

    Attributes:
    - output_type: type[Any] - Output type
    """

    def is_plain_text() -> bool:
        """
        Check if plain text.

        Returns:
        - bool: True if plain text
        """

    def json_schema() -> dict[str, Any]:
        """
        Get JSON schema.

        Returns:
        - dict: JSON schema
        """

    def is_strict_json_schema() -> bool:
        """
        Check if strict mode.

        Returns:
        - bool: True if strict
        """

    def validate_json(json_str: str) -> Any:
        """
        Validate and parse JSON.

        Parameters:
        - json_str: JSON string

        Returns:
        - Any: Parsed and validated object
        """

    def name() -> str:
        """
        Get type name.

        Returns:
        - str: Type name
        """

Usage example:

from agents import Agent, Runner
from pydantic import BaseModel

class MovieRecommendation(BaseModel):
    title: str
    year: int
    rating: float
    reason: str

agent = Agent(
    name="Movie Recommender",
    instructions="Recommend movies based on user preferences.",
    output_type=MovieRecommendation
)

result = Runner.run_sync(agent, "Recommend a sci-fi movie")
recommendation = result.final_output_as(MovieRecommendation)
print(f"{recommendation.title} ({recommendation.year})")

Run Context

Context wrapper providing access to agent state and utilities during execution.

class RunContextWrapper[TContext]:
    """
    Context wrapper for agent execution.

    Type Parameters:
    - TContext: Type of user context

    Provides access to:
    - User context object
    - Current agent
    - Run configuration
    - Trace information
    """

Call Model Data

Data structures for model input filtering.

class CallModelData[TContext]:
    """
    Data passed to call_model_input_filter.

    Attributes:
    - model_data: ModelInputData - Model input data
    - agent: Agent[TContext] - Current agent
    - context: TContext | None - Context object
    """

class ModelInputData:
    """
    Container for model input.

    Attributes:
    - input: list[TResponseInputItem] - Input items
    - instructions: str | None - System instructions
    """

Type alias:

CallModelInputFilter = Callable[
    [CallModelData],
    MaybeAwaitable[ModelInputData]
]
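A common use for a `call_model_input_filter` is trimming conversation history before each model call. The sketch below uses stub dataclasses in place of the SDK's `CallModelData` and `ModelInputData` so it runs standalone; the filter shape (take the call data, return new model input) follows the type alias above.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ModelInputDataStub:
    input: list
    instructions: Optional[str]

@dataclass
class CallModelDataStub:
    model_data: ModelInputDataStub

def trim_history(data: CallModelDataStub, keep: int = 4) -> ModelInputDataStub:
    # Keep only the most recent `keep` input items to bound prompt size
    md = data.model_data
    return ModelInputDataStub(input=md.input[-keep:], instructions=md.instructions)

data = CallModelDataStub(
    ModelInputDataStub(input=list(range(10)), instructions="Be brief.")
)
filtered = trim_history(data)
print(len(filtered.input))  # 4
```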

Run Options

Type definition for run parameters.

class RunOptions[TContext]:
    """
    Arguments for Runner methods.

    Attributes:
    - context: TContext | None - Context object
    - max_turns: int - Maximum turns
    - hooks: RunHooks | None - Lifecycle hooks
    - run_config: RunConfig | None - Run configuration
    - previous_response_id: str | None - Response ID for continuation
    - conversation_id: str | None - Conversation ID
    - session: Session | None - Session for history
    """

Constants

DEFAULT_MAX_TURNS: int = 10

The default maximum number of turns for an agent run.

Type Aliases

TContext = TypeVar("TContext")  # User-defined context type

Install with Tessl CLI

npx tessl i tessl/pypi-openai-agents
