- Spec file: pypi-pydantic-ai
- Describes: pkg:pypi/pydantic-ai@0.8.x
- Description: Agent framework / shim to use Pydantic with LLMs
- Author: tessl
- Last updated:
- Files: index.md, docs/
# Pydantic AI

A comprehensive Python agent framework designed to make building production-grade applications with Generative AI less painful and more ergonomic. Built by the Pydantic team, Pydantic AI offers a FastAPI-like development experience for GenAI applications, featuring:

- Model-agnostic support for major LLM providers
- Seamless Pydantic Logfire integration for debugging and monitoring
- A type-safe design with powerful static type checking
- Python-centric control flow
- Structured response validation using Pydantic models
- An optional dependency injection system for testable and maintainable code
- Streaming capabilities with immediate validation
- Graph support for complex application flows

## Package Information

- **Package Name**: pydantic-ai
- **Language**: Python
- **Installation**: `pip install pydantic-ai`
- **Requirements**: Python 3.9+

## Core Imports

```python
from pydantic_ai import Agent
```

Common imports for building agents:

```python
from pydantic_ai import Agent, RunContext, Tool
from pydantic_ai.models import OpenAIModel, AnthropicModel
```

For structured outputs:

```python
from pydantic_ai import Agent, StructuredDict
from pydantic import BaseModel
```

## Basic Usage

```python
from pydantic_ai import Agent
from pydantic_ai.models import OpenAIModel

# Create a simple agent
agent = Agent(
    model=OpenAIModel('gpt-4'),
    instructions='You are a helpful assistant.'
)

# Run the agent
result = agent.run_sync('What is the capital of France?')
print(result.data)
# Output: Paris

# Create an agent with structured output
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent(
    model=OpenAIModel('gpt-4'),
    instructions='Extract city information.',
    output_type=CityInfo
)

result = agent.run_sync('Tell me about Tokyo')
print(result.data.name)        # Tokyo
print(result.data.population)  # e.g. 37000000
```

## Architecture

Pydantic AI is 
built around several key components that work together to provide a flexible and type-safe agent framework:

- **Agent**: The central class that orchestrates interactions between users, models, and tools
- **Models**: Abstraction layer supporting 10+ LLM providers (OpenAI, Anthropic, Google, etc.)
- **Tools**: Function-based capabilities that agents can call to perform actions
- **Messages**: Rich message system supporting text, images, audio, video, and documents
- **Output Types**: Flexible output handling including structured data, text, and tool-based outputs
- **Run Context**: Dependency injection system for testable and maintainable code
- **Streaming**: Real-time response processing with immediate validation

This architecture enables building production-grade AI applications with full type safety, comprehensive error handling, and seamless integration with the Python ecosystem.

## Capabilities

### Core Agent Framework

The foundational agent system for creating AI agents with typed dependencies, structured outputs, and comprehensive error handling. 
Includes the main `Agent` class, run management, and result handling.

```python { .api }
class Agent[AgentDepsT, OutputDataT]:
    def __init__(
        self,
        model: Model | KnownModelName | str | None = None,
        *,
        output_type: OutputSpec[OutputDataT] = str,
        instructions: str | SystemPromptFunc[AgentDepsT] | Sequence[str | SystemPromptFunc[AgentDepsT]] | None = None,
        system_prompt: str | Sequence[str] = (),
        deps_type: type[AgentDepsT] = NoneType,
        name: str | None = None,
        model_settings: ModelSettings | None = None,
        retries: int = 1,
        output_retries: int | None = None,
        tools: Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, ...]] = (),
        builtin_tools: Sequence[AbstractBuiltinTool] = (),
        prepare_tools: ToolsPrepareFunc[AgentDepsT] | None = None,
        prepare_output_tools: ToolsPrepareFunc[AgentDepsT] | None = None,
        toolsets: Sequence[AbstractToolset[AgentDepsT] | ToolsetFunc[AgentDepsT]] | None = None,
        defer_model_check: bool = False
    ): ...

    def run_sync(
        self,
        user_prompt: str,
        *,
        message_history: list[ModelMessage] | None = None,
        deps: AgentDepsT = None,
        model_settings: ModelSettings | None = None
    ) -> AgentRunResult[OutputDataT]: ...

    async def run(
        self,
        user_prompt: str,
        *,
        message_history: list[ModelMessage] | None = None,
        deps: AgentDepsT = None,
        model_settings: ModelSettings | None = None
    ) -> AgentRunResult[OutputDataT]: ...
```

[Core Agent Framework](./agent.md)

### Model Integration

Comprehensive model abstraction supporting 10+ LLM providers including OpenAI, Anthropic, Google, Groq, Cohere, Mistral, and more. 
Provides a unified interface with provider-specific optimizations and fallback capabilities.

```python { .api }
class OpenAIModel:
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        openai_client: OpenAI | None = None,
        timeout: float | None = None
    ): ...

class AnthropicModel:
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        anthropic_client: Anthropic | None = None,
        timeout: float | None = None
    ): ...

def infer_model(model: Model | KnownModelName) -> Model: ...
```

[Model Integration](./models.md)

### Tools and Function Calling

Flexible tool system enabling agents to call Python functions, access APIs, execute code, and perform web searches. Supports both built-in tools and custom function definitions with full type safety.

```python { .api }
class Tool[AgentDepsT]:
    def __init__(
        self,
        function: ToolFuncEither[AgentDepsT, Any],
        *,
        name: str | None = None,
        description: str | None = None,
        prepare: ToolPrepareFunc[AgentDepsT] | None = None
    ): ...

class RunContext[AgentDepsT]:
    deps: AgentDepsT
    retry: int
    tool_name: str

    def set_messages(self, messages: list[ModelMessage]) -> None: ...

class WebSearchTool:
    def __init__(
        self,
        *,
        max_results: int = 5,
        request_timeout: float = 10.0
    ): ...

class CodeExecutionTool:
    def __init__(
        self,
        *,
        timeout: float = 30.0,
        allowed_packages: list[str] | None = None
    ): ...
```

[Tools and Function Calling](./tools.md)

### Messages and Media

Rich message system supporting text, images, audio, video, documents, and binary content. 
Includes comprehensive streaming support and delta updates for real-time interactions.

```python { .api }
class ImageUrl:
    def __init__(
        self,
        url: str,
        *,
        alt: str | None = None,
        media_type: ImageMediaType | None = None
    ): ...

class AudioUrl:
    def __init__(
        self,
        url: str,
        *,
        media_type: AudioMediaType | None = None
    ): ...

class ModelRequest:
    parts: list[ModelRequestPart]
    kind: Literal['request']

class ModelResponse:
    parts: list[ModelResponsePart]
    timestamp: datetime
    kind: Literal['response']
```

[Messages and Media](./messages.md)

### Output Types and Validation

Flexible output handling supporting structured data validation using Pydantic models, text outputs, tool-based outputs, and native model outputs with comprehensive type safety.

```python { .api }
class ToolOutput[OutputDataT]:
    tools: list[Tool]
    defer: bool = False

class NativeOutput[OutputDataT]:
    ...

class PromptedOutput[OutputDataT]:
    ...

class TextOutput[OutputDataT]:
    converter: TextOutputFunc[OutputDataT] | None = None

def StructuredDict() -> type[dict[str, Any]]: ...
```

[Output Types and Validation](./output.md)

### Streaming and Async

Comprehensive streaming support for real-time interactions with immediate validation, delta updates, and event handling. 
Includes both async and sync streaming interfaces.

```python { .api }
class AgentStream[AgentDepsT, OutputDataT]:
    async def __anext__(self) -> AgentStreamEvent[AgentDepsT, OutputDataT]: ...

    async def get_final_result(self) -> FinalResult[OutputDataT]: ...

async def run_stream(
    self,
    user_prompt: str,
    *,
    message_history: list[ModelMessage] | None = None,
    deps: AgentDepsT = None,
    model_settings: ModelSettings | None = None
) -> AgentStream[AgentDepsT, OutputDataT]: ...
```

[Streaming and Async](./streaming.md)

### Settings and Configuration

Model settings, usage tracking, and configuration options for fine-tuning agent behavior, monitoring resource consumption, and setting usage limits.

```python { .api }
class ModelSettings(TypedDict, total=False):
    max_tokens: int
    temperature: float
    top_p: float
    timeout: float | Timeout
    parallel_tool_calls: bool
    seed: int
    presence_penalty: float
    frequency_penalty: float
    logit_bias: dict[str, int]
    stop_sequences: list[str]
    extra_headers: dict[str, str]
    extra_body: object

class RunUsage:
    request_count: int
    input_tokens: int | None
    output_tokens: int | None
    cache_creation_input_tokens: int | None
    cache_read_input_tokens: int | None
    total_tokens: int | None

class UsageLimits:
    request_limit: int | None = None
    input_token_limit: int | None = None
    output_token_limit: int | None = None
    total_token_limit: int | None = None
```

[Settings and Configuration](./settings.md)
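The `Tool` class above can derive a tool's name and description from the wrapped Python function when they are not passed explicitly. A rough, library-free sketch of that idea (`describe_tool` is a hypothetical helper written for illustration, not part of pydantic-ai; the real `Tool` also builds a full JSON schema and validates arguments with Pydantic):

```python
import inspect

def describe_tool(func) -> dict:
    # Hypothetical helper: take the tool name from the function name,
    # the description from its docstring, and parameter names/types
    # from its signature, similar in spirit to Tool(function).
    sig = inspect.signature(func)
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            name: (p.annotation.__name__
                   if p.annotation is not inspect.Parameter.empty else "any")
            for name, p in sig.parameters.items()
        },
    }

def get_weather(city: str, units: str = "metric") -> str:
    """Look up the current weather for a city."""
    return f"Weather for {city} in {units}"

schema = describe_tool(get_weather)
print(schema["name"])        # get_weather
print(schema["parameters"])  # {'city': 'str', 'units': 'str'}
```

Deriving metadata from the function itself is what lets plain Python functions be registered as tools without a separate schema definition.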
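`AgentStream` is consumed through the async-iterator protocol (`__anext__` above). The shape of that consumption loop can be shown with a small stand-in (purely illustrative: `FakeStream` is not part of pydantic-ai, and real stream events carry structured deltas rather than plain strings):

```python
import asyncio

class FakeStream:
    # Stand-in mimicking AgentStream's async-iterator interface.
    def __init__(self, chunks):
        self._chunks = iter(chunks)

    def __aiter__(self):
        return self

    async def __anext__(self):
        try:
            return next(self._chunks)
        except StopIteration:
            raise StopAsyncIteration

async def consume() -> str:
    stream = FakeStream(["The capital ", "of France ", "is Paris."])
    text = []
    # Mirrors `async for event in agent_stream: ...`
    async for event in stream:
        text.append(event)
    return "".join(text)

print(asyncio.run(consume()))  # The capital of France is Paris.
```

The same loop works for any object implementing `__aiter__`/`__anext__`, which is why streamed agent output can be processed incrementally as it arrives.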
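Because `ModelSettings` is a `TypedDict` with `total=False`, every field is optional and per-run settings can be layered over agent-level defaults with ordinary dict merging. A minimal sketch of that layering (`merge_settings` is an assumption written for illustration, not the library's own implementation):

```python
from typing import TypedDict

class ModelSettings(TypedDict, total=False):
    # Subset of the fields listed in the spec above.
    max_tokens: int
    temperature: float
    seed: int

def merge_settings(base: ModelSettings | None,
                   overrides: ModelSettings | None) -> ModelSettings:
    # Hypothetical helper: run-level values win over agent-level defaults.
    return {**(base or {}), **(overrides or {})}

agent_defaults: ModelSettings = {"temperature": 0.2, "max_tokens": 512}
run_overrides: ModelSettings = {"temperature": 0.7}

print(merge_settings(agent_defaults, run_overrides))
# {'temperature': 0.7, 'max_tokens': 512}
```

Because the keys are all optional, an empty or `None` settings dict on either side is valid, and only explicitly set fields override.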