
tessl/pypi-pydantic-ai

Agent Framework / shim to use Pydantic with LLMs

- **Workspace**: tessl
- **Visibility**: Public
- **Describes**: pkg:pypi/pydantic-ai@0.8.x

To install, run:

npx @tessl/cli install tessl/pypi-pydantic-ai@0.8.0

# Pydantic AI

A comprehensive Python agent framework designed to make building production-grade applications with Generative AI less painful and more ergonomic. Built by the Pydantic team, Pydantic AI offers a FastAPI-like development experience for GenAI applications, featuring:

- model-agnostic support for major LLM providers
- seamless Pydantic Logfire integration for debugging and monitoring
- type-safe design with powerful static type checking
- Python-centric control flow
- structured response validation using Pydantic models
- an optional dependency injection system for testable and maintainable code
- streaming capabilities with immediate validation
- graph support for complex application flows

## Package Information

- **Package Name**: pydantic-ai
- **Language**: Python
- **Installation**: `pip install pydantic-ai`
- **Requirements**: Python 3.9+

## Core Imports

```python
from pydantic_ai import Agent
```

Common imports for building agents:

```python
from pydantic_ai import Agent, RunContext, Tool
from pydantic_ai.models import OpenAIModel, AnthropicModel
```

For structured outputs:

```python
from pydantic_ai import Agent, StructuredDict
from pydantic import BaseModel
```

## Basic Usage

```python
from pydantic_ai import Agent
from pydantic_ai.models import OpenAIModel

# Create a simple agent
agent = Agent(
    model=OpenAIModel('gpt-4'),
    instructions='You are a helpful assistant.'
)

# Run the agent
result = agent.run_sync('What is the capital of France?')
print(result.data)
# Output: Paris

# Create an agent with structured output
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent(
    model=OpenAIModel('gpt-4'),
    instructions='Extract city information.',
    output_type=CityInfo
)

result = agent.run_sync('Tell me about Tokyo')
print(result.data.name)        # Tokyo
print(result.data.population)  # 37,000,000
```

## Architecture

Pydantic AI is built around several key components that work together to provide a flexible and type-safe agent framework:

- **Agent**: The central class that orchestrates interactions between users, models, and tools
- **Models**: Abstraction layer supporting 10+ LLM providers (OpenAI, Anthropic, Google, etc.)
- **Tools**: Function-based capabilities that agents can call to perform actions
- **Messages**: Rich message system supporting text, images, audio, video, and documents
- **Output Types**: Flexible output handling including structured data, text, and tool-based outputs
- **Run Context**: Dependency injection system for testable and maintainable code
- **Streaming**: Real-time response processing with immediate validation

This architecture enables building production-grade AI applications with full type safety, comprehensive error handling, and seamless integration with the Python ecosystem.
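The run-context idea above — dependencies declared once and handed to every tool call — can be sketched in plain Python. The names below are hypothetical stand-ins, not pydantic-ai's actual types, which carry more state:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for an agent's dependencies and run context.
@dataclass
class Deps:
    api_base: str
    user_id: int

@dataclass
class Context:
    deps: Deps     # injected dependencies, typed
    retry: int = 0 # how many retries have happened so far

def lookup_profile(ctx: Context) -> str:
    # A "tool" receives the context rather than reaching for globals,
    # which is what makes it easy to swap dependencies in tests.
    return f"{ctx.deps.api_base}/users/{ctx.deps.user_id}"

ctx = Context(deps=Deps(api_base="https://api.example.com", user_id=42))
print(lookup_profile(ctx))  # https://api.example.com/users/42
```

Because the tool only sees what the context gives it, a test can pass a context wired to fakes instead of live services.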

## Capabilities

### Core Agent Framework

The foundational agent system for creating AI agents with typed dependencies, structured outputs, and comprehensive error handling. Includes the main Agent class, run management, and result handling.

```python { .api }
class Agent[AgentDepsT, OutputDataT]:
    def __init__(
        self,
        model: Model | KnownModelName | str | None = None,
        *,
        output_type: OutputSpec[OutputDataT] = str,
        instructions: str | SystemPromptFunc[AgentDepsT] | Sequence[str | SystemPromptFunc[AgentDepsT]] | None = None,
        system_prompt: str | Sequence[str] = (),
        deps_type: type[AgentDepsT] = NoneType,
        name: str | None = None,
        model_settings: ModelSettings | None = None,
        retries: int = 1,
        output_retries: int | None = None,
        tools: Sequence[Tool[AgentDepsT] | ToolFuncEither[AgentDepsT, ...]] = (),
        builtin_tools: Sequence[AbstractBuiltinTool] = (),
        prepare_tools: ToolsPrepareFunc[AgentDepsT] | None = None,
        prepare_output_tools: ToolsPrepareFunc[AgentDepsT] | None = None,
        toolsets: Sequence[AbstractToolset[AgentDepsT] | ToolsetFunc[AgentDepsT]] | None = None,
        defer_model_check: bool = False
    ): ...

    def run_sync(
        self,
        user_prompt: str,
        *,
        message_history: list[ModelMessage] | None = None,
        deps: AgentDepsT = None,
        model_settings: ModelSettings | None = None
    ) -> AgentRunResult[OutputDataT]: ...

    async def run(
        self,
        user_prompt: str,
        *,
        message_history: list[ModelMessage] | None = None,
        deps: AgentDepsT = None,
        model_settings: ModelSettings | None = None
    ) -> AgentRunResult[OutputDataT]: ...
```

[Core Agent Framework](./agent.md)
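The `retries` and `output_retries` parameters above bound how often the framework re-prompts the model after a validation failure. The control flow is roughly the following — a simplified sketch, not pydantic-ai internals:

```python
# Simplified sketch of output-retry control flow: call the model,
# validate the result, and re-prompt up to `retries` extra times.
def run_with_retries(call_model, validate, retries: int = 1):
    attempts = 0
    while True:
        raw = call_model(attempts)
        try:
            return validate(raw)
        except ValueError:
            attempts += 1
            if attempts > retries:
                raise  # out of retries: surface the validation error

# Fake model: returns garbage on the first attempt, then a valid answer.
responses = ["not-a-number", "42"]
result = run_with_retries(
    call_model=lambda attempt: responses[attempt],
    validate=int,  # int() raises ValueError on bad input
    retries=1,
)
print(result)  # 42
```

In the real framework the re-prompt also feeds the validation error back to the model so it can correct itself, rather than blindly retrying.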

### Model Integration

Comprehensive model abstraction supporting 10+ LLM providers including OpenAI, Anthropic, Google, Groq, Cohere, Mistral, and more. Provides a unified interface with provider-specific optimizations and fallback capabilities.

```python { .api }
class OpenAIModel:
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        openai_client: OpenAI | None = None,
        timeout: float | None = None
    ): ...

class AnthropicModel:
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        anthropic_client: Anthropic | None = None,
        timeout: float | None = None
    ): ...

def infer_model(model: Model | KnownModelName) -> Model: ...
```

[Model Integration](./models.md)
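`infer_model` lets you pass a model by name instead of instantiating a class; the dispatch is essentially a provider-prefix lookup. A toy sketch of that idea (an illustrative registry, not the real implementation):

```python
# Toy provider registry illustrating name-based model dispatch.
REGISTRY = {
    "openai": lambda name: f"OpenAIModel({name!r})",
    "anthropic": lambda name: f"AnthropicModel({name!r})",
}

def infer_model(spec: str) -> str:
    # Split "provider:model-name" and dispatch to the provider factory.
    provider, _, model_name = spec.partition(":")
    if provider not in REGISTRY:
        raise ValueError(f"unknown provider: {provider}")
    return REGISTRY[provider](model_name)

print(infer_model("openai:gpt-4"))  # OpenAIModel('gpt-4')
```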

### Tools and Function Calling

Flexible tool system enabling agents to call Python functions, access APIs, execute code, and perform web searches. Supports both built-in tools and custom function definitions with full type safety.

```python { .api }
class Tool[AgentDepsT]:
    def __init__(
        self,
        function: ToolFuncEither[AgentDepsT, Any],
        *,
        name: str | None = None,
        description: str | None = None,
        prepare: ToolPrepareFunc[AgentDepsT] | None = None
    ): ...

class RunContext[AgentDepsT]:
    deps: AgentDepsT
    retry: int
    tool_name: str

    def set_messages(self, messages: list[ModelMessage]) -> None: ...

class WebSearchTool:
    def __init__(
        self,
        *,
        max_results: int = 5,
        request_timeout: float = 10.0
    ): ...

class CodeExecutionTool:
    def __init__(
        self,
        *,
        timeout: float = 30.0,
        allowed_packages: list[str] | None = None
    ): ...
```

[Tools and Function Calling](./tools.md)
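Because tools are ordinary Python functions, a framework can derive a tool's name, description, and parameter schema from the signature and docstring alone. A rough stdlib sketch of that derivation (illustrative only, not pydantic-ai's internals):

```python
import inspect

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return the current weather for a city."""
    return f"Weather in {city} ({unit})"

def describe_tool(fn) -> dict:
    # Derive a tool description from the function object itself:
    # name from __name__, description from the docstring, and a
    # parameter table from the signature's annotations and defaults.
    sig = inspect.signature(fn)
    params = {
        name: {
            "type": p.annotation.__name__,
            "required": p.default is inspect.Parameter.empty,
        }
        for name, p in sig.parameters.items()
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

schema = describe_tool(get_weather)
print(schema["name"])                            # get_weather
print(schema["parameters"]["unit"]["required"])  # False
```

This is why well-typed signatures and docstrings matter: they become the contract the model sees when deciding whether and how to call the tool.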

### Messages and Media

Rich message system supporting text, images, audio, video, documents, and binary content. Includes comprehensive streaming support and delta updates for real-time interactions.

```python { .api }
class ImageUrl:
    def __init__(
        self,
        url: str,
        *,
        alt: str | None = None,
        media_type: ImageMediaType | None = None
    ): ...

class AudioUrl:
    def __init__(
        self,
        url: str,
        *,
        media_type: AudioMediaType | None = None
    ): ...

class ModelRequest:
    parts: list[ModelRequestPart]
    kind: Literal['request']

class ModelResponse:
    parts: list[ModelResponsePart]
    timestamp: datetime
    kind: Literal['response']
```

[Messages and Media](./messages.md)
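The `kind` discriminators on `ModelRequest` and `ModelResponse` make message histories easy to filter and serialize. A minimal sketch of working with such a tagged history, using simplified hypothetical types rather than the real message classes:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Request:
    parts: list[str]
    kind: Literal["request"] = "request"

@dataclass
class Response:
    parts: list[str]
    kind: Literal["response"] = "response"

history = [
    Request(parts=["What is the capital of France?"]),
    Response(parts=["Paris"]),
    Request(parts=["And of Japan?"]),
]

# The `kind` tag lets you filter without isinstance checks, and the
# same tag survives a round-trip through JSON serialization.
user_turns = [m for m in history if m.kind == "request"]
print(len(user_turns))  # 2
```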

### Output Types and Validation

Flexible output handling supporting structured data validation using Pydantic models, text outputs, tool-based outputs, and native model outputs with comprehensive type safety.

```python { .api }
class ToolOutput[OutputDataT]:
    tools: list[Tool]
    defer: bool = False

class NativeOutput[OutputDataT]:
    ...

class PromptedOutput[OutputDataT]:
    ...

class TextOutput[OutputDataT]:
    converter: TextOutputFunc[OutputDataT] | None = None

def StructuredDict() -> type[dict[str, Any]]: ...
```

[Output Types and Validation](./output.md)
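Structured output works because the model's JSON reply is validated against your Pydantic model before your code ever sees it. A small standalone illustration of that validation step using pydantic directly (assumes pydantic v2 is installed):

```python
from pydantic import BaseModel, ValidationError

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

# A well-formed model reply validates into a typed object...
good = CityInfo.model_validate_json(
    '{"name": "Tokyo", "country": "Japan", "population": 37000000}'
)
print(good.population)  # 37000000

# ...while a malformed one fails loudly instead of propagating bad data.
try:
    CityInfo.model_validate_json('{"name": "Tokyo"}')
except ValidationError as exc:
    # Two missing fields are reported: country and population.
    print(len(exc.errors()))  # 2
```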

### Streaming and Async

Comprehensive streaming support for real-time interactions with immediate validation, delta updates, and event handling. Includes both async and sync streaming interfaces.

```python { .api }
class AgentStream[AgentDepsT, OutputDataT]:
    async def __anext__(self) -> AgentStreamEvent[AgentDepsT, OutputDataT]: ...

    async def get_final_result(self) -> FinalResult[OutputDataT]: ...

async def run_stream(
    self,
    user_prompt: str,
    *,
    message_history: list[ModelMessage] | None = None,
    deps: AgentDepsT = None,
    model_settings: ModelSettings | None = None
) -> AgentStream[AgentDepsT, OutputDataT]: ...
```

[Streaming and Async](./streaming.md)
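Streaming with immediate validation means partial text is checked as it arrives rather than only once at the end. The accumulation pattern can be sketched with a plain async generator (stdlib only, illustrative):

```python
import asyncio

async def fake_model_stream():
    # Stand-in for a model emitting text deltas.
    for delta in ["The capital ", "of France ", "is Paris."]:
        yield delta

async def main() -> str:
    text = ""
    async for delta in fake_model_stream():
        text += delta
        # In a real framework, each partial `text` could be validated
        # here, so errors surface before the stream finishes.
    return text

result = asyncio.run(main())
print(result)  # The capital of France is Paris.
```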

### Settings and Configuration

Model settings, usage tracking, and configuration options for fine-tuning agent behavior, monitoring resource consumption, and setting usage limits.

```python { .api }
class ModelSettings(TypedDict, total=False):
    max_tokens: int
    temperature: float
    top_p: float
    timeout: float | Timeout
    parallel_tool_calls: bool
    seed: int
    presence_penalty: float
    frequency_penalty: float
    logit_bias: dict[str, int]
    stop_sequences: list[str]
    extra_headers: dict[str, str]
    extra_body: object

class RunUsage:
    request_count: int
    input_tokens: int | None
    output_tokens: int | None
    cache_creation_input_tokens: int | None
    cache_read_input_tokens: int | None
    total_tokens: int | None

class UsageLimits:
    request_limit: int | None = None
    input_token_limit: int | None = None
    output_token_limit: int | None = None
    total_token_limit: int | None = None
```

[Settings and Configuration](./settings.md)
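Since `ModelSettings` is a `total=False` TypedDict, agent-level defaults and per-run overrides merge naturally as dicts, with run-level keys winning. A small sketch of that merge (illustrative, with a trimmed-down settings type):

```python
from typing import TypedDict

class ModelSettings(TypedDict, total=False):
    max_tokens: int
    temperature: float
    seed: int

agent_defaults: ModelSettings = {"temperature": 0.2, "max_tokens": 512}
run_overrides: ModelSettings = {"temperature": 0.0}

# Run-level settings take precedence over agent-level defaults:
# later entries in the dict merge win on key collisions.
effective: ModelSettings = {**agent_defaults, **run_overrides}
print(effective["temperature"])  # 0.0
print(effective["max_tokens"])   # 512
```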