
# Model Integration

Comprehensive model abstraction supporting 10+ LLM providers, including OpenAI, Anthropic, Google, Groq, Cohere, Mistral, and more. It provides a unified interface with provider-specific optimizations and fallback capabilities.

## Capabilities

### OpenAI Models

Integration with OpenAI's GPT models, including GPT-4, GPT-3.5-turbo, and other OpenAI models.

```python { .api }
class OpenAIModel:
    """
    OpenAI model integration supporting GPT-4, GPT-3.5-turbo, and other OpenAI models.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        openai_client: OpenAI | None = None,
        timeout: float | None = None
    ):
        """
        Initialize OpenAI model.

        Parameters:
        - model_name: OpenAI model name (e.g., 'gpt-4', 'gpt-3.5-turbo')
        - api_key: OpenAI API key (defaults to OPENAI_API_KEY env var)
        - base_url: Custom base URL for OpenAI API
        - openai_client: Pre-configured OpenAI client instance
        - timeout: Request timeout in seconds
        """
```

### Anthropic Models

Integration with Anthropic's Claude models, including Claude 3.5, Claude 3, and other Anthropic models.

```python { .api }
class AnthropicModel:
    """
    Anthropic model integration supporting Claude 3.5, Claude 3, and other Anthropic models.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        anthropic_client: Anthropic | None = None,
        timeout: float | None = None
    ):
        """
        Initialize Anthropic model.

        Parameters:
        - model_name: Anthropic model name (e.g., 'claude-3-5-sonnet-20241022')
        - api_key: Anthropic API key (defaults to ANTHROPIC_API_KEY env var)
        - base_url: Custom base URL for Anthropic API
        - anthropic_client: Pre-configured Anthropic client instance
        - timeout: Request timeout in seconds
        """
```

### Google Models

Integration with Google's Gemini and other Google AI models.

```python { .api }
class GeminiModel:
    """
    Google Gemini model integration.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        timeout: float | None = None
    ):
        """
        Initialize Gemini model.

        Parameters:
        - model_name: Gemini model name (e.g., 'gemini-1.5-pro')
        - api_key: Google API key (defaults to GOOGLE_API_KEY env var)
        - timeout: Request timeout in seconds
        """

class GoogleModel:
    """
    Google AI model integration for Vertex AI and other Google models.
    """
    def __init__(
        self,
        model_name: str,
        *,
        project_id: str | None = None,
        location: str = 'us-central1',
        credentials: dict | None = None,
        timeout: float | None = None
    ):
        """
        Initialize Google AI model.

        Parameters:
        - model_name: Google model name
        - project_id: Google Cloud project ID
        - location: Google Cloud region
        - credentials: Service account credentials
        - timeout: Request timeout in seconds
        """
```

### Other Model Providers

Support for additional LLM providers with a consistent interface.

```python { .api }
class GroqModel:
    """
    Groq model integration for fast inference.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        timeout: float | None = None
    ): ...

class CohereModel:
    """
    Cohere model integration.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        timeout: float | None = None
    ): ...

class MistralModel:
    """
    Mistral AI model integration.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        timeout: float | None = None
    ): ...

class HuggingFaceModel:
    """
    HuggingFace model integration.
    """
    def __init__(
        self,
        model_name: str,
        *,
        api_key: str | None = None,
        base_url: str | None = None,
        timeout: float | None = None
    ): ...
```

### AWS Bedrock Integration

Integration with AWS Bedrock for accessing various models through AWS infrastructure.

```python { .api }
class BedrockModel:
    """
    AWS Bedrock model integration.
    """
    def __init__(
        self,
        model_name: str,
        *,
        region: str | None = None,
        aws_access_key_id: str | None = None,
        aws_secret_access_key: str | None = None,
        aws_session_token: str | None = None,
        profile: str | None = None,
        timeout: float | None = None
    ):
        """
        Initialize Bedrock model.

        Parameters:
        - model_name: Bedrock model ID
        - region: AWS region
        - aws_access_key_id: AWS access key
        - aws_secret_access_key: AWS secret key
        - aws_session_token: AWS session token
        - profile: AWS profile name
        - timeout: Request timeout in seconds
        """
```

### Model Abstractions

Core model interface and utilities for working with models.

```python { .api }
class Model:
    """
    Abstract model interface that all model implementations must follow.
    """
    def name(self) -> str: ...

    async def request(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None = None
    ) -> ModelResponse: ...

    async def request_stream(
        self,
        messages: list[ModelMessage],
        model_settings: ModelSettings | None = None
    ) -> StreamedResponse: ...

class StreamedResponse:
    """
    Streamed model response for real-time processing.
    """
    async def __aiter__(self) -> AsyncIterator[ModelResponseStreamEvent]: ...

    async def get_final_response(self) -> ModelResponse: ...

class InstrumentedModel:
    """
    Model wrapper with OpenTelemetry instrumentation.
    """
    def __init__(
        self,
        model: Model,
        settings: InstrumentationSettings
    ): ...
```
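The consumption pattern for the streaming interface can be illustrated with a minimal, self-contained stand-in. The class below is not the library's `StreamedResponse` — it is a simplified sketch with plain strings in place of the real event and response types, showing how `async for` iteration and a final-response accessor fit together.

```python
import asyncio

class FakeStreamedResponse:
    """Illustrative stand-in: yields text deltas, then assembles the final response."""

    def __init__(self, chunks: list[str]):
        self._chunks = chunks
        self._seen: list[str] = []

    def __aiter__(self):
        return self._gen()

    async def _gen(self):
        for chunk in self._chunks:
            self._seen.append(chunk)
            yield chunk  # a real implementation yields ModelResponseStreamEvent objects

    async def get_final_response(self) -> str:
        # A real implementation returns a ModelResponse built from all events.
        return ''.join(self._seen)

async def consume(stream: FakeStreamedResponse) -> str:
    parts = []
    async for event in stream:  # mirrors iterating a StreamedResponse
        parts.append(event)
    return await stream.get_final_response()

result = asyncio.run(consume(FakeStreamedResponse(['Hel', 'lo'])))
print(result)  # Hello
```

The same shape applies to the real interface: iterate for incremental output, then call `get_final_response()` once the stream is exhausted.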

### Model Utilities

Helper functions for working with models and model names.

```python { .api }
def infer_model(model: Model | KnownModelName) -> Model:
    """
    Infer model instance from string name or return existing model.

    Parameters:
    - model: Model instance or known model name string

    Returns:
    Model instance ready for use
    """

def instrument_model(
    model: Model,
    settings: InstrumentationSettings | None = None
) -> InstrumentedModel:
    """
    Add OpenTelemetry instrumentation to a model.

    Parameters:
    - model: Model to instrument
    - settings: Instrumentation configuration

    Returns:
    Instrumented model wrapper
    """
```
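One plausible way name-based inference like `infer_model` can work is prefix dispatch: pass instances through unchanged, and map known name prefixes to a provider. The mapping and `StubModel` class below are illustrative stand-ins, not the library's actual implementation.

```python
class StubModel:
    """Illustrative stand-in for a concrete Model implementation."""

    def __init__(self, name: str):
        self._name = name

    def name(self) -> str:
        return self._name

# Hypothetical prefix table; the real library knows many more names.
_PREFIXES = {
    'gpt-': 'openai',
    'o1-': 'openai',
    'claude-': 'anthropic',
    'gemini-': 'google',
}

def infer_model_sketch(model):
    """Return the model unchanged, or build one from a known name string."""
    if isinstance(model, StubModel):
        return model
    for prefix, provider in _PREFIXES.items():
        if model.startswith(prefix):
            return StubModel(f'{provider}:{model}')
    raise ValueError(f'Unknown model name: {model!r}')

m = infer_model_sketch('gpt-4o')
print(m.name())  # openai:gpt-4o
```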

### Fallback Models

A model that automatically falls back to alternative models on failure.

```python { .api }
class FallbackModel:
    """
    Model that falls back to alternative models on failure.
    """
    def __init__(
        self,
        models: list[Model],
        *,
        max_retries: int = 3
    ):
        """
        Initialize fallback model.

        Parameters:
        - models: List of models to try in order
        - max_retries: Maximum retry attempts per model
        """
```
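The core fallback loop can be sketched in a few lines of plain Python: try each model in order, retrying each up to `max_retries` times before moving on. The two stub models and the `fallback_request` helper below are illustrative, not the library's internals.

```python
import asyncio

class AlwaysFails:
    """Stub model whose requests always raise."""
    async def request(self, messages):
        raise RuntimeError('provider unavailable')

class AlwaysSucceeds:
    """Stub model whose requests always succeed."""
    async def request(self, messages):
        return 'ok from stable'

async def fallback_request(models, messages, *, max_retries: int = 3):
    """Try each model in order, retrying each up to max_retries times."""
    last_error = None
    for model in models:
        for _ in range(max_retries):
            try:
                return await model.request(messages)
            except Exception as exc:
                last_error = exc  # remember why this model failed, try again
    raise RuntimeError('all models failed') from last_error

answer = asyncio.run(fallback_request([AlwaysFails(), AlwaysSucceeds()], ['hi']))
print(answer)  # ok from stable
```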

### Test Models

Models designed for testing and development.

```python { .api }
class TestModel:
    """
    Test model implementation for testing and development.
    """
    def __init__(
        self,
        *,
        custom_result_text: str | None = None,
        custom_result_tool_calls: list[ToolCallPart] | None = None,
        custom_result_structured: Any = None
    ):
        """
        Initialize test model with predefined responses.

        Parameters:
        - custom_result_text: Fixed text response
        - custom_result_tool_calls: Fixed tool calls to make
        - custom_result_structured: Fixed structured response
        """

class FunctionModel:
    """
    Function-based model for custom logic during testing.
    """
    def __init__(
        self,
        function: Callable[[list[ModelMessage]], ModelResponse | str],
        *,
        stream_function: Callable | None = None
    ):
        """
        Initialize function model.

        Parameters:
        - function: Function that processes messages and returns response
        - stream_function: Optional function for streaming responses
        """
```
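The function-based idea can be demonstrated without the library: a plain callable inspects the message history and decides the response, and a thin wrapper delegates to it. Everything below — the routing function, the wrapper class, and the string-based messages — is an illustrative stand-in for `FunctionModel`'s richer types.

```python
def routing_function(messages: list[str]) -> str:
    """Decide the response from the latest message; stands in for custom test logic."""
    last = messages[-1]
    if 'weather' in last.lower():
        return 'calling get_weather tool'
    return f'echo: {last}'

class FunctionBackedModel:
    """Illustrative wrapper that delegates every request to a callable."""

    def __init__(self, function):
        self._function = function

    def run(self, messages: list[str]) -> str:
        return self._function(messages)

model = FunctionBackedModel(routing_function)
print(model.run(['What is the weather in Paris?']))  # calling get_weather tool
print(model.run(['hello']))                          # echo: hello
```

This pattern lets a test assert on exactly which branch of agent logic a given input triggers, without any network calls.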

### Known Model Names

Type alias for all supported model name strings.

```python { .api }
KnownModelName = Literal[
    # OpenAI models
    'gpt-4o',
    'gpt-4o-mini',
    'gpt-4-turbo',
    'gpt-4',
    'gpt-3.5-turbo',
    'o1-preview',
    'o1-mini',

    # Anthropic models
    'claude-3-5-sonnet-20241022',
    'claude-3-5-haiku-20241022',
    'claude-3-opus-20240229',
    'claude-3-sonnet-20240229',
    'claude-3-haiku-20240307',

    # Google models
    'gemini-1.5-pro',
    'gemini-1.5-flash',
    'gemini-1.0-pro',

    # And many more...
]
```
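Because the alias is a `Literal` type, a name can be validated at runtime with `typing.get_args`. The alias below is a shortened stand-in for the full list, but the technique applies unchanged to the real `KnownModelName`.

```python
from typing import Literal, get_args

# Shortened stand-in for the full KnownModelName alias.
KnownModelNameSketch = Literal[
    'gpt-4o',
    'claude-3-5-sonnet-20241022',
    'gemini-1.5-pro',
]

def is_known_model(name: str) -> bool:
    """Check a name against the Literal's allowed values."""
    return name in get_args(KnownModelNameSketch)

print(is_known_model('gpt-4o'))       # True
print(is_known_model('not-a-model'))  # False
```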

## Usage Examples

### Basic Model Usage

```python
from pydantic_ai import Agent
from pydantic_ai.models import OpenAIModel, AnthropicModel

# Using OpenAI
openai_agent = Agent(
    model=OpenAIModel('gpt-4'),
    system_prompt='You are a helpful assistant.'
)

# Using Anthropic
anthropic_agent = Agent(
    model=AnthropicModel('claude-3-5-sonnet-20241022'),
    system_prompt='You are a helpful assistant.'
)

# Using a model name directly (auto-inferred)
agent = Agent(
    model='gpt-4',
    system_prompt='You are a helpful assistant.'
)
```

### Model with Custom Configuration

```python
from pydantic_ai import Agent
from pydantic_ai.models import OpenAIModel

# Custom OpenAI configuration
model = OpenAIModel(
    'gpt-4',
    api_key='your-api-key',
    base_url='https://custom-endpoint.com/v1',
    timeout=30.0
)

agent = Agent(model=model, system_prompt='Custom configured agent.')
```

### Fallback Model Configuration

```python
from pydantic_ai import Agent
from pydantic_ai.models import FallbackModel, OpenAIModel, AnthropicModel

# Create a fallback model that tries OpenAI first, then Anthropic
fallback_model = FallbackModel([
    OpenAIModel('gpt-4'),
    AnthropicModel('claude-3-5-sonnet-20241022')
])

agent = Agent(
    model=fallback_model,
    system_prompt='Resilient agent with fallback.'
)
```

### Testing with Test Models

```python
from pydantic_ai import Agent
from pydantic_ai.models import TestModel

# Test model with a fixed response
test_model = TestModel(custom_result_text='Fixed test response')

agent = Agent(model=test_model, system_prompt='Test agent.')
result = agent.run_sync('Any input')
print(result.data)  # "Fixed test response"
```