
# Mistral AI Python SDK

A comprehensive Python client SDK for interacting with the Mistral AI API. It provides access to chat completions, embeddings, fine-tuning, agents, file management, and other AI capabilities, with both synchronous and asynchronous support.

## Package Information

- **Package Name**: mistralai
- **Language**: Python
- **Installation**: `pip install mistralai`
- **Documentation**: https://docs.mistral.ai

## Core Imports

```python
from mistralai import Mistral
```

For specific components:

```python
from mistralai.models import (
    ChatCompletionRequest,
    ChatCompletionResponse,
    SystemMessage,
    UserMessage,
    AssistantMessage,
)
```

## Basic Usage

```python
from mistralai import Mistral
from mistralai.models import UserMessage

# Initialize client
client = Mistral(api_key="your-api-key")

# Simple chat completion
messages = [
    UserMessage(content="Hello, how are you?")
]

response = client.chat.complete(
    model="mistral-small-latest",
    messages=messages
)

print(response.choices[0].message.content)

# Async usage (inside an async function); async variants carry the
# `_async` suffix
async with Mistral(api_key="your-api-key") as client:
    response = await client.chat.complete_async(
        model="mistral-small-latest",
        messages=messages
    )
    print(response.choices[0].message.content)
```

## Architecture

The SDK is built around a main `Mistral` class that provides access to specialized sub-APIs:

- **Main SDK**: Central client with authentication, configuration, and sub-API access
- **Sub-APIs**: Focused modules for specific functionality (chat, agents, embeddings, etc.)
- **Models**: Comprehensive type system for requests, responses, and data structures
- **Utilities**: Serialization, HTTP handling, streaming, and retry logic
- **Context Management**: Automatic resource cleanup and connection management

The design enables both simple usage patterns and advanced customization while maintaining type safety and comprehensive error handling.

## Capabilities

### Chat Completions

Create text completions using Mistral's language models, with support for multi-turn conversations, function calling, streaming responses, and structured outputs.

```python { .api }
def complete(
    model: str,
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = None,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    response_format: Optional[ResponseFormat] = None,
    **kwargs
) -> ChatCompletionResponse: ...

def stream(
    model: str,
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    **kwargs
) -> Iterator[CompletionChunk]: ...
```

[Chat Completions](./chat-completions.md)
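
Function calling via the `tools` parameter takes a list of function tools, each a type tag plus a JSON-schema description of the arguments. A minimal sketch of building one as a plain dict (the `get_weather` name and its fields are illustrative, not part of the SDK):

```python
# A function tool for `tools=`: a "function" type tag plus a JSON-schema
# description of the parameters. Name and fields are illustrative.
def make_function_tool(name, description, properties, required):
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

weather_tool = make_function_tool(
    "get_weather",
    "Look up the current weather for a city.",
    {"city": {"type": "string", "description": "City name"}},
    ["city"],
)
print(weather_tool["function"]["name"])  # → get_weather
```

The resulting dict can be passed as one element of `tools=[...]` in `complete()`.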

### Agents

Generate completions using pre-configured AI agents with specialized tools and context.

```python { .api }
def complete(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = False,
    tools: Optional[List[Tool]] = None,
    tool_choice: Optional[Union[str, ToolChoice]] = None,
    **kwargs
) -> ChatCompletionResponse: ...

def stream(
    messages: List[Union[SystemMessage, UserMessage, AssistantMessage, ToolMessage]],
    agent_id: str,
    max_tokens: Optional[int] = None,
    stream: Optional[bool] = True,
    **kwargs
) -> Iterator[CompletionEvent]: ...
```

[Agents](./agents.md)

### Embeddings

Generate vector embeddings for text input with support for different models and output formats.

```python { .api }
def create(
    model: str,
    inputs: Union[str, List[str]],
    output_dimension: Optional[int] = None,
    output_dtype: Optional[str] = None,
    encoding_format: Optional[str] = None,
    **kwargs
) -> EmbeddingResponse: ...
```

[Embeddings](./embeddings.md)
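
Each returned embedding is a plain list of floats (`response.data[i].embedding`), so downstream similarity math needs no SDK support. A stdlib sketch of cosine similarity between two such vectors:

```python
# Cosine similarity between two embedding vectors: dot product divided
# by the product of the vector norms. Works on any equal-length float lists.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```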

### Models

List and manage available models, including base models and fine-tuned models.

```python { .api }
def list(**kwargs) -> ModelList: ...

def retrieve(model_id: str, **kwargs) -> Union[BaseModelCard, FTModelCard]: ...

def delete(model_id: str, **kwargs) -> DeleteModelOut: ...
```

[Models](./models.md)

### Files

Upload, manage, and process files for use with fine-tuning, agents, and other AI capabilities.

```python { .api }
def upload(
    file: Union[File, FileTypedDict],
    purpose: Optional[FilePurpose] = None,
    **kwargs
) -> UploadFileOut: ...

def list(
    page: Optional[int] = 0,
    page_size: Optional[int] = 100,
    sample_type: Optional[List[SampleType]] = None,
    source: Optional[List[Source]] = None,
    search: Optional[str] = None,
    purpose: Optional[FilePurpose] = None,
    **kwargs
) -> ListFilesOut: ...

def retrieve(file_id: str, **kwargs) -> RetrieveFileOut: ...

def delete(file_id: str, **kwargs) -> DeleteFileOut: ...

def download(file_id: str, **kwargs) -> httpx.Response: ...

def get_signed_url(
    file_id: str,
    expiry: Optional[int] = 24,
    **kwargs
) -> FileSignedURL: ...
```

[Files](./files.md)

### Fine-tuning

Create and manage fine-tuning jobs to customize models for specific use cases.

```python { .api }
def create(
    model: str,
    training_files: List[TrainingFile],
    validation_files: Optional[List[TrainingFile]] = None,
    hyperparameters: Optional[dict] = None,
    **kwargs
) -> CompletionDetailedJobOut: ...

def list(**kwargs) -> JobsOut: ...

def get(job_id: str, **kwargs) -> CompletionDetailedJobOut: ...

def cancel(job_id: str, **kwargs) -> CompletionDetailedJobOut: ...
```

[Fine-tuning](./fine-tuning.md)
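
Training files are uploaded (via the Files API) as JSONL, one example per line. A stdlib sketch of writing a chat-format file, assuming the common `{"messages": [...]}` line shape; check the fine-tuning guide for the exact schema:

```python
# Write fine-tuning examples as JSONL: one JSON object per line.
# The {"messages": [...]} shape here is an assumption; consult the
# fine-tuning documentation for the authoritative schema.
import json
import os
import tempfile

examples = [
    {"messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "4"},
    ]},
]

path = os.path.join(tempfile.gettempdir(), "train.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Each line round-trips back to a dict
with open(path, encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
print(len(lines))  # → 1
```

The resulting file would be passed to `files.upload(...)` and then referenced in `training_files` when creating the job.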

### Batch Processing

Submit and manage batch processing jobs for handling large volumes of requests efficiently.

```python { .api }
def create(
    input_files: List[str],
    endpoint: str,
    completion_window: str,
    **kwargs
) -> BatchJobOut: ...

def list(**kwargs) -> BatchJobsOut: ...

def get(batch_id: str, **kwargs) -> BatchJobOut: ...

def cancel(batch_id: str, **kwargs) -> BatchJobOut: ...
```

[Batch Processing](./batch.md)
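
Batch input files are JSONL, where each line pairs an id you choose with one request body so results can be matched back to requests. A stdlib sketch, assuming a `custom_id`/`body` line shape; see the batch guide for the exact schema:

```python
# Build one line of a batch input file: a caller-chosen custom_id plus
# a single request body. The custom_id/body shape is an assumption.
import json

def batch_line(custom_id, model, prompt):
    return json.dumps({
        "custom_id": custom_id,
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    })

line = batch_line("req-1", "mistral-small-latest", "Hello")
parsed = json.loads(line)
print(parsed["custom_id"])  # → req-1
```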

### Fill-in-the-Middle (FIM)

Generate code completions using fill-in-the-middle models for code editing and completion tasks.

```python { .api }
def complete(
    model: str,
    prompt: str,
    suffix: Optional[str] = None,
    temperature: Optional[float] = None,
    max_tokens: Optional[int] = None,
    **kwargs
) -> FIMCompletionResponse: ...
```

[Fill-in-the-Middle](./fim.md)
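
`prompt` is the text before the insertion point and `suffix` is the text after it; the model generates what goes in between. A sketch of deriving both halves from a code buffer and a cursor offset:

```python
# Split a buffer at the cursor: everything before becomes the FIM
# `prompt`, everything after becomes the `suffix`.
code = "def add(a, b):\n    return \n"
cursor = code.index("return ") + len("return ")

prompt, suffix = code[:cursor], code[cursor:]
print(repr(prompt))
print(repr(suffix))  # → '\n'
```

The two halves would then be passed as `complete(model=..., prompt=prompt, suffix=suffix)`.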

### OCR

Process documents and images to extract text and structured data using optical character recognition.

```python { .api }
def process(
    model: str,
    document: Document,
    pages: Optional[List[int]] = None,
    **kwargs
) -> OCRResponse: ...
```

[OCR](./ocr.md)
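
The `document` argument identifies the input to process; a URL-backed document can be written as a plain dict mirroring the `DocumentURLChunk` shape shown under Core Types. The URL below is illustrative, and the exact accepted keys are defined by the OCR docs:

```python
# A URL-backed OCR document as a plain dict; mirrors DocumentURLChunk.
# The example.com URL is a placeholder.
def document_url(url):
    return {"type": "document_url", "document_url": url}

doc = document_url("https://example.com/invoice.pdf")
print(doc["type"])  # → document_url
```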

### Content Classification

Moderate content and classify text using Mistral's safety and classification models.

```python { .api }
def moderate(
    inputs: List[Union[str, Dict]],
    model: Optional[str] = None,
    **kwargs
) -> ModerationResponse: ...

def classify(
    inputs: List[str],
    model: str,
    **kwargs
) -> ClassificationResponse: ...
```

[Classification](./classification.md)

### Audio Transcription

Transcribe audio files to text with support for various audio formats and streaming.

```python { .api }
def transcribe(
    file: Union[str, BinaryIO],
    model: str,
    language: Optional[str] = None,
    **kwargs
) -> TranscriptionResponse: ...

def transcribe_stream(
    stream: Iterator[bytes],
    model: str,
    **kwargs
) -> Iterator[TranscriptionStreamEvents]: ...
```

[Audio](./audio.md)

### Beta APIs

Experimental and preview APIs providing access to advanced features, including enhanced conversations, document libraries, and beta agent capabilities.

```python { .api }
# Conversations API
def start(
    inputs: Union[ConversationInputs, dict],
    instructions: Optional[str] = None,
    tools: Optional[List[Tool]] = None,
    **kwargs
) -> ConversationResponse: ...

def start_stream(
    inputs: Union[ConversationInputs, dict],
    **kwargs
) -> Iterator[ConversationEvents]: ...

# Libraries API
def create(
    name: str,
    description: Optional[str] = None,
    **kwargs
) -> LibraryOut: ...

def list(**kwargs) -> ListLibraryOut: ...

# Beta Agents API (enhanced agent management)
def create(
    name: str,
    model: str,
    **kwargs
) -> Agent: ...

def update_version(
    agent_id: str,
    version_data: dict,
    **kwargs
) -> Agent: ...
```

[Beta APIs](./beta.md)

## Core Types

### Main SDK Class

```python { .api }
class Mistral:
    def __init__(
        self,
        api_key: Optional[Union[str, Callable[[], str]]] = None,
        server_url: Optional[str] = None,
        client: Optional[HttpClient] = None,
        async_client: Optional[AsyncHttpClient] = None,
        retry_config: Optional[RetryConfig] = None,
        timeout_ms: Optional[int] = None,
        debug_logger: Optional[Logger] = None,
    ) -> None: ...

    def __enter__(self) -> "Mistral": ...
    def __exit__(self, exc_type, exc_val, exc_tb) -> None: ...
    async def __aenter__(self) -> "Mistral": ...
    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None: ...
```

### Message Types

```python { .api }
class SystemMessage:
    content: Union[str, List[SystemMessageContentChunk]]
    role: Optional[Literal["system"]] = "system"

class UserMessage:
    content: Optional[Union[str, List[ContentChunk]]]
    role: Optional[Literal["user"]] = "user"

class AssistantMessage:
    content: Optional[Union[str, List[ContentChunk]]]
    tool_calls: Optional[List[ToolCall]]
    prefix: Optional[bool] = False
    role: Optional[Literal["assistant"]] = "assistant"

class ToolMessage:
    content: Optional[Union[str, List[ContentChunk]]]
    tool_call_id: Optional[str]
    name: Optional[str]
    role: Optional[Literal["tool"]] = "tool"

# Content chunk types for multimodal support
ContentChunk = Union[
    TextChunk,
    ImageURLChunk,
    DocumentURLChunk,
    FileChunk,
    AudioChunk,
    ReferenceChunk,
    ThinkChunk
]

class TextChunk:
    type: Literal["text"]
    text: str

class ImageURLChunk:
    type: Literal["image_url"]
    image_url: str

class DocumentURLChunk:
    type: Literal["document_url"]
    document_url: str

class FileChunk:
    type: Literal["file"]
    file: str

class AudioChunk:
    type: Literal["input_audio"]
    input_audio: dict

class ReferenceChunk:
    type: Literal["reference"]
    reference: str

class ThinkChunk:
    type: Literal["thinking"]
    thinking: str
```
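
Chunk values can also be supplied as plain dicts that mirror the classes above, which is handy when building multimodal messages by hand. A sketch mixing text and an image URL in one user message (the URL is a placeholder):

```python
# A user message whose content is a list of chunk dicts mirroring
# TextChunk and ImageURLChunk above.
user_content = [
    {"type": "text", "text": "What is in this image?"},
    {"type": "image_url", "image_url": "https://example.com/cat.png"},
]

message = {"role": "user", "content": user_content}
print([chunk["type"] for chunk in message["content"]])  # → ['text', 'image_url']
```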

### Tool Types

```python { .api }
class FunctionTool:
    type: Literal["function"]
    function: Function

class CodeInterpreterTool:
    type: Literal["code_interpreter"]

class WebSearchTool:
    type: Literal["web_search"]

class WebSearchPremiumTool:
    type: Literal["web_search_premium"]

class DocumentLibraryTool:
    type: Literal["document_library"]
    document_library: dict

class ImageGenerationTool:
    type: Literal["image_generation"]
```

### Configuration Types

```python { .api }
class ResponseFormat:
    type: Literal["json_object", "text"]
    schema: Optional[dict]

class ToolChoice:
    type: Literal["auto", "none", "any"]
    function: Optional[FunctionName]

class RetryConfig:
    strategy: str
    backoff: dict
    retries: int
```
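
A `RetryConfig` with a backoff strategy retries failed requests with exponentially growing delays. A sketch of the usual delay computation (initial interval, exponent, and a cap; the SDK's exact jitter and cap behaviour may differ):

```python
# Exponential backoff: delay grows by `exponent` per attempt, capped
# at max_ms. Parameter values here are illustrative defaults.
def backoff_delay_ms(attempt, initial_ms=500, exponent=1.5, max_ms=60_000):
    return min(initial_ms * exponent ** attempt, max_ms)

print([backoff_delay_ms(i) for i in range(4)])  # → [500.0, 750.0, 1125.0, 1687.5]
```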

### Response Types

```python { .api }
class ChatCompletionResponse:
    id: str
    object: str
    created: int
    model: str
    choices: List[ChatCompletionChoice]
    usage: Optional[UsageInfo]

class EmbeddingResponse:
    id: str
    object: str
    data: List[EmbeddingResponseData]
    model: str
    usage: Optional[UsageInfo]

class UsageInfo:
    prompt_tokens: int
    completion_tokens: Optional[int]
    total_tokens: int
```