# Multi-Turn Conversations

Create and manage chat sessions for multi-turn conversations with automatic history management. Chat sessions maintain conversation context and provide a convenient interface for back-and-forth interactions with the model.

## Capabilities

### Create Chat Session

Create a new chat session with optional configuration and initial history. The chat session automatically manages conversation history across multiple turns.

```python { .api }
class Chats:
    """Factory for creating synchronous chat sessions."""

    def create(
        self,
        *,
        model: str,
        config: Optional[GenerateContentConfig] = None,
        history: Optional[list[Content]] = None
    ) -> Chat:
        """
        Create a new chat session.

        Parameters:
            model (str): Model identifier (e.g., 'gemini-2.0-flash', 'gemini-1.5-pro').
            config (GenerateContentConfig, optional): Default configuration for all messages
                in this chat. Includes system instructions, generation config, safety settings,
                tools, etc. Can be overridden per message.
            history (list[Content], optional): Initial conversation history. Each Content
                should have a role ('user' or 'model') and parts.

        Returns:
            Chat: New chat session instance for sending messages.
        """
        ...

class AsyncChats:
    """Factory for creating asynchronous chat sessions."""

    def create(
        self,
        *,
        model: str,
        config: Optional[GenerateContentConfig] = None,
        history: Optional[list[Content]] = None
    ) -> AsyncChat:
        """
        Create a new async chat session.

        Parameters:
            model (str): Model identifier.
            config (GenerateContentConfig, optional): Default configuration.
            history (list[Content], optional): Initial conversation history.

        Returns:
            AsyncChat: New async chat session instance.
        """
        ...
```

**Usage Example - Create Chat:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')

# Create chat session
chat = client.chats.create(model='gemini-2.0-flash')

# Send messages
response1 = chat.send_message('What is machine learning?')
print(response1.text)

response2 = chat.send_message('Can you give me an example?')
print(response2.text)

# History is automatically maintained
history = chat.get_history()
print(f"Conversation has {len(history)} messages")
```

**Usage Example - With Configuration:**

```python
from google.genai import Client
from google.genai.types import (
    GenerateContentConfig,
    SafetySetting,
    HarmCategory,
    HarmBlockThreshold
)

client = Client(api_key='YOUR_API_KEY')

config = GenerateContentConfig(
    system_instruction='You are a helpful coding assistant.',
    temperature=0.3,
    max_output_tokens=512,
    safety_settings=[
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        )
    ]
)

chat = client.chats.create(
    model='gemini-2.0-flash',
    config=config
)

response = chat.send_message('How do I sort a list in Python?')
print(response.text)
```

### Send Message

Send a message to the chat session and receive a response. The message and response are automatically added to the conversation history.

```python { .api }
class Chat:
    """Synchronous chat session for multi-turn conversations."""

    def send_message(
        self,
        message: Union[str, Part, Image, list[Part]],
        config: Optional[GenerateContentConfig] = None
    ) -> GenerateContentResponse:
        """
        Send a message to the chat and receive a response.

        Parameters:
            message (Union[str, Part, Image, list[Part]]): Message to send. Can be:
                - str: Simple text message
                - Part: Single part (text, inline_data, etc.)
                - Image: Image to send with implicit question
                - list[Part]: Multiple parts (e.g., text + image)
            config (GenerateContentConfig, optional): Configuration override for this
                message only. Merged with chat's default config.

        Returns:
            GenerateContentResponse: Model response containing generated content,
            usage metadata, and safety ratings.

        Raises:
            ClientError: For client errors (4xx status codes)
            ServerError: For server errors (5xx status codes)
        """
        ...

class AsyncChat:
    """Asynchronous chat session for multi-turn conversations."""

    async def send_message(
        self,
        message: Union[str, Part, Image, list[Part]],
        config: Optional[GenerateContentConfig] = None
    ) -> GenerateContentResponse:
        """
        Async version of send_message.

        Parameters:
            message (Union[str, Part, Image, list[Part]]): Message to send.
            config (GenerateContentConfig, optional): Configuration override.

        Returns:
            GenerateContentResponse: Model response.
        """
        ...
```

**Usage Example - Send Text Message:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')
chat = client.chats.create(model='gemini-2.0-flash')

# Send text messages
response1 = chat.send_message('Tell me about Python.')
print(response1.text)

response2 = chat.send_message('What are its main features?')
print(response2.text)
```

**Usage Example - Send Multimodal Message:**

```python
from google.genai import Client
from google.genai.types import Part, Image

client = Client(api_key='YOUR_API_KEY')
chat = client.chats.create(model='gemini-2.0-flash')

# Send image with question
image = Image.from_file('diagram.png')
message_parts = [
    Part(text='What does this diagram show?'),
    Part(inline_data=image.blob)
]

response = chat.send_message(message_parts)
print(response.text)

# Continue conversation
followup = chat.send_message('Can you explain it in more detail?')
print(followup.text)
```

### Send Message Streaming

Send a message and receive the response as a stream of chunks, allowing for real-time display of the model's output.

```python { .api }
class Chat:
    """Synchronous chat session for multi-turn conversations."""

    def send_message_stream(
        self,
        message: Union[str, Part, Image, list[Part]],
        config: Optional[GenerateContentConfig] = None
    ) -> Iterator[GenerateContentResponse]:
        """
        Send a message to the chat and receive a streaming response.

        Parameters:
            message (Union[str, Part, Image, list[Part]]): Message to send.
            config (GenerateContentConfig, optional): Configuration override for this message.

        Yields:
            GenerateContentResponse: Streaming response chunks as they are generated.

        Raises:
            ClientError: For client errors (4xx status codes)
            ServerError: For server errors (5xx status codes)
        """
        ...

class AsyncChat:
    """Asynchronous chat session for multi-turn conversations."""

    async def send_message_stream(
        self,
        message: Union[str, Part, Image, list[Part]],
        config: Optional[GenerateContentConfig] = None
    ) -> AsyncIterator[GenerateContentResponse]:
        """
        Async version of send_message_stream.

        Parameters:
            message (Union[str, Part, Image, list[Part]]): Message to send.
            config (GenerateContentConfig, optional): Configuration override.

        Yields:
            GenerateContentResponse: Streaming response chunks.
        """
        ...
```

**Usage Example - Streaming:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')
chat = client.chats.create(model='gemini-2.0-flash')

# First message
response1 = chat.send_message('What is a neural network?')
print(response1.text)

# Streaming follow-up
print('\nModel response (streaming):')
stream = chat.send_message_stream('Explain backpropagation.')
for chunk in stream:
    print(chunk.text, end='', flush=True)
print()  # New line
```

**Usage Example - Async Streaming:**

```python
import asyncio
from google.genai import Client

async def main():
    client = Client(api_key='YOUR_API_KEY')
    chat = client.aio.chats.create(model='gemini-2.0-flash')

    # First message
    response = await chat.send_message('Hello!')
    print(response.text)

    # Streaming follow-up
    print('\nStreaming response:')
    stream = await chat.send_message_stream('Tell me about AI.')
    async for chunk in stream:
        print(chunk.text, end='', flush=True)
    print()

asyncio.run(main())
```

### Get History

Retrieve the conversation history for the chat session.

```python { .api }
class Chat:
    """Synchronous chat session for multi-turn conversations."""

    def get_history(self, curated: bool = False) -> list[Content]:
        """
        Get the chat conversation history.

        Parameters:
            curated (bool): If True, returns only user and model messages, filtering out
                intermediate function calls and responses. If False, returns complete
                history including all function calling interactions. Defaults to False.

        Returns:
            list[Content]: List of Content objects representing the conversation history
            in chronological order. Each Content has a role ('user' or 'model') and
            list of parts.
        """
        ...

class AsyncChat:
    """Asynchronous chat session for multi-turn conversations."""

    def get_history(self, curated: bool = False) -> list[Content]:
        """
        Get the chat conversation history.

        Note: This is a synchronous method even in AsyncChat as it doesn't require I/O.

        Parameters:
            curated (bool): If True, returns curated history without function calls.

        Returns:
            list[Content]: Conversation history.
        """
        ...
```

**Usage Example - Get History:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')
chat = client.chats.create(model='gemini-2.0-flash')

# Have a conversation
chat.send_message('What is Python?')
chat.send_message('What are its uses?')
chat.send_message('Is it beginner-friendly?')

# Get full history
history = chat.get_history()
print(f"Total messages: {len(history)}")

for i, content in enumerate(history):
    role = content.role or 'unknown'
    text = content.parts[0].text if content.parts else ''
    print(f"{i+1}. {role}: {text[:50]}...")

# Get curated history (without function call details)
curated_history = chat.get_history(curated=True)
print(f"Curated messages: {len(curated_history)}")
```

## Types

```python { .api }
from typing import Optional, Union, List, Iterator, AsyncIterator, Any
from enum import Enum

# Core types (shared with content-generation.md)
class Content:
    """
    Container for conversation content with role and parts.

    Attributes:
        parts (list[Part]): List of content parts
        role (str, optional): Role ('user' or 'model')
    """
    parts: list[Part]
    role: Optional[str] = None

class Part:
    """
    Individual content part.

    Attributes:
        text (str, optional): Text content
        inline_data (Blob, optional): Inline binary data
        file_data (FileData, optional): Reference to uploaded file
        function_call (FunctionCall, optional): Function call from model
        function_response (FunctionResponse, optional): Function execution result
        executable_code (ExecutableCode, optional): Executable code
        code_execution_result (CodeExecutionResult, optional): Code execution output
    """
    text: Optional[str] = None
    inline_data: Optional[Blob] = None
    file_data: Optional[FileData] = None
    function_call: Optional[FunctionCall] = None
    function_response: Optional[FunctionResponse] = None
    executable_code: Optional[ExecutableCode] = None
    code_execution_result: Optional[CodeExecutionResult] = None

class Blob:
    """
    Binary data with MIME type.

    Attributes:
        mime_type (str): MIME type
        data (bytes): Binary data
    """
    mime_type: str
    data: bytes

class FileData:
    """
    Reference to uploaded file.

    Attributes:
        file_uri (str): URI of uploaded file
        mime_type (str): MIME type
    """
    file_uri: str
    mime_type: str

class Image:
    """Image data supporting multiple input formats."""
    pass

class GenerateContentConfig:
    """
    Configuration for content generation.

    Attributes:
        system_instruction (Union[str, Content], optional): System instructions
        contents (Union[str, list[Content], Content], optional): Override contents
        generation_config (GenerationConfig, optional): Generation parameters
        safety_settings (list[SafetySetting], optional): Safety filtering
        tools (list[Tool], optional): Function declarations
        tool_config (ToolConfig, optional): Function calling config
        cached_content (str, optional): Cached content reference
    """
    system_instruction: Optional[Union[str, Content]] = None
    contents: Optional[Union[str, list[Content], Content]] = None
    generation_config: Optional[GenerationConfig] = None
    safety_settings: Optional[list[SafetySetting]] = None
    tools: Optional[list[Tool]] = None
    tool_config: Optional[ToolConfig] = None
    cached_content: Optional[str] = None

class GenerationConfig:
    """
    Core generation parameters.

    Attributes:
        temperature (float, optional): Sampling temperature (0.0-2.0)
        top_p (float, optional): Nucleus sampling (0.0-1.0)
        top_k (int, optional): Top-k sampling
        max_output_tokens (int, optional): Maximum response tokens
        stop_sequences (list[str], optional): Stop sequences
    """
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    top_k: Optional[int] = None
    max_output_tokens: Optional[int] = None
    stop_sequences: Optional[list[str]] = None

class GenerateContentResponse:
    """
    Response from content generation.

    Attributes:
        text (str): Text from first candidate
        candidates (list[Candidate]): Generated candidates
        usage_metadata (GenerateContentResponseUsageMetadata, optional): Usage stats
        prompt_feedback (GenerateContentResponsePromptFeedback, optional): Prompt feedback
        model_version (str, optional): Model version
    """
    text: str
    candidates: list[Candidate]
    usage_metadata: Optional[GenerateContentResponseUsageMetadata] = None
    prompt_feedback: Optional[GenerateContentResponsePromptFeedback] = None
    model_version: Optional[str] = None

class Candidate:
    """
    Generated candidate.

    Attributes:
        content (Content): Generated content
        finish_reason (FinishReason, optional): Reason generation stopped
        safety_ratings (list[SafetyRating], optional): Safety ratings
        citation_metadata (CitationMetadata, optional): Citations
        grounding_metadata (GroundingMetadata, optional): Grounding attribution
    """
    content: Content
    finish_reason: Optional[FinishReason] = None
    safety_ratings: Optional[list[SafetyRating]] = None
    citation_metadata: Optional[CitationMetadata] = None
    grounding_metadata: Optional[GroundingMetadata] = None

class GenerateContentResponseUsageMetadata:
    """
    Token usage statistics.

    Attributes:
        prompt_token_count (int): Prompt tokens
        candidates_token_count (int): Generated tokens
        total_token_count (int): Total tokens
        cached_content_token_count (int, optional): Cached tokens
    """
    prompt_token_count: int
    candidates_token_count: int
    total_token_count: int
    cached_content_token_count: Optional[int] = None

class GenerateContentResponsePromptFeedback:
    """
    Prompt feedback.

    Attributes:
        block_reason (BlockedReason, optional): Block reason
        safety_ratings (list[SafetyRating], optional): Safety ratings
    """
    block_reason: Optional[BlockedReason] = None
    safety_ratings: Optional[list[SafetyRating]] = None

class SafetySetting:
    """
    Safety filter configuration.

    Attributes:
        category (HarmCategory): Harm category
        threshold (HarmBlockThreshold): Blocking threshold
    """
    category: HarmCategory
    threshold: HarmBlockThreshold

class SafetyRating:
    """
    Safety rating.

    Attributes:
        category (HarmCategory): Harm category
        probability (HarmProbability): Harm probability
        blocked (bool): Whether blocked
    """
    category: HarmCategory
    probability: HarmProbability
    blocked: bool

class Tool:
    """
    Tool with function declarations.

    Attributes:
        function_declarations (list[FunctionDeclaration], optional): Functions
        google_search (GoogleSearch, optional): Google Search tool
        code_execution (ToolCodeExecution, optional): Code execution tool
    """
    function_declarations: Optional[list[FunctionDeclaration]] = None
    google_search: Optional[GoogleSearch] = None
    code_execution: Optional[ToolCodeExecution] = None

class FunctionDeclaration:
    """
    Function definition.

    Attributes:
        name (str): Function name
        description (str): Function description
        parameters (Schema, optional): Parameter schema
    """
    name: str
    description: str
    parameters: Optional[Schema] = None

class FunctionCall:
    """
    Function call from model.

    Attributes:
        name (str): Function name
        args (dict[str, Any]): Arguments
        id (str, optional): Call ID
    """
    name: str
    args: dict[str, Any]
    id: Optional[str] = None

class FunctionResponse:
    """
    Function execution result.

    Attributes:
        name (str): Function name
        response (dict[str, Any]): Return value
        id (str, optional): Call ID
    """
    name: str
    response: dict[str, Any]
    id: Optional[str] = None

class ToolConfig:
    """Function calling configuration."""
    function_calling_config: Optional[FunctionCallingConfig] = None

class FunctionCallingConfig:
    """
    Function calling mode.

    Attributes:
        mode (FunctionCallingConfigMode): Calling mode
        allowed_function_names (list[str], optional): Allowed functions
    """
    mode: FunctionCallingConfigMode
    allowed_function_names: Optional[list[str]] = None

class FunctionCallingConfigMode(Enum):
    """Function calling modes."""
    MODE_UNSPECIFIED = 'MODE_UNSPECIFIED'
    AUTO = 'AUTO'
    ANY = 'ANY'
    NONE = 'NONE'

class Schema:
    """JSON schema for parameters."""
    type: Type
    properties: Optional[dict[str, Schema]] = None
    required: Optional[list[str]] = None

class Type(Enum):
    """JSON schema types."""
    TYPE_UNSPECIFIED = 'TYPE_UNSPECIFIED'
    STRING = 'STRING'
    NUMBER = 'NUMBER'
    INTEGER = 'INTEGER'
    BOOLEAN = 'BOOLEAN'
    ARRAY = 'ARRAY'
    OBJECT = 'OBJECT'

class FinishReason(Enum):
    """Finish reasons."""
    FINISH_REASON_UNSPECIFIED = 'FINISH_REASON_UNSPECIFIED'
    STOP = 'STOP'
    MAX_TOKENS = 'MAX_TOKENS'
    SAFETY = 'SAFETY'
    RECITATION = 'RECITATION'

class BlockedReason(Enum):
    """Blocked reasons."""
    BLOCKED_REASON_UNSPECIFIED = 'BLOCKED_REASON_UNSPECIFIED'
    SAFETY = 'SAFETY'
    OTHER = 'OTHER'

class HarmCategory(Enum):
    """Harm categories."""
    HARM_CATEGORY_UNSPECIFIED = 'HARM_CATEGORY_UNSPECIFIED'
    HARM_CATEGORY_HARASSMENT = 'HARM_CATEGORY_HARASSMENT'
    HARM_CATEGORY_HATE_SPEECH = 'HARM_CATEGORY_HATE_SPEECH'
    HARM_CATEGORY_SEXUALLY_EXPLICIT = 'HARM_CATEGORY_SEXUALLY_EXPLICIT'
    HARM_CATEGORY_DANGEROUS_CONTENT = 'HARM_CATEGORY_DANGEROUS_CONTENT'

class HarmBlockThreshold(Enum):
    """Block thresholds."""
    HARM_BLOCK_THRESHOLD_UNSPECIFIED = 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    BLOCK_LOW_AND_ABOVE = 'BLOCK_LOW_AND_ABOVE'
    BLOCK_MEDIUM_AND_ABOVE = 'BLOCK_MEDIUM_AND_ABOVE'
    BLOCK_ONLY_HIGH = 'BLOCK_ONLY_HIGH'
    BLOCK_NONE = 'BLOCK_NONE'

class HarmProbability(Enum):
    """Harm probabilities."""
    HARM_PROBABILITY_UNSPECIFIED = 'HARM_PROBABILITY_UNSPECIFIED'
    NEGLIGIBLE = 'NEGLIGIBLE'
    LOW = 'LOW'
    MEDIUM = 'MEDIUM'
    HIGH = 'HIGH'

class CitationMetadata:
    """Citation information."""
    citations: list[Citation]

class Citation:
    """Individual citation."""
    start_index: int
    end_index: int
    uri: str

class GroundingMetadata:
    """Grounding attribution."""
    grounding_chunks: Optional[list[GroundingChunk]] = None

class GroundingChunk:
    """Grounding chunk."""
    pass

class GoogleSearch:
    """Google Search tool."""
    pass

class ToolCodeExecution:
    """Code execution tool."""
    pass

class ExecutableCode:
    """Executable code from model."""
    language: Language
    code: str

class Language(Enum):
    """Programming languages."""
    LANGUAGE_UNSPECIFIED = 'LANGUAGE_UNSPECIFIED'
    PYTHON = 'PYTHON'

class CodeExecutionResult:
    """Code execution result."""
    outcome: Outcome
    output: str

class Outcome(Enum):
    """Execution outcomes."""
    OUTCOME_UNSPECIFIED = 'OUTCOME_UNSPECIFIED'
    OUTCOME_OK = 'OUTCOME_OK'
    OUTCOME_FAILED = 'OUTCOME_FAILED'
```
