# Content Generation

Generate text and multimodal content using Gemini models with support for streaming, function calling, structured output, safety controls, and extensive configuration options. Content generation is the primary capability for creating AI-generated responses from text, image, audio, video, and document inputs.

## Capabilities

### Generate Content

Generate content synchronously from text and multimodal inputs. Supports single or multiple content inputs, system instructions, function calling, structured output, and safety settings.

```python { .api }
def generate_content(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> GenerateContentResponse:
    """
    Generate content from the model.

    Parameters:
        model (str): Model identifier (e.g., 'gemini-2.0-flash', 'gemini-1.5-pro').
        contents (Union[str, list[Content], Content]): Input content. Can be:
            - str: Simple text prompt
            - Content: Single content object with role and parts
            - list[Content]: Multiple content objects for conversation history
        config (GenerateContentConfig, optional): Generation configuration including:
            - system_instruction: System-level instructions for the model
            - temperature, top_p, max_output_tokens, and other sampling parameters
            - safety_settings: Content safety filtering configuration
            - tools: Function declarations for function calling
            - tool_config: Function calling behavior configuration
            - cached_content: Reference to cached content for efficiency

    Returns:
        GenerateContentResponse: Response containing generated content, usage metadata,
        and safety ratings.

    Raises:
        ClientError: For client errors (4xx status codes)
        ServerError: For server errors (5xx status codes)
    """
    ...

async def generate_content(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> GenerateContentResponse:
    """Async version of generate_content."""
    ...
```

**Usage Example - Simple Text Generation:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='Explain quantum computing in simple terms.'
)

print(response.text)
```

**Usage Example - With Configuration:**

```python
from google.genai import Client
from google.genai.types import GenerateContentConfig

client = Client(api_key='YOUR_API_KEY')

config = GenerateContentConfig(
    system_instruction='You are a helpful physics tutor.',
    temperature=0.7,
    top_p=0.95,
    max_output_tokens=1024,
    stop_sequences=['END']
)

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='Explain relativity',
    config=config
)

print(response.text)
```

**Usage Example - Multimodal Input:**

```python
from google.genai import Client
from google.genai.types import Content, Part, Image

client = Client(api_key='YOUR_API_KEY')

# Create multimodal content
image = Image.from_file('photo.jpg')
content = Content(
    parts=[
        Part(text='What is in this image?'),
        Part(inline_data=image.blob)
    ]
)

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents=content
)

print(response.text)
```
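The `contents` parameter also accepts a list for multi-turn conversation history, as noted in the docstring above. The sketch below assumes the SDK's dict form of `Content` (which mirrors the typed objects); the actual API call is shown commented out since it requires a live key.

```python
# Multi-turn conversation history passed as a list.
# Each dict mirrors the Content/Part shape; typed Content objects work the same way.
history = [
    {'role': 'user', 'parts': [{'text': 'What is the capital of France?'}]},
    {'role': 'model', 'parts': [{'text': 'The capital of France is Paris.'}]},
    {'role': 'user', 'parts': [{'text': 'Roughly how many people live there?'}]},
]

# The final turn should come from the user; the model answers with the
# earlier turns as context:
#
# response = client.models.generate_content(
#     model='gemini-2.0-flash',
#     contents=history
# )
```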

### Generate Content Streaming

Generate content with streaming responses, allowing you to receive and process chunks as they are generated rather than waiting for the complete response.

```python { .api }
def generate_content_stream(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> Iterator[GenerateContentResponse]:
    """
    Generate content in streaming mode, yielding response chunks as they are generated.

    Parameters:
        model (str): Model identifier (e.g., 'gemini-2.0-flash', 'gemini-1.5-pro').
        contents (Union[str, list[Content], Content]): Input content.
        config (GenerateContentConfig, optional): Generation configuration.

    Yields:
        GenerateContentResponse: Streaming response chunks. Each chunk contains:
            - Partial or complete candidates with generated content
            - Incremental usage metadata
            - Safety ratings

    Raises:
        ClientError: For client errors (4xx status codes)
        ServerError: For server errors (5xx status codes)
    """
    ...

async def generate_content_stream(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> AsyncIterator[GenerateContentResponse]:
    """Async version of generate_content_stream."""
    ...
```

**Usage Example - Streaming:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')

stream = client.models.generate_content_stream(
    model='gemini-2.0-flash',
    contents='Write a short story about a robot.'
)

for chunk in stream:
    print(chunk.text, end='', flush=True)
print()  # New line after streaming completes
```
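A common follow-up is collecting the streamed chunks into the full response text. The pattern below is a sketch: `chunk_texts` is a hard-coded stand-in for the `chunk.text` values the stream yields, so the accumulation logic itself is runnable as-is.

```python
# Stand-in for the chunk.text values yielded by generate_content_stream
chunk_texts = ['Once upon a time, ', 'a small robot ', 'learned to dream.']

# Display incrementally while accumulating, then keep the full text
parts = []
for text in chunk_texts:
    print(text, end='', flush=True)
    parts.append(text)
print()

full_text = ''.join(parts)
```

With a real stream, replace the list with `for chunk in stream:` and append `chunk.text`.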

**Usage Example - Async Streaming:**

```python
import asyncio
from google.genai import Client

async def main():
    client = Client(api_key='YOUR_API_KEY')

    stream = await client.aio.models.generate_content_stream(
        model='gemini-2.0-flash',
        contents='Explain neural networks.'
    )

    async for chunk in stream:
        print(chunk.text, end='', flush=True)
    print()

asyncio.run(main())
```

### Function Calling

Enable the model to call functions you define, allowing it to access external tools, APIs, and data sources. The model generates function calls based on the conversation context, and you execute them and return results.

**Usage Example - Function Calling:**

```python
from google.genai import Client
from google.genai.types import (
    GenerateContentConfig,
    Tool,
    FunctionDeclaration,
    Schema,
    Type,
    FunctionResponse,
    Content,
    Part
)

client = Client(api_key='YOUR_API_KEY')

# Define function declarations
get_weather = FunctionDeclaration(
    name='get_weather',
    description='Get the current weather for a location',
    parameters=Schema(
        type=Type.OBJECT,
        properties={
            'location': Schema(type=Type.STRING, description='City name'),
            'unit': Schema(
                type=Type.STRING,
                enum=['celsius', 'fahrenheit'],
                description='Temperature unit'
            )
        },
        required=['location']
    )
)

config = GenerateContentConfig(
    tools=[Tool(function_declarations=[get_weather])]
)

# Initial request
response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='What is the weather in Tokyo?',
    config=config
)

# Check if model wants to call function
if response.candidates[0].content.parts[0].function_call:
    function_call = response.candidates[0].content.parts[0].function_call

    # Execute function (example)
    if function_call.name == 'get_weather':
        weather_data = {'temperature': 22, 'condition': 'sunny'}

        # Send function response back
        function_response = FunctionResponse(
            name='get_weather',
            response=weather_data
        )

        # Continue conversation with function result
        response2 = client.models.generate_content(
            model='gemini-2.0-flash',
            contents=[
                response.candidates[0].content,
                Content(parts=[Part(function_response=function_response)])
            ],
            config=config
        )

        print(response2.text)
```
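When several functions are declared, a dispatch table keeps the execution step tidy. This is a sketch in plain Python: `call_name` and `call_args` are stand-ins for `function_call.name` and `function_call.args` from a real response, and `get_weather` is a stub handler.

```python
def get_weather(location, unit='celsius'):
    # Stub handler; a real implementation would query a weather service
    return {'temperature': 22, 'condition': 'sunny', 'unit': unit}

# Map declared function names to local Python callables
handlers = {'get_weather': get_weather}

# Stand-ins for function_call.name and function_call.args
call_name = 'get_weather'
call_args = {'location': 'Tokyo'}

# The model's args dict unpacks directly into the handler's keyword arguments
result = handlers[call_name](**call_args)
```

This scales to many declarations without an `if`/`elif` chain, and a `handlers.get(call_name)` lookup makes unknown names easy to reject.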

### Structured Output

Generate structured output conforming to a JSON schema, ensuring the model's response follows a specific format.

**Usage Example - Structured Output:**

```python
import json

from google.genai import Client
from google.genai.types import (
    GenerateContentConfig,
    Schema,
    Type
)

client = Client(api_key='YOUR_API_KEY')

# Define output schema
recipe_schema = Schema(
    type=Type.OBJECT,
    properties={
        'recipe_name': Schema(type=Type.STRING),
        'ingredients': Schema(
            type=Type.ARRAY,
            items=Schema(type=Type.STRING)
        ),
        'steps': Schema(
            type=Type.ARRAY,
            items=Schema(type=Type.STRING)
        ),
        'prep_time_minutes': Schema(type=Type.INTEGER)
    },
    required=['recipe_name', 'ingredients', 'steps']
)

config = GenerateContentConfig(
    response_mime_type='application/json',
    response_schema=recipe_schema
)

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='Give me a recipe for chocolate chip cookies',
    config=config
)

recipe = json.loads(response.text)
print(recipe['recipe_name'])
print(f"Prep time: {recipe['prep_time_minutes']} minutes")
```
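Even with a `response_schema`, it is prudent to validate the parsed result before use. The sketch below needs only the standard library; `sample_text` is a hard-coded stand-in for `response.text`, and the required-key list mirrors the schema above.

```python
import json

# Stand-in for response.text from a structured-output request
sample_text = (
    '{"recipe_name": "Chocolate Chip Cookies",'
    ' "ingredients": ["flour", "butter", "sugar", "chocolate chips"],'
    ' "steps": ["Mix ingredients", "Bake at 180C for 12 minutes"],'
    ' "prep_time_minutes": 15}'
)

recipe = json.loads(sample_text)

# Check the keys the schema marked as required before using the result
required = ['recipe_name', 'ingredients', 'steps']
missing = [key for key in required if key not in recipe]
if missing:
    raise ValueError(f'Response missing required keys: {missing}')
```

`json.loads` raises `json.JSONDecodeError` on malformed text, so wrapping the parse in `try`/`except` is a reasonable extra guard.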

### Safety Settings

Configure content safety filtering to control what types of content are blocked or allowed in both inputs and outputs.

**Usage Example - Safety Settings:**

```python
from google.genai import Client
from google.genai.types import (
    GenerateContentConfig,
    SafetySetting,
    HarmCategory,
    HarmBlockThreshold
)

client = Client(api_key='YOUR_API_KEY')

config = GenerateContentConfig(
    safety_settings=[
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HARASSMENT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        ),
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        ),
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        ),
        SafetySetting(
            category=HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
            threshold=HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE
        )
    ]
)

response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='Your prompt here',
    config=config
)

# Check safety ratings
for rating in response.candidates[0].safety_ratings:
    print(f"{rating.category}: {rating.probability}")
```

## Types

```python { .api }
from typing import Optional, Union, List, Sequence, Any, Dict, Iterator, AsyncIterator
from enum import Enum

# Core content types
class Content:
    """
    Container for conversation content with role and parts.

    Attributes:
        parts (list[Part]): List of content parts (text, images, function calls, etc.)
        role (str, optional): Role of the content creator ('user' or 'model')
    """
    parts: list[Part]
    role: Optional[str] = None

class Part:
    """
    Individual content part within a Content object.

    Attributes:
        text (str, optional): Text content
        inline_data (Blob, optional): Inline binary data (images, audio, etc.)
        file_data (FileData, optional): Reference to uploaded file
        function_call (FunctionCall, optional): Function call from model
        function_response (FunctionResponse, optional): Function execution result
        executable_code (ExecutableCode, optional): Executable code from model
        code_execution_result (CodeExecutionResult, optional): Code execution output
    """
    text: Optional[str] = None
    inline_data: Optional[Blob] = None
    file_data: Optional[FileData] = None
    function_call: Optional[FunctionCall] = None
    function_response: Optional[FunctionResponse] = None
    executable_code: Optional[ExecutableCode] = None
    code_execution_result: Optional[CodeExecutionResult] = None

class Blob:
    """
    Binary data with MIME type.

    Attributes:
        mime_type (str): MIME type (e.g., 'image/jpeg', 'audio/wav')
        data (bytes): Binary data
    """
    mime_type: str
    data: bytes

class FileData:
    """
    Reference to an uploaded file.

    Attributes:
        file_uri (str): URI of the uploaded file (e.g., 'gs://bucket/file')
        mime_type (str): MIME type of the file
    """
    file_uri: str
    mime_type: str

class Image:
    """
    Image data supporting multiple input formats.

    Can be created from:
    - URL: Image.from_url('https://...')
    - File path: Image.from_file('path/to/image.jpg')
    - Bytes: Image.from_bytes(data, mime_type='image/jpeg')
    - PIL Image: Image.from_pil(pil_image)
    - FileData: Image(file_data=FileData(...))
    """
    pass

class Video:
    """
    Video data supporting multiple input formats.

    Can be created from:
    - URL: Video.from_url('https://...')
    - File path: Video.from_file('path/to/video.mp4')
    - Bytes: Video.from_bytes(data, mime_type='video/mp4')
    - FileData: Video(file_data=FileData(...))
    """
    pass

# Generation configuration
class GenerateContentConfig:
    """
    Configuration for content generation.

    Attributes:
        system_instruction (Union[str, Content], optional): System-level instructions
        contents (Union[str, list[Content], Content], optional): Override input contents
        temperature (float, optional): Sampling temperature (0.0-2.0). Higher = more random.
        top_p (float, optional): Nucleus sampling threshold (0.0-1.0)
        top_k (float, optional): Top-k sampling parameter
        candidate_count (int, optional): Number of response candidates to generate
        max_output_tokens (int, optional): Maximum tokens in generated response
        stop_sequences (list[str], optional): Sequences that stop generation
        response_mime_type (str, optional): MIME type for structured output ('application/json')
        response_schema (Schema, optional): JSON schema for structured output
        presence_penalty (float, optional): Penalty for token presence (-2.0 to 2.0)
        frequency_penalty (float, optional): Penalty for token frequency (-2.0 to 2.0)
        response_logprobs (bool, optional): Include log probabilities in response
        logprobs (int, optional): Number of top logprobs to return per token
        safety_settings (list[SafetySetting], optional): Safety filtering configuration
        tools (list[Tool], optional): Function declarations for function calling
        tool_config (ToolConfig, optional): Function calling behavior configuration
        cached_content (str, optional): Reference to cached content by name
    """
    system_instruction: Optional[Union[str, Content]] = None
    contents: Optional[Union[str, list[Content], Content]] = None
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    top_k: Optional[float] = None
    candidate_count: Optional[int] = None
    max_output_tokens: Optional[int] = None
    stop_sequences: Optional[list[str]] = None
    response_mime_type: Optional[str] = None
    response_schema: Optional[Schema] = None
    presence_penalty: Optional[float] = None
    frequency_penalty: Optional[float] = None
    response_logprobs: Optional[bool] = None
    logprobs: Optional[int] = None
    safety_settings: Optional[list[SafetySetting]] = None
    tools: Optional[list[Tool]] = None
    tool_config: Optional[ToolConfig] = None
    cached_content: Optional[str] = None

class GenerationConfig:
    """
    NOTE: This type is not used directly. Generation parameters are passed directly
    to GenerateContentConfig, not as a nested GenerationConfig object.

    Core generation parameters controlling model behavior.

    Attributes:
        temperature (float, optional): Sampling temperature (0.0-2.0). Higher = more random.
        top_p (float, optional): Nucleus sampling threshold (0.0-1.0)
        top_k (int, optional): Top-k sampling parameter
        candidate_count (int, optional): Number of response candidates to generate
        max_output_tokens (int, optional): Maximum tokens in generated response
        stop_sequences (list[str], optional): Sequences that stop generation
        response_mime_type (str, optional): MIME type for structured output ('application/json')
        response_schema (Schema, optional): JSON schema for structured output
        presence_penalty (float, optional): Penalty for token presence (-2.0 to 2.0)
        frequency_penalty (float, optional): Penalty for token frequency (-2.0 to 2.0)
        response_logprobs (bool, optional): Include log probabilities in response
        logprobs (int, optional): Number of top logprobs to return per token
    """
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    top_k: Optional[int] = None
    candidate_count: Optional[int] = None
    max_output_tokens: Optional[int] = None
    stop_sequences: Optional[list[str]] = None
    response_mime_type: Optional[str] = None
    response_schema: Optional[Schema] = None
    presence_penalty: Optional[float] = None
    frequency_penalty: Optional[float] = None
    response_logprobs: Optional[bool] = None
    logprobs: Optional[int] = None

# Response types
class GenerateContentResponse:
    """
    Response from content generation.

    Attributes:
        text (str): Convenience property returning text from first candidate
        candidates (list[Candidate]): Generated candidates with content and metadata
        usage_metadata (GenerateContentResponseUsageMetadata, optional): Token usage stats
        prompt_feedback (GenerateContentResponsePromptFeedback, optional): Prompt feedback
        model_version (str, optional): Model version used for generation
    """
    text: str
    candidates: list[Candidate]
    usage_metadata: Optional[GenerateContentResponseUsageMetadata] = None
    prompt_feedback: Optional[GenerateContentResponsePromptFeedback] = None
    model_version: Optional[str] = None

class Candidate:
    """
    Generated candidate with content and metadata.

    Attributes:
        content (Content): Generated content
        finish_reason (FinishReason, optional): Reason generation stopped
        safety_ratings (list[SafetyRating], optional): Safety ratings for content
        citation_metadata (CitationMetadata, optional): Citation information
        grounding_metadata (GroundingMetadata, optional): Grounding attribution
        token_count (int, optional): Number of tokens in candidate
        index (int, optional): Candidate index
        logprobs_result (LogprobsResult, optional): Log probabilities
    """
    content: Content
    finish_reason: Optional[FinishReason] = None
    safety_ratings: Optional[list[SafetyRating]] = None
    citation_metadata: Optional[CitationMetadata] = None
    grounding_metadata: Optional[GroundingMetadata] = None
    token_count: Optional[int] = None
    index: Optional[int] = None
    logprobs_result: Optional[LogprobsResult] = None

class GenerateContentResponseUsageMetadata:
    """
    Token usage statistics.

    Attributes:
        prompt_token_count (int): Tokens in the prompt
        candidates_token_count (int): Tokens in generated candidates
        total_token_count (int): Total tokens (prompt + candidates)
        cached_content_token_count (int, optional): Tokens from cached content
    """
    prompt_token_count: int
    candidates_token_count: int
    total_token_count: int
    cached_content_token_count: Optional[int] = None

class GenerateContentResponsePromptFeedback:
    """
    Feedback about the prompt.

    Attributes:
        block_reason (BlockedReason, optional): Reason prompt was blocked
        safety_ratings (list[SafetyRating], optional): Safety ratings for prompt
    """
    block_reason: Optional[BlockedReason] = None
    safety_ratings: Optional[list[SafetyRating]] = None

# Safety types
class SafetySetting:
    """
    Safety filter configuration.

    Attributes:
        category (HarmCategory): Harm category to configure
        threshold (HarmBlockThreshold): Blocking threshold for this category
        method (HarmBlockMethod, optional): Block based on probability or severity
    """
    category: HarmCategory
    threshold: HarmBlockThreshold
    method: Optional[HarmBlockMethod] = None

class SafetyRating:
    """
    Safety rating for content.

    Attributes:
        category (HarmCategory): Harm category
        probability (HarmProbability): Probability of harm
        severity (HarmSeverity, optional): Severity of harm
        blocked (bool): Whether content was blocked
    """
    category: HarmCategory
    probability: HarmProbability
    severity: Optional[HarmSeverity] = None
    blocked: bool

class HarmCategory(Enum):
    """Harm categories for safety filtering."""
    HARM_CATEGORY_UNSPECIFIED = 'HARM_CATEGORY_UNSPECIFIED'
    HARM_CATEGORY_HARASSMENT = 'HARM_CATEGORY_HARASSMENT'
    HARM_CATEGORY_HATE_SPEECH = 'HARM_CATEGORY_HATE_SPEECH'
    HARM_CATEGORY_SEXUALLY_EXPLICIT = 'HARM_CATEGORY_SEXUALLY_EXPLICIT'
    HARM_CATEGORY_DANGEROUS_CONTENT = 'HARM_CATEGORY_DANGEROUS_CONTENT'
    HARM_CATEGORY_CIVIC_INTEGRITY = 'HARM_CATEGORY_CIVIC_INTEGRITY'

class HarmBlockThreshold(Enum):
    """Blocking thresholds for safety filtering."""
    HARM_BLOCK_THRESHOLD_UNSPECIFIED = 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    BLOCK_LOW_AND_ABOVE = 'BLOCK_LOW_AND_ABOVE'
    BLOCK_MEDIUM_AND_ABOVE = 'BLOCK_MEDIUM_AND_ABOVE'
    BLOCK_ONLY_HIGH = 'BLOCK_ONLY_HIGH'
    BLOCK_NONE = 'BLOCK_NONE'
    OFF = 'OFF'

class HarmProbability(Enum):
    """Harm probability levels."""
    HARM_PROBABILITY_UNSPECIFIED = 'HARM_PROBABILITY_UNSPECIFIED'
    NEGLIGIBLE = 'NEGLIGIBLE'
    LOW = 'LOW'
    MEDIUM = 'MEDIUM'
    HIGH = 'HIGH'

class HarmSeverity(Enum):
    """Harm severity levels."""
    HARM_SEVERITY_UNSPECIFIED = 'HARM_SEVERITY_UNSPECIFIED'
    HARM_SEVERITY_NEGLIGIBLE = 'HARM_SEVERITY_NEGLIGIBLE'
    HARM_SEVERITY_LOW = 'HARM_SEVERITY_LOW'
    HARM_SEVERITY_MEDIUM = 'HARM_SEVERITY_MEDIUM'
    HARM_SEVERITY_HIGH = 'HARM_SEVERITY_HIGH'

class HarmBlockMethod(Enum):
    """Block method for safety filtering."""
    HARM_BLOCK_METHOD_UNSPECIFIED = 'HARM_BLOCK_METHOD_UNSPECIFIED'
    SEVERITY = 'SEVERITY'
    PROBABILITY = 'PROBABILITY'

class FinishReason(Enum):
    """Reasons why model stopped generating."""
    FINISH_REASON_UNSPECIFIED = 'FINISH_REASON_UNSPECIFIED'
    STOP = 'STOP'
    MAX_TOKENS = 'MAX_TOKENS'
    SAFETY = 'SAFETY'
    RECITATION = 'RECITATION'
    LANGUAGE = 'LANGUAGE'
    OTHER = 'OTHER'
    BLOCKLIST = 'BLOCKLIST'
    PROHIBITED_CONTENT = 'PROHIBITED_CONTENT'
    SPII = 'SPII'
    MALFORMED_FUNCTION_CALL = 'MALFORMED_FUNCTION_CALL'

class BlockedReason(Enum):
    """Reasons why prompt was blocked."""
    BLOCKED_REASON_UNSPECIFIED = 'BLOCKED_REASON_UNSPECIFIED'
    SAFETY = 'SAFETY'
    OTHER = 'OTHER'
    BLOCKLIST = 'BLOCKLIST'
    PROHIBITED_CONTENT = 'PROHIBITED_CONTENT'

# Function calling types
class Tool:
    """
    Tool containing function declarations.

    Attributes:
        function_declarations (list[FunctionDeclaration], optional): Function definitions
        google_search (GoogleSearch, optional): Google Search tool
        code_execution (ToolCodeExecution, optional): Code execution tool
    """
    function_declarations: Optional[list[FunctionDeclaration]] = None
    google_search: Optional[GoogleSearch] = None
    code_execution: Optional[ToolCodeExecution] = None

class FunctionDeclaration:
    """
    Function schema definition for function calling.

    Attributes:
        name (str): Function name
        description (str): Function description for the model
        parameters (Schema, optional): JSON schema for function parameters
    """
    name: str
    description: str
    parameters: Optional[Schema] = None

class FunctionCall:
    """
    Function invocation from model.

    Attributes:
        name (str): Function name to call
        args (dict[str, Any]): Function arguments
        id (str, optional): Unique call identifier
    """
    name: str
    args: dict[str, Any]
    id: Optional[str] = None

class FunctionResponse:
    """
    Function execution response to send back to model.

    Attributes:
        name (str): Function name that was called
        response (dict[str, Any]): Function return value
        id (str, optional): Call identifier matching FunctionCall.id
    """
    name: str
    response: dict[str, Any]
    id: Optional[str] = None

class ToolConfig:
    """
    Configuration for function calling behavior.

    Attributes:
        function_calling_config (FunctionCallingConfig, optional): Function calling mode
    """
    function_calling_config: Optional[FunctionCallingConfig] = None

class FunctionCallingConfig:
    """
    Function calling mode configuration.

    Attributes:
        mode (FunctionCallingConfigMode): Calling mode (AUTO, ANY, NONE)
        allowed_function_names (list[str], optional): Restrict to specific functions
    """
    mode: FunctionCallingConfigMode
    allowed_function_names: Optional[list[str]] = None

class FunctionCallingConfigMode(Enum):
    """Function calling modes."""
    MODE_UNSPECIFIED = 'MODE_UNSPECIFIED'
    AUTO = 'AUTO'
    ANY = 'ANY'
    NONE = 'NONE'

class Schema:
    """
    JSON schema for structured output and function parameters.

    Attributes:
        type (Type): Schema type (OBJECT, ARRAY, STRING, NUMBER, etc.)
        format (str, optional): Format specifier
        description (str, optional): Schema description
        nullable (bool, optional): Whether value can be null
        enum (list[str], optional): Allowed values for enums
        properties (dict[str, Schema], optional): Object properties
        required (list[str], optional): Required property names
        items (Schema, optional): Schema for array items
    """
    type: Type
    format: Optional[str] = None
    description: Optional[str] = None
    nullable: Optional[bool] = None
    enum: Optional[list[str]] = None
    properties: Optional[dict[str, Schema]] = None
    required: Optional[list[str]] = None
    items: Optional[Schema] = None

class Type(Enum):
    """JSON schema types."""
    TYPE_UNSPECIFIED = 'TYPE_UNSPECIFIED'
    STRING = 'STRING'
    NUMBER = 'NUMBER'
    INTEGER = 'INTEGER'
    BOOLEAN = 'BOOLEAN'
    ARRAY = 'ARRAY'
    OBJECT = 'OBJECT'
    NULL = 'NULL'

class GoogleSearch:
    """Google Search tool for web grounding."""
    pass

class ToolCodeExecution:
    """Code execution tool for running generated code."""
    pass

class ExecutableCode:
    """
    Executable code from model.

    Attributes:
        language (Language): Programming language
        code (str): Code to execute
    """
    language: Language
    code: str

class CodeExecutionResult:
    """
    Code execution result.

    Attributes:
        outcome (Outcome): Execution outcome (OK, FAILED, DEADLINE_EXCEEDED)
        output (str): Execution output
    """
    outcome: Outcome
    output: str

class Language(Enum):
    """Programming languages for code execution."""
    LANGUAGE_UNSPECIFIED = 'LANGUAGE_UNSPECIFIED'
    PYTHON = 'PYTHON'

class Outcome(Enum):
    """Code execution outcomes."""
    OUTCOME_UNSPECIFIED = 'OUTCOME_UNSPECIFIED'
    OUTCOME_OK = 'OUTCOME_OK'
    OUTCOME_FAILED = 'OUTCOME_FAILED'
    OUTCOME_DEADLINE_EXCEEDED = 'OUTCOME_DEADLINE_EXCEEDED'

# Additional metadata types
class CitationMetadata:
    """
    Citation information for generated content.

    Attributes:
        citations (list[Citation]): List of citations
    """
    citations: list[Citation]

class Citation:
    """
    Individual citation.

    Attributes:
        start_index (int): Start character index in generated text
        end_index (int): End character index in generated text
        uri (str): Citation source URI
        title (str, optional): Citation source title
        license (str, optional): Content license
        publication_date (GoogleTypeDate, optional): Publication date
    """
    start_index: int
    end_index: int
    uri: str
    title: Optional[str] = None
    license: Optional[str] = None
    publication_date: Optional[GoogleTypeDate] = None

class GroundingMetadata:
    """
    Grounding attribution metadata.

    Attributes:
        grounding_chunks (list[GroundingChunk], optional): Grounding sources
        grounding_supports (list[GroundingSupport], optional): Support scores
        web_search_queries (list[str], optional): Web search queries used
        search_entry_point (SearchEntryPoint, optional): Search entry point
    """
    grounding_chunks: Optional[list[GroundingChunk]] = None
    grounding_supports: Optional[list[GroundingSupport]] = None
    web_search_queries: Optional[list[str]] = None
    search_entry_point: Optional[SearchEntryPoint] = None

class GroundingChunk:
    """Grounding source chunk (web, retrieved context, etc.)."""
    pass

class GroundingSupport:
    """
    Grounding support score.

    Attributes:
        segment (Segment): Text segment
        grounding_chunk_indices (list[int]): Indices of supporting chunks
        confidence_scores (list[float]): Confidence scores
    """
    segment: Segment
    grounding_chunk_indices: list[int]
    confidence_scores: list[float]

class SearchEntryPoint:
    """
    Search entry point for web grounding.

    Attributes:
        rendered_content (str): Rendered search content
        sdk_blob (bytes, optional): SDK blob data
    """
    rendered_content: str
    sdk_blob: Optional[bytes] = None

class Segment:
    """
    Content segment.

    Attributes:
        part_index (int): Part index in content
        start_index (int): Start character index
        end_index (int): End character index
        text (str, optional): Segment text
    """
    part_index: int
    start_index: int
    end_index: int
    text: Optional[str] = None

class LogprobsResult:
    """
    Log probabilities result.

    Attributes:
        top_candidates (list[LogprobsResultTopCandidates]): Top candidates per token
        chosen_candidates (list[LogprobsResultCandidate]): Chosen candidates
    """
    top_candidates: list[LogprobsResultTopCandidates]
    chosen_candidates: list[LogprobsResultCandidate]

class LogprobsResultTopCandidates:
    """
    Top candidate tokens and probabilities.

    Attributes:
        candidates (list[LogprobsResultCandidate]): Candidate tokens
    """
    candidates: list[LogprobsResultCandidate]

class LogprobsResultCandidate:
    """
    Individual token candidate.

    Attributes:
        token (str): Token string
        token_id (int): Token ID
        log_probability (float): Log probability
    """
    token: str
    token_id: int
    log_probability: float

class GoogleTypeDate:
    """
    Date representation.

    Attributes:
        year (int): Year
        month (int): Month (1-12)
        day (int): Day (1-31)
    """
    year: int
    month: int
    day: int
```
