tessl/pypi-google-genai

GenAI Python SDK for Google's generative models supporting both Gemini Developer API and Vertex AI APIs

Workspace: tessl
Visibility: Public
Describes: pypi/google-genai@1.51.x

To install, run:

npx @tessl/cli install tessl/pypi-google-genai@1.51.0

# Google GenAI Python SDK

A comprehensive Python SDK for integrating Google's generative AI models into applications, supporting both the Gemini Developer API and Vertex AI APIs. The SDK provides a unified interface for content generation, multi-turn conversations, embeddings, image and video generation, file management, caching, batch processing, fine-tuning, and real-time bidirectional streaming.

## Package Information

- **Package Name**: google-genai
- **Language**: Python
- **Installation**: `pip install google-genai`
- **Version**: 1.51.0

## Core Imports

```python
from google import genai
```

Common imports:

```python
from google.genai import Client
from google.genai import types
```

## Basic Usage

```python
from google.genai import Client

# Initialize client with API key (Gemini Developer API)
client = Client(api_key='YOUR_API_KEY')

# Generate content
response = client.models.generate_content(
    model='gemini-2.0-flash',
    contents='Explain how AI works in simple terms.'
)
print(response.text)

# Multi-turn chat
chat = client.chats.create(model='gemini-2.0-flash')
response = chat.send_message('What is machine learning?')
print(response.text)

# Close client
client.close()
```

With a context manager:

```python
from google.genai import Client

with Client(api_key='YOUR_API_KEY') as client:
    response = client.models.generate_content(
        model='gemini-2.0-flash',
        contents='Hello!'
    )
    print(response.text)
```

## Architecture

The SDK follows a modular architecture with these key components:

- **Client**: Main entry point providing access to all API modules; supports both the Gemini Developer API (api_key) and Vertex AI (credentials/project)
- **API Modules**: Specialized interfaces for models, chats, files, caches, batches, tunings, file_search_stores, operations, and live
- **Types System**: Comprehensive type definitions with both Pydantic models and TypedDict variants for flexible usage
- **Async Support**: Complete async/await implementations via the `aio` property on the Client
- **Streaming**: Built-in streaming support for content generation and live interactions
- **Function Calling**: Automatic and manual function calling with Python functions or JSON schemas
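The async surface mirrors the sync one through `aio`. A minimal sketch, assuming a valid API key (the model name is a placeholder, and the SDK import is deferred so the file loads even without `google-genai` installed):

```python
import asyncio

async def ask(prompt: str) -> str:
    """One async generation call (requires google-genai and a valid API key)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    client = Client(api_key='YOUR_API_KEY')
    try:
        # client.aio exposes the same modules with async/await semantics
        response = await client.aio.models.generate_content(
            model='gemini-2.0-flash',
            contents=prompt,
        )
        return response.text
    finally:
        client.close()

# asyncio.run(ask('What is machine learning?'))  # needs credentials to run
```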

## Capabilities

### Client Initialization

Initialize the main client to access all SDK functionality with support for both Gemini Developer API and Vertex AI.

```python { .api }
class Client:
    def __init__(
        self,
        *,
        vertexai: Optional[bool] = None,
        api_key: Optional[str] = None,
        credentials: Optional[google.auth.credentials.Credentials] = None,
        project: Optional[str] = None,
        location: Optional[str] = None,
        debug_config: Optional[DebugConfig] = None,
        http_options: Optional[Union[HttpOptions, HttpOptionsDict]] = None
    ): ...

    @property
    def aio(self) -> AsyncClient: ...

    def close(self) -> None: ...
    def __enter__(self) -> 'Client': ...
    def __exit__(self, *args) -> None: ...

class AsyncClient:
    @property
    def models(self) -> AsyncModels: ...
    @property
    def chats(self) -> AsyncChats: ...
    @property
    def files(self) -> AsyncFiles: ...
    @property
    def caches(self) -> AsyncCaches: ...
    @property
    def batches(self) -> AsyncBatches: ...
    @property
    def tunings(self) -> AsyncTunings: ...
    @property
    def file_search_stores(self) -> AsyncFileSearchStores: ...
    @property
    def live(self) -> AsyncLive: ...
    @property
    def operations(self) -> AsyncOperations: ...

    async def aclose(self) -> None: ...
    async def __aenter__(self) -> 'AsyncClient': ...
    async def __aexit__(self, *args) -> None: ...
```

[Client Initialization](./client.md)
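The same constructor serves both backends. A small sketch of choosing between them — the project and location values are placeholders, not real resources:

```python
def make_client(use_vertex: bool = False):
    """Build a client for either backend (project/location values are placeholders)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    if use_vertex:
        # Vertex AI: authenticates via Application Default Credentials
        return Client(vertexai=True, project='my-gcp-project', location='us-central1')
    # Gemini Developer API: authenticates with an API key
    return Client(api_key='YOUR_API_KEY')
```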

### Content Generation

Generate text and multimodal content using Gemini models with support for streaming, function calling, structured output, and extensive configuration options.

```python { .api }
def generate_content(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> GenerateContentResponse: ...

def generate_content_stream(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[GenerateContentConfig] = None
) -> Iterator[GenerateContentResponse]: ...
```

[Content Generation](./content-generation.md)
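A hedged sketch of the streaming variant, which yields partial responses as they arrive (model name is a placeholder; requires an API key to actually run):

```python
def stream_answer(prompt: str) -> None:
    """Print a response chunk-by-chunk as it streams (requires an API key)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        for chunk in client.models.generate_content_stream(
            model='gemini-2.0-flash',
            contents=prompt,
        ):
            print(chunk.text or '', end='')  # some chunks may carry no text
        print()
```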

### Multi-Turn Conversations

Create and manage chat sessions for multi-turn conversations with automatic history management.

```python { .api }
class Chat:
    def send_message(
        self,
        message: Union[str, Content],
        config: Optional[GenerateContentConfig] = None
    ) -> GenerateContentResponse: ...

    def send_message_stream(
        self,
        message: Union[str, Content],
        config: Optional[GenerateContentConfig] = None
    ) -> Iterator[GenerateContentResponse]: ...

    def get_history(self, curated: bool = False) -> list[Content]: ...
```

[Multi-Turn Conversations](./chats.md)
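A sketch of running several turns in one session and retrieving the accumulated history (needs an API key to run):

```python
def interview(topics: list[str]):
    """Run several turns in one chat session and return the transcript."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        chat = client.chats.create(model='gemini-2.0-flash')
        for topic in topics:
            chat.send_message(f'Briefly define: {topic}')
        # curated=True returns only the turns that produced valid model output
        return chat.get_history(curated=True)
```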

### Embeddings

Generate text embeddings for semantic search, clustering, and similarity comparisons.

```python { .api }
def embed_content(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[EmbedContentConfig] = None
) -> EmbedContentResponse: ...
```

[Embeddings](./embeddings.md)
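A sketch pairing `embed_content` with a pure-Python cosine similarity for comparing the returned vectors. The embedding model name is an assumption, and the SDK import is deferred:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (pure Python)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def embed(texts: list[str]) -> list[list[float]]:
    """Fetch one embedding per text (model name is an assumption; needs an API key)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        response = client.models.embed_content(
            model='gemini-embedding-001', contents=texts)
        return [e.values for e in response.embeddings]
```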

### Image Generation

Generate, edit, upscale, and segment images using Imagen models.

```python { .api }
def generate_images(
    *,
    model: str,
    prompt: str,
    config: Optional[GenerateImagesConfig] = None
) -> GenerateImagesResponse: ...

def edit_image(
    *,
    model: str,
    prompt: str,
    reference_images: Sequence[ReferenceImage],
    config: Optional[EditImageConfig] = None
) -> EditImageResponse: ...

def upscale_image(
    *,
    model: str,
    image: Image,
    upscale_factor: str,
    config: Optional[UpscaleImageConfig] = None
) -> UpscaleImageResponse: ...
```

[Image Generation](./image-generation.md)
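A sketch of generating a batch of images and extracting the raw bytes — the Imagen model name is an assumption and may differ by release:

```python
def make_images(prompt: str, n: int = 2) -> list[bytes]:
    """Generate `n` images and return their raw bytes (model name is an assumption)."""
    from google.genai import Client, types  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        response = client.models.generate_images(
            model='imagen-3.0-generate-002',
            prompt=prompt,
            config=types.GenerateImagesConfig(number_of_images=n),
        )
        return [img.image.image_bytes for img in response.generated_images]
```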

### Video Generation

Generate videos from prompts, images, or existing videos using Veo models.

```python { .api }
def generate_videos(
    *,
    model: str,
    prompt: Optional[str] = None,
    image: Optional[Image] = None,
    video: Optional[Video] = None,
    config: Optional[GenerateVideosConfig] = None
) -> GenerateVideosOperation: ...
```

[Video Generation](./video-generation.md)
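Since `generate_videos` returns a long-running operation, callers typically poll it via the operations module. A sketch under assumed names (the Veo model id and the `response.generated_videos` access path are assumptions):

```python
import time

def generate_video(prompt: str):
    """Start a Veo job and poll its operation until done (model name is an assumption)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        operation = client.models.generate_videos(
            model='veo-2.0-generate-001', prompt=prompt)
        while not operation.done:
            time.sleep(10)  # video generation is a long-running operation
            operation = client.operations.get(operation)
        return operation.response.generated_videos[0].video
```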

### File Management

Upload, manage, and download files for use with multimodal content generation (Gemini Developer API only).

```python { .api }
class Files:
    def upload(
        self,
        *,
        file: Union[str, Path, IO],
        config: Optional[UploadFileConfig] = None
    ) -> File: ...

    def get(self, *, name: str) -> File: ...
    def delete(self, *, name: str) -> None: ...
    def download(self, *, name: str, path: Optional[str] = None) -> bytes: ...
    def list(self, *, config: Optional[ListFilesConfig] = None) -> Union[Pager[File], Iterator[File]]: ...
```

[File Management](./files.md)
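A sketch of the common upload-then-prompt flow, with cleanup afterwards (assumes the SDK accepts a mixed list of text and uploaded-file references as `contents`):

```python
def summarize_file(path: str) -> str:
    """Upload a local file, reference it in a prompt, then clean up (sketch)."""
    from google.genai import Client  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        uploaded = client.files.upload(file=path)
        try:
            response = client.models.generate_content(
                model='gemini-2.0-flash',
                contents=['Summarize this document.', uploaded],
            )
            return response.text
        finally:
            client.files.delete(name=uploaded.name)  # remove the uploaded file
```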

### Context Caching

Create and manage cached content to reduce costs and latency for repeated requests with shared context.

```python { .api }
class Caches:
    def create(
        self,
        *,
        model: str,
        config: CreateCachedContentConfig
    ) -> CachedContent: ...

    def get(self, *, name: str) -> CachedContent: ...
    def update(self, *, name: str, config: UpdateCachedContentConfig) -> CachedContent: ...
    def delete(self, *, name: str) -> None: ...
    def list(self, *, config: Optional[ListCachedContentsConfig] = None) -> Union[Pager[CachedContent], Iterator[CachedContent]]: ...
```

[Context Caching](./caching.md)
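A sketch of caching shared context once and referencing it by name in later requests — the TTL value is illustrative:

```python
def cached_ask(question: str, shared_context: str) -> str:
    """Cache shared context once and reference it by name (TTL value is illustrative)."""
    from google.genai import Client, types  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        cache = client.caches.create(
            model='gemini-2.0-flash',
            config=types.CreateCachedContentConfig(
                contents=shared_context,
                ttl='3600s',  # keep the cached context for one hour
            ),
        )
        response = client.models.generate_content(
            model='gemini-2.0-flash',
            contents=question,
            config=types.GenerateContentConfig(cached_content=cache.name),
        )
        return response.text
```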

### Batch Processing

Submit batch prediction jobs for high-volume inference with cost savings.

```python { .api }
class Batches:
    def create(
        self,
        *,
        model: str,
        src: Union[str, list[dict]],
        dest: Optional[str] = None,
        config: Optional[CreateBatchJobConfig] = None
    ) -> BatchJob: ...

    def create_embeddings(
        self,
        *,
        model: str,
        src: Union[str, list[dict]],
        dest: Optional[str] = None,
        config: Optional[CreateBatchJobConfig] = None
    ) -> BatchJob: ...

    def get(self, *, name: str) -> BatchJob: ...
    def cancel(self, *, name: str) -> None: ...
    def delete(self, *, name: str) -> None: ...
    def list(self, *, config: Optional[ListBatchJobsConfig] = None) -> Union[Pager[BatchJob], Iterator[BatchJob]]: ...
```

[Batch Processing](./batches.md)

### Model Fine-Tuning

Create and manage supervised fine-tuning jobs to customize models (Vertex AI only).

```python { .api }
class Tunings:
    def tune(
        self,
        *,
        base_model: str,
        training_dataset: TuningDataset,
        config: Optional[CreateTuningJobConfig] = None
    ) -> TuningJob: ...

    def get(self, *, name: str, config: Optional[GetTuningJobConfig] = None) -> TuningJob: ...
    def cancel(self, *, name: str) -> None: ...
    def list(self, *, config: Optional[ListTuningJobsConfig] = None) -> Union[Pager[TuningJob], Iterator[TuningJob]]: ...
```

[Model Fine-Tuning](./tuning.md)

### File Search Stores

Create and manage file search stores with document retrieval for retrieval-augmented generation.

```python { .api }
class FileSearchStores:
    def create(self, *, config: CreateFileSearchStoreConfig) -> FileSearchStore: ...
    def get(self, *, name: str) -> FileSearchStore: ...
    def delete(self, *, name: str) -> None: ...

    def import_file(
        self,
        *,
        store: str,
        source: ImportFileSource,
        config: Optional[ImportFileConfig] = None
    ) -> ImportFileOperation: ...

    def upload_to_file_search_store(
        self,
        *,
        store: str,
        file: Union[str, Path, IO],
        config: Optional[UploadToFileSearchStoreConfig] = None
    ) -> UploadToFileSearchStoreOperation: ...

    def list(self, *, config: Optional[ListFileSearchStoresConfig] = None) -> Union[Pager[FileSearchStore], Iterator[FileSearchStore]]: ...
```

[File Search Stores](./file-search-stores.md)

### Live API

Real-time bidirectional streaming for interactive applications with support for audio, video, and function calling (Preview).

```python { .api }
class AsyncLive:
    async def connect(
        self,
        *,
        model: str,
        config: Optional[LiveConnectConfig] = None
    ) -> AsyncIterator[AsyncSession]: ...

class AsyncSession:
    async def send_client_content(
        self,
        *,
        turns: Optional[Union[Content, list[Content]]] = None,
        turn_complete: bool = False
    ) -> None: ...

    async def send_realtime_input(
        self,
        *,
        media_chunks: Optional[Sequence[Blob]] = None
    ) -> None: ...

    async def send_tool_response(
        self,
        *,
        function_responses: Sequence[FunctionResponse]
    ) -> None: ...

    async def receive(self) -> AsyncIterator[LiveServerMessage]: ...
    async def close(self) -> None: ...
```

[Live API](./live.md)
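A sketch of one text turn over a live session. The live-capable model name and the `message.text` convenience access are assumptions; the API is in Preview and may change:

```python
import asyncio

async def live_text_turn(prompt: str) -> None:
    """Send one text turn over a live session and print streamed replies (Preview)."""
    from google.genai import Client, types  # deferred: needs `pip install google-genai`

    client = Client(api_key='YOUR_API_KEY')
    async with client.aio.live.connect(
        model='gemini-2.0-flash-live-001',  # assumed live-capable model name
        config=types.LiveConnectConfig(response_modalities=['TEXT']),
    ) as session:
        await session.send_client_content(
            turns=types.Content(role='user', parts=[types.Part(text=prompt)]),
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end='')
```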

### Token Operations

Count tokens and compute detailed token information for content, with support for local tokenization without API calls.

```python { .api }
def count_tokens(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[CountTokensConfig] = None
) -> CountTokensResponse: ...

def compute_tokens(
    *,
    model: str,
    contents: Union[str, list[Content], Content],
    config: Optional[ComputeTokensConfig] = None
) -> ComputeTokensResponse: ...

class LocalTokenizer:
    def __init__(self, model_name: str): ...

    def count_tokens(
        self,
        contents: Union[str, list[Content], Content],
        *,
        config: Optional[CountTokensConfig] = None
    ) -> CountTokensResult: ...

    def compute_tokens(
        self,
        contents: Union[str, list[Content], Content]
    ) -> ComputeTokensResult: ...
```

[Token Operations](./tokens.md)
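A sketch of budgeting a prompt locally with no network call. The import path and the `total_tokens` field name are assumptions and should be checked against the installed SDK:

```python
def fits_budget(prompt: str, limit: int = 8192) -> bool:
    """Check a prompt against a token budget without a network call (sketch)."""
    from google.genai.local_tokenizer import LocalTokenizer  # assumed import path

    tokenizer = LocalTokenizer(model_name='gemini-2.0-flash')
    result = tokenizer.count_tokens(prompt)
    return result.total_tokens <= limit  # field name assumed
```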

### Model Information

Retrieve and manage model information and capabilities.

```python { .api }
class Models:
    def get(self, *, model: str, config: Optional[GetModelConfig] = None) -> Model: ...
    def update(self, *, model: str, config: UpdateModelConfig) -> Model: ...
    def delete(self, *, model: str, config: Optional[DeleteModelConfig] = None) -> DeleteModelResponse: ...
    def list(self, *, config: Optional[ListModelsConfig] = None) -> Union[Pager[Model], Iterator[Model]]: ...
```

[Model Information](./models.md)

### Long-Running Operations

Monitor and retrieve status of long-running operations like video generation and file imports.

```python { .api }
class Operations:
    def get(self, operation: Union[Operation, str]) -> Operation: ...
```

[Long-Running Operations](./operations.md)

## Core Types

### Content and Parts

```python { .api }
class Content:
    """Container for conversation content with role and parts."""
    parts: list[Part]
    role: Optional[str] = None

class Part:
    """Individual content part - text, image, video, function call, etc."""
    text: Optional[str] = None
    inline_data: Optional[Blob] = None
    file_data: Optional[FileData] = None
    function_call: Optional[FunctionCall] = None
    function_response: Optional[FunctionResponse] = None
    executable_code: Optional[ExecutableCode] = None
    code_execution_result: Optional[CodeExecutionResult] = None

class Blob:
    """Binary data with MIME type."""
    mime_type: str
    data: bytes

class Image:
    """Image data - can be URL, file path, bytes, PIL Image, or FileData."""
    # Various constructors supported

class Video:
    """Video data - can be URL, file path, bytes, or FileData."""
    # Various constructors supported
```

### Generation Configuration

```python { .api }
class GenerateContentConfig:
    """Configuration for content generation."""
    system_instruction: Optional[Union[str, Content]] = None
    contents: Optional[Union[str, list[Content], Content]] = None
    generation_config: Optional[GenerationConfig] = None
    safety_settings: Optional[list[SafetySetting]] = None
    tools: Optional[list[Tool]] = None
    tool_config: Optional[ToolConfig] = None
    cached_content: Optional[str] = None

class GenerationConfig:
    """Core generation parameters."""
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    top_k: Optional[int] = None
    candidate_count: Optional[int] = None
    max_output_tokens: Optional[int] = None
    stop_sequences: Optional[list[str]] = None
    response_mime_type: Optional[str] = None
    response_schema: Optional[Schema] = None
```

### Response Types

515

516

```python { .api }

517

class GenerateContentResponse:

518

"""Response from content generation."""

519

text: str # Convenience property

520

candidates: list[Candidate]

521

usage_metadata: Optional[GenerateContentResponseUsageMetadata] = None

522

prompt_feedback: Optional[GenerateContentResponsePromptFeedback] = None

523

524

class Candidate:

525

"""Generated candidate with content and metadata."""

526

content: Content

527

finish_reason: Optional[FinishReason] = None

528

safety_ratings: Optional[list[SafetyRating]] = None

529

citation_metadata: Optional[CitationMetadata] = None

530

grounding_metadata: Optional[GroundingMetadata] = None

531

```

532

533

### Function Calling

```python { .api }
class Tool:
    """Tool containing function declarations."""
    function_declarations: Optional[list[FunctionDeclaration]] = None
    google_search: Optional[GoogleSearch] = None
    code_execution: Optional[ToolCodeExecution] = None

class FunctionDeclaration:
    """Function schema definition."""
    name: str
    description: str
    parameters: Optional[Schema] = None

class FunctionCall:
    """Function invocation from model."""
    name: str
    args: dict[str, Any]

class FunctionResponse:
    """Function execution response."""
    name: str
    response: dict[str, Any]
```
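Automatic function calling can be sketched by passing a plain Python callable as a tool; the SDK derives the schema from the signature and docstring. The `get_weather` tool here is hypothetical:

```python
def get_weather(city: str) -> str:
    """Hypothetical tool: return a canned weather report for a city."""
    return f'It is sunny in {city}.'

def ask_with_tools(question: str) -> str:
    """Pass a plain Python function as a tool; the SDK runs the call loop."""
    from google.genai import Client, types  # deferred: needs `pip install google-genai`

    with Client(api_key='YOUR_API_KEY') as client:
        response = client.models.generate_content(
            model='gemini-2.0-flash',
            contents=question,
            config=types.GenerateContentConfig(tools=[get_weather]),
        )
        return response.text
```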

### Error Handling

```python { .api }
class APIError(Exception):
    """Base exception for API errors."""
    code: int
    status: Optional[str]
    message: Optional[str]
    details: Any

class ClientError(APIError):
    """Client errors (4xx status codes)."""
    pass

class ServerError(APIError):
    """Server errors (5xx status codes)."""
    pass
```
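Since `ServerError` marks transient 5xx failures while `ClientError` marks 4xx problems that retrying will not fix, a simple backoff wrapper can be sketched (this assumes the error classes are importable from `google.genai.errors`):

```python
import time

def generate_with_retry(prompt: str, attempts: int = 3) -> str:
    """Retry on 5xx ServerError with backoff; let 4xx ClientError surface (sketch)."""
    from google.genai import Client
    from google.genai.errors import ServerError  # assumed error module path

    with Client(api_key='YOUR_API_KEY') as client:
        for attempt in range(attempts):
            try:
                response = client.models.generate_content(
                    model='gemini-2.0-flash', contents=prompt)
                return response.text
            except ServerError:
                # transient server-side failure: back off and try again
                time.sleep(2 ** attempt)
        raise RuntimeError('exhausted retries')
```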

## Type System

The SDK provides comprehensive type coverage with 695+ type classes organized into categories including:

- **Enumerations** (50+): HarmCategory, HarmBlockThreshold, FinishReason, BlockedReason, etc.
- **Content Types** (20+): Content, Part, Blob, Image, Video, FileData, etc.
- **Function Calling** (15+): Tool, FunctionDeclaration, FunctionCall, FunctionResponse, etc.
- **Retrieval & RAG** (25+): GoogleSearchRetrieval, FileSearch, VertexAISearch, VertexRagStore, etc.
- **Generation Config** (30+): GenerateContentConfig, GenerationConfig, SafetySetting, etc.
- **Response Types** (40+): GenerateContentResponse, Candidate, SafetyRating, GroundingMetadata, etc.
- **Embeddings** (10+): EmbedContentConfig, EmbedContentResponse, ContentEmbedding, etc.
- **Image Generation** (40+): GenerateImagesConfig, EditImageConfig, UpscaleImageConfig, etc.
- **Video Generation** (20+): GenerateVideosConfig, GenerateVideosResponse, GeneratedVideo, etc.
- **Model Management** (20+): Model, TunedModelInfo, Endpoint, etc.
- **Tuning** (30+): TuningJob, TuningDataset, SupervisedTuningSpec, HyperParameters, etc.
- **Batches** (20+): BatchJob, CreateBatchJobConfig, etc.
- **Caching** (15+): CachedContent, CreateCachedContentConfig, etc.
- **Files** (20+): File, FileStatus, UploadFileConfig, etc.
- **Live API** (60+): LiveConnectConfig, LiveClientMessage, LiveServerMessage, etc.

All types are available via `from google.genai import types` and include both Pydantic models and TypedDict variants for flexible usage.