
# Generative AI

The `vertexai` package provides streamlined access to Google's generative AI models: Gemini for multimodal generation, PaLM for text, and Imagen for image generation. Its simplified APIs are designed for rapid development of AI-powered applications.

## Capabilities

### Gemini Models (GenerativeModel)

Advanced multimodal AI models supporting text, images, video, audio, and function calling with sophisticated reasoning capabilities.

```python { .api }
class GenerativeModel:
    def __init__(
        self,
        model_name: str,
        generation_config: Optional[GenerationConfig] = None,
        safety_settings: Optional[SafetySettingsType] = None,
        tools: Optional[List[Tool]] = None,
        tool_config: Optional[ToolConfig] = None,
        system_instruction: Optional[ContentsType] = None,
        labels: Optional[Dict[str, str]] = None
    ): ...

    def generate_content(
        self,
        contents: ContentsType,
        generation_config: Optional[GenerationConfigType] = None,
        safety_settings: Optional[SafetySettingsType] = None,
        tools: Optional[List[Tool]] = None,
        tool_config: Optional[ToolConfig] = None,
        labels: Optional[Dict[str, str]] = None,
        stream: bool = False
    ) -> GenerationResponse: ...

    def count_tokens(
        self,
        contents: ContentsType,
        tools: Optional[List[Tool]] = None
    ) -> CountTokensResponse: ...

    def start_chat(
        self,
        history: Optional[List[Content]] = None,
        response_validation: bool = True
    ) -> ChatSession: ...

    @classmethod
    def from_cached_content(
        cls,
        cached_content: CachedContent,
        generation_config: Optional[GenerationConfig] = None,
        safety_settings: Optional[SafetySettingsType] = None,
        tools: Optional[List[Tool]] = None,
        tool_config: Optional[ToolConfig] = None
    ) -> 'GenerativeModel': ...
```
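`count_tokens` makes it possible to check a prompt against a model's context window before paying for a `generate_content` call. A minimal sketch of that guard, using a whitespace tokenizer as a stand-in for the real token count (the default counter and the limit below are illustrative, not the SDK's):

```python
# Sketch: reject prompts that exceed a token budget before calling the model.
# The whitespace-splitting default `count` is a stand-in for
# model.count_tokens(...).total_tokens.
def fits_context(text: str, max_tokens: int,
                 count=lambda s: len(s.split())) -> bool:
    return count(text) <= max_tokens

print(fits_context('Explain quantum computing in simple terms', 8192))  # True
```

In real code you would pass `lambda s: model.count_tokens(s).total_tokens` as `count`.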

#### Usage Examples

**Basic text generation:**

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel('gemini-1.5-pro')
response = model.generate_content('Explain quantum computing in simple terms')
print(response.text)
```

**Multimodal generation with images:**

```python
from vertexai.generative_models import GenerativeModel, Image

model = GenerativeModel('gemini-1.5-pro')
image = Image.load_from_file('photo.jpg')
response = model.generate_content(['What do you see in this image?', image])
print(response.text)
```

**Streaming responses:**

```python
model = GenerativeModel('gemini-1.5-pro')
stream = model.generate_content('Write a story about space exploration', stream=True)
for chunk in stream:
    print(chunk.text, end='')
```
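When `stream=True`, the response arrives as chunks exposing a `.text` attribute, and a common pattern is to accumulate them into the full text. A sketch, with `SimpleNamespace` objects standing in for real stream chunks (no API call is made here):

```python
from types import SimpleNamespace

def collect_stream(stream) -> str:
    """Join the .text of every streamed chunk into the complete response."""
    return ''.join(chunk.text for chunk in stream)

# Stand-in for a real stream returned by generate_content(..., stream=True).
fake_stream = [SimpleNamespace(text=t) for t in ('Once ', 'upon ', 'a time.')]
print(collect_stream(fake_stream))  # Once upon a time.
```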

### Chat Sessions

Stateful multi-turn conversations with conversation history management and streaming support.

```python { .api }
class ChatSession:
    def __init__(
        self,
        model: GenerativeModel,
        history: Optional[List[Content]] = None,
        response_validation: bool = True
    ): ...

    def send_message(
        self,
        content: ContentsType,
        generation_config: Optional[GenerationConfigType] = None,
        safety_settings: Optional[SafetySettingsType] = None,
        tools: Optional[List[Tool]] = None,
        labels: Optional[Dict[str, str]] = None,
        stream: bool = False
    ) -> GenerationResponse: ...

    def send_message_async(
        self,
        content: ContentsType,
        generation_config: Optional[GenerationConfigType] = None,
        safety_settings: Optional[SafetySettingsType] = None,
        tools: Optional[List[Tool]] = None,
        labels: Optional[Dict[str, str]] = None,
        stream: bool = False
    ) -> Awaitable[GenerationResponse]: ...

    @property
    def history(self) -> List[Content]: ...
```

#### Usage Examples

**Basic chat conversation:**

```python
model = GenerativeModel('gemini-1.5-pro')
chat = model.start_chat()

response = chat.send_message('Hello! How are you?')
print(response.text)

response = chat.send_message('Can you help me with Python programming?')
print(response.text)

# View conversation history
print(f"Total messages: {len(chat.history)}")
```
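`chat.history` grows with every exchange, and the full history is resent on each turn. One way to bound cost in long conversations is to keep only the last few turns and pass them back as `history` to `start_chat`. A sketch of the trimming step, with plain dicts standing in for `Content` objects:

```python
def trim_history(history, max_turns: int = 4):
    """Keep the final max_turns user/model exchanges (2 messages per turn)."""
    return history[-2 * max_turns:]

# Stand-in conversation: alternating user/model messages.
history = [{'role': 'user' if i % 2 == 0 else 'model', 'text': f'msg {i}'}
           for i in range(12)]

print(len(trim_history(history)))        # 8
print(trim_history(history)[0]['role'])  # user
```

In real code the trimmed list would be real `Content` objects: `model.start_chat(history=trim_history(chat.history))`.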

### Content and Media Types

Rich content representation supporting text, images, video, audio, and structured data.

```python { .api }
class Content:
    def __init__(self, parts: List[Part], role: str = 'user'): ...

    @property
    def parts(self) -> List[Part]: ...
    @property
    def role(self) -> str: ...
    @property
    def text(self) -> str: ...

    @classmethod
    def from_dict(cls, content_dict: Dict[str, Any]) -> 'Content': ...
    def to_dict(self) -> Dict[str, Any]: ...

class Part:
    @property
    def text(self) -> Optional[str]: ...
    @property
    def inline_data(self) -> Optional[Blob]: ...
    @property
    def file_data(self) -> Optional[FileData]: ...
    @property
    def function_call(self) -> Optional[FunctionCall]: ...
    @property
    def function_response(self) -> Optional[FunctionResponse]: ...

    @staticmethod
    def from_text(text: str) -> 'Part': ...
    @staticmethod
    def from_data(data: bytes, mime_type: str) -> 'Part': ...
    @staticmethod
    def from_uri(uri: str, mime_type: str) -> 'Part': ...
    @staticmethod
    def from_image(image: Image) -> 'Part': ...
    @staticmethod
    def from_function_response(name: str, response: Any) -> 'Part': ...

class Image:
    def __init__(self, data: bytes): ...

    @property
    def data(self) -> bytes: ...

    @staticmethod
    def load_from_file(location: str) -> 'Image': ...
    @staticmethod
    def from_bytes(data: bytes) -> 'Image': ...
```
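`Part.from_data` and `Part.from_uri` require an explicit `mime_type`. When the media comes from a file, the standard-library `mimetypes` module can infer it from the name. A small helper sketch (the file names are illustrative):

```python
import mimetypes

def guess_mime_type(path: str, default: str = 'application/octet-stream') -> str:
    """Infer a MIME type from a file name, for use with Part.from_data/from_uri."""
    mime, _ = mimetypes.guess_type(path)
    return mime or default

print(guess_mime_type('photo.jpg'))  # image/jpeg
print(guess_mime_type('clip.mp4'))   # video/mp4
```

For example: `Part.from_data(data=raw, mime_type=guess_mime_type('photo.jpg'))`.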

### Generation Configuration

Fine-grained control over model behavior and response characteristics.

```python { .api }
class GenerationConfig:
    def __init__(
        self,
        temperature: Optional[float] = None,
        top_p: Optional[float] = None,
        top_k: Optional[int] = None,
        candidate_count: Optional[int] = None,
        max_output_tokens: Optional[int] = None,
        stop_sequences: Optional[List[str]] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        response_mime_type: Optional[str] = None,
        response_schema: Optional[Dict[str, Any]] = None,
        seed: Optional[int] = None,
        logprobs: Optional[int] = None,
        response_logprobs: Optional[bool] = None
    ): ...

    @classmethod
    def from_dict(cls, generation_config_dict: Dict[str, Any]) -> 'GenerationConfig': ...
    def to_dict(self) -> Dict[str, Any]: ...
```

#### Usage Examples

**Controlled generation:**

```python
from vertexai.generative_models import GenerativeModel, GenerationConfig

config = GenerationConfig(
    temperature=0.7,
    top_p=0.8,
    max_output_tokens=1000,
    stop_sequences=['END']
)

model = GenerativeModel('gemini-1.5-pro', generation_config=config)
response = model.generate_content('Write a short story')
```

**Structured JSON output:**

```python
config = GenerationConfig(
    response_mime_type='application/json',
    response_schema={
        'type': 'object',
        'properties': {
            'name': {'type': 'string'},
            'age': {'type': 'integer'},
            'skills': {'type': 'array', 'items': {'type': 'string'}}
        }
    }
)

response = model.generate_content('Create a character profile', generation_config=config)
```
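With `response_mime_type='application/json'`, `response.text` is a JSON document conforming to the schema, so it can be parsed directly with the standard library. A sketch using a hypothetical raw output (no API call is made here):

```python
import json

# Hypothetical raw model output; with the schema above, response.text
# would be a JSON object with 'name', 'age', and 'skills' fields.
raw = '{"name": "Ada", "age": 36, "skills": ["python", "math"]}'

profile = json.loads(raw)
print(profile['name'], profile['age'])  # Ada 36
print(profile['skills'])                # ['python', 'math']
```

In real code, `raw` would be `response.text`.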

### Safety Settings

Comprehensive content filtering and safety controls for responsible AI deployment.

```python { .api }
class SafetySetting:
    def __init__(
        self,
        category: HarmCategory,
        threshold: HarmBlockThreshold,
        method: Optional[HarmBlockMethod] = None
    ): ...

    @classmethod
    def from_dict(cls, safety_setting_dict: Dict[str, Any]) -> 'SafetySetting': ...
    def to_dict(self) -> Dict[str, Any]: ...

# Enums for safety configuration
class HarmCategory(Enum):
    HARM_CATEGORY_UNSPECIFIED = 0
    HARM_CATEGORY_DEROGATORY = 1
    HARM_CATEGORY_TOXICITY = 2
    HARM_CATEGORY_VIOLENCE = 3
    HARM_CATEGORY_SEXUAL = 4
    HARM_CATEGORY_MEDICAL = 5
    HARM_CATEGORY_DANGEROUS = 6
    HARM_CATEGORY_HARASSMENT = 7
    HARM_CATEGORY_HATE_SPEECH = 8
    HARM_CATEGORY_SEXUALLY_EXPLICIT = 9
    HARM_CATEGORY_DANGEROUS_CONTENT = 10

class HarmBlockThreshold(Enum):
    HARM_BLOCK_THRESHOLD_UNSPECIFIED = 0
    BLOCK_LOW_AND_ABOVE = 1
    BLOCK_MEDIUM_AND_ABOVE = 2
    BLOCK_ONLY_HIGH = 3
    BLOCK_NONE = 4
```
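The `BLOCK_*` thresholds block content whose rated harm level is at or above the named level, while `BLOCK_NONE` disables filtering for that category. The comparison logic can be illustrated with local stand-in enums (these are sketches, not the SDK classes):

```python
from enum import Enum

# Local stand-in mirroring rated harm levels (not an SDK class).
class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def is_blocked(severity: Severity, threshold: str) -> bool:
    """Sketch of HarmBlockThreshold semantics: block at or above the named level."""
    floors = {
        'BLOCK_LOW_AND_ABOVE': Severity.LOW,
        'BLOCK_MEDIUM_AND_ABOVE': Severity.MEDIUM,
        'BLOCK_ONLY_HIGH': Severity.HIGH,
    }
    if threshold == 'BLOCK_NONE':
        return False
    return severity.value >= floors[threshold].value

print(is_blocked(Severity.MEDIUM, 'BLOCK_MEDIUM_AND_ABOVE'))  # True
print(is_blocked(Severity.LOW, 'BLOCK_ONLY_HIGH'))            # False
```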

### Function Calling

Enable models to call external functions and APIs for enhanced capabilities and real-time data access.

```python { .api }
class Tool:
    def __init__(self, function_declarations: List[FunctionDeclaration]): ...

    @classmethod
    def from_function_declarations(cls, function_declarations: List[FunctionDeclaration]) -> 'Tool': ...
    @classmethod
    def from_retrieval(cls, retrieval: Retrieval) -> 'Tool': ...
    @classmethod
    def from_google_search_retrieval(cls, google_search_retrieval: GoogleSearchRetrieval) -> 'Tool': ...

class FunctionDeclaration:
    def __init__(
        self,
        name: str,
        description: str,
        parameters: Optional[Dict[str, Any]] = None,
        response: Optional[Dict[str, Any]] = None
    ): ...

    @classmethod
    def from_func(cls, func: Callable) -> 'FunctionDeclaration': ...
    def to_dict(self) -> Dict[str, Any]: ...

class FunctionCall:
    @property
    def name(self) -> str: ...
    @property
    def args(self) -> Dict[str, Any]: ...
    def to_dict(self) -> Dict[str, Any]: ...
```

#### Usage Examples

**Define and use functions:**

```python
from vertexai.generative_models import GenerativeModel, Tool, FunctionDeclaration

# Define a function
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 25°C"

# Create function declaration
weather_func = FunctionDeclaration.from_func(get_weather)
tool = Tool([weather_func])

# Use with model
model = GenerativeModel('gemini-1.5-pro', tools=[tool])
response = model.generate_content('What is the weather like in Paris?')

# Check for function calls in response
for candidate in response.candidates:
    for part in candidate.content.parts:
        if part.function_call:
            print(f"Function called: {part.function_call.name}")
            print(f"Arguments: {part.function_call.args}")
```
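The model only *proposes* a call; your code must execute it and (typically) return the result to the model via `Part.from_function_response`. A minimal local dispatcher sketch for the execution step, with `SimpleNamespace` standing in for the returned `FunctionCall`:

```python
from types import SimpleNamespace

# Registry mapping declared function names to local implementations.
registry = {
    'get_weather': lambda location: f"Weather in {location}: Sunny, 25°C",
}

def dispatch(function_call) -> str:
    """Route a model-proposed call to the matching Python function."""
    fn = registry[function_call.name]
    return fn(**function_call.args)

# Stand-in for part.function_call from a real response.
call = SimpleNamespace(name='get_weather', args={'location': 'Paris'})
print(dispatch(call))  # Weather in Paris: Sunny, 25°C
```

The string returned by `dispatch` is what you would wrap with `Part.from_function_response(name=call.name, response=...)` and send back in the next turn.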

### PaLM Text Models

Specialized text generation models optimized for various language tasks with fine-tuning capabilities.

```python { .api }
class TextGenerationModel:
    @classmethod
    def from_pretrained(cls, model_name: str) -> 'TextGenerationModel': ...

    def predict(
        self,
        prompt: str,
        max_output_tokens: int = 128,
        temperature: Optional[float] = None,
        top_k: Optional[int] = None,
        top_p: Optional[float] = None,
        stop_sequences: Optional[List[str]] = None,
        candidate_count: Optional[int] = None,
        grounding_source: Optional[GroundingSource] = None,
        logprobs: Optional[int] = None,
        presence_penalty: Optional[float] = None,
        frequency_penalty: Optional[float] = None,
        seed: Optional[int] = None
    ) -> MultiCandidateTextGenerationResponse: ...

    def predict_streaming(self, prompt: str, **kwargs) -> Iterator[TextGenerationResponse]: ...
    def tune_model(self, training_data: List[InputOutputTextPair], **kwargs) -> LanguageModelTuningJob: ...

class ChatModel:
    @classmethod
    def from_pretrained(cls, model_name: str) -> 'ChatModel': ...

    def start_chat(
        self,
        context: Optional[str] = None,
        examples: Optional[List[InputOutputTextPair]] = None,
        max_output_tokens: Optional[int] = None,
        temperature: Optional[float] = None,
        top_k: Optional[int] = None,
        top_p: Optional[float] = None,
        message_history: Optional[List[ChatMessage]] = None,
        stop_sequences: Optional[List[str]] = None
    ) -> ChatSession: ...
```

### Response Types

Comprehensive response objects with detailed metadata and safety information.

```python { .api }
class GenerationResponse:
    @property
    def candidates(self) -> List[Candidate]: ...
    @property
    def text(self) -> str: ...
    @property
    def prompt_feedback(self) -> Optional[PromptFeedback]: ...
    @property
    def usage_metadata(self) -> Optional[UsageMetadata]: ...

    @classmethod
    def from_dict(cls, response_dict: Dict[str, Any]) -> 'GenerationResponse': ...
    def to_dict(self) -> Dict[str, Any]: ...

class Candidate:
    @property
    def content(self) -> Content: ...
    @property
    def finish_reason(self) -> FinishReason: ...
    @property
    def finish_message(self) -> Optional[str]: ...
    @property
    def safety_ratings(self) -> List[SafetyRating]: ...
    @property
    def citation_metadata(self) -> Optional[CitationMetadata]: ...
    @property
    def text(self) -> str: ...
    @property
    def function_calls(self) -> List[FunctionCall]: ...

    @classmethod
    def from_dict(cls, candidate_dict: Dict[str, Any]) -> 'Candidate': ...
    def to_dict(self) -> Dict[str, Any]: ...
```

## Error Handling

```python { .api }
class ResponseValidationError(Exception):
    """Raised when response validation fails."""
    pass

class ResponseBlockedError(Exception):
    """Raised when response is blocked by safety filters."""
    pass
```

Common error scenarios:

- Safety filtering blocking responses
- Rate limiting and quota exhaustion
- Invalid model parameters
- Network connectivity issues
- Authentication and authorization failures
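Transient failures such as rate limiting and network errors are usually handled with retries and exponential backoff. A hedged sketch of that pattern (the `flaky` stub stands in for a model call; delays are shortened for illustration):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))  # 1x, 2x, 4x, ...

# Stub that fails twice before succeeding, standing in for a model call.
state = {'calls': 0}
def flaky():
    state['calls'] += 1
    if state['calls'] < 3:
        raise RuntimeError('transient error')
    return 'ok'

print(with_retries(flaky))  # ok
```

In real code you would catch only retryable exceptions (e.g. quota and connectivity errors), not blanket `Exception`.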

## Advanced Features

### Grounding

Connect models to external knowledge sources for factual accuracy and real-time information.

```python { .api }
class grounding:
    class GoogleSearchRetrieval:
        def __init__(self, disable_attribution: bool = False): ...

    class VertexAISearch:
        def __init__(self, datastore: str, project: str): ...

    class Retrieval:
        def __init__(self, source: VertexAISearch, disable_attribution: bool = False): ...
```

### Caching

Optimize costs and latency by caching frequently used context.

```python { .api }
class CachedContent:
    @classmethod
    def create(
        cls,
        model_name: str,
        contents: ContentsType,
        ttl: Optional[datetime.timedelta] = None,
        display_name: Optional[str] = None
    ) -> 'CachedContent': ...

    def update(self, ttl: datetime.timedelta) -> None: ...
    def delete(self) -> None: ...
```

Together, these APIs support everything from simple text generation to complex multimodal applications with function calling and external knowledge integration.