
# Model Information and Management

Retrieve and manage model information and capabilities. Access model metadata, update model settings, delete tuned models, and list available models.

## Capabilities

### Get Model

Retrieve information about a specific model including capabilities and configuration.

```python { .api }
def get(
    self,
    *,
    model: str,
    config: Optional[GetModelConfig] = None
) -> Model:
    """
    Get model information.

    Parameters:
        model (str): Model identifier (e.g., 'gemini-2.0-flash', 'gemini-1.5-pro').
            For tuned models, use the full resource name.
        config (GetModelConfig, optional): Get configuration.

    Returns:
        Model: Model information including:
            - Supported generation methods
            - Input/output token limits
            - Supported features (function calling, multimodal, etc.)
            - Model version

    Raises:
        ClientError: For client errors, including 404 if the model is not found.
        ServerError: For server errors (5xx status codes).
    """
    ...

async def get(
    self,
    *,
    model: str,
    config: Optional[GetModelConfig] = None
) -> Model:
    """Async version of get."""
    ...
```

**Usage Example:**

```python
from google.genai import Client

client = Client(api_key='YOUR_API_KEY')

# Get model info
model = client.models.get(model='gemini-2.0-flash')

print(f"Model: {model.name}")
print(f"Display name: {model.display_name}")
print(f"Description: {model.description}")
print(f"Input token limit: {model.input_token_limit}")
print(f"Output token limit: {model.output_token_limit}")
print(f"Supported methods: {model.supported_generation_methods}")

# Check capabilities
if 'generateContent' in model.supported_generation_methods:
    print("Supports content generation")
if 'embedContent' in model.supported_generation_methods:
    print("Supports embeddings")
```
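Since `supported_generation_methods` is a plain list of strings, capability checks like the ones above can be factored into a small helper. The function below is a hypothetical convenience (not part of the SDK); it only assumes the method names documented for the `Model` type:

```python
# Hypothetical helper (not part of the SDK): map a model's
# supported_generation_methods to human-readable capability labels.
CAPABILITY_LABELS = {
    'generateContent': 'text/multimodal generation',
    'embedContent': 'embeddings',
    'generateImages': 'image generation',
    'generateVideos': 'video generation',
}

def summarize_capabilities(methods: list[str]) -> list[str]:
    """Return labels for the recognized generation methods, in input order."""
    return [CAPABILITY_LABELS[m] for m in methods if m in CAPABILITY_LABELS]

# Works on any list shaped like Model.supported_generation_methods:
print(summarize_capabilities(['generateContent', 'embedContent']))
# → ['text/multimodal generation', 'embeddings']
```

Unrecognized method names are simply skipped, so new server-side methods do not break the check.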

### Update Model

Update mutable model metadata such as display name and description (tuned models only).

```python { .api }
def update(
    self,
    *,
    model: str,
    config: UpdateModelConfig
) -> Model:
    """
    Update model metadata (tuned models only).

    Parameters:
        model (str): Model resource name.
        config (UpdateModelConfig): Update configuration including:
            - display_name: New display name
            - description: New description

    Returns:
        Model: Updated model information.

    Raises:
        ClientError: For client errors.
        ServerError: For server errors.
    """
    ...

async def update(
    self,
    *,
    model: str,
    config: UpdateModelConfig
) -> Model:
    """Async version of update."""
    ...
```

**Usage Example:**

```python
from google.genai import Client
from google.genai.types import UpdateModelConfig

client = Client(vertexai=True, project='PROJECT_ID', location='us-central1')

config = UpdateModelConfig(
    display_name='My Custom Model v2',
    description='Updated description'
)

updated_model = client.models.update(
    model='projects/.../locations/.../models/my-model',
    config=config
)

print(f"Updated: {updated_model.display_name}")
```

### Delete Model

Delete a tuned model.

```python { .api }
def delete(
    self,
    *,
    model: str,
    config: Optional[DeleteModelConfig] = None
) -> DeleteModelResponse:
    """
    Delete a tuned model.

    Parameters:
        model (str): Model resource name to delete.
        config (DeleteModelConfig, optional): Delete configuration.

    Returns:
        DeleteModelResponse: Deletion confirmation.

    Raises:
        ClientError: For client errors, including 404 if the model is not found.
        ServerError: For server errors.
    """
    ...

async def delete(
    self,
    *,
    model: str,
    config: Optional[DeleteModelConfig] = None
) -> DeleteModelResponse:
    """Async version of delete."""
    ...
```

**Usage Example:**

```python
from google.genai import Client

client = Client(vertexai=True, project='PROJECT_ID', location='us-central1')

response = client.models.delete(
    model='projects/.../locations/.../models/my-old-model'
)

print("Model deleted")
```
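Because `delete` raises `ClientError` on a 404, idempotent cleanup code often wants to swallow that one case while letting other errors propagate. The sketch below relies only on the error behavior documented above: `ClientError` here is a local stand-in class carrying an assumed `code` attribute (use the SDK's own exception type in real code), and `delete_fn` stands in for `client.models.delete`:

```python
# Stand-in for the SDK's ClientError, assumed to carry an HTTP status code.
class ClientError(Exception):
    def __init__(self, code: int, message: str = ''):
        super().__init__(message or f'HTTP {code}')
        self.code = code

def delete_if_exists(delete_fn, model: str) -> bool:
    """Delete a tuned model, treating 404 (already gone) as a no-op.

    Returns True if a deletion happened, False if the model was absent.
    """
    try:
        delete_fn(model=model)
        return True
    except ClientError as e:
        if e.code == 404:
            return False
        raise  # other client errors (permissions, bad name, ...) propagate
```

This makes cleanup scripts safe to re-run: a second pass over the same model names simply reports `False` instead of failing.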

### List Models

List all available models with optional pagination and filtering.

```python { .api }
def list(
    self,
    *,
    config: Optional[ListModelsConfig] = None
) -> Union[Pager[Model], Iterator[Model]]:
    """
    List available models.

    Parameters:
        config (ListModelsConfig, optional): List configuration including:
            - page_size: Number of models per page
            - page_token: Token for pagination
            - filter: Filter expression

    Returns:
        Union[Pager[Model], Iterator[Model]]: Paginated model list.

    Raises:
        ClientError: For client errors.
        ServerError: For server errors.
    """
    ...

async def list(
    self,
    *,
    config: Optional[ListModelsConfig] = None
) -> Union[AsyncPager[Model], AsyncIterator[Model]]:
    """Async version of list."""
    ...
```

**Usage Example:**

```python
from google.genai import Client
from google.genai.types import ListModelsConfig

client = Client(api_key='YOUR_API_KEY')

# List all available models
print("Available models:")
for model in client.models.list():
    print(f"- {model.name}: {model.display_name}")
    print(f"  Methods: {', '.join(model.supported_generation_methods)}")

# List with pagination
config = ListModelsConfig(page_size=10)
pager = client.models.list(config=config)

print(f"\nFirst page ({len(pager.page)} models):")
for model in pager.page:
    print(f"- {model.name}")
```
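To walk every page rather than just the first, the `Pager` interface (a `page` attribute plus `next_page()`) can be driven in a loop. In this sketch, `FakePager` is a stand-in for the object returned by `client.models.list(config=...)`, and the assumption that an exhausted pager raises `IndexError` should be verified against the SDK before relying on it:

```python
def collect_all_pages(pager) -> list:
    """Drain a pager: gather every item from every page."""
    items = []
    while True:
        items.extend(pager.page)
        try:
            pager.next_page()   # advances pager.page in place
        except IndexError:      # assumed signal for "no more pages"
            break
    return items

# FakePager mimics the documented Pager interface, for demonstration only.
class FakePager:
    def __init__(self, pages):
        self._pages = list(pages)
        self._i = 0
        self.page = self._pages[0]

    def next_page(self):
        self._i += 1
        if self._i >= len(self._pages):
            raise IndexError('no more pages')
        self.page = self._pages[self._i]

print(collect_all_pages(FakePager([['m1', 'm2'], ['m3']])))
# → ['m1', 'm2', 'm3']
```

In practice, iterating the pager directly (`for model in pager:`) is simpler when you do not need per-page boundaries; the explicit loop is useful when each page should be processed as a batch.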

## Types

```python { .api }
from typing import Optional, List, Iterator, AsyncIterator, Union

# Configuration types
class GetModelConfig:
    """Configuration for getting a model."""
    pass

class UpdateModelConfig:
    """
    Configuration for updating a model.

    Attributes:
        display_name (str, optional): New display name.
        description (str, optional): New description.
    """
    display_name: Optional[str] = None
    description: Optional[str] = None

class DeleteModelConfig:
    """Configuration for deleting a model."""
    pass

class ListModelsConfig:
    """
    Configuration for listing models.

    Attributes:
        page_size (int, optional): Number of models per page.
        page_token (str, optional): Token for pagination.
        filter (str, optional): Filter expression.
    """
    page_size: Optional[int] = None
    page_token: Optional[str] = None
    filter: Optional[str] = None

# Response types
class Model:
    """
    Model information and capabilities.

    Attributes:
        name (str): Model resource name (e.g., 'models/gemini-2.0-flash').
        base_model_id (str, optional): Base model identifier.
        version (str, optional): Model version.
        display_name (str): Human-readable display name.
        description (str): Model description.
        input_token_limit (int): Maximum input tokens.
        output_token_limit (int): Maximum output tokens.
        supported_generation_methods (list[str]): Supported methods:
            - 'generateContent': Text/multimodal generation
            - 'embedContent': Embeddings
            - 'generateImages': Image generation
            - 'generateVideos': Video generation
        temperature (float, optional): Default temperature.
        top_p (float, optional): Default top_p.
        top_k (int, optional): Default top_k.
        max_temperature (float, optional): Maximum allowed temperature.
        tuned_model_info (TunedModelInfo, optional): Info for tuned models.
    """
    name: str
    base_model_id: Optional[str] = None
    version: Optional[str] = None
    display_name: str
    description: str
    input_token_limit: int
    output_token_limit: int
    supported_generation_methods: list[str]
    temperature: Optional[float] = None
    top_p: Optional[float] = None
    top_k: Optional[int] = None
    max_temperature: Optional[float] = None
    tuned_model_info: Optional[TunedModelInfo] = None

class TunedModelInfo:
    """
    Information for tuned models.

    Attributes:
        tuning_job (str): Tuning job that created this model.
        base_model (str): Base model used for tuning.
        tuning_dataset (str, optional): Training dataset.
    """
    tuning_job: str
    base_model: str
    tuning_dataset: Optional[str] = None

class DeleteModelResponse:
    """
    Response from deleting a model.

    Attributes:
        deleted (bool): Whether deletion succeeded.
    """
    deleted: bool

# Pager types
class Pager[T]:
    """Synchronous pager."""
    page: list[T]
    def next_page(self) -> None: ...
    def __iter__(self) -> Iterator[T]: ...

class AsyncPager[T]:
    """Asynchronous pager."""
    page: list[T]
    async def next_page(self) -> None: ...
    def __aiter__(self) -> AsyncIterator[T]: ...
```
