# Google Gen AI SDK

The Google Gen AI JavaScript SDK is a comprehensive TypeScript/JavaScript library for building applications powered by Google's Gemini models. It provides unified access to both the Gemini Developer API and Vertex AI, and supports Gemini 2.0+ features including content generation, image generation, video generation, function calling, context caching, file uploads, and real-time live sessions with multimodal input and output.

## Package Information

- **Package Name**: @google/genai
- **Package Type**: npm
- **Language**: TypeScript
- **Installation**: `npm install @google/genai`
- **Minimum Node.js Version**: 20.0.0

## Core Imports

```typescript
import { GoogleGenAI } from '@google/genai';
```

For CommonJS:

```javascript
const { GoogleGenAI } = require('@google/genai');
```

Platform-specific imports:

```typescript
// Browser-specific
import { GoogleGenAI } from '@google/genai/web';

// Node.js-specific
import { GoogleGenAI } from '@google/genai/node';
```

## Basic Usage

```typescript
import { GoogleGenAI } from '@google/genai';

// Initialize with Gemini API (requires API key)
const client = new GoogleGenAI({ apiKey: 'YOUR_API_KEY' });

// Generate text content
const response = await client.models.generateContent({
  model: 'gemini-2.0-flash',
  contents: 'Explain quantum computing in simple terms'
});

console.log(response.text);

// Generate with streaming
const stream = await client.models.generateContentStream({
  model: 'gemini-2.0-flash',
  contents: 'Write a short story about AI'
});

for await (const chunk of stream) {
  process.stdout.write(chunk.text || '');
}
```

## Architecture

The SDK is built around several key components:

- **Main Client**: `GoogleGenAI` class provides unified access to all API capabilities
- **Module System**: Specialized modules for different operations (Models, Chats, Files, Caches, Batches, etc.)
- **Dual API Support**: Single interface for both Gemini Developer API and Vertex AI
- **Platform Abstraction**: Cross-platform support for Node.js and browser environments
- **Type Safety**: Full TypeScript integration with 374+ exported types
- **Streaming Support**: AsyncGenerator-based streaming for real-time responses
- **Automatic Function Calling**: Built-in AFC support with configurable iteration limits

## Capabilities

### Client Initialization

Main entry point for all SDK operations, supporting both the Gemini Developer API and Vertex AI.

```typescript { .api }
class GoogleGenAI {
  constructor(options?: GoogleGenAIOptions);

  readonly models: Models;
  readonly chats: Chats;
  readonly live: Live;
  readonly batches: Batches;
  readonly caches: Caches;
  readonly files: Files;
  readonly operations: Operations;
  readonly authTokens: Tokens;
  readonly tunings: Tunings;
  readonly fileSearchStores: FileSearchStores;
  readonly vertexai: boolean;
}

interface GoogleGenAIOptions {
  vertexai?: boolean;
  project?: string;
  location?: string;
  apiKey?: string;
  apiVersion?: string;
  googleAuthOptions?: GoogleAuthOptions;
  httpOptions?: HttpOptions;
}
```

[Client Initialization](./client.md)
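Choosing between the two backends comes down to which fields of `GoogleGenAIOptions` you populate: an API key for the Gemini Developer API, or `vertexai: true` plus project and location for Vertex AI. A minimal sketch of that decision (the `buildClientOptions` helper and the `us-central1` default location are illustrative, not part of the SDK):

```typescript
// Local mirror of the relevant GoogleGenAIOptions fields.
interface InitOptions {
  vertexai?: boolean;
  project?: string;
  location?: string;
  apiKey?: string;
}

// Build options for Vertex AI (project/location auth) or the
// Gemini Developer API (API-key auth), reading from an env map.
function buildClientOptions(
  useVertex: boolean,
  env: Record<string, string | undefined>,
): InitOptions {
  if (useVertex) {
    return {
      vertexai: true,
      project: env.GOOGLE_CLOUD_PROJECT,
      location: env.GOOGLE_CLOUD_LOCATION ?? 'us-central1', // assumed default
    };
  }
  return { apiKey: env.GEMINI_API_KEY };
}

// new GoogleGenAI(buildClientOptions(true, process.env)) would then
// target Vertex AI; passing false targets the Gemini Developer API.
```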

### Content Generation

Core text and multimodal generation capabilities with support for streaming, function calling, and various configuration options.

```typescript { .api }
// Generate content
function generateContent(params: GenerateContentParameters): Promise<GenerateContentResponse>;

// Stream content generation
function generateContentStream(params: GenerateContentParameters): Promise<AsyncGenerator<GenerateContentResponse>>;

interface GenerateContentParameters {
  model: string;
  contents: ContentListUnion;
  config?: GenerateContentConfig;
}

interface GenerateContentResponse {
  candidates?: Candidate[];
  text?: string;
  functionCalls?: FunctionCall[];
  usageMetadata?: UsageMetadata;
  promptFeedback?: PromptFeedback;
  modelVersion?: string;
  automaticFunctionCallingHistory?: Content[];
  sdkHttpResponse?: HttpResponse;
}
```

[Content Generation](./content-generation.md)
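The `ContentListUnion` type lets `contents` be a bare string, a single `Part`, a `Content`, or arrays of these; conceptually they all collapse to `Content[]` before the request is sent. A rough sketch of that normalization using local stand-in types (`toContents` is hypothetical, not an SDK export, and the real rules may differ in detail):

```typescript
// Minimal local stand-ins for the SDK's content types.
interface Part { text?: string }
interface Content { role?: string; parts?: Part[] }
type PartUnion = Part | string;
type ContentListUnion = Content | Content[] | PartUnion | PartUnion[];

// Collapse every accepted `contents` shape into Content[].
function toContents(contents: ContentListUnion): Content[] {
  const toPart = (p: PartUnion): Part => (typeof p === 'string' ? { text: p } : p);
  const isContent = (c: unknown): c is Content =>
    typeof c === 'object' && c !== null && 'parts' in c;

  if (typeof contents === 'string') {
    return [{ role: 'user', parts: [{ text: contents }] }];
  }
  if (!Array.isArray(contents)) {
    return isContent(contents)
      ? [contents]
      : [{ role: 'user', parts: [toPart(contents)] }];
  }
  if (contents.every(isContent)) return contents as Content[];
  return [{ role: 'user', parts: (contents as PartUnion[]).map(toPart) }];
}
```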

### Chat Sessions

Multi-turn conversation management with history tracking and streaming support.

```typescript { .api }
function create(params: CreateChatParameters): Chat;

interface CreateChatParameters {
  model: string;
  config?: GenerateContentConfig;
  history?: Content[];
}

class Chat {
  sendMessage(params: SendMessageParameters): Promise<GenerateContentResponse>;
  sendMessageStream(params: SendMessageParameters): Promise<AsyncGenerator<GenerateContentResponse>>;
  getHistory(curated?: boolean): Content[];
}
```

[Chat Sessions](./chat.md)
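The core of a chat session is bookkeeping: each `sendMessage` call appends the user turn and the model reply to the history that `getHistory` returns, alternating `user` and `model` roles. A minimal local sketch of that shape (`ChatHistory` is a stand-in, not the SDK's `Chat` class):

```typescript
// Local stand-in for a history entry.
interface Content { role: string; parts: { text: string }[] }

// Tracks the alternating user/model turns a chat accumulates.
class ChatHistory {
  private history: Content[] = [];

  // Record one turn: the user message plus the model reply.
  addTurn(userText: string, modelText: string): void {
    this.history.push({ role: 'user', parts: [{ text: userText }] });
    this.history.push({ role: 'model', parts: [{ text: modelText }] });
  }

  // Mirrors Chat.getHistory(): accumulated turns in order.
  getHistory(): Content[] {
    return [...this.history];
  }
}
```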

### Image Generation

Generate, edit, upscale, and segment images using Imagen models.

```typescript { .api }
// Generate images from text
function generateImages(params: GenerateImagesParameters): Promise<GenerateImagesResponse>;

// Edit images with masks and prompts
function editImage(params: EditImageParameters): Promise<EditImageResponse>;

// Upscale images
function upscaleImage(params: UpscaleImageParameters): Promise<UpscaleImageResponse>;

// Segment images
function segmentImage(params: SegmentImageParameters): Promise<SegmentImageResponse>;
```

[Image Generation](./image-generation.md)

### Video Generation

Generate videos from text or image prompts using Veo models (long-running operations).

```typescript { .api }
function generateVideos(params: GenerateVideosParameters): Promise<GenerateVideosOperation>;

interface GenerateVideosOperation {
  name?: string;
  done?: boolean;
  response?: GenerateVideosResponse;
  error?: Status;
  metadata?: GenerateVideosMetadata;
}
```

[Video Generation](./video-generation.md)

### Function Calling

Define and use custom functions and tools for extended model capabilities.

```typescript { .api }
interface Tool {
  functionDeclarations?: FunctionDeclaration[];
  codeExecution?: ToolCodeExecution;
  googleSearch?: GoogleSearch;
  retrieval?: Retrieval;
  fileSearch?: FileSearch;
  computerUse?: ComputerUse;
}

interface CallableTool {
  tool(): Promise<Tool>;
  callTool(functionCalls: FunctionCall[]): Promise<Part[]>;
}

interface FunctionDeclaration {
  name?: string;
  description?: string;
  parameters?: Schema;
  parametersJsonSchema?: unknown;
  response?: Schema;
  responseJsonSchema?: unknown;
  behavior?: Behavior;
}
```

[Function Calling](./function-calling.md)
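When a response comes back with `functionCalls`, the application (or automatic function calling) executes each call and returns the results as `functionResponse` parts in the next turn. A sketch of that dispatch step, with local stand-in types (`runFunctionCalls` is illustrative, not an SDK export):

```typescript
// Local stand-ins for the SDK's function-calling types.
interface FunctionCall { id?: string; name?: string; args?: Record<string, unknown> }
interface Part {
  functionResponse?: { id?: string; name?: string; response: Record<string, unknown> };
}

type Handler = (args: Record<string, unknown>) => Record<string, unknown>;

// Execute each model-requested call against a local handler table and
// package the results as functionResponse parts for the next request.
function runFunctionCalls(
  calls: FunctionCall[],
  handlers: Record<string, Handler>,
): Part[] {
  return calls.map((call) => {
    const handler = call.name ? handlers[call.name] : undefined;
    const response = handler
      ? handler(call.args ?? {})
      : { error: `unknown function: ${call.name}` };
    return { functionResponse: { id: call.id, name: call.name, response } };
  });
}
```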

### Context Caching

Cache content for improved efficiency and reduced costs on repeated requests.

```typescript { .api }
function create(params: CreateCachedContentParameters): Promise<CachedContent>;
function list(params?: ListCachedContentsParameters): Promise<Pager<CachedContent>>;
function get(params: GetCachedContentParameters): Promise<CachedContent>;
function update(params: UpdateCachedContentParameters): Promise<CachedContent>;
function delete(params: DeleteCachedContentParameters): Promise<DeleteCachedContentResponse>;

interface CachedContent {
  name?: string;
  model?: string;
  createTime?: string;
  updateTime?: string;
  expireTime?: string;
  ttl?: string;
  contents?: Content[];
  tools?: Tool[];
  systemInstruction?: Content;
}
```

[Context Caching](./caching.md)

### Batch Processing

Process multiple requests efficiently using batch jobs with GCS or BigQuery integration.

```typescript { .api }
function create(params: CreateBatchJobParameters): Promise<BatchJob>;
function createEmbeddings(params: CreateEmbeddingsBatchJobParameters): Promise<BatchJob>;
function list(params?: ListBatchJobsParameters): Promise<Pager<BatchJob>>;
function get(params: GetBatchJobParameters): Promise<BatchJob>;
function cancel(params: CancelBatchJobParameters): Promise<void>;

interface BatchJob {
  name?: string;
  displayName?: string;
  model?: string;
  state?: JobState;
  createTime?: string;
  startTime?: string;
  endTime?: string;
  src?: BatchJobSource;
  config?: BatchJobConfig;
}
```

[Batch Processing](./batch.md)

### File Management

Upload and manage files for use in generation requests (Gemini API only).

```typescript { .api }
function upload(params: UploadFileParameters): Promise<File>;
function list(params?: ListFilesParameters): Promise<Pager<File>>;
function download(params: DownloadFileParameters): Promise<void>;

interface File {
  name?: string;
  displayName?: string;
  mimeType?: string;
  sizeBytes?: string;
  createTime?: string;
  updateTime?: string;
  expirationTime?: string;
  sha256Hash?: string;
  uri?: string;
  state?: FileState;
  videoMetadata?: VideoMetadata;
}
```

[File Management](./files.md)

### Live API (Experimental)

Real-time bidirectional communication with models via WebSocket for multimodal streaming.

```typescript { .api }
function connect(params: LiveConnectParameters): Promise<Session>;

class Session {
  send(message: LiveClientMessage): void;
  sendRealtimeInput(params: LiveSendRealtimeInputParameters): void;
  sendToolResponse(params: LiveSendToolResponseParameters): void;
  close(): void;
}

interface LiveConnectParameters {
  model: string;
  config?: LiveConnectConfig;
  callbacks?: LiveCallbacks;
}
```

[Live API](./live.md)

### Embeddings

Generate text embeddings for semantic search and similarity comparison.

```typescript { .api }
function embedContent(params: EmbedContentParameters): Promise<EmbedContentResponse>;

interface EmbedContentParameters {
  model: string;
  contents: ContentListUnion;
  config?: EmbedContentConfig;
}

interface EmbedContentResponse {
  embeddings?: ContentEmbedding[];
}
```

[Embeddings](./embeddings.md)
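For semantic search, embeddings are usually compared with cosine similarity. A standard implementation, under the assumption that each returned embedding exposes its vector as a plain `number[]`:

```typescript
// Cosine similarity of two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for non-zero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length || a.length === 0) {
    throw new Error('vectors must be non-empty and equal in length');
  }
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```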

### Model Management

List, retrieve, update, and delete models (especially tuned models).

```typescript { .api }
function list(params?: ListModelsParameters): Promise<Pager<Model>>;
function get(params: GetModelParameters): Promise<Model>;
function update(params: UpdateModelParameters): Promise<Model>;
function delete(params: DeleteModelParameters): Promise<DeleteModelResponse>;

interface Model {
  name?: string;
  baseModelId?: string;
  version?: string;
  displayName?: string;
  description?: string;
  inputTokenLimit?: number;
  outputTokenLimit?: number;
  supportedGenerationMethods?: string[];
}
```

[Model Management](./models.md)

### Model Tuning (Experimental)

Fine-tune models with custom training data.

```typescript { .api }
function tune(params: CreateTuningJobParameters): Promise<TuningJob>;
function list(params?: ListTuningJobsParameters): Promise<Pager<TuningJob>>;
function get(params: GetTuningJobParameters): Promise<TuningJob>;

interface TuningJob {
  name?: string;
  tunedModelDisplayName?: string;
  baseModel?: string;
  state?: JobState;
  createTime?: string;
  startTime?: string;
  endTime?: string;
  trainingDataset?: TuningDataset;
  validationDataset?: TuningDataset;
  tunedModel?: string;
}
```

[Model Tuning](./tuning.md)

### File Search Stores

Manage file search stores and documents for enhanced search capabilities during generation (Gemini API only).

```typescript { .api }
function create(params: CreateFileSearchStoreParameters): Promise<FileSearchStore>;
function list(params?: ListFileSearchStoresParameters): Promise<Pager<FileSearchStore>>;
function get(params: GetFileSearchStoreParameters): Promise<FileSearchStore>;
function delete(params: DeleteFileSearchStoreParameters): Promise<void>;
function uploadToFileSearchStore(params: UploadToFileSearchStoreParameters): Promise<UploadToFileSearchStoreOperation>;
function importFile(params: ImportFileParameters): Promise<ImportFileOperation>;

interface FileSearchStores {
  readonly documents: Documents;
}

interface FileSearchStore {
  name?: string;
  displayName?: string;
  createTime?: string;
  updateTime?: string;
}
```

[File Search Stores](./file-search-stores.md)

### Operations

Manage and monitor long-running operations such as video generation and file imports.

```typescript { .api }
function get<T, U extends Operation<T>>(
  parameters: OperationGetParameters<T, U>
): Promise<Operation<T>>;

function getVideosOperation(
  parameters: OperationGetParameters<GenerateVideosResponse, GenerateVideosOperation>
): Promise<GenerateVideosOperation>;

interface Operation<T> {
  name?: string;
  done?: boolean;
  response?: T;
  error?: Status;
  metadata?: Record<string, unknown>;
}
```

[Operations](./operations.md)
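The usual pattern is to poll an operation until `done` is set, then read `response` or `error`. A sketch under the assumption that `fetchOperation` wraps a call such as `client.operations.get(...)`; the `pollUntilDone` helper itself is illustrative, not part of the SDK:

```typescript
// Local stand-in for the SDK's Operation<T> shape.
interface Operation<T> {
  name?: string;
  done?: boolean;
  response?: T;
  error?: { message?: string };
}

// Poll until the operation reports done, sleeping between attempts.
// Throws if the operation fails or the attempt budget is exhausted.
async function pollUntilDone<T>(
  fetchOperation: () => Promise<Operation<T>>,
  delayMs = 1000,
  maxAttempts = 60,
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const op = await fetchOperation();
    if (op.done) {
      if (op.error) throw new Error(op.error.message ?? 'operation failed');
      return op.response as T;
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('operation did not complete in time');
}
```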

### Authentication Tokens (Experimental)

Create ephemeral authentication tokens for constrained Live API access.

```typescript { .api }
function create(params: CreateAuthTokenParameters): Promise<AuthToken>;

interface AuthToken {
  name?: string;
  uses?: number;
  expireTime?: string;
  usesRemaining?: number;
  createTime?: string;
}
```

[Authentication Tokens](./auth-tokens.md)

### MCP Integration (Experimental)

Model Context Protocol support for extended tool capabilities.

```typescript { .api }
function mcpToTool(
  mcpClientOrTools: McpClient | McpTool[],
  config?: CallableToolConfig
): Promise<CallableTool>;
```

[MCP Integration](./mcp.md)
482
483
## Core Types
484
485
```typescript { .api }
486
// Content structure
487
interface Part {
488
text?: string;
489
inlineData?: Blob;
490
fileData?: FileData;
491
functionCall?: FunctionCall;
492
functionResponse?: FunctionResponse;
493
videoMetadata?: VideoMetadata;
494
executableCode?: ExecutableCode;
495
codeExecutionResult?: CodeExecutionResult;
496
thought?: boolean;
497
mediaResolution?: PartMediaResolution;
498
}
499
500
interface Content {
501
parts?: Part[];
502
role?: string;
503
}
504
505
// Blob and file references
506
interface Blob {
507
mimeType?: string;
508
data?: string;
509
}
510
511
interface FileData {
512
fileUri?: string;
513
mimeType?: string;
514
}
515
516
// Union types for flexible inputs
517
type PartUnion = Part | string;
518
type PartListUnion = PartUnion[] | PartUnion;
519
type ContentUnion = Content | PartUnion[] | PartUnion;
520
type ContentListUnion = Content | Content[] | PartUnion | PartUnion[];
521
522
// Generation configuration
523
interface GenerateContentConfig {
524
temperature?: number;
525
topP?: number;
526
topK?: number;
527
candidateCount?: number;
528
maxOutputTokens?: number;
529
stopSequences?: string[];
530
presencePenalty?: number;
531
frequencyPenalty?: number;
532
responseModalities?: Modality[];
533
systemInstruction?: Content | string;
534
tools?: ToolListUnion;
535
toolConfig?: ToolConfig;
536
safetySettings?: SafetySetting[];
537
cachedContent?: string;
538
automaticFunctionCalling?: AutomaticFunctionCallingConfig;
539
thinkingConfig?: ThinkingConfig;
540
responseSchema?: Schema;
541
responseJsonSchema?: unknown;
542
responseMimeType?: string;
543
httpOptions?: HttpOptions;
544
abortSignal?: AbortSignal;
545
}
546
547
// Safety configuration
548
interface SafetySetting {
549
category?: HarmCategory;
550
threshold?: HarmBlockThreshold;
551
method?: HarmBlockMethod;
552
}
553
554
interface SafetyRating {
555
category?: HarmCategory;
556
probability?: HarmProbability;
557
blocked?: boolean;
558
probabilityScore?: number;
559
severity?: HarmSeverity;
560
severityScore?: number;
561
}
562
563
// Usage metadata
564
interface UsageMetadata {
565
promptTokenCount?: number;
566
candidatesTokenCount?: number;
567
totalTokenCount?: number;
568
cachedContentTokenCount?: number;
569
}
570
571
// HTTP configuration
572
interface HttpOptions {
573
baseUrl?: string;
574
apiVersion?: string;
575
headers?: Record<string, string>;
576
timeout?: number;
577
extraBody?: Record<string, unknown>;
578
}
579
580
interface HttpResponse {
581
headers?: Record<string, string>;
582
status?: number;
583
}
584
585
// Schema for structured outputs
586
interface Schema {
587
type?: Type;
588
format?: string;
589
description?: string;
590
nullable?: boolean;
591
enum?: string[];
592
items?: Schema;
593
properties?: Record<string, Schema>;
594
required?: string[];
595
maxItems?: string;
596
minItems?: string;
597
propertyOrdering?: string[];
598
}
599
```
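As a concrete example of these types, a single user turn mixing inline image bytes with a text question can be built by hand. The interfaces below are local stand-ins for the SDK's, and the `imageQuestion` helper is illustrative:

```typescript
// Local stand-ins for the SDK's content types.
interface Blob { mimeType?: string; data?: string }
interface Part { text?: string; inlineData?: Blob }
interface Content { role?: string; parts?: Part[] }

// Build a `contents` value: one user turn with an inline PNG
// (base64 string, per Blob.data) followed by a text question.
function imageQuestion(base64Png: string, question: string): Content[] {
  return [{
    role: 'user',
    parts: [
      { inlineData: { mimeType: 'image/png', data: base64Png } },
      { text: question },
    ],
  }];
}
```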

## Enumerations

```typescript { .api }
// Content modalities
enum Modality {
  TEXT = 'TEXT',
  IMAGE = 'IMAGE',
  AUDIO = 'AUDIO'
}

// Finish reasons
enum FinishReason {
  FINISH_REASON_UNSPECIFIED = 'FINISH_REASON_UNSPECIFIED',
  STOP = 'STOP',
  MAX_TOKENS = 'MAX_TOKENS',
  SAFETY = 'SAFETY',
  RECITATION = 'RECITATION',
  OTHER = 'OTHER',
  BLOCKLIST = 'BLOCKLIST',
  PROHIBITED_CONTENT = 'PROHIBITED_CONTENT',
  SPII = 'SPII',
  MALFORMED_FUNCTION_CALL = 'MALFORMED_FUNCTION_CALL'
}

// Harm categories
enum HarmCategory {
  HARM_CATEGORY_UNSPECIFIED = 'HARM_CATEGORY_UNSPECIFIED',
  HARM_CATEGORY_HATE_SPEECH = 'HARM_CATEGORY_HATE_SPEECH',
  HARM_CATEGORY_SEXUALLY_EXPLICIT = 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
  HARM_CATEGORY_HARASSMENT = 'HARM_CATEGORY_HARASSMENT',
  HARM_CATEGORY_DANGEROUS_CONTENT = 'HARM_CATEGORY_DANGEROUS_CONTENT',
  HARM_CATEGORY_CIVIC_INTEGRITY = 'HARM_CATEGORY_CIVIC_INTEGRITY',
  HARM_CATEGORY_IMAGE_HATE = 'HARM_CATEGORY_IMAGE_HATE',
  HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT = 'HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT',
  HARM_CATEGORY_IMAGE_HARASSMENT = 'HARM_CATEGORY_IMAGE_HARASSMENT',
  HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT = 'HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT',
  HARM_CATEGORY_JAILBREAK = 'HARM_CATEGORY_JAILBREAK'
}

// Harm block thresholds
enum HarmBlockThreshold {
  HARM_BLOCK_THRESHOLD_UNSPECIFIED = 'HARM_BLOCK_THRESHOLD_UNSPECIFIED',
  BLOCK_LOW_AND_ABOVE = 'BLOCK_LOW_AND_ABOVE',
  BLOCK_MEDIUM_AND_ABOVE = 'BLOCK_MEDIUM_AND_ABOVE',
  BLOCK_ONLY_HIGH = 'BLOCK_ONLY_HIGH',
  BLOCK_NONE = 'BLOCK_NONE',
  OFF = 'OFF'
}

// Harm probabilities
enum HarmProbability {
  HARM_PROBABILITY_UNSPECIFIED = 'HARM_PROBABILITY_UNSPECIFIED',
  NEGLIGIBLE = 'NEGLIGIBLE',
  LOW = 'LOW',
  MEDIUM = 'MEDIUM',
  HIGH = 'HIGH'
}

// Schema types
enum Type {
  TYPE_UNSPECIFIED = 'TYPE_UNSPECIFIED',
  STRING = 'STRING',
  NUMBER = 'NUMBER',
  INTEGER = 'INTEGER',
  BOOLEAN = 'BOOLEAN',
  ARRAY = 'ARRAY',
  OBJECT = 'OBJECT',
  NULL = 'NULL'
}

// Programming languages
enum Language {
  LANGUAGE_UNSPECIFIED = 'LANGUAGE_UNSPECIFIED',
  PYTHON = 'PYTHON'
}

// Function calling modes
enum FunctionCallingConfigMode {
  MODE_UNSPECIFIED = 'MODE_UNSPECIFIED',
  AUTO = 'AUTO',
  ANY = 'ANY',
  NONE = 'NONE',
  VALIDATED = 'VALIDATED'
}

// Function behavior
enum Behavior {
  BEHAVIOR_UNSPECIFIED = 'BEHAVIOR_UNSPECIFIED',
  BLOCKING = 'BLOCKING',
  NON_BLOCKING = 'NON_BLOCKING'
}

// Job states
enum JobState {
  JOB_STATE_UNSPECIFIED = 'JOB_STATE_UNSPECIFIED',
  JOB_STATE_QUEUED = 'JOB_STATE_QUEUED',
  JOB_STATE_PENDING = 'JOB_STATE_PENDING',
  JOB_STATE_RUNNING = 'JOB_STATE_RUNNING',
  JOB_STATE_SUCCEEDED = 'JOB_STATE_SUCCEEDED',
  JOB_STATE_FAILED = 'JOB_STATE_FAILED',
  JOB_STATE_CANCELLING = 'JOB_STATE_CANCELLING',
  JOB_STATE_CANCELLED = 'JOB_STATE_CANCELLED',
  JOB_STATE_PAUSED = 'JOB_STATE_PAUSED',
  JOB_STATE_EXPIRED = 'JOB_STATE_EXPIRED',
  JOB_STATE_UPDATING = 'JOB_STATE_UPDATING',
  JOB_STATE_PARTIALLY_SUCCEEDED = 'JOB_STATE_PARTIALLY_SUCCEEDED'
}

// File states
enum FileState {
  STATE_UNSPECIFIED = 'STATE_UNSPECIFIED',
  PROCESSING = 'PROCESSING',
  ACTIVE = 'ACTIVE',
  FAILED = 'FAILED'
}
```

## Helper Functions

```typescript { .api }
// Part creation helpers
function createPartFromText(text: string): Part;
function createPartFromUri(
  uri: string,
  mimeType: string,
  mediaResolution?: PartMediaResolutionLevel
): Part;
function createPartFromBase64(
  data: string,
  mimeType: string,
  mediaResolution?: PartMediaResolutionLevel
): Part;
function createPartFromFunctionCall(
  name: string,
  args: Record<string, unknown>
): Part;
function createPartFromFunctionResponse(
  id: string,
  name: string,
  response: Record<string, unknown>,
  parts?: FunctionResponsePart[]
): Part;
function createPartFromCodeExecutionResult(
  outcome: Outcome,
  output: string
): Part;
function createPartFromExecutableCode(
  code: string,
  language: Language
): Part;

// Content creation helpers
function createUserContent(partOrString: PartListUnion | string): Content;
function createModelContent(partOrString: PartListUnion | string): Content;

// FunctionResponsePart creation helpers
function createFunctionResponsePartFromBase64(
  data: string,
  mimeType: string
): FunctionResponsePart;
function createFunctionResponsePartFromUri(
  uri: string,
  mimeType: string
): FunctionResponsePart;

// Base URL configuration
function setDefaultBaseUrls(baseUrlParams: BaseUrlParameters): void;
function getDefaultBaseUrls(): BaseUrlParameters;

interface BaseUrlParameters {
  vertexBaseUrl?: string;
  geminiBaseUrl?: string;
}
```
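To make the helpers' output concrete, here is roughly what `createPartFromText` and `createUserContent` produce, sketched with local stand-in types (the real helpers' details may differ):

```typescript
// Local stand-ins for the SDK's content types.
interface Part { text?: string }
interface Content { role?: string; parts?: Part[] }

// Sketch of createPartFromText: wrap a string in a text Part.
function partFromText(text: string): Part {
  return { text };
}

// Sketch of createUserContent: accept a string or a mixed list of
// parts/strings, and wrap them in a user-role Content.
function userContent(parts: (Part | string)[] | string): Content {
  const list = typeof parts === 'string' ? [parts] : parts;
  return {
    role: 'user',
    parts: list.map((p) => (typeof p === 'string' ? partFromText(p) : p)),
  };
}
```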

## Pagination

```typescript { .api }
class Pager<T> implements AsyncIterable<T> {
  readonly page: T[];
  readonly pageSize: number;
  readonly pageLength: number;
  readonly name: PagedItem;
  readonly params: PagedItemConfig;
  readonly sdkHttpResponse?: HttpResponse;

  [Symbol.asyncIterator](): AsyncIterator<T>;
  nextPage(): Promise<T[]>;
  hasNextPage(): boolean;
  getItem(index: number): T;
}

enum PagedItem {
  PAGED_ITEM_BATCH_JOBS = 'PAGED_ITEM_BATCH_JOBS',
  PAGED_ITEM_MODELS = 'PAGED_ITEM_MODELS',
  PAGED_ITEM_TUNING_JOBS = 'PAGED_ITEM_TUNING_JOBS',
  PAGED_ITEM_FILES = 'PAGED_ITEM_FILES',
  PAGED_ITEM_CACHED_CONTENTS = 'PAGED_ITEM_CACHED_CONTENTS',
  PAGED_ITEM_FILE_SEARCH_STORES = 'PAGED_ITEM_FILE_SEARCH_STORES',
  PAGED_ITEM_DOCUMENTS = 'PAGED_ITEM_DOCUMENTS'
}
```
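A `Pager` can be consumed either with `for await` (which walks every item across pages) or by stepping pages manually with `hasNextPage`/`nextPage`. A sketch against a mock pager; `MockPager` is a local stand-in, not the SDK's `Pager`:

```typescript
// Mock pager over pre-defined pages, mirroring the Pager surface used here.
class MockPager<T> implements AsyncIterable<T> {
  private index = 0;
  constructor(private pages: T[][]) {}

  get page(): T[] { return this.pages[this.index] ?? []; }
  hasNextPage(): boolean { return this.index + 1 < this.pages.length; }
  async nextPage(): Promise<T[]> { return this.pages[++this.index] ?? []; }

  // Yield every item on the current page, then keep fetching pages.
  async *[Symbol.asyncIterator]() {
    for (const item of this.page) yield item;
    while (this.hasNextPage()) {
      for (const item of await this.nextPage()) yield item;
    }
  }
}

// Collect all items across pages, as `for await` over a Pager would.
async function collectAll<T>(pager: MockPager<T>): Promise<T[]> {
  const out: T[] = [];
  for await (const item of pager) out.push(item);
  return out;
}
```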

## Error Handling

```typescript { .api }
class ApiError extends Error {
  message: string;
  status: number;
  name: 'ApiError';
}

interface ApiErrorInfo {
  message: string;
  status: number;
}
```
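A common pattern is to branch on `status` when catching errors, retrying on rate limits and server errors but surfacing client errors immediately. A sketch using a local class that mirrors `ApiError`'s shape:

```typescript
// Local class mirroring the shape of the SDK's ApiError.
class LocalApiError extends Error {
  name = 'ApiError' as const;
  constructor(message: string, public status: number) {
    super(message);
  }
}

// Retry on 429 (rate limit) and 5xx (server errors); treat everything
// else, including non-API errors, as non-retryable.
function isRetryable(err: unknown): boolean {
  return err instanceof LocalApiError && (err.status === 429 || err.status >= 500);
}
```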

## Constants

```typescript { .api }
const SDK_VERSION: string = '1.30.0';
const GOOGLE_API_CLIENT_HEADER: string = 'x-goog-api-client';
```