tessl install github:jezweb/claude-skills --skill google-gemini-api

Integrate Gemini API with @google/genai SDK (NOT deprecated @google/generative-ai). Text generation, multimodal (images/video/audio/PDFs), function calling, thinking mode, streaming. 1M input tokens. Prevents 14 documented errors. Use when: Gemini integration, multimodal AI, reasoning with thinking mode. Troubleshoot: SDK deprecation, model not found, context window, function calling errors, streaming corruption, safety settings, rate limits.
Review Score: 70% · Validation Score: 12/16 · Implementation Score: 42% · Activation Score: 100%
Version: 3.0.0 (14 Known Issues Added)
Package: @google/genai@1.35.0 (⚠️ NOT @google/generative-ai)
Last Updated: 2026-01-21
DEPRECATED SDK: @google/generative-ai (sunset November 30, 2025)
CURRENT SDK: @google/genai v1.27+
If you see code using @google/generative-ai, it's outdated!
This skill uses the correct current SDK and provides a complete migration guide.
✅ Phase 1 Complete
✅ Phase 2 Complete
📦 Separate Skills: google-gemini-embeddings skill for text-embedding-004

Phase 1 - Core Features:
Phase 2 - Advanced Features:
12. Context Caching
13. Code Execution
14. Grounding with Google Search

Common Reference:
15. Known Issues Prevention
16. Error Handling
17. Rate Limits
18. SDK Migration Guide
19. Production Best Practices
CORRECT SDK:
npm install @google/genai@1.34.0

❌ WRONG (DEPRECATED):
npm install @google/generative-ai  # DO NOT USE!

Set your API key:
export GEMINI_API_KEY="..."

Or create a .env file:
GEMINI_API_KEY=...

Quick start with the SDK:
import { GoogleGenAI } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Explain quantum computing in simple terms'
});
console.log(response.text);

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [{ parts: [{ text: 'Explain quantum computing in simple terms' }] }]
}),
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);

| Feature | 3-Flash | 3-Pro (Preview) | 2.5-Pro | 2.5-Flash | 2.5-Flash-Lite |
|---|---|---|---|---|---|
| Thinking Mode | ✅ Default ON | TBD | ✅ Default ON | ✅ Default ON | ✅ Default ON |
| Function Calling | ✅ | ✅ | ✅ | ✅ | ✅ |
| Multimodal | ✅ Enhanced | ✅ Enhanced | ✅ | ✅ | ✅ |
| Streaming | ✅ | ✅ | ✅ | ✅ | ✅ |
| System Instructions | ✅ | ✅ | ✅ | ✅ | ✅ |
| Context Window | 1,048,576 in | TBD | 1,048,576 in | 1,048,576 in | 1,048,576 in |
| Output Tokens | 65,536 max | TBD | 65,536 max | 65,536 max | 65,536 max |
| Status | GA | Preview | Stable | Stable | Stable |
ACCURATE (Gemini 2.5): Gemini 2.5 models support 1,048,576 input tokens (NOT 2M!)
OUTDATED: Only Gemini 1.5 Pro (previous generation) had a 2M-token context window
GEMINI 3: Context window specifications pending official documentation
Common mistake: Claiming Gemini 2.5 has 2M tokens. It doesn't. This skill prevents this error.
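Before sending a very large prompt, you can verify it fits the 1,048,576-token input window with the SDK's countTokens method. A minimal sketch (largeDocumentText is a hypothetical variable holding your prompt):

import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Count tokens without generating anything
const { totalTokens } = await ai.models.countTokens({
  model: 'gemini-2.5-flash',
  contents: largeDocumentText // hypothetical: your prompt text
});

if (totalTokens > 1_048_576) {
  throw new Error(`Prompt is ${totalTokens} tokens; exceeds the input window`);
}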
Pros:
Cons:
Use when: Building Node.js apps, Next.js Server Actions/Components, or any environment with Node.js compatibility
Pros:
Cons:
Use when: Deploying to Cloudflare Workers, browser clients, or lightweight edge runtimes
import { GoogleGenAI } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Write a haiku about artificial intelligence'
});
console.log(response.text);

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
{
parts: [
{ text: 'Write a haiku about artificial intelligence' }
]
}
]
}),
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);

Response structure:
{
text: string, // Convenience accessor for text content
candidates: [
{
content: {
parts: [
{ text: string } // Generated text
],
role: string // "model"
},
finishReason: string, // "STOP" | "MAX_TOKENS" | "SAFETY" | "OTHER"
index: number
}
],
usageMetadata: {
promptTokenCount: number,
candidatesTokenCount: number,
totalTokenCount: number
}
}

Streaming with the SDK:
const response = await ai.models.generateContentStream({
model: 'gemini-2.5-flash',
contents: 'Write a 200-word story about time travel'
});
for await (const chunk of response) {
process.stdout.write(chunk.text);
}

Streaming with fetch (note ?alt=sse, required for the SSE data: format parsed below):
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:streamGenerateContent?alt=sse`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [{ parts: [{ text: 'Write a 200-word story about time travel' }] }]
}),
}
);
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
const lines = buffer.split('\n');
buffer = lines.pop() || '';
for (const line of lines) {
if (line.trim() === '' || line.startsWith('data: [DONE]')) continue;
if (!line.startsWith('data: ')) continue;
try {
const data = JSON.parse(line.slice(6));
const text = data.candidates[0]?.content?.parts[0]?.text;
if (text) {
process.stdout.write(text);
}
} catch (e) {
// Skip invalid JSON
}
}
}

Key Points:
- Use the streamGenerateContent endpoint (not generateContent)
- With ?alt=sse, each chunk arrives as a server-sent event: data: {json}\n\n
- The stream may end with a data: [DONE] marker

Gemini 2.5 models support text + images + video + audio + PDFs in the same request.
import { GoogleGenAI } from '@google/genai';
import fs from 'fs';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
// From file
const imageData = fs.readFileSync('/path/to/image.jpg');
const base64Image = imageData.toString('base64');
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
{
parts: [
{ text: 'What is in this image?' },
{
inlineData: {
data: base64Image,
mimeType: 'image/jpeg'
}
}
]
}
]
});
console.log(response.text);

Using fetch:
const imageData = fs.readFileSync('/path/to/image.jpg');
const base64Image = imageData.toString('base64');
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
{
parts: [
{ text: 'What is in this image?' },
{
inlineData: {
data: base64Image,
mimeType: 'image/jpeg'
}
}
]
}
]
}),
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);

Supported Image Formats:
- JPEG (.jpg, .jpeg)
- PNG (.png)
- WebP (.webp)
- HEIC (.heic)
- HEIF (.heif)

Max Image Size: 20MB per image
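Since inline images are capped at 20MB, a cheap pre-flight check avoids a 400 after base64-encoding. A sketch using plain Node.js (nothing Gemini-specific):

import fs from 'fs';

const MAX_INLINE_IMAGE_BYTES = 20 * 1024 * 1024; // 20MB inline limit

const stats = fs.statSync('/path/to/image.jpg');
if (stats.size > MAX_INLINE_IMAGE_BYTES) {
  throw new Error(`Image is ${stats.size} bytes; exceeds the 20MB inline limit`);
}
// Safe to read and base64-encode
const base64Image = fs.readFileSync('/path/to/image.jpg').toString('base64');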
// Video must be < 2 minutes for inline data
const videoData = fs.readFileSync('/path/to/video.mp4');
const base64Video = videoData.toString('base64');
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
{
parts: [
{ text: 'Describe what happens in this video' },
{
inlineData: {
data: base64Video,
mimeType: 'video/mp4'
}
}
]
}
]
});
console.log(response.text);

Supported Video Formats:
- MP4 (.mp4)
- MPEG (.mpeg)
- MOV (.mov)
- AVI (.avi)
- FLV (.flv)
- MPG (.mpg)
- WebM (.webm)
- WMV (.wmv)

Max Video Length (inline): 2 minutes
Max Video Size: 2GB (use File API for larger files - Phase 2)
const audioData = fs.readFileSync('/path/to/audio.mp3');
const base64Audio = audioData.toString('base64');
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
{
parts: [
{ text: 'Transcribe and summarize this audio' },
{
inlineData: {
data: base64Audio,
mimeType: 'audio/mp3'
}
}
]
}
]
});
console.log(response.text);

Supported Audio Formats:
- MP3 (.mp3)
- WAV (.wav)
- FLAC (.flac)
- AAC (.aac)
- OGG (.ogg)
- Opus (.opus)

Max Audio Size: 20MB
const pdfData = fs.readFileSync('/path/to/document.pdf');
const base64Pdf = pdfData.toString('base64');
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
{
parts: [
{ text: 'Summarize the key points in this PDF' },
{
inlineData: {
data: base64Pdf,
mimeType: 'application/pdf'
}
}
]
}
]
});
console.log(response.text);

Max PDF Size: 30MB
PDF Limitations: Text-based PDFs work best; scanned images may have lower accuracy
You can combine multiple modalities in one request:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
{
parts: [
{ text: 'Compare these two images and describe the differences:' },
{ inlineData: { data: base64Image1, mimeType: 'image/jpeg' } },
{ inlineData: { data: base64Image2, mimeType: 'image/jpeg' } }
]
}
]
});

Gemini supports function calling (tool use) to connect models with external APIs and systems.
import { GoogleGenAI, FunctionCallingConfigMode } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
// Define function declarations
const getCurrentWeather = {
name: 'get_current_weather',
description: 'Get the current weather for a location',
parametersJsonSchema: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'City name, e.g. San Francisco'
},
unit: {
type: 'string',
enum: ['celsius', 'fahrenheit']
}
},
required: ['location']
}
};
// Make request with tools
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What\'s the weather in Tokyo?',
config: {
tools: [
{ functionDeclarations: [getCurrentWeather] }
]
}
});
// Check if model wants to call a function
const functionCall = response.candidates[0].content.parts[0].functionCall;
if (functionCall) {
console.log('Function to call:', functionCall.name);
console.log('Arguments:', functionCall.args);
// Execute the function (your implementation)
const weatherData = await fetchWeather(functionCall.args.location);
// Send function result back to model
const finalResponse = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: [
'What\'s the weather in Tokyo?',
response.candidates[0].content, // Original assistant response with function call
{
parts: [
{
functionResponse: {
name: functionCall.name,
response: weatherData
}
}
]
}
],
config: {
tools: [
{ functionDeclarations: [getCurrentWeather] }
]
}
});
console.log(finalResponse.text);
}

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
{ parts: [{ text: 'What\'s the weather in Tokyo?' }] }
],
tools: [
{
functionDeclarations: [
{
name: 'get_current_weather',
description: 'Get the current weather for a location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'City name'
}
},
required: ['location']
}
}
]
}
]
}),
}
);
const data = await response.json();
const functionCall = data.candidates[0]?.content?.parts[0]?.functionCall;
if (functionCall) {
// Execute function and send result back (same flow as SDK)
}

Gemini can call multiple independent functions simultaneously:
const tools = [
{
functionDeclarations: [
{
name: 'get_weather',
description: 'Get weather for a location',
parametersJsonSchema: {
type: 'object',
properties: {
location: { type: 'string' }
},
required: ['location']
}
},
{
name: 'get_population',
description: 'Get population of a city',
parametersJsonSchema: {
type: 'object',
properties: {
city: { type: 'string' }
},
required: ['city']
}
}
]
}
];
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What is the weather and population of Tokyo?',
config: { tools }
});
// Model may return MULTIPLE function calls in parallel
const functionCalls = response.candidates[0].content.parts.filter(
part => part.functionCall
);
console.log(`Model wants to call ${functionCalls.length} functions in parallel`);

Controlling the function calling mode:
import { FunctionCallingConfigMode } from '@google/genai';
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What\'s the weather?',
config: {
tools: [{ functionDeclarations: [getCurrentWeather] }],
toolConfig: {
functionCallingConfig: {
mode: FunctionCallingConfigMode.ANY, // Force function call
// mode: FunctionCallingConfigMode.AUTO, // Model decides (default)
// mode: FunctionCallingConfigMode.NONE, // Never call functions
allowedFunctionNames: ['get_current_weather'] // Optional: restrict to specific functions
}
}
}
});

Modes:
- AUTO (default): Model decides whether to call functions
- ANY: Force model to call at least one function
- NONE: Disable function calling for this request

System instructions guide the model's behavior and set context. They are separate from the conversation messages.
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
systemInstruction: 'You are a helpful AI assistant that always responds in the style of a pirate. Use nautical terminology and end sentences with "arrr".',
contents: 'Explain what a database is'
});
console.log(response.text);
// Output: "Ahoy there! A database be like a treasure chest..."

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
systemInstruction: {
parts: [
{ text: 'You are a helpful AI assistant that always responds in the style of a pirate.' }
]
},
contents: [
{ parts: [{ text: 'Explain what a database is' }] }
]
}),
}
);

Key Points:
- System instructions are passed separately from the contents array

For conversations with history, use the SDK's chat helpers or manually manage conversation state.
const chat = ai.chats.create({
  model: 'gemini-2.5-flash',
  config: {
    systemInstruction: 'You are a helpful coding assistant.'
  },
  history: [] // Start empty or with previous messages
});
// Send first message
const response1 = await chat.sendMessage({ message: 'What is TypeScript?' });
console.log('Assistant:', response1.text);
// Send follow-up (context is automatically maintained)
const response2 = await chat.sendMessage({ message: 'How do I install it?' });
console.log('Assistant:', response2.text);
// Get full chat history
const history = chat.getHistory();
console.log('Full conversation:', history);

Using fetch (manual history management):
const conversationHistory = [];
// First turn
const response1 = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
{
role: 'user',
parts: [{ text: 'What is TypeScript?' }]
}
]
}),
}
);
const data1 = await response1.json();
const assistantReply1 = data1.candidates[0].content.parts[0].text;
// Add to history
conversationHistory.push(
{ role: 'user', parts: [{ text: 'What is TypeScript?' }] },
{ role: 'model', parts: [{ text: assistantReply1 }] }
);
// Second turn (include full history)
const response2 = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
...conversationHistory,
{ role: 'user', parts: [{ text: 'How do I install it?' }] }
]
}),
}
);

Message Roles:
- user: User messages
- model: Assistant responses

⚠️ Important: Chat helpers are SDK-only. With fetch, you must manually manage conversation history.
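When managing history manually, long conversations eventually eat into the context window. One common approach (a plain-JavaScript sketch, not an official API) is to keep only the most recent turns:

// Keep the last N user/model turn pairs to bound prompt size
function trimHistory(history, maxTurns = 10) {
  const maxMessages = maxTurns * 2; // each turn = one user + one model message
  return history.length > maxMessages
    ? history.slice(history.length - maxMessages)
    : history;
}

// Usage: send the trimmed history plus the new user message as `contents`
const trimmed = trimHistory(conversationHistory);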
Gemini 2.5 models have thinking mode enabled by default for enhanced quality. You can configure the thinking budget.
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Solve this complex math problem: ...',
config: {
thinkingConfig: {
thinkingBudget: 8192 // Max tokens for thinking (default: model-dependent)
}
}
});

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [{ parts: [{ text: 'Solve this complex math problem: ...' }] }],
generationConfig: {
thinkingConfig: {
thinkingBudget: 8192
}
}
}),
}
);

Using thinkingLevel instead of a token budget:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Solve this complex problem: ...',
config: {
thinkingConfig: {
thinkingLevel: 'MEDIUM' // 'LOW' | 'MEDIUM' | 'HIGH'
}
}
});

Thinking Levels:
- LOW: Minimal internal reasoning (faster, lower quality)
- MEDIUM: Balanced reasoning (default)
- HIGH: Maximum reasoning depth (slower, higher quality)

Key Points:
- thinkingLevel provides simpler control than thinkingBudget (new in v1.30.0)

Customize model behavior with generation parameters.
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Write a creative story',
config: {
temperature: 0.9, // Randomness (0.0-2.0, default: 1.0)
topP: 0.95, // Nucleus sampling (0.0-1.0)
topK: 40, // Top-k sampling
maxOutputTokens: 2048, // Max tokens to generate
stopSequences: ['END'], // Stop generation if these appear
responseMimeType: 'text/plain', // Or 'application/json' for JSON mode
candidateCount: 1 // Number of response candidates (usually 1)
}
});

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [{ parts: [{ text: 'Write a creative story' }] }],
generationConfig: {
temperature: 0.9,
topP: 0.95,
topK: 40,
maxOutputTokens: 2048,
stopSequences: ['END'],
responseMimeType: 'text/plain',
candidateCount: 1
}
}),
}
);

| Parameter | Range | Default | Use Case |
|---|---|---|---|
| temperature | 0.0-2.0 | 1.0 | Lower = more focused, higher = more creative |
| topP | 0.0-1.0 | 0.95 | Nucleus sampling threshold |
| topK | 1-100+ | 40 | Limit to top K tokens |
| maxOutputTokens | 1-65536 | Model max | Control response length |
| stopSequences | Array | None | Stop generation at specific strings |
Tips:
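One of the most useful options above is responseMimeType: 'application/json', which enables JSON mode; pairing it with a response schema constrains the output shape. A sketch using the Type enum and responseSchema field as described in current @google/genai docs (see the markdown-wrapped JSON known issue later in this document for why the fence-stripping fallback is still prudent):

import { Type } from '@google/genai';

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'List three programming languages with their release years',
  config: {
    responseMimeType: 'application/json',
    responseSchema: { // field name assumed from current SDK docs
      type: Type.ARRAY,
      items: {
        type: Type.OBJECT,
        properties: {
          name: { type: Type.STRING },
          year: { type: Type.INTEGER }
        },
        required: ['name', 'year']
      }
    }
  }
});

// Defensive parse: the model occasionally wraps JSON in markdown fences
const jsonText = response.text.replace(/^```(json)?\n?/, '').replace(/\n?```$/, '');
const languages = JSON.parse(jsonText);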
Context caching allows you to cache frequently used content (like system instructions, large documents, or video files) to reduce costs by up to 90% and improve latency.
import { GoogleGenAI } from '@google/genai';
import fs from 'fs';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
// Create a cache for a large document
const documentText = fs.readFileSync('./large-document.txt', 'utf-8');
const cache = await ai.caches.create({
model: 'gemini-2.5-flash',
config: {
displayName: 'large-doc-cache', // Identifier for the cache
systemInstruction: 'You are an expert at analyzing legal documents.',
contents: documentText,
ttl: '3600s', // Cache for 1 hour
}
});
console.log('Cache created:', cache.name);
console.log('Expires at:', cache.expireTime);

Using fetch:
const response = await fetch(
'https://generativelanguage.googleapis.com/v1beta/cachedContents',
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
model: 'models/gemini-2.5-flash',
displayName: 'large-doc-cache',
systemInstruction: {
parts: [{ text: 'You are an expert at analyzing legal documents.' }]
},
contents: [
{ parts: [{ text: documentText }] }
],
ttl: '3600s'
}),
}
);
const cache = await response.json();
console.log('Cache created:', cache.name);

Generate content using the cache (reference it via cachedContent):
const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Summarize the key points in the document',
  config: {
    cachedContent: cache.name
  }
});
console.log(response.text);

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
cachedContent: cache.name,
contents: [
{ parts: [{ text: 'Summarize the key points in the document' }] }
]
}),
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);

Update a cache's TTL:
import { UpdateCachedContentConfig } from '@google/genai';
await ai.caches.update({
name: cache.name,
config: {
ttl: '7200s' // Extend to 2 hours
}
});

// Set specific expiration time (must be timezone-aware)
const in10Minutes = new Date(Date.now() + 10 * 60 * 1000);
await ai.caches.update({
name: cache.name,
config: {
expireTime: in10Minutes
}
});

// List all caches
const caches = await ai.caches.list();
for (const cache of caches) {
console.log(cache.name, cache.displayName);
}
// Delete a specific cache
await ai.caches.delete({ name: cache.name });

Caching an uploaded file (File API):
import { GoogleGenAI } from '@google/genai';
import fs from 'fs';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
// Upload video file (in Node, the SDK accepts a file path)
let videoFile = await ai.files.upload({
  file: './video.mp4'
});
// Wait for processing (file.state is a string enum in @google/genai)
while (videoFile.state === 'PROCESSING') {
  await new Promise(resolve => setTimeout(resolve, 2000));
  videoFile = await ai.files.get({ name: videoFile.name });
}
// Create cache with video
const cache = await ai.caches.create({
model: 'gemini-2.5-flash',
config: {
displayName: 'video-analysis-cache',
systemInstruction: 'You are an expert video analyzer.',
contents: [{ parts: [{ fileData: { fileUri: videoFile.uri, mimeType: videoFile.mimeType } }] }],
ttl: '300s' // 5 minutes
}
});
// Use cache for multiple queries (reference the cache via cachedContent)
const response1 = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'What happens in the first minute?',
  config: { cachedContent: cache.name }
});
const response2 = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Describe the main characters',
  config: { cachedContent: cache.name }
});

When to Use Caching:
TTL Guidelines:
Cost Savings:
Important:
- Use explicit model versions (gemini-2.5-flash-001, NOT just gemini-2.5-flash)

Gemini models can generate and execute Python code to solve problems requiring computation, data analysis, or visualization.
Standard Library:
math, statistics, random, datetime, json, csv, re, collections, itertools, functools

Data Science:
numpy, pandas, scipy

Visualization:
matplotlib, seaborn

Note: Limited package availability compared to full Python environment
import { GoogleGenAI, Tool, ToolCodeExecution } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What is the sum of the first 50 prime numbers? Generate and run code for the calculation.',
config: {
tools: [{ codeExecution: {} }]
}
});
// Parse response parts
for (const part of response.candidates[0].content.parts) {
if (part.text) {
console.log('Text:', part.text);
}
if (part.executableCode) {
console.log('Generated Code:', part.executableCode.code);
}
if (part.codeExecutionResult) {
console.log('Execution Output:', part.codeExecutionResult.output);
}
}

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
tools: [{ code_execution: {} }],
contents: [
{
parts: [
{ text: 'What is the sum of the first 50 prime numbers? Generate and run code.' }
]
}
]
}),
}
);
const data = await response.json();
for (const part of data.candidates[0].content.parts) {
if (part.text) {
console.log('Text:', part.text);
}
if (part.executableCode) {
console.log('Code:', part.executableCode.code);
}
if (part.codeExecutionResult) {
console.log('Result:', part.codeExecutionResult.output);
}
}

Code execution in a chat session:
const chat = await ai.chats.create({
model: 'gemini-2.5-flash',
config: {
tools: [{ codeExecution: {} }]
}
});
let response = await chat.sendMessage({ message: 'I have a math question for you.' });
console.log(response.text);
response = await chat.sendMessage({
  message: 'Calculate the Fibonacci sequence up to the 20th number and sum them.'
});
// Model will generate and execute code, then provide answer
for (const part of response.candidates[0].content.parts) {
if (part.text) console.log(part.text);
if (part.executableCode) console.log('Code:', part.executableCode.code);
if (part.codeExecutionResult) console.log('Output:', part.codeExecutionResult.output);
}

Data analysis example:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: `
Analyze this sales data and calculate:
1. Total revenue
2. Average sale price
3. Best-selling month
Data (CSV format):
month,sales,revenue
Jan,150,45000
Feb,200,62000
Mar,175,53000
Apr,220,68000
`,
config: {
tools: [{ codeExecution: {} }]
}
});
// Model will generate pandas/numpy code to analyze data
for (const part of response.candidates[0].content.parts) {
if (part.text) console.log(part.text);
if (part.executableCode) console.log('Analysis Code:', part.executableCode.code);
if (part.codeExecutionResult) console.log('Results:', part.codeExecutionResult.output);
}

Chart generation example:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Create a bar chart showing the distribution of prime numbers under 100 by their last digit. Generate the chart and describe the pattern.',
config: {
tools: [{ codeExecution: {} }]
}
});
// Model generates matplotlib code, executes it, and describes results
for (const part of response.candidates[0].content.parts) {
if (part.text) console.log(part.text);
if (part.executableCode) console.log('Chart Code:', part.executableCode.code);
if (part.codeExecutionResult) {
// Note: Chart image data would be in output
console.log('Execution completed');
}
}

Response structure with code execution:
{
candidates: [
{
content: {
parts: [
{ text: "I'll calculate that for you." },
{
executableCode: {
language: "PYTHON",
code: "def is_prime(n):\n if n <= 1:\n return False\n ..."
}
},
{
codeExecutionResult: {
outcome: "OUTCOME_OK", // or "OUTCOME_FAILED"
output: "5117\n"
}
},
{ text: "The sum of the first 50 prime numbers is 5117." }
]
}
}
]
}

Handling execution errors:
for (const part of response.candidates[0].content.parts) {
if (part.codeExecutionResult) {
if (part.codeExecutionResult.outcome === 'OUTCOME_FAILED') {
console.error('Code execution failed:', part.codeExecutionResult.output);
} else {
console.log('Success:', part.codeExecutionResult.output);
}
}
}

When to Use Code Execution:
Limitations:
Best Practices:
- Check the outcome field for errors

Important:
Grounding connects the model to real-time web information, reducing hallucinations and providing up-to-date, fact-checked responses with citations.
Google Search (googleSearch) - Recommended for Gemini 2.5:
const groundingTool = {
googleSearch: {}
};

Features:

File Search (fileSearch):
const fileSearchTool = {
fileSearch: {
fileSearchStoreId: 'store-id-here' // Created via FileSearchStore APIs
}
};

Features:
Note: See FileSearch documentation for store creation and management.
Google Search Retrieval (googleSearchRetrieval) - Legacy (Gemini 1.5):
const retrievalTool = {
googleSearchRetrieval: {
dynamicRetrievalConfig: {
mode: 'MODE_DYNAMIC',
dynamicThreshold: 0.7 // Only search if confidence < 70%
}
}
};

Features:

Basic grounding with Google Search (SDK):
import { GoogleGenAI } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Who won the euro 2024?',
config: {
tools: [{ googleSearch: {} }]
}
});
console.log(response.text);
// Check if grounding was used
if (response.candidates[0].groundingMetadata) {
console.log('Search was performed!');
console.log('Sources:', response.candidates[0].groundingMetadata);
}

Using fetch:
const response = await fetch(
`https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent`,
{
method: 'POST',
headers: {
'Content-Type': 'application/json',
'x-goog-api-key': env.GEMINI_API_KEY,
},
body: JSON.stringify({
contents: [
{ parts: [{ text: 'Who won the euro 2024?' }] }
],
tools: [
{ google_search: {} }
]
}),
}
);
const data = await response.json();
console.log(data.candidates[0].content.parts[0].text);
if (data.candidates[0].groundingMetadata) {
console.log('Grounding metadata:', data.candidates[0].groundingMetadata);
}

Dynamic retrieval (Gemini 1.5, legacy):
import { GoogleGenAI, DynamicRetrievalConfigMode } from '@google/genai';
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Who won the euro 2024?',
config: {
tools: [
{
googleSearchRetrieval: {
dynamicRetrievalConfig: {
mode: DynamicRetrievalConfigMode.MODE_DYNAMIC,
dynamicThreshold: 0.7 // Search only if confidence < 70%
}
}
}
]
}
});
console.log(response.text);
if (!response.candidates[0].groundingMetadata) {
console.log('Model answered from its own knowledge (high confidence)');
}

groundingMetadata structure:
{
groundingMetadata: {
searchQueries: [
{ text: "euro 2024 winner" }
],
webPages: [
{
url: "https://example.com/euro-2024-results",
title: "UEFA Euro 2024 Final Results",
snippet: "Spain won UEFA Euro 2024..."
}
],
citations: [
{
startIndex: 42,
endIndex: 47,
uri: "https://example.com/euro-2024-results"
}
],
retrievalQueries: [
{
query: "who won euro 2024 final"
}
]
}
}

Grounding in a chat session:
const chat = await ai.chats.create({
model: 'gemini-2.5-flash',
config: {
tools: [{ googleSearch: {} }]
}
});
let response = await chat.sendMessage({ message: 'What are the latest developments in quantum computing?' });
console.log(response.text);
// Check grounding sources
if (response.candidates[0].groundingMetadata) {
const sources = response.candidates[0].groundingMetadata.webPages || [];
console.log(`Sources used: ${sources.length}`);
sources.forEach(source => {
console.log(`- ${source.title}: ${source.url}`);
});
}
// Follow-up still has grounding enabled
response = await chat.sendMessage({ message: 'Which company made the biggest breakthrough?' });
console.log(response.text);

Combining grounding with function calling:
const weatherFunction = {
name: 'get_current_weather',
description: 'Get current weather for a location',
parametersJsonSchema: {
type: 'object',
properties: {
location: { type: 'string', description: 'City name' }
},
required: ['location']
}
};
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What is the weather like in the city that won Euro 2024?',
config: {
tools: [
{ googleSearch: {} },
{ functionDeclarations: [weatherFunction] }
]
}
});
// Model will:
// 1. Use Google Search to find Euro 2024 winner
// 2. Call get_current_weather function with the city
// 3. Combine both results in the response

Checking whether a search was performed:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'What is 2+2?', // Model knows this without search
config: {
tools: [{ googleSearch: {} }]
}
});
if (!response.candidates[0].groundingMetadata) {
console.log('Model answered from its own knowledge (no search needed)');
} else {
console.log('Search was performed');
}

When to Use Grounding:
When NOT to Use:
Cost Considerations:
- Use dynamicThreshold to control when searches happen (Gemini 1.5)

Important Notes:
Gemini 2.5 vs 1.5:
- Gemini 2.5: use googleSearch (simple, recommended)
- Gemini 1.5: use googleSearchRetrieval with dynamicThreshold

Best Practices:
- Check groundingMetadata to see if a search was used

This skill prevents 14 documented issues:
Error: Garbled text or � symbols when streaming responses with non-English text
Source: GitHub Issue #764
Why It Happens: The TextDecoder converts chunks to strings without the {stream: true} option. Multi-byte UTF-8 characters (Chinese, Japanese, Korean, emoji) split across chunks create invalid strings.
Prevention:
// The SDK already fixes this, but if implementing custom streaming:
const decoder = new TextDecoder();
const { value } = await reader.read();
const text = decoder.decode(value, { stream: true }); // ← stream: true required

Affected: All non-English languages using multi-byte characters
Status: Fixed in SDK, but documented for custom implementations
Error: "method parameter is not supported in Gemini API"
Source: GitHub Issue #810
Why It Happens: The method parameter in safetySettings only works with Vertex AI Gemini API, not Gemini Developer API or Google AI Studio. The SDK allows passing it without validation.
Prevention:
// ❌ WRONG - Fails with Gemini Developer API:
config: {
safetySettings: [{
category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
method: HarmBlockMethod.SEVERITY // Not supported!
}]
}
// ✅ CORRECT - Omit 'method' for Gemini Developer API:
config: {
safetySettings: [{
category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE
// No 'method' field
}]
}

Affected: Gemini Developer API and Google AI Studio users
Status: Known limitation, use Vertex AI if you need method parameter
Error: Content passes through despite strict safety settings, or safetyRatings shows NEGLIGIBLE with empty output
Source: GitHub Issue #872
Why It Happens: Different models have different blocking thresholds. gemini-2.5-flash blocks more strictly than gemini-2.0-flash. Additionally, promptFeedback only appears when INPUT is blocked; if the model generates a refusal message, safetyRatings may show NEGLIGIBLE.
Prevention:
// Check BOTH promptFeedback AND empty response:
if (response.candidates[0].finishReason === 'SAFETY' ||
!response.text || response.text.trim() === '') {
console.log('Content blocked or refused');
}
// Be aware: Different models have different thresholds
// gemini-2.5-flash: Lower threshold (stricter blocking)
// gemini-2.0-flash: Higher threshold (more permissive)

Affected: All models when using safety settings
Status: Known behavior, model-specific thresholds are by design
Error: Model loops forever calling tools, never returns text response
Source: GitHub Issue #908
Why It Happens: When FunctionCallingConfigMode.ANY is set with automatic function calling (CallableTool), the model is forced to call at least one tool on every turn and physically cannot stop, looping until max invocations limit.
Prevention:
// ❌ WRONG - Loops forever:
config: {
toolConfig: {
functionCallingConfig: {
mode: FunctionCallingConfigMode.ANY // Forces tool calls forever
}
}
}
// ✅ CORRECT - Use AUTO mode (model decides):
config: {
toolConfig: {
functionCallingConfig: {
mode: FunctionCallingConfigMode.AUTO // Model can choose to answer directly
}
}
}
// Or use manual function calling (check for functionCall, execute, send back)

Affected: Automatic function calling with CallableTool
Status: Known limitation, use AUTO mode or manual function calling
Error: JSON.parse fails on structured output, or keys with backslashes are incorrect
Source: GitHub Issue #1226
Why It Happens: When using responseMimeType: "application/json" with schema keys containing escaped backslashes (e.g., \\a for key \a), the model output doesn't preserve JSON escaping. It emits a single backslash, causing invalid JSON.
Prevention:
// Avoid using backslashes in JSON schema keys
// Or manually post-process if required:
let jsonText = response.text;
// Add custom escaping logic if needed

Affected: Gemini 3 models with structured output using backslashes in keys
Status: Known issue, workaround required
Error: ApiError: {"error":{"code":400,"message":"The document has no pages.","status":"INVALID_ARGUMENT"}}
Source: GitHub Issue #1259
Why It Happens: Larger PDFs (e.g., 20MB) from AWS S3 signed URLs fail when passed via fileData.fileUri. The API cannot fetch or process the PDF from signed URLs.
Prevention:
// ❌ WRONG - Fails with large PDFs from S3:
contents: [{
parts: [{
fileData: {
fileUri: 'https://bucket.s3.region.amazonaws.com/file.pdf?X-Amz-Algorithm=...'
}
}]
}]
// ✅ CORRECT - Fetch and encode to base64:
const pdfResponse = await fetch(signedUrl);
const pdfBuffer = await pdfResponse.arrayBuffer();
const base64Pdf = Buffer.from(pdfBuffer).toString('base64');
contents: [{
parts: [{
inlineData: {
data: base64Pdf,
mimeType: 'application/pdf'
}
}]
}]

Affected: PDF files from external signed URLs
Status: Known limitation, use base64 inline data instead
Error: 404 NOT_FOUND when using uploaded video files with Gemini 3 models
Source: GitHub Issue #1220
Why It Happens: Some Gemini 3 models (gemini-3-flash-preview, gemini-3-pro-preview) are not available in the free tier or have limited access even with paid accounts. Video file uploads fail with 404.
Prevention:
// ❌ WRONG - 404 error with Gemini 3:
const response = await ai.models.generateContent({
model: 'gemini-3-pro-preview', // 404 error
contents: [{
parts: [
{ text: 'Describe this video' },
{ fileData: { fileUri: videoFile.uri }}
]
}]
});
// ✅ CORRECT - Use Gemini 2.5 for video understanding:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash', // Works
contents: [{
parts: [
{ text: 'Describe this video' },
{ fileData: { fileUri: videoFile.uri }}
]
}]
});

Affected: Gemini 3 preview models with video uploads
Status: Known limitation, use Gemini 2.5 models for video
Error: 429 RESOURCE_EXHAUSTED when using Batch API, even when under documented quota
Source: GitHub Issue #1264
Why It Happens: The Batch API may have dynamic rate limiting based on server load or undocumented limits beyond static quotas.
Prevention:
// Implement exponential backoff for Batch API:
async function batchWithRetry(request, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await ai.batches.create(request);
} catch (error) {
if (error.status === 429 && i < maxRetries - 1) {
const delay = Math.pow(2, i) * 1000;
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
}

Affected: Batch API users on paid tier
Status: Under investigation, use retry logic
Error: 404 NOT FOUND when creating caches with Gemini 2.0, 2.5, or 3.0 models
Source: GitHub Issue #339
Why It Happens: Context caching only supports Gemini 1.5 Pro and Gemini 1.5 Flash models. Documentation examples incorrectly show Gemini 2.0+ models.
Prevention:
// ❌ WRONG - 404 error:
const cache = await ai.caches.create({
model: 'gemini-2.5-flash', // Not supported
config: { /* ... */ }
});
// ✅ CORRECT - Use Gemini 1.5 with explicit version:
const cache = await ai.caches.create({
model: 'gemini-1.5-flash-001', // Explicit version required
config: { /* ... */ }
});

Affected: All Gemini 2.x and 3.x users trying to use context caching
Status: Known limitation, only Gemini 1.5 models support caching
Error: SyntaxError: Unexpected token '`' when parsing JSON responses
Source: [GitHub Issue #976](https://github.com/googleapis/js-genai/issues/976)
Why It Happens: When using responseMimeType: "application/json", the response occasionally includes markdown code fence backticks wrapping the JSON (```json ... ```), breaking JSON.parse().
Prevention:
// Strip markdown code fences before parsing:
let jsonText = response.text.trim();
if (jsonText.startsWith('```json')) {
jsonText = jsonText.replace(/^```json\n/, '').replace(/\n```$/, '');
} else if (jsonText.startsWith('```')) {
jsonText = jsonText.replace(/^```\n/, '').replace(/\n```$/, '');
}
const data = JSON.parse(jsonText);

Affected: All models when using structured output with responseMimeType: "application/json"
Status: Known intermittent issue, workaround required
Error: Infinite loops or degraded reasoning quality on complex tasks
Source: Official Troubleshooting Docs
Why It Happens: Gemini 3 models are optimized for temperature 1.0. Lowering temperature below 1.0 may cause looping behavior or degraded performance on complex mathematical/reasoning tasks.
Prevention:
// ❌ WRONG - May cause issues with Gemini 3:
const response = await ai.models.generateContent({
model: 'gemini-3-flash',
contents: 'Solve this complex math problem: ...',
config: {
temperature: 0.3 // May cause looping/degradation
}
});
// ✅ CORRECT - Keep default temperature:
const response = await ai.models.generateContent({
model: 'gemini-3-flash',
contents: 'Solve this complex math problem: ...',
config: {
temperature: 1.0 // Recommended for Gemini 3
}
});
// Or omit temperature config entirely (uses default 1.0)

Affected: Gemini 3 series models
Status: Official recommendation, keep temperature at 1.0
Error: Sudden 429 RESOURCE_EXHAUSTED errors after December 6, 2025
Source: LaoZhang AI Blog | HowToGeek
Why It Happens: Google reduced free tier rate limits by 80-90% without wide announcement, catching developers off guard.
Changes:
Prevention:
// For production, upgrade to paid tier:
// https://ai.google.dev/pricing
// For free tier, implement aggressive rate limiting:
const rateLimiter = {
requests: 0,
resetTime: Date.now() + 24 * 60 * 60 * 1000,
async checkLimit() {
if (Date.now() > this.resetTime) {
this.requests = 0;
this.resetTime = Date.now() + 24 * 60 * 60 * 1000;
}
if (this.requests >= 20) {
throw new Error('Daily limit reached');
}
this.requests++;
}
};
await rateLimiter.checkLimit();
const response = await ai.models.generateContent({/* ... */});

Affected: Free tier users (December 6, 2025 onwards)
Status: Permanent change, upgrade to paid tier for production
Error: Unexpected behavior changes, deprecation, or service interruptions
Source: Arsturn Blog | Official docs
Why It Happens: Preview and experimental models (e.g., gemini-2.5-flash-preview, gemini-3-pro-preview) have no service level agreements (SLAs) and are inherently unstable. Google can change or deprecate them with little notice.
Prevention:
// ❌ WRONG - Using preview models in production:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash-preview', // No SLA!
contents: 'Production traffic'
});
// ✅ CORRECT - Use GA (generally available) models:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash', // Stable, with SLA
contents: 'Production traffic'
});
// Or use specific version numbers for stability:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash-001', // Pinned version
contents: 'Production traffic'
});

Affected: Users of preview/experimental models in production
Status: Known limitation, use GA models for production
Error: "Invalid API key" after accidentally committing key to GitHub
Source: AI Free API Blog | Official troubleshooting
Why It Happens: Google proactively scans for publicly exposed API keys (e.g., in GitHub repos) and automatically blocks them from accessing the Gemini API as a security measure.
Prevention:
// Best practices:
// 1. Use .env files (never commit)
// 2. Use environment variables in production
// 3. Rotate keys if exposed
// 4. Use .gitignore:
// .gitignore
.env
.env.local
*.key

Affected: Users who accidentally commit API keys to public repos
Status: Security feature, rotate keys if exposed
{
error: {
code: 401,
message: 'API key not valid. Please pass a valid API key.',
status: 'UNAUTHENTICATED'
}
}

Solution: Verify the GEMINI_API_KEY environment variable is set correctly.
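A cheap guard turns this confusing 401 into an obvious startup failure. A sketch using plain Node.js:

import { GoogleGenAI } from '@google/genai';

// Fail fast if the key is missing instead of surfacing a 401 later
if (!process.env.GEMINI_API_KEY) {
  throw new Error('GEMINI_API_KEY is not set. Export it or add it to your .env file.');
}
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });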
{
error: {
code: 429,
message: 'Resource has been exhausted (e.g. check quota).',
status: 'RESOURCE_EXHAUSTED'
}
}

Solution: Implement exponential backoff (see the generateWithRetry helper below).
{
error: {
code: 404,
message: 'models/gemini-3.0-flash is not found',
status: 'NOT_FOUND'
}
}

Solution: Use correct model names: gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite
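If you are unsure which model IDs your key can access, the SDK can enumerate them. A sketch, assuming ai.models.list() returns an async-iterable pager as in current @google/genai docs (and the `ai` client from the Quick Start):

// List models available to your API key to debug 404s
const pager = await ai.models.list();
for await (const model of pager) {
  console.log(model.name); // e.g. models/gemini-2.5-flash
}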
{
error: {
code: 400,
message: 'Request payload size exceeds the limit',
status: 'INVALID_ARGUMENT'
}
}

Solution: Reduce input size. Gemini 2.5 models support 1,048,576 input tokens max.

Retry helper with exponential backoff:
async function generateWithRetry(request, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
try {
return await ai.models.generateContent(request);
} catch (error) {
if (error.status === 429 && i < maxRetries - 1) {
const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
}

CRITICAL: Google reduced free tier limits by 80-90% on December 6-7, 2025 without wide announcement. Free tier is now primarily for prototyping only.
Sources: LaoZhang AI | HowToGeek
Rate limits vary by model:
Gemini 2.5 Pro:
Gemini 2.5 Flash:
Gemini 2.5 Flash-Lite:
Requires billing account linked to your Google Cloud project.
Gemini 2.5 Pro:
Gemini 2.5 Flash:
Gemini 2.5 Flash-Lite:
Tier 2 (requires $250+ spending and 30-day wait):
Tier 3 (requires $1,000+ spending and 30-day wait):
Tips:
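Every response includes usageMetadata (see the response structure earlier), which makes per-request token accounting straightforward. A minimal sketch for tracking usage against a daily budget (assumes the `ai` client from the Quick Start):

// Accumulate token counts from each response's usageMetadata
let dailyTokens = 0;

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'Your prompt here'
});

const usage = response.usageMetadata;
if (usage) {
  dailyTokens += usage.totalTokenCount;
  console.log(
    `prompt=${usage.promptTokenCount} output=${usage.candidatesTokenCount} total=${usage.totalTokenCount} (day total: ${dailyTokens})`
  );
}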
# Remove deprecated SDK
npm uninstall @google/generative-ai
# Install current SDK
npm install @google/genai@1.27.0

Old (DEPRECATED):
import { GoogleGenerativeAI } from '@google/generative-ai';
const genAI = new GoogleGenerativeAI(apiKey);
const model = genAI.getGenerativeModel({ model: 'gemini-2.5-flash' });

New (CURRENT):
import { GoogleGenAI } from '@google/genai';
const ai = new GoogleGenAI({ apiKey });
// Use ai.models.generateContent() directly

Old:
const result = await model.generateContent(prompt);
const response = await result.response;
const text = response.text();

New:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: prompt
});
const text = response.text;

Old:
const result = await model.generateContentStream(prompt);
for await (const chunk of result.stream) {
console.log(chunk.text());
}

New:
const response = await ai.models.generateContentStream({
model: 'gemini-2.5-flash',
contents: prompt
});
for await (const chunk of response) {
console.log(chunk.text);
}

Old:
const chat = model.startChat();
const result = await chat.sendMessage(message);
const response = await result.response;

New:
const chat = ai.chats.create({ model: 'gemini-2.5-flash' });
const response = await chat.sendMessage({ message });
// response.text is directly available

✅ Use @google/genai (NOT @google/generative-ai)
✅ Set maxOutputTokens to prevent excessive generation
✅ Implement rate limit handling with exponential backoff
✅ Use environment variables for API keys (never hardcode)
✅ Validate inputs before sending to API (save costs)
✅ Use streaming for better UX on long responses
✅ Choose the right model based on your needs (Pro for complex reasoning, Flash for balance, Flash-Lite for speed)
✅ Handle errors gracefully with try-catch
✅ Monitor token usage for cost control
✅ Use correct model names: gemini-2.5-pro/flash/flash-lite
❌ Never use @google/generative-ai (deprecated!)
❌ Never hardcode API keys in code
❌ Never claim 2M context for Gemini 2.5 (it's 1,048,576 input tokens)
❌ Never expose API keys in client-side code
❌ Never skip error handling (always try-catch)
❌ Never use generic rate limits (each model has different limits - check official docs)
❌ Never send PII without user consent
❌ Never trust user input without validation
❌ Never ignore rate limits (will get 429 errors)
❌ Never use old model names like gemini-1.5-pro (use 2.5 models)
npm install @google/genai@1.34.0
export GEMINI_API_KEY="..."

Models:
- gemini-3-flash (1,048,576 in / 65,536 out) - NEW: Best speed+quality balance
- gemini-2.5-pro (1,048,576 in / 65,536 out) - Best for complex reasoning
- gemini-2.5-flash (1,048,576 in / 65,536 out) - Proven price-performance balance
- gemini-2.5-flash-lite (1,048,576 in / 65,536 out) - Fastest, most cost-effective

Basic generation:
const response = await ai.models.generateContent({
model: 'gemini-2.5-flash',
contents: 'Your prompt here'
});
console.log(response.text);

Streaming:
const response = await ai.models.generateContentStream({...});
for await (const chunk of response) {
console.log(chunk.text);
}

Multimodal parts:
contents: [
{
parts: [
{ text: 'What is this?' },
{ inlineData: { data: base64Image, mimeType: 'image/jpeg' } }
]
}
]

Function calling:
config: {
tools: [{ functionDeclarations: [...] }]
}

Last Updated: 2026-01-21
Production Validated: All features tested with @google/genai@1.35.0
Phase: 2 Complete ✅ (All Core + Advanced Features)
Known Issues: 14 documented errors prevented
Changes: Added Known Issues Prevention section with 14 community-researched findings from post-training-cutoff period (May 2025-Jan 2026)