OpenTelemetry instrumentation for Google Generative AI Python library providing automatic tracing and monitoring of AI model interactions
npx @tessl/cli install tessl/pypi-opentelemetry-instrumentation-google-generativeai@0.46.00
# OpenTelemetry Google Generative AI Instrumentation

OpenTelemetry instrumentation for the Google Generative AI Python library, providing automatic tracing and monitoring of AI model interactions. This package captures detailed telemetry data, including prompts, completions, and embeddings sent to Google's Gemini models, enabling comprehensive observability in LLM applications.

## Package Information

- **Package Name**: opentelemetry-instrumentation-google-generativeai
- **Language**: Python
- **Installation**: `pip install opentelemetry-instrumentation-google-generativeai`
- **Instrumented Package**: google-genai >= 1.0.0
## Core Imports

```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor
```

For advanced configuration:

```python
from opentelemetry.instrumentation.google_generativeai.config import Config
```

For utility functions:

```python
from opentelemetry.instrumentation.google_generativeai.utils import (
    dont_throw,
    should_send_prompts,
    should_emit_events,
    part_to_dict,
    is_package_installed,
)
```

For response type detection:

```python
from opentelemetry.instrumentation.google_generativeai import (
    is_streaming_response,
    is_async_streaming_response,
)
```

For event models:

```python
from opentelemetry.instrumentation.google_generativeai.event_models import (
    MessageEvent,
    ChoiceEvent,
    ToolCall,
    CompletionMessage,
)
```

For roles, constants, and event emission:

```python
from opentelemetry.instrumentation.google_generativeai.event_emitter import (
    Roles,
    VALID_MESSAGE_ROLES,
    EVENT_ATTRIBUTES,
    emit_message_events,
    emit_choice_events,
    emit_event,
)
```
## Basic Usage

```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor
import google.genai as genai

# Enable instrumentation
GoogleGenerativeAiInstrumentor().instrument()

# Use Google Generative AI normally - calls will be automatically traced
client = genai.Client(api_key="your-api-key")
response = client.models.generate_content(
    model='gemini-1.5-flash',
    contents='Tell me a joke about Python programming'
)
```

Custom configuration:

```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor

# Configure with custom settings (my_logger and my_image_upload_handler
# are application-defined)
instrumentor = GoogleGenerativeAiInstrumentor(
    exception_logger=my_logger.error,
    use_legacy_attributes=False,
    upload_base64_image=my_image_upload_handler
)
instrumentor.instrument()
```
## Capabilities

### Core Instrumentation

The main instrumentor class that enables automatic tracing of Google Generative AI calls.

```python { .api }
class GoogleGenerativeAiInstrumentor(BaseInstrumentor):
    """An instrumentor for Google Generative AI's client library."""

    def __init__(
        self,
        exception_logger=None,
        use_legacy_attributes=True,
        upload_base64_image=None
    ):
        """
        Initialize the instrumentor.

        Parameters:
        - exception_logger: callable, optional custom exception logger
        - use_legacy_attributes: bool, whether to use legacy span attributes (default: True)
        - upload_base64_image: callable, optional function for uploading base64 image data
        """

    def instrumentation_dependencies(self) -> Collection[str]:
        """
        Return the list of instrumentation dependencies.

        Returns:
        Collection[str]: Required dependencies ["google-genai >= 1.0.0"]
        """

    def instrument(self, **kwargs):
        """
        Enable instrumentation for Google Generative AI calls.

        Parameters:
        - tracer_provider: TracerProvider, optional tracer provider
        - event_logger_provider: EventLoggerProvider, optional event logger provider
        """

    def uninstrument(self, **kwargs):
        """
        Disable instrumentation for Google Generative AI calls.

        Parameters:
        - **kwargs: Additional keyword arguments (unused)
        """
```
### Response Type Detection

Utility functions for identifying different response types from Google Generative AI.

```python { .api }
def is_streaming_response(response) -> bool:
    """
    Check if response is a streaming generator type.

    Parameters:
    - response: response object to check

    Returns:
    bool: True if response is a generator (streaming)
    """

def is_async_streaming_response(response) -> bool:
    """
    Check if response is an async streaming generator type.

    Parameters:
    - response: response object to check

    Returns:
    bool: True if response is an async generator (async streaming)
    """
```
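A minimal sketch of how such generator-based detection can work, assuming the checks reduce to the standard `inspect` predicates (the library's actual implementation may differ in detail):

```python
import inspect

def is_streaming_response(response) -> bool:
    # Sketch: a streaming response is a plain generator object.
    return inspect.isgenerator(response)

def is_async_streaming_response(response) -> bool:
    # Sketch: an async streaming response is an async generator object.
    return inspect.isasyncgen(response)

def chunks():
    yield "partial text"

async def async_chunks():
    yield "partial text"

print(is_streaming_response(chunks()))              # True
print(is_async_streaming_response(async_chunks()))  # True
print(is_streaming_response("a plain response"))    # False
```

This distinction matters because streamed responses must be wrapped so the span stays open until the last chunk arrives.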
### Utility Functions

Additional utility functions for internal operations.

```python { .api }
def dont_throw(func):
    """
    A decorator that wraps the passed-in function and logs exceptions
    instead of raising them.

    Parameters:
    - func: The function to wrap

    Returns:
    The wrapper function
    """

def should_send_prompts() -> bool:
    """
    Check if prompts should be sent based on environment variables and context.

    Returns:
    bool: True if content tracing is enabled
    """

def should_emit_events() -> bool:
    """
    Check that the instrumentation is not using legacy attributes
    and that the event logger is not None.

    Returns:
    bool: True if events should be emitted
    """

def part_to_dict(part):
    """
    Convert a Google Generative AI part object to a dictionary.

    Parameters:
    - part: A part object from a Google Generative AI response

    Returns:
    dict: Dictionary representation of the part
    """

def is_package_installed(package_name: str) -> bool:
    """
    Check if a package is installed.

    Parameters:
    - package_name: str, name of the package to check

    Returns:
    bool: True if the package is installed
    """
```
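A minimal sketch of what a `dont_throw`-style decorator typically looks like, assuming it logs via the standard `logging` module and returns `None` on failure (the package's actual implementation may instead route errors through `Config.exception_logger`):

```python
import functools
import logging

def dont_throw(func):
    # Sketch: swallow exceptions raised inside instrumentation code so
    # that a telemetry bug never breaks the user's model call.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.getLogger(__name__).exception(
                "error in instrumentation function %s", func.__name__
            )
            return None
    return wrapper

@dont_throw
def record_span_attributes():
    raise ValueError("malformed response")  # simulated instrumentation bug

print(record_span_attributes())  # None - the exception was logged, not raised
```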
### Event Emission Functions

Functions for emitting OpenTelemetry events from Google Generative AI interactions.

```python { .api }
def emit_message_events(args, kwargs, event_logger):
    """
    Emit message events for input prompts.

    Parameters:
    - args: tuple, positional arguments from the function call
    - kwargs: dict, keyword arguments from the function call
    - event_logger: EventLogger, logger for emitting events
    """

def emit_choice_events(response, event_logger):
    """
    Emit choice events for model responses.

    Parameters:
    - response: GenerateContentResponse, response from Google Generative AI
    - event_logger: EventLogger, logger for emitting events
    """

def emit_event(event, event_logger) -> None:
    """
    Emit an event to the OpenTelemetry SDK.

    Parameters:
    - event: Union[MessageEvent, ChoiceEvent], the event to emit
    - event_logger: EventLogger, logger for emitting events
    """
```
### Configuration Management

Global configuration settings for the instrumentation behavior.

```python { .api }
class Config:
    """Global configuration settings for the instrumentation."""

    exception_logger = None  # Custom exception logger function
    use_legacy_attributes: bool = True  # Use legacy span attributes
    upload_base64_image: Callable[[str, str, str, str], str] = (
        lambda trace_id, span_id, image_name, base64_string: str
    )  # Base64 image upload handler (default is a placeholder lambda)
```
## Types

### Event Data Models

```python { .api }
@dataclass
class MessageEvent:
    """Represents an input event for the AI model."""

    content: Any
    role: str = "user"
    tool_calls: Optional[List[ToolCall]] = None

@dataclass
class ChoiceEvent:
    """Represents a completion event for the AI model."""

    index: int
    message: CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: Optional[List[ToolCall]] = None

class ToolCall(TypedDict):
    """Represents a tool call in the AI model."""

    id: str
    function: _FunctionToolCall
    type: Literal["function"]

class CompletionMessage(TypedDict):
    """Represents a message in the AI model."""

    content: Any
    role: str  # Default: "assistant"

class _FunctionToolCall(TypedDict):
    function_name: str
    arguments: Optional[dict[str, Any]]
```
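The shapes above can be exercised stand-alone. The sketch below re-declares the two dataclasses to mirror the documented definitions (in real use, import them from `opentelemetry.instrumentation.google_generativeai.event_models`) and builds a prompt/choice pair:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class MessageEvent:
    # Mirrors the documented dataclass for illustration only.
    content: Any
    role: str = "user"
    tool_calls: Optional[List[dict]] = None

@dataclass
class ChoiceEvent:
    # Mirrors the documented dataclass for illustration only.
    index: int
    message: dict  # CompletionMessage-shaped: {"content": ..., "role": ...}
    finish_reason: str = "unknown"
    tool_calls: Optional[List[dict]] = None

prompt = MessageEvent(content="Tell me a joke about Python programming")
choice = ChoiceEvent(
    index=0,
    message={"content": "Why do Pythonistas wear glasses? They can't C.",
             "role": "assistant"},
    finish_reason="stop",
)
print(prompt.role, choice.finish_reason)  # user stop
```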
### Role Enumeration

```python { .api }
class Roles(Enum):
    """Valid roles for message events."""

    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"
```
## Constants

```python { .api }
TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
"""Environment variable name for controlling content tracing."""

WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
]
"""Configuration for methods to be instrumented."""

VALID_MESSAGE_ROLES = {role.value for role in Roles}
"""Set of valid message roles derived from the Roles enum."""

EVENT_ATTRIBUTES = {GenAIAttributes.GEN_AI_SYSTEM: "gemini"}
"""Attributes used for events (uses OpenTelemetry semantic conventions)."""

__version__ = "0.21.5"
"""Internal package version string (differs from the main package version 0.46.2)."""
```
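The `VALID_MESSAGE_ROLES` derivation can be reproduced stand-alone from the `Roles` enum shown above (re-declared here for illustration; in real use, import it from `opentelemetry.instrumentation.google_generativeai.event_emitter`):

```python
from enum import Enum

class Roles(Enum):
    # Mirrors the documented enum for illustration only.
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"

# Set comprehension over the enum yields the set of valid role strings.
VALID_MESSAGE_ROLES = {role.value for role in Roles}
print(sorted(VALID_MESSAGE_ROLES))  # ['assistant', 'system', 'tool', 'user']
```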
## Privacy and Configuration

### Content Tracing Control

By default, this instrumentation logs prompts, completions, and embeddings to span attributes for visibility into LLM application behavior. To disable this for privacy or trace-size reasons:

```bash
export TRACELOOP_TRACE_CONTENT=false
```
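A hedged sketch of how such an environment-variable gate is commonly implemented (the actual `should_send_prompts` may additionally consult per-request context set by the caller):

```python
import os

def should_send_prompts() -> bool:
    # Sketch: content tracing is on unless TRACELOOP_TRACE_CONTENT is
    # explicitly set to "false" (case-insensitive).
    return os.getenv("TRACELOOP_TRACE_CONTENT", "true").lower() != "false"

os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
print(should_send_prompts())  # False

os.environ["TRACELOOP_TRACE_CONTENT"] = "true"
print(should_send_prompts())  # True
```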
376
377
### Legacy vs. Event-Based Attributes
378
379
The instrumentation supports both legacy span attributes and the newer event-based approach:
380
381
```python
382
# Use legacy attributes (default)
383
GoogleGenerativeAiInstrumentor(use_legacy_attributes=True).instrument()
384
385
# Use event-based approach
386
GoogleGenerativeAiInstrumentor(use_legacy_attributes=False).instrument()
387
```
388
389
### Image Handling
390
391
For applications using image inputs, provide a custom upload handler:
392
393
```python
394
async def my_image_uploader(trace_id: str, span_id: str, image_name: str, base64_data: str) -> str:
395
# Upload image and return URL
396
return "https://my-storage.com/images/abc123"
397
398
GoogleGenerativeAiInstrumentor(upload_base64_image=my_image_uploader).instrument()
399
```
400
401
## Instrumented Methods
402
403
The package automatically instruments these Google Generative AI methods:
404
405
- `google.genai.models.Models.generate_content` (synchronous)
406
- `google.genai.models.AsyncModels.generate_content` (asynchronous)
407
408
Both methods are traced with the span name "gemini.generate_content" and capture:
409
410
- Model request parameters (temperature, max_tokens, etc.)
411
- Input prompts and images
412
- Model responses and completions
413
- Token usage metadata
414
- Model identification and system attribution
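For illustration only, a trace backend might show attributes along these lines on a `gemini.generate_content` span. The keys below follow the general shape of the OpenTelemetry GenAI semantic conventions, but the exact attribute names and set depend on the instrumentation version and configuration, so treat this as a hypothetical example:

```python
# Hypothetical example of attributes on a "gemini.generate_content" span;
# the real keys and values are produced by the instrumentation at runtime.
span_attributes = {
    "gen_ai.system": "gemini",
    "gen_ai.request.model": "gemini-1.5-flash",
    "gen_ai.request.temperature": 0.7,
    "gen_ai.prompt.0.role": "user",
    "gen_ai.prompt.0.content": "Tell me a joke about Python programming",
    "gen_ai.completion.0.role": "assistant",
    "gen_ai.usage.prompt_tokens": 9,
    "gen_ai.usage.completion_tokens": 42,
}
print(span_attributes["gen_ai.system"])  # gemini
```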