OpenTelemetry instrumentation for Google Generative AI Python library providing automatic tracing and monitoring of AI model interactions
```shell
npx @tessl/cli install tessl/pypi-opentelemetry-instrumentation-google-generativeai@0.46.0
```

OpenTelemetry instrumentation for the Google Generative AI Python library, providing automatic tracing and monitoring of AI model interactions. This package captures detailed telemetry data, including prompts, completions, and embeddings sent to Google's Gemini models, enabling comprehensive observability in LLM applications.
```shell
pip install opentelemetry-instrumentation-google-generativeai
```

```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor
```

For advanced configuration:
```python
from opentelemetry.instrumentation.google_generativeai.config import Config
```

For utility functions:
```python
from opentelemetry.instrumentation.google_generativeai.utils import (
    dont_throw,
    should_send_prompts,
    should_emit_events,
    part_to_dict,
    is_package_installed,
)
```

For response type detection:
```python
from opentelemetry.instrumentation.google_generativeai import (
    is_streaming_response,
    is_async_streaming_response,
)
```

For event models:
```python
from opentelemetry.instrumentation.google_generativeai.event_models import (
    MessageEvent,
    ChoiceEvent,
    ToolCall,
    CompletionMessage,
)
```

For roles, constants, and event emission:
```python
from opentelemetry.instrumentation.google_generativeai.event_emitter import (
    Roles,
    VALID_MESSAGE_ROLES,
    EVENT_ATTRIBUTES,
    emit_message_events,
    emit_choice_events,
    emit_event,
)
```

Basic usage:

```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor
import google.genai as genai

# Enable instrumentation
GoogleGenerativeAiInstrumentor().instrument()

# Use Google Generative AI normally; calls will be traced automatically
client = genai.Client(api_key="your-api-key")
response = client.models.generate_content(
    model='gemini-1.5-flash',
    contents='Tell me a joke about Python programming',
)
```

Custom configuration:
```python
from opentelemetry.instrumentation.google_generativeai import GoogleGenerativeAiInstrumentor

# Configure with custom settings
instrumentor = GoogleGenerativeAiInstrumentor(
    exception_logger=my_logger.error,
    use_legacy_attributes=False,
    upload_base64_image=my_image_upload_handler,
)
instrumentor.instrument()
```

The main instrumentor class enables automatic tracing of Google Generative AI calls.
```python
class GoogleGenerativeAiInstrumentor(BaseInstrumentor):
    """An instrumentor for Google Generative AI's client library."""

    def __init__(
        self,
        exception_logger=None,
        use_legacy_attributes=True,
        upload_base64_image=None,
    ):
        """
        Initialize the instrumentor.

        Parameters:
        - exception_logger: callable, optional custom exception logger
        - use_legacy_attributes: bool, whether to use legacy span attributes (default: True)
        - upload_base64_image: callable, optional function for uploading base64 image data
        """

    def instrumentation_dependencies(self) -> Collection[str]:
        """
        Return the list of instrumentation dependencies.

        Returns:
            Collection[str]: Required dependencies ["google-genai >= 1.0.0"]
        """

    def instrument(self, **kwargs):
        """
        Enable instrumentation for Google Generative AI calls.

        Parameters:
        - tracer_provider: TracerProvider, optional tracer provider
        - event_logger_provider: EventLoggerProvider, optional event logger provider
        """

    def uninstrument(self, **kwargs):
        """
        Disable instrumentation for Google Generative AI calls.

        Parameters:
        - **kwargs: Additional keyword arguments (unused)
        """
```

Utility functions for identifying different response types from Google Generative AI.
```python
def is_streaming_response(response) -> bool:
    """
    Check if the response is a streaming generator type.

    Parameters:
    - response: response object to check

    Returns:
        bool: True if the response is a generator (streaming)
    """

def is_async_streaming_response(response) -> bool:
    """
    Check if the response is an async streaming generator type.

    Parameters:
    - response: response object to check

    Returns:
        bool: True if the response is an async generator (async streaming)
    """
```

Additional utility functions for internal operations.
```python
def dont_throw(func):
    """
    Decorator that wraps the passed-in function and logs exceptions instead of raising them.

    Parameters:
    - func: The function to wrap

    Returns:
        The wrapper function
    """

def should_send_prompts() -> bool:
    """
    Check if prompts should be sent, based on environment variables and context.

    Returns:
        bool: True if content tracing is enabled
    """

def should_emit_events() -> bool:
    """
    Check that the instrumentation is not using legacy attributes
    and that the event logger is not None.

    Returns:
        bool: True if events should be emitted
    """

def part_to_dict(part):
    """
    Convert a Google Generative AI part object to a dictionary.

    Parameters:
    - part: A part object from a Google Generative AI response

    Returns:
        dict: Dictionary representation of the part
    """

def is_package_installed(package_name: str) -> bool:
    """
    Check if a package is installed.

    Parameters:
    - package_name: str, name of the package to check

    Returns:
        bool: True if the package is installed
    """
```

Functions for emitting OpenTelemetry events from Google Generative AI interactions.
```python
def emit_message_events(args, kwargs, event_logger):
    """
    Emit message events for input prompts.

    Parameters:
    - args: tuple, positional arguments from the function call
    - kwargs: dict, keyword arguments from the function call
    - event_logger: EventLogger, logger for emitting events
    """

def emit_choice_events(response, event_logger):
    """
    Emit choice events for model responses.

    Parameters:
    - response: GenerateContentResponse, response from Google Generative AI
    - event_logger: EventLogger, logger for emitting events
    """

def emit_event(event, event_logger) -> None:
    """
    Emit an event to the OpenTelemetry SDK.

    Parameters:
    - event: Union[MessageEvent, ChoiceEvent], the event to emit
    - event_logger: EventLogger, logger for emitting events
    """
```

Global configuration settings for the instrumentation behavior.
```python
class Config:
    """Global configuration settings for the instrumentation."""

    exception_logger = None  # Custom exception logger function
    use_legacy_attributes: bool = True  # Use legacy span attributes
    upload_base64_image: Callable[[str, str, str, str], str] = (
        lambda trace_id, span_id, image_name, base64_string: str
    )  # Base64 image upload handler with a placeholder default lambda
```

Event model definitions:

```python
@dataclass
class MessageEvent:
    """Represents an input event for the AI model."""
    content: Any
    role: str = "user"
    tool_calls: Optional[List[ToolCall]] = None

@dataclass
class ChoiceEvent:
    """Represents a completion event for the AI model."""
    index: int
    message: CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: Optional[List[ToolCall]] = None

class ToolCall(TypedDict):
    """Represents a tool call in the AI model."""
    id: str
    function: _FunctionToolCall
    type: Literal["function"]

class CompletionMessage(TypedDict):
    """Represents a message in the AI model."""
    content: Any
    role: str  # Default: "assistant"

class _FunctionToolCall(TypedDict):
    function_name: str
    arguments: Optional[dict[str, Any]]
```

Message roles:

```python
class Roles(Enum):
    """Valid roles for message events."""
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"
```

Module constants:

```python
TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
"""Environment variable name for controlling content tracing."""

WRAPPED_METHODS = [
    {
        "package": "google.genai.models",
        "object": "Models",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
    {
        "package": "google.genai.models",
        "object": "AsyncModels",
        "method": "generate_content",
        "span_name": "gemini.generate_content",
    },
]
"""Configuration for methods to be instrumented."""

VALID_MESSAGE_ROLES = {role.value for role in Roles}
"""Set of valid message roles derived from the Roles enum."""

EVENT_ATTRIBUTES = {GenAIAttributes.GEN_AI_SYSTEM: "gemini"}
"""Attributes used for events (uses OpenTelemetry semantic conventions)."""

__version__ = "0.21.5"
"""Internal package version string (differs from the main package version 0.46.2)."""
```

By default, this instrumentation logs prompts, completions, and embeddings to span attributes for visibility into LLM application behavior. To disable content capture for privacy or trace-size reasons:
```shell
export TRACELOOP_TRACE_CONTENT=false
```

The instrumentation supports both legacy span attributes and the newer event-based approach:
```python
# Use legacy attributes (default)
GoogleGenerativeAiInstrumentor(use_legacy_attributes=True).instrument()

# Use the event-based approach
GoogleGenerativeAiInstrumentor(use_legacy_attributes=False).instrument()
```

For applications using image inputs, provide a custom upload handler:
```python
async def my_image_uploader(trace_id: str, span_id: str, image_name: str, base64_data: str) -> str:
    # Upload the image to your storage backend and return its URL
    return "https://my-storage.com/images/abc123"

GoogleGenerativeAiInstrumentor(upload_base64_image=my_image_uploader).instrument()
```

The package automatically instruments these Google Generative AI methods:
- `google.genai.models.Models.generate_content` (synchronous)
- `google.genai.models.AsyncModels.generate_content` (asynchronous)

Both methods are traced with the span name `gemini.generate_content` and capture: