OpenTelemetry instrumentation for AWS Bedrock runtime services providing automatic tracing, metrics, and event emission for AI model invocations
npx @tessl/cli install tessl/pypi-opentelemetry-instrumentation-bedrock@0.46.00
# OpenTelemetry Bedrock Instrumentation

OpenTelemetry instrumentation for AWS Bedrock runtime services providing automatic tracing, metrics, and event emission for AI model invocations. This package automatically instruments boto3 clients to capture comprehensive telemetry data for Amazon Bedrock's generative AI services, including prompts, completions, model parameters, performance metrics, guardrail interactions, and prompt caching.
## Package Information

- **Package Name**: opentelemetry-instrumentation-bedrock
- **Package Type**: PyPI
- **Language**: Python (>=3.9,<4)
- **Installation**: `pip install opentelemetry-instrumentation-bedrock`
- **Dependencies**: boto3 >=1.28.57, opentelemetry-api ^1.28.0, anthropic >=0.17.0
## Core Imports

```python
from opentelemetry.instrumentation.bedrock import BedrockInstrumentor
```

Common imports for configuration and types:

```python
from opentelemetry.instrumentation.bedrock.config import Config
from opentelemetry.instrumentation.bedrock.event_models import MessageEvent, ChoiceEvent, Roles
from opentelemetry.instrumentation.bedrock.utils import should_send_prompts, should_emit_events
from opentelemetry.instrumentation.bedrock.guardrail import is_guardrail_activated, GuardrailAttributes, Type
from opentelemetry.instrumentation.bedrock.prompt_caching import CachingHeaders, CacheSpanAttrs
```
## Basic Usage

```python
import json

import boto3
from opentelemetry.instrumentation.bedrock import BedrockInstrumentor

# Basic instrumentation - automatically instruments all bedrock-runtime clients
BedrockInstrumentor().instrument()

# Create a Bedrock client (it will be instrumented automatically)
bedrock_client = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-east-1'
)

# Use the client normally - instrumentation happens automatically
response = bedrock_client.invoke_model(
    modelId='anthropic.claude-3-sonnet-20240229-v1:0',
    body=json.dumps({
        "messages": [{"role": "user", "content": "Hello, world!"}],
        "max_tokens": 100,
        "anthropic_version": "bedrock-2023-05-31"
    })
)

# Advanced configuration with custom options (use in place of the basic call above)
BedrockInstrumentor(
    enrich_token_usage=True,
    use_legacy_attributes=False,
    exception_logger=my_exception_handler  # any callable that accepts an exception
).instrument()
```
## Architecture

The instrumentation follows OpenTelemetry's auto-instrumentation pattern:

- **Automatic Wrapping**: Intercepts boto3 client creation for bedrock-runtime services
- **Span Creation**: Creates spans for invoke_model, invoke_model_with_response_stream, converse, and converse_stream operations
- **Metrics Collection**: Tracks token usage, latency, errors, and guardrail interactions
- **Event Emission**: Emits structured events for input/output messages (when semantic conventions are enabled)
- **Multi-Model Support**: Handles all Bedrock-supported AI model families (Anthropic, Cohere, AI21, Meta, Amazon, Nova, imported models)
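The wrapping step can be pictured with a minimal stdlib-only sketch. This is not the package's implementation (which hooks botocore client creation via `wrapt`); `FakeBedrockClient` and the `on_call` callback are hypothetical stand-ins showing how a client method gets intercepted while still delegating to the original call:

```python
import functools

def wrap_client_method(client, method_name, on_call):
    """Replace client.<method_name> with a wrapper that reports each call."""
    original = getattr(client, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        on_call(method_name, kwargs)       # e.g. start a span, record metrics
        return original(*args, **kwargs)   # delegate to the real API call

    setattr(client, method_name, wrapper)

class FakeBedrockClient:
    """Hypothetical stand-in for a boto3 bedrock-runtime client."""
    def invoke_model(self, modelId=None, body=None):
        return {"modelId": modelId, "ok": True}

calls = []
client = FakeBedrockClient()
wrap_client_method(
    client, "invoke_model",
    lambda name, kw: calls.append((name, kw.get("modelId")))
)

result = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0", body="{}"
)
print(calls)  # [('invoke_model', 'anthropic.claude-3-sonnet-20240229-v1:0')]
```

Because the wrapper delegates unconditionally, user code sees the same return values whether or not instrumentation is active.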
## Capabilities

### Core Instrumentation

Primary instrumentor class for enabling automatic tracing of Bedrock API calls, with configuration options for token enrichment, legacy attributes, and exception handling.

```python { .api }
class BedrockInstrumentor(BaseInstrumentor):
    def __init__(
        self,
        enrich_token_usage: bool = False,
        exception_logger=None,
        use_legacy_attributes: bool = True
    ): ...

    def instrument(self, **kwargs): ...
    def uninstrument(self, **kwargs): ...
```

[Core Instrumentation](./instrumentation.md)
### Event System

Comprehensive event models and emission functions for capturing AI model interactions as structured OpenTelemetry events, supporting both input messages and completion responses.

```python { .api }
@dataclass
class MessageEvent:
    content: Any
    role: str = "user"
    tool_calls: List[ToolCall] | None = None

@dataclass
class ChoiceEvent:
    index: int
    message: CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: List[ToolCall] | None = None

def emit_message_events(event_logger, kwargs): ...
def emit_choice_events(event_logger, response): ...
```

[Event System](./events.md)
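To make the event shapes concrete, here is a stand-in sketch using plain dataclasses. These replicas mirror the API above but are not the package's classes; the `ToolCall` fields and the dict-shaped `message` are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class ToolCall:
    # Hypothetical shape; the package defines its own ToolCall model.
    name: str
    arguments: dict

@dataclass
class MessageEvent:
    content: Any
    role: str = "user"
    tool_calls: Optional[List[ToolCall]] = None

@dataclass
class ChoiceEvent:
    index: int
    message: dict  # stand-in for CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: Optional[List[ToolCall]] = None

# An input message event and the corresponding completion event:
prompt = MessageEvent(content="Hello, world!")
choice = ChoiceEvent(
    index=0,
    message={"role": "assistant", "content": "Hi!"},
    finish_reason="stop",
)
print(prompt.role, choice.finish_reason)  # user stop
```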
### Metrics and Monitoring

Advanced metrics collection including token usage, request duration, error tracking, guardrail interactions, and prompt caching, with support for all major Bedrock model families.

```python { .api }
class MetricParams:
    def __init__(
        self,
        token_histogram: Histogram,
        choice_counter: Counter,
        duration_histogram: Histogram,
        exception_counter: Counter,
        guardrail_activation: Counter,
        guardrail_latency_histogram: Histogram,
        # ... additional guardrail and caching metrics
    ): ...

def is_metrics_enabled() -> bool: ...

class GuardrailMeters:
    LLM_BEDROCK_GUARDRAIL_ACTIVATION = "gen_ai.bedrock.guardrail.activation"
    LLM_BEDROCK_GUARDRAIL_LATENCY = "gen_ai.bedrock.guardrail.latency"
    # ... additional constants
```

[Metrics and Monitoring](./metrics.md)
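As a rough illustration of what the token histogram in `MetricParams` records, a stand-in class can bucket measurements by attribute values. This class is not part of the package, and the attribute names are assumptions loosely following the OpenTelemetry GenAI conventions:

```python
from collections import defaultdict

class TokenHistogram:
    """Illustrative stand-in for an OpenTelemetry Histogram instrument."""
    def __init__(self):
        self.records = defaultdict(list)

    def record(self, value, attributes):
        # Group measurements by (model, token type), as attribute-keyed
        # histograms do in a real metrics backend.
        key = (attributes["gen_ai.request.model"],
               attributes["gen_ai.token.type"])
        self.records[key].append(value)

hist = TokenHistogram()
hist.record(42, {"gen_ai.request.model": "anthropic.claude-3-sonnet",
                 "gen_ai.token.type": "input"})
hist.record(17, {"gen_ai.request.model": "anthropic.claude-3-sonnet",
                 "gen_ai.token.type": "output"})
```

In production the real `Histogram.record` call ships these measurements to the configured meter provider rather than storing them in memory.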
### Utilities and Streaming

Utility functions, streaming response handling, and helper classes for managing configuration, error handling, and reusable streaming bodies.

```python { .api }
def dont_throw(func): ...
def should_send_prompts() -> bool: ...
def should_emit_events() -> bool: ...

class StreamingWrapper(ObjectProxy):
    def __init__(self, response, stream_done_callback): ...
    def __iter__(self): ...

class ReusableStreamingBody(StreamingBody):
    def __init__(self, raw_stream, content_length): ...
    def read(self, amt=None): ...
```

[Utilities and Streaming](./utilities.md)
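The purpose of a `dont_throw`-style decorator is that instrumentation failures must never break the user's Bedrock call. A minimal sketch of that pattern (the package's actual implementation may log differently and consult `Config.exception_logger`):

```python
import logging

def dont_throw(func):
    """Swallow any exception from an instrumentation helper, logging it
    instead of propagating it into user code."""
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            logging.getLogger(__name__).debug(
                "instrumentation error in %s: %s", func.__name__, exc
            )
            return None
    return wrapper

@dont_throw
def flaky_span_setter():
    # Hypothetical helper that fails while recording telemetry.
    raise ValueError("telemetry backend unavailable")

print(flaky_span_setter())  # None - the exception is swallowed
```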
## Configuration

### Environment Variables

Control instrumentation behavior through environment variables:

- **TRACELOOP_METRICS_ENABLED**: Enable/disable metrics collection (default: "true")
- **TRACELOOP_TRACE_CONTENT**: Enable/disable content tracing, i.e. capture of prompts and completions (default: "true")

### Global Configuration

```python { .api }
class Config:
    enrich_token_usage: bool
    exception_logger: Any
    use_legacy_attributes: bool
```
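A hedged sketch of how boolean flags like the two above are typically read (the package's exact parsing may differ, e.g. in how it treats values other than "true"):

```python
import os

def env_flag(name: str, default: str = "true") -> bool:
    """Treat the variable as enabled only when it equals 'true'
    (case-insensitive); unset variables fall back to the default."""
    return os.environ.get(name, default).strip().lower() == "true"

os.environ["TRACELOOP_METRICS_ENABLED"] = "false"
os.environ.pop("TRACELOOP_TRACE_CONTENT", None)  # ensure unset for the demo

print(env_flag("TRACELOOP_METRICS_ENABLED"))  # False
print(env_flag("TRACELOOP_TRACE_CONTENT"))    # True (unset, default "true")
```

Disabling `TRACELOOP_TRACE_CONTENT` is the usual way to keep prompts and completions out of telemetry when they may contain sensitive data.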
## Type Definitions

### Guardrail Types

```python { .api }
from enum import Enum

class GuardrailAttributes:
    """Constants for guardrail span attributes in OpenTelemetry instrumentation."""
    GUARDRAIL = "gen_ai.guardrail"
    TYPE = "gen_ai.guardrail.type"
    PII = "gen_ai.guardrail.pii"
    PATTERN = "gen_ai.guardrail.pattern"
    TOPIC = "gen_ai.guardrail.topic"
    CONTENT = "gen_ai.guardrail.content"
    CONFIDENCE = "gen_ai.guardrail.confidence"
    MATCH = "gen_ai.guardrail.match"

class Type(Enum):
    """Enum for guardrail assessment types."""
    INPUT = "input"
    OUTPUT = "output"
```
### Prompt Caching Types

```python { .api }
class CachingHeaders:
    """HTTP headers for Bedrock prompt caching functionality."""
    READ = "x-amzn-bedrock-cache-read-input-token-count"
    WRITE = "x-amzn-bedrock-cache-write-input-token-count"

class CacheSpanAttrs:
    """Span attributes for prompt caching operations."""
    TYPE = "gen_ai.cache.type"
    CACHED = "gen_ai.prompt_caching"
```
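The cache-token headers above arrive with the HTTP response. As a sketch, here is how they could be pulled out of a botocore-style `ResponseMetadata` dict; the response shape used here is an assumption for illustration, only the header names come from the API above:

```python
READ_HEADER = "x-amzn-bedrock-cache-read-input-token-count"
WRITE_HEADER = "x-amzn-bedrock-cache-write-input-token-count"

def cache_token_counts(response: dict) -> tuple:
    """Return (cache_read_tokens, cache_write_tokens), defaulting to 0
    when the headers are absent (i.e. caching was not involved)."""
    headers = response.get("ResponseMetadata", {}).get("HTTPHeaders", {})
    read = int(headers.get(READ_HEADER, 0))
    write = int(headers.get(WRITE_HEADER, 0))
    return read, write

# A request that was served partly from the prompt cache:
sample = {"ResponseMetadata": {"HTTPHeaders": {READ_HEADER: "128",
                                               WRITE_HEADER: "0"}}}
print(cache_token_counts(sample))  # (128, 0)
```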
### Event Role Types

```python { .api }
from enum import Enum

class Roles(Enum):
    """Valid roles for message events in a conversation."""
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"
```
## Supported Models

The instrumentation supports all AI model vendors available through Amazon Bedrock:

- **Anthropic**: Claude models (claude-3-sonnet, claude-3-haiku, claude-3-opus, claude-instant)
- **Cohere**: Command models (command, command-light, command-r, command-r-plus)
- **AI21**: Jurassic models (j2-mid, j2-ultra)
- **Meta**: Llama models (llama2-7b, llama2-13b, llama2-70b, llama3-8b, llama3-70b)
- **Amazon**: Titan models (titan-text-express, titan-text-lite, titan-embed)
- **Nova**: Amazon's Nova foundation models (nova-micro, nova-lite, nova-pro)
- **Imported Models**: Custom and third-party models deployed through Bedrock
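Bedrock model IDs carry the vendor as a dot-separated prefix (e.g. `anthropic.claude-3-sonnet-20240229-v1:0`), which is how per-vendor handling can be dispatched. The sketch below shows the idea; the prefix table and the "imported" fallback are illustrative, not the package's actual dispatch logic:

```python
KNOWN_VENDORS = {"anthropic", "cohere", "ai21", "meta", "amazon"}

def model_vendor(model_id: str) -> str:
    """Derive the vendor from a Bedrock model ID prefix; anything
    unrecognized (e.g. an imported-model ARN) falls back to 'imported'."""
    prefix = model_id.split(".", 1)[0]
    return prefix if prefix in KNOWN_VENDORS else "imported"

print(model_vendor("anthropic.claude-3-sonnet-20240229-v1:0"))  # anthropic
print(model_vendor("amazon.nova-lite-v1:0"))                    # amazon
print(model_vendor("arn:aws:bedrock:us-east-1:123456789012:imported-model/abc"))  # imported
```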
## Key Features

1. **Zero-Code Instrumentation**: Automatic instrumentation of all Bedrock API calls
2. **Comprehensive Telemetry**: Spans, metrics, and events for complete observability
3. **Multi-API Support**: Handles both the legacy invoke_model and the modern converse APIs
4. **Streaming Support**: Full instrumentation for streaming responses
5. **Guardrail Integration**: Detailed tracking of Bedrock Guardrails interactions
6. **Prompt Caching**: Metrics and attributes for prompt caching features
7. **Token Usage Tracking**: Detailed token consumption metrics with model-specific counting
8. **Error Handling**: Comprehensive exception tracking and custom error handling
9. **Privacy Controls**: Configurable content tracing for sensitive data management
10. **Production Ready**: Designed for high-volume production AI applications