OpenTelemetry instrumentation for Mistral AI client library enabling automatic tracing and observability
npx @tessl/cli install tessl/pypi-opentelemetry-instrumentation-mistralai@0.46.00
# OpenTelemetry Mistral AI Instrumentation

OpenTelemetry instrumentation for the Mistral AI Python client library, enabling automatic tracing and observability for LLM applications using Mistral AI's APIs. This library instruments the chat completion, chat streaming, and embeddings endpoints to collect telemetry data, including request/response attributes, token usage metrics, and span information.
## Package Information

- **Package Name**: opentelemetry-instrumentation-mistralai
- **Package Type**: pypi
- **Language**: Python
- **Installation**: `pip install opentelemetry-instrumentation-mistralai`
- **Dependencies**: `mistralai >= 0.2.0, < 1`
## Core Imports

```python
from opentelemetry.instrumentation.mistralai import MistralAiInstrumentor
```

Additional imports for type annotations:

```python
from typing import Collection, Union, Any, Optional, List, Literal
```

Event models and utilities:

```python
from opentelemetry.instrumentation.mistralai.event_models import (
    MessageEvent,
    ChoiceEvent,
    ToolCall,
    CompletionMessage,
    _FunctionToolCall,
)
from opentelemetry.instrumentation.mistralai.utils import (
    should_send_prompts,
    should_emit_events,
    dont_throw,
    TRACELOOP_TRACE_CONTENT,
)
from opentelemetry.instrumentation.mistralai.config import Config
from opentelemetry.instrumentation.mistralai.event_emitter import (
    emit_event,
    Roles,
    VALID_MESSAGE_ROLES,
    EVENT_ATTRIBUTES,
)
from opentelemetry.instrumentation.mistralai.version import __version__
```
## Basic Usage

```python
from opentelemetry.instrumentation.mistralai import MistralAiInstrumentor
from mistralai.client import MistralClient

# Initialize and instrument
instrumentor = MistralAiInstrumentor()
instrumentor.instrument()

# Use the Mistral AI client normally - it will be automatically traced
client = MistralClient(api_key="your-api-key")

# Chat completion will be automatically traced
response = client.chat(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)

# Streaming chat will also be traced
for chunk in client.chat_stream(
    model="mistral-large-latest",
    messages=[{"role": "user", "content": "Tell me a story"}],
):
    print(chunk.choices[0].delta.content, end="")

# Embeddings will be traced too
embeddings = client.embeddings(
    model="mistral-embed",
    input=["Hello world", "How are you?"],
)

# Clean up when done (optional)
instrumentor.uninstrument()
```
## Capabilities

### Main Instrumentor

Core instrumentor class for enabling and disabling Mistral AI instrumentation.

```python { .api }
class MistralAiInstrumentor(BaseInstrumentor):
    """An instrumentor for Mistral AI's client library."""

    def __init__(self, exception_logger=None, use_legacy_attributes: bool = True):
        """
        Initialize the instrumentor.

        Args:
            exception_logger: Custom exception logger function (optional)
            use_legacy_attributes (bool): Whether to use legacy span attributes vs
                the new event-based approach (default: True)
        """

    def instrumentation_dependencies(self) -> Collection[str]:
        """Returns the list of required packages: ["mistralai >= 0.2.0, < 1"]."""

    def _instrument(self, **kwargs):
        """
        Enable instrumentation (internal method).

        Args:
            tracer_provider: OpenTelemetry tracer provider (optional)
            event_logger_provider: OpenTelemetry event logger provider (optional)
        """

    def _uninstrument(self, **kwargs):
        """Disable instrumentation (internal method)."""
```
### Event Models

Data structures for representing AI model events in the new event-based telemetry approach.

```python { .api }
@dataclass
class MessageEvent:
    """Represents an input event for the AI model."""
    content: Any
    role: str = "user"
    tool_calls: Optional[List[ToolCall]] = None


@dataclass
class ChoiceEvent:
    """Represents a completion event for the AI model."""
    index: int
    message: CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: Optional[List[ToolCall]] = None
```
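For illustration, instances of these dataclasses are built like any other. The snippet below redefines minimal local mirrors so it is self-contained; the real classes live in `opentelemetry.instrumentation.mistralai.event_models`:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

# Minimal local mirrors of the event model dataclasses, for illustration only
@dataclass
class MessageEvent:
    content: Any
    role: str = "user"
    tool_calls: Optional[List[dict]] = None

@dataclass
class ChoiceEvent:
    index: int
    message: dict
    finish_reason: str = "unknown"
    tool_calls: Optional[List[dict]] = None

prompt = MessageEvent(content="Hello, how are you?")
completion = ChoiceEvent(index=0, message={"role": "assistant", "content": "Hi!"})
print(prompt.role, completion.finish_reason)  # user unknown
```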
### Utility Functions

Helper functions for controlling instrumentation behavior.

```python { .api }
def should_send_prompts() -> bool:
    """
    Determines whether prompts should be logged, based on the
    TRACELOOP_TRACE_CONTENT environment variable.

    Returns:
        bool: True if prompts should be sent (default), False otherwise
    """

def should_emit_events() -> bool:
    """
    Checks whether the instrumentation should emit events (non-legacy mode).

    Returns:
        bool: True if events should be emitted
    """

def dont_throw(func):
    """
    Decorator that wraps functions to log exceptions instead of raising them.

    Args:
        func: The function to wrap

    Returns:
        Wrapper function that catches and logs exceptions
    """
```
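A dependency-free sketch of how the `TRACELOOP_TRACE_CONTENT` gate likely behaves (the variable name comes from this package; the exact implementation may differ, e.g. it may also honor a context-level override):

```python
import os

TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"

def should_send_prompts() -> bool:
    # Content tracing is on unless the variable is explicitly set to "false"
    return os.getenv(TRACELOOP_TRACE_CONTENT, "true").lower() == "true"

os.environ[TRACELOOP_TRACE_CONTENT] = "false"
print(should_send_prompts())  # False
```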
### Event Emission

Functions for emitting OpenTelemetry events in the new event-based approach.

```python { .api }
def emit_event(event: Union[MessageEvent, ChoiceEvent], event_logger: Union[EventLogger, None]) -> None:
    """
    Emit an event to the OpenTelemetry SDK.

    Args:
        event: The event to emit (MessageEvent or ChoiceEvent)
        event_logger: The OpenTelemetry event logger

    Returns:
        None
    """

class Roles(Enum):
    """Enum of valid message roles."""
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"

VALID_MESSAGE_ROLES = {"user", "assistant", "system", "tool"}
"""Set of valid roles for naming message events."""

EVENT_ATTRIBUTES = {"gen_ai.system": "mistral_ai"}
"""Default attributes to be used for events."""
```
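The role determines the event name: under the `gen_ai` semantic conventions, message events are typically named `gen_ai.{role}.message`. An illustrative sketch of that mapping, assuming those conventions (the fallback choice for unknown roles is hypothetical):

```python
from enum import Enum

class Roles(Enum):
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"

VALID_MESSAGE_ROLES = {role.value for role in Roles}
EVENT_ATTRIBUTES = {"gen_ai.system": "mistral_ai"}

def message_event_name(role: str) -> str:
    # Fall back to the generic "user" name for unrecognized roles (illustrative choice)
    if role not in VALID_MESSAGE_ROLES:
        role = Roles.USER.value
    return f"gen_ai.{role}.message"

print(message_event_name("system"))  # gen_ai.system.message
```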
### Configuration

Configuration options for the instrumentation.

```python { .api }
class Config:
    """Configuration class for the instrumentation."""
    exception_logger = None  # Custom exception logger
    use_legacy_attributes = True  # Whether to use legacy attributes
```
## Types

```python { .api }
class _FunctionToolCall(TypedDict):
    """Internal type for function tool call details."""
    function_name: str
    arguments: Optional[dict[str, Any]]


class ToolCall(TypedDict):
    """Represents a tool call in the AI model."""
    id: str
    function: _FunctionToolCall
    type: Literal["function"]


class CompletionMessage(TypedDict):
    """Represents a message in the AI model."""
    content: Any
    role: str
```
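Since these are `TypedDict`s, instances are plain dictionaries. A small self-contained example (the `get_weather` values are purely illustrative):

```python
from typing import Literal, Optional, TypedDict

class _FunctionToolCall(TypedDict):
    function_name: str
    arguments: Optional[dict]

class ToolCall(TypedDict):
    id: str
    function: _FunctionToolCall
    type: Literal["function"]

# A ToolCall is just a dict that type checkers validate structurally
call: ToolCall = {
    "id": "call_0",
    "function": {"function_name": "get_weather", "arguments": {"city": "Paris"}},
    "type": "function",
}
print(call["function"]["function_name"])  # get_weather
```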
## Environment Variables

- **`TRACELOOP_TRACE_CONTENT`**: Controls whether prompts and completions are logged (default: "true"). Set to "false" to disable content logging for privacy.
## Constants

```python { .api }
TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
"""Environment variable name for controlling content tracing."""

_instruments = ("mistralai >= 0.2.0, < 1",)
"""Required package dependencies tuple."""

WRAPPED_METHODS = [
    {
        "method": "chat",
        "span_name": "mistralai.chat",
        "streaming": False,
    },
    {
        "method": "chat_stream",
        "span_name": "mistralai.chat",
        "streaming": True,
    },
    {
        "method": "embeddings",
        "span_name": "mistralai.embeddings",
        "streaming": False,
    },
]
"""Configuration for the methods to be instrumented."""

VALID_MESSAGE_ROLES = {"user", "assistant", "system", "tool"}
"""Set of valid roles for naming message events."""

EVENT_ATTRIBUTES = {"gen_ai.system": "mistral_ai"}
"""Default attributes to be used for events."""
```
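At instrumentation time, each entry in `WRAPPED_METHODS` drives the patching of the corresponding client method. The real implementation wraps methods with `wrapt`; the following is a dependency-free sketch of the same pattern using a stand-in client (`FakeClient` and the recording list are illustrative, not part of this package):

```python
WRAPPED_METHODS = [
    {"method": "chat", "span_name": "mistralai.chat", "streaming": False},
    {"method": "chat_stream", "span_name": "mistralai.chat", "streaming": True},
    {"method": "embeddings", "span_name": "mistralai.embeddings", "streaming": False},
]

class FakeClient:  # stand-in for MistralClient
    def chat(self):
        return "chat-result"
    def chat_stream(self):
        return "stream-result"
    def embeddings(self):
        return "embeddings-result"

spans_started = []

def make_wrapper(original, span_name):
    def wrapper(self, *args, **kwargs):
        spans_started.append(span_name)  # where a real span would be started
        return original(self, *args, **kwargs)
    return wrapper

# Patch each configured method, preserving the original behavior
for spec in WRAPPED_METHODS:
    original = getattr(FakeClient, spec["method"])
    setattr(FakeClient, spec["method"], make_wrapper(original, spec["span_name"]))

client = FakeClient()
client.chat()
print(spans_started)  # ['mistralai.chat']
```

Uninstrumenting reverses the process by restoring the original, unwrapped methods.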
### Internal Helper Functions

Internal functions used by the instrumentation (advanced usage).

```python { .api }
def _llm_request_type_by_method(method_name: str) -> str:
    """
    Determine the LLM request type based on the method name.

    Args:
        method_name: Name of the method being instrumented

    Returns:
        str: LLM request type ("chat", "embedding", or "unknown")
    """

def _set_span_attribute(span, name: str, value):
    """
    Set a span attribute if the value is not None or empty.

    Args:
        span: OpenTelemetry span object
        name: Attribute name
        value: Attribute value
    """

def _with_tracer_wrapper(func):
    """
    Helper decorator that provides a tracer to wrapper functions.

    Args:
        func: Function to wrap with tracer

    Returns:
        Wrapped function with tracer access
    """
```
## Instrumented Methods

The library automatically instruments these Mistral AI client methods:

- **`MistralClient.chat`**: Synchronous chat completions
- **`MistralClient.chat_stream`**: Synchronous streaming chat completions
- **`MistralClient.embeddings`**: Synchronous embeddings
- **`MistralAsyncClient.chat`**: Asynchronous chat completions
- **`MistralAsyncClient.chat_stream`**: Asynchronous streaming chat completions
- **`MistralAsyncClient.embeddings`**: Asynchronous embeddings
## OpenTelemetry Integration

- Supports both legacy span attributes and the new event-based telemetry
- Integrates with the OpenTelemetry TracerProvider and EventLoggerProvider
- Creates spans and events following the OpenTelemetry semantic conventions for LLM operations
- Supports context propagation and distributed tracing
## Privacy Considerations

By default, this instrumentation logs prompts, completions, and embeddings to span attributes. For privacy reasons or to reduce trace size, disable content logging:

```bash
export TRACELOOP_TRACE_CONTENT=false
```
## Version Information

```python { .api }
__version__ = "0.46.2"
```