# OpenTelemetry OpenAI Instrumentation

OpenTelemetry instrumentation for the OpenAI Python library. It automatically traces OpenAI API calls, including prompts, completions, and embeddings, and integrates with the OpenTelemetry ecosystem to provide distributed tracing for LLM applications.

## Package Information

- **Package Name**: opentelemetry-instrumentation-openai
- **Language**: Python
- **Installation**: `pip install opentelemetry-instrumentation-openai`

## Core Imports

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
```

Additional imports for advanced usage:

```python
from opentelemetry.instrumentation.openai.shared.config import Config
from opentelemetry.instrumentation.openai.utils import (
    is_openai_v1,
    should_send_prompts,
    TRACELOOP_TRACE_CONTENT,
)
from opentelemetry.instrumentation.openai.shared.event_models import (
    MessageEvent,
    ChoiceEvent,
    ToolCall,
    CompletionMessage,
)
from opentelemetry.instrumentation.openai.shared.event_emitter import (
    emit_event,
    Roles,
)
```

## Basic Usage

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor
import openai

# Basic instrumentation setup
OpenAIInstrumentor().instrument()

# Now OpenAI calls will be automatically traced
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
```

Advanced usage with custom configuration:

```python
async def upload_image(trace_id, span_id, image_name, base64_data):
    # Custom image upload handler
    return f"https://example.com/images/{image_name}"

def exception_handler(error):
    # Custom exception logging
    print(f"OpenAI instrumentation error: {error}")

def get_metrics_attributes():
    # Custom metrics attributes
    return {"service.name": "my-llm-app"}

# Configure instrumentor with custom settings
instrumentor = OpenAIInstrumentor(
    enrich_assistant=True,
    exception_logger=exception_handler,
    get_common_metrics_attributes=get_metrics_attributes,
    upload_base64_image=upload_image,
    enable_trace_context_propagation=True,
    use_legacy_attributes=False,
)

instrumentor.instrument(
    tracer_provider=tracer_provider,
    meter_provider=meter_provider,
    event_logger_provider=event_logger_provider,
)
```

## Architecture

The instrumentation works by wrapping OpenAI API calls at the client level:

- **Auto-detection**: Automatically detects the OpenAI library version (v0.x vs. v1.x) and applies the appropriate instrumentation
- **Tracing**: Captures detailed spans for all OpenAI operations, including prompts, completions, and metadata
- **Metrics**: Collects operational metrics, including token usage, duration, and error rates
- **Events**: Emits structured events for LLM operations, following OpenTelemetry semantic conventions
- **Privacy Controls**: Content logging is configurable via environment variables
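The auto-detection step above amounts to a version gate. Here is a minimal sketch of the idea using only the standard library; the library's actual check lives in its `utils` module and may differ, and `detect_openai_major`/`select_instrumentation_path` are illustrative names, not part of the package:

```python
from importlib.metadata import PackageNotFoundError, version

def detect_openai_major() -> int:
    """Return the installed openai major version, or -1 if openai is not installed."""
    try:
        return int(version("openai").split(".")[0])
    except (PackageNotFoundError, ValueError):
        return -1

def select_instrumentation_path(major: int) -> str:
    if major >= 1:
        return "v1"  # wrap client.chat.completions.create(), client.embeddings.create(), ...
    if major == 0:
        return "v0"  # wrap openai.ChatCompletion.create(), openai.Embedding.create(), ...
    return "none"    # openai not installed; skip instrumentation

print(select_instrumentation_path(detect_openai_major()))
```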

## Capabilities

### Instrumentation Management

Core functionality for setting up and managing OpenAI instrumentation lifecycle.

```python { .api }
class OpenAIInstrumentor(BaseInstrumentor):
    def __init__(
        self,
        enrich_assistant: bool = False,
        exception_logger: Optional[Callable] = None,
        get_common_metrics_attributes: Callable[[], dict] = lambda: {},
        upload_base64_image: Optional[Callable[[str, str, str, str], Coroutine[None, None, str]]] = lambda *args: "",
        enable_trace_context_propagation: bool = True,
        use_legacy_attributes: bool = True,
    ):
        """
        Initialize the OpenAI instrumentor.

        Parameters:
        - enrich_assistant: bool, enable assistant enrichment for additional context
        - exception_logger: Optional[Callable], custom exception logging handler
        - get_common_metrics_attributes: Callable[[], dict], provider for common metrics attributes
        - upload_base64_image: Optional[Callable], handler for base64 image uploads in traces
        - enable_trace_context_propagation: bool, enable trace context propagation
        - use_legacy_attributes: bool, use legacy attribute format (vs new semantic conventions)
        """

    def instrument(self, **kwargs) -> None:
        """
        Start instrumentation of OpenAI library.

        Parameters:
        - tracer_provider: OpenTelemetry tracer provider
        - meter_provider: OpenTelemetry meter provider
        - event_logger_provider: OpenTelemetry event logger provider
        """

    def uninstrument(self, **kwargs) -> None:
        """Stop instrumentation and restore original OpenAI functions."""

    def instrumentation_dependencies(self) -> Collection[str]:
        """Return list of required package dependencies."""
```

### Configuration

Global configuration settings that control instrumentation behavior.

```python { .api }
class Config:
    enrich_assistant: bool = False
    exception_logger: Optional[Callable] = None
    get_common_metrics_attributes: Callable[[], dict] = lambda: {}
    upload_base64_image: Union[Callable[[str, str, str, str], str], Callable[[str, str, str, str], Coroutine[None, None, str]]]
    enable_trace_context_propagation: bool = True
    use_legacy_attributes: bool = True
    event_logger: Optional[EventLogger] = None
```

### Utility Functions

Helper functions for version detection, environment configuration, and instrumentation control.

```python { .api }
def is_openai_v1() -> bool:
    """Check if OpenAI library version is >= 1.0.0."""

def is_reasoning_supported() -> bool:
    """Check if reasoning is supported (OpenAI >= 1.58.0)."""

def is_azure_openai(instance) -> bool:
    """Check if instance is Azure OpenAI client."""

def is_metrics_enabled() -> bool:
    """Check if metrics collection is enabled via TRACELOOP_METRICS_ENABLED."""

def should_send_prompts() -> bool:
    """Check if prompt content should be traced based on TRACELOOP_TRACE_CONTENT."""

def should_emit_events() -> bool:
    """Check if events should be emitted (non-legacy mode with event logger)."""

async def start_as_current_span_async(tracer, *args, **kwargs):
    """Async context manager for starting spans."""

def dont_throw(func) -> Callable:
    """
    Decorator that wraps functions to log exceptions instead of throwing them.
    Works for both synchronous and asynchronous functions.
    """

def run_async(method) -> None:
    """Run async method in appropriate event loop context."""

# Constants
TRACELOOP_TRACE_CONTENT: str = "TRACELOOP_TRACE_CONTENT"
"""Environment variable name for controlling content tracing."""
```
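The `dont_throw` decorator can be illustrated with a plain-Python re-implementation of the documented behavior (log exceptions instead of raising, for both sync and async functions). This is a sketch of the idea, not the library's source:

```python
import asyncio
import functools
import logging

def dont_throw(func):
    # Illustrative re-implementation: suppress and log any exception the
    # wrapped function raises, returning None instead.
    if asyncio.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            try:
                return await func(*args, **kwargs)
            except Exception:
                logging.exception("instrumentation error suppressed")
        return async_wrapper

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("instrumentation error suppressed")
    return wrapper

@dont_throw
def broken_enrichment():
    raise RuntimeError("boom")

print(broken_enrichment())  # None: the error was logged, not raised
```

This pattern matters for instrumentation: a bug in tracing code should never break the application call it wraps.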

### Event Models

Data structures for representing AI model interactions and tool calls in structured events.

```python { .api }
from typing import Union, List, Optional, Callable, Any, Literal, Coroutine
from typing_extensions import TypedDict
from dataclasses import dataclass
from enum import Enum

class ToolCall(TypedDict):
    """Represents a tool call in the AI model."""
    id: str
    function: dict[str, Any]  # Contains function_name and optional arguments
    type: Literal["function"]

class CompletionMessage(TypedDict):
    """Represents a message in the AI model."""
    content: Any
    role: str = "assistant"

@dataclass
class MessageEvent:
    """Represents an input event for the AI model."""
    content: Any
    role: str = "user"
    tool_calls: Optional[List[ToolCall]] = None

@dataclass
class ChoiceEvent:
    """Represents a completion event for the AI model."""
    index: int
    message: CompletionMessage
    finish_reason: str = "unknown"
    tool_calls: Optional[List[ToolCall]] = None
```
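To show how these models fit together, here is a small exercise using local re-declarations that mirror the documented fields; in real code you would import `MessageEvent` and `ChoiceEvent` from `opentelemetry.instrumentation.openai.shared.event_models` rather than define them yourself:

```python
from dataclasses import dataclass
from typing import Any, List, Optional

# Local stand-ins mirroring the documented fields, for illustration only.
@dataclass
class MessageEvent:
    content: Any
    role: str = "user"
    tool_calls: Optional[List[dict]] = None

@dataclass
class ChoiceEvent:
    index: int
    message: dict
    finish_reason: str = "unknown"
    tool_calls: Optional[List[dict]] = None

# An input event (the prompt) and a completion event (the model's choice)
prompt = MessageEvent(content="Hello, world!")
choice = ChoiceEvent(
    index=0,
    message={"content": "Hi!", "role": "assistant"},
    finish_reason="stop",
)

print(prompt.role, choice.finish_reason)  # user stop
```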

### Event Emission

Functions for emitting structured events following OpenTelemetry semantic conventions.

```python { .api }
def emit_event(event: Union[MessageEvent, ChoiceEvent]) -> None:
    """
    Emit an event to the OpenTelemetry SDK.

    Parameters:
    - event: MessageEvent or ChoiceEvent to emit

    Raises:
    - TypeError: If event type is not supported
    """

class Roles(Enum):
    """Valid message roles for AI interactions."""
    USER = "user"
    ASSISTANT = "assistant"
    SYSTEM = "system"
    TOOL = "tool"
```

### Version Information

Package version information.

```python { .api }
__version__: str = "0.46.2"
```

## Environment Variables

Control instrumentation behavior through environment variables:

```bash
# Control content logging (default: "true")
export TRACELOOP_TRACE_CONTENT=false   # Disable logging prompts/completions for privacy

# Control metrics collection (default: "true")
export TRACELOOP_METRICS_ENABLED=false # Disable metrics collection
```
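Both variables are boolean-style string toggles. A hedged sketch of how such a flag is typically evaluated; the library's exact parsing (case handling, accepted values) may differ, and `env_flag` is an illustrative helper, not part of the package:

```python
import os

def env_flag(name: str, default: str = "true") -> bool:
    # Treat the feature as enabled unless the variable is set to something
    # other than "true" (case-insensitive).
    return os.getenv(name, default).lower() == "true"

os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
print(env_flag("TRACELOOP_TRACE_CONTENT"))   # False: content logging disabled
print(env_flag("TRACELOOP_METRICS_ENABLED")) # True: unset falls back to the default
```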

## Supported Operations

The instrumentation automatically traces the following OpenAI operations:

### OpenAI v1.x (>= 1.0.0)
- Chat completions (sync/async): `client.chat.completions.create()`
- Completions (sync/async): `client.completions.create()`
- Embeddings (sync/async): `client.embeddings.create()`
- Image generation: `client.images.generate()`
- Assistants API (beta): `client.beta.assistants.*`
- Threads and runs (beta): `client.beta.threads.*`
- Responses API: `client.responses.*`

### OpenAI v0.x (>= 0.27.0, < 1.0.0)
- Chat completions (sync/async): `openai.ChatCompletion.create()`
- Completions (sync/async): `openai.Completion.create()`
- Embeddings (sync/async): `openai.Embedding.create()`

## Privacy and Security

### Content Control
By default, the instrumentation logs prompts, completions, and embeddings to span attributes for visibility and debugging. To disable content logging for privacy:

```bash
export TRACELOOP_TRACE_CONTENT=false
```

### Metrics Control
Metrics collection can be disabled:

```bash
export TRACELOOP_METRICS_ENABLED=false
```

### Custom Exception Handling
Provide a custom exception logger to handle instrumentation errors:

```python
import logging

logger = logging.getLogger(__name__)

def custom_exception_handler(error):
    # Log to your preferred logging system
    logger.warning(f"OpenAI instrumentation error: {error}")

instrumentor = OpenAIInstrumentor(exception_logger=custom_exception_handler)
```

## Integration Examples

### Basic Tracing Setup

```python
from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Configure tracing
trace.set_tracer_provider(TracerProvider())
jaeger_exporter = JaegerExporter(
    agent_host_name="localhost",
    agent_port=6831,
)
span_processor = BatchSpanProcessor(jaeger_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)

# Instrument OpenAI
OpenAIInstrumentor().instrument()
```

### Metrics Collection

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Configure metrics
metric_reader = PrometheusMetricReader()
metrics.set_meter_provider(MeterProvider(metric_readers=[metric_reader]))

# Instrument with metrics
OpenAIInstrumentor().instrument(meter_provider=metrics.get_meter_provider())
```

### Event Logging

```python
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._events import EventLoggerProvider
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Configure event logging
logger_provider = LoggerProvider()
event_logger_provider = EventLoggerProvider(logger_provider)

# Instrument with events (requires use_legacy_attributes=False)
instrumentor = OpenAIInstrumentor(use_legacy_attributes=False)
instrumentor.instrument(event_logger_provider=event_logger_provider)
```