tessl/pypi-ddtrace

Datadog APM client library providing distributed tracing, continuous profiling, error tracking, test optimization, deployment tracking, code hotspots analysis, and dynamic instrumentation for Python applications.

Describes: pkg:pypi/ddtrace@3.12.x

To install, run

npx @tessl/cli install tessl/pypi-ddtrace@3.12.0


ddtrace

Datadog's APM (Application Performance Monitoring) client library for Python provides distributed tracing, continuous profiling, error tracking, test optimization, deployment tracking, code hotspot analysis, and dynamic instrumentation. It gives deep insight into application behavior and performance bottlenecks through automatic instrumentation of popular Python frameworks and libraries, a monkey-patching system for third-party code, telemetry collection with built-in context propagation and span management, and profiling capabilities that include memory allocation tracking and stack sampling.

Package Information

  • Package Name: ddtrace
  • Language: Python
  • Installation: pip install ddtrace

Core Imports

import ddtrace

Common imports for tracing:

from ddtrace import tracer
from ddtrace import patch, patch_all, config

For profiling:

from ddtrace.profiling import Profiler

For OpenTelemetry compatibility:

from ddtrace.opentelemetry import TracerProvider

Basic Usage

import ddtrace
from ddtrace import tracer, patch

# Configure basic settings
ddtrace.config.service = "my-python-app"
ddtrace.config.env = "production"
ddtrace.config.version = "1.0.0"

# Manual instrumentation for specific libraries
patch(redis=True, psycopg=True, requests=True)

# Create custom spans
with tracer.trace("custom-operation") as span:
    span.set_tag("user.id", "12345")
    span.set_tag("operation.type", "data-processing")
    # Your application logic here
    result = process_data()
    span.set_tag("result.count", len(result))

# Manual span creation without context manager
span = tracer.trace("background-task")
span.set_tag("task.type", "cleanup")
try:
    cleanup_operation()
    span.set_tag("status", "success")
except Exception as e:
    span.set_error(e)
    span.set_tag("status", "error")
finally:
    span.finish()

Architecture

ddtrace uses a modular architecture with several key components:

  • Tracer: Central coordinator for span creation, context management, and trace submission
  • Spans: Individual units of work with timing, metadata, and parent-child relationships
  • Context: Thread-local storage for active span state and trace propagation
  • Patches: Monkey-patching system for automatic instrumentation of third-party libraries
  • Writers: Backend communication layer for submitting traces to Datadog Agent
  • Processors: Middleware for span enrichment, filtering, and transformation
  • Profiler: Continuous profiling engine for CPU, memory, and lock contention analysis

The library integrates seamlessly with 80+ popular Python libraries through automatic instrumentation while providing manual instrumentation APIs for custom applications.
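The monkey-patching idea behind the Patches component can be illustrated with a toy sketch. FakeRedis and traced below are hypothetical stand-ins for illustration only, not ddtrace internals:

```python
import time

# Calls recorded by the wrapper, mimicking spans produced per library call.
recorded = []

def traced(wrapped, name):
    # Wrap a callable so each call is timed and recorded, which is the
    # core of how an integration patch turns library calls into spans.
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return wrapped(*args, **kwargs)
        finally:
            recorded.append((name, time.monotonic() - start))
    return wrapper

class FakeRedis:
    # Stand-in for a third-party client; not a real library.
    def get(self, key):
        return f"value:{key}"

# "Patch" the method in place, analogous to what patch(redis=True) arranges
# for the real client class.
FakeRedis.get = traced(FakeRedis.get, "redis.get")

result = FakeRedis().get("user:1")
```

Callers are unchanged: they still call `FakeRedis().get(...)`, which is why patching must happen before application code captures references to the original functions.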

Capabilities

Core Tracing

Fundamental distributed tracing functionality including span creation, context management, trace filtering, and integration configuration. Provides the essential building blocks for observability.

class Tracer:
    def trace(self, name: str, service: str = None, resource: str = None, span_type: str = None) -> Span: ...
    def wrap(self, name: str = None, service: str = None, resource: str = None, span_type: str = None): ...

def patch(**patch_modules: bool) -> None: ...
def patch_all(**patch_modules: bool) -> None: ...  # DEPRECATED: Use DD_PATCH_MODULES environment variable instead

tracer: Tracer
config: Config


Profiling

Continuous profiling capabilities for CPU usage, memory allocation, lock contention, and wall time analysis. Enables identification of performance bottlenecks and resource consumption patterns.

class Profiler:
    def __init__(self, service: str = None, env: str = None, version: str = None, tags: Dict[str, str] = None): ...
    def start(self, stop_on_exit: bool = True, profile_children: bool = True) -> None: ...
    def stop(self, flush: bool = True) -> None: ...
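The stack-sampling idea named above can be sketched in plain Python. This is illustrative only, not ddtrace's implementation, which samples periodically on a background thread and exports aggregated profiles:

```python
import sys
import collections

# Aggregate observed call stacks; the profiler's report is essentially a
# weighted view of which stacks were on CPU when samples fired.
samples = collections.Counter()

def sample_once():
    # Snapshot every thread's current Python frame and walk it to the root.
    for frame in sys._current_frames().values():
        stack = []
        while frame is not None:
            stack.append(frame.f_code.co_name)
            frame = frame.f_back
        samples[tuple(stack)] += 1

def hot_function():
    sample_once()  # pretend the sampler fired while this frame was live

hot_function()
```

Functions that are running (or blocked) when many samples fire accumulate weight, which is how hotspots surface without instrumenting every call.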


Application Security

Application security monitoring including Interactive Application Security Testing (IAST), runtime security monitoring, and AI/LLM security features for threat detection and vulnerability identification.

# Configuration constants
APPSEC_ENV: str  # "DD_APPSEC_ENABLED"
IAST_ENV: str    # "DD_IAST_ENABLED"
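Both features are typically switched on through these environment variables before the process starts; a minimal sketch, where app.py is a placeholder for your entrypoint:

```shell
# Enable AppSec runtime monitoring and IAST vulnerability detection.
export DD_APPSEC_ENABLED=true
export DD_IAST_ENABLED=true

# Launch under ddtrace's runner so instrumentation loads before the app.
ddtrace-run python app.py
```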


Automatic Instrumentation

Comprehensive automatic instrumentation for web frameworks, databases, HTTP clients, message queues, AI/ML libraries, and other popular Python packages through monkey-patching.

def patch(
    django: bool = None,
    flask: bool = None,
    fastapi: bool = None,
    psycopg: bool = None,
    redis: bool = None,
    requests: bool = None,
    openai: bool = None,
    # ... 80+ more integrations
    raise_errors: bool = True,
    **kwargs
) -> None: ...


OpenTelemetry Integration

OpenTelemetry API compatibility layer enabling interoperability between ddtrace and OpenTelemetry instrumentation while maintaining Datadog-specific features and optimizations.

class TracerProvider:
    def get_tracer(
        self,
        instrumenting_module_name: str,
        instrumenting_library_version: str = None,
        schema_url: str = None
    ) -> Tracer: ...


Configuration and Settings

Comprehensive configuration system for service identification, sampling, trace filtering, integration settings, and environment-specific customization through environment variables and programmatic APIs.

class Config:
    service: str
    env: str
    version: str
    tags: Dict[str, str]
    trace_enabled: bool
    analytics_enabled: bool
    priority_sampling: bool
    
config: Config
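The same service identity can be set through Datadog's standard environment variables instead of code; the values below are examples:

```shell
# Read once at startup; equivalent to setting config.service/env/version.
export DD_SERVICE=my-python-app
export DD_ENV=production
export DD_VERSION=1.0.0

# Global tags use a comma-separated key:value list.
export DD_TAGS=team:payments,region:us-east-1
```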


Constants

# Tag keys
ENV_KEY: str = "env"
VERSION_KEY: str = "version"
SERVICE_KEY: str = "service.name"
SERVICE_VERSION_KEY: str = "service.version"
SPAN_KIND: str = "span.kind"

# Error tags
ERROR_MSG: str = "error.message"
ERROR_TYPE: str = "error.type"
ERROR_STACK: str = "error.stack"

# Sampling decisions
USER_REJECT: int = -1
AUTO_REJECT: int = 0
AUTO_KEEP: int = 1
USER_KEEP: int = 2

# Manual sampling
MANUAL_DROP_KEY: str = "manual.drop"
MANUAL_KEEP_KEY: str = "manual.keep"

# Environment variables
APPSEC_ENV: str = "DD_APPSEC_ENABLED"
IAST_ENV: str = "DD_IAST_ENABLED"

# Process identification
PID: str = "process_id"
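As a worked illustration of how the four sampling priorities are read: values above zero keep the trace, zero and below drop it. The is_kept helper below is hypothetical, added only to make the partition explicit:

```python
# Sampling priority values as defined above.
USER_REJECT, AUTO_REJECT, AUTO_KEEP, USER_KEEP = -1, 0, 1, 2

def is_kept(priority: int) -> bool:
    # AUTO_KEEP and USER_KEEP retain the trace; both reject values drop it.
    return priority > 0

decisions = {p: is_kept(p) for p in (USER_REJECT, AUTO_REJECT, AUTO_KEEP, USER_KEEP)}
```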

Types

class Span:
    def set_tag(self, key: str, value: str) -> None: ...
    def set_error(self, error: Exception, traceback: str = None) -> None: ...
    def finish(self, finish_time: float = None) -> None: ...
    def set_metric(self, key: str, value: float) -> None: ...

class Context:
    def clone(self) -> 'Context': ...

class Pin:
    def __init__(self, service: str = None, app: str = None, tags: Dict[str, str] = None, tracer: Tracer = None): ...
    def onto(self, obj: object) -> Pin: ...
    def get_from(self, obj: object) -> Pin: ...

class TraceFilter:
    def process_trace(self, trace: List[Span]) -> List[Span]: ...
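A minimal filter matching this interface might drop health-check traces. FakeSpan is a stand-in used for illustration; in real ddtrace, returning None from process_trace drops the trace entirely, which this sketch mirrors:

```python
from typing import List, Optional

class FakeSpan:
    # Stand-in for a span; only the resource attribute matters here.
    def __init__(self, resource: str):
        self.resource = resource

class HealthCheckFilter:
    # Drop any trace whose root span is a health-check request;
    # return the trace unchanged otherwise.
    def process_trace(self, trace: List[FakeSpan]) -> Optional[List[FakeSpan]]:
        if trace and trace[0].resource == "GET /health":
            return None
        return trace

f = HealthCheckFilter()
kept = f.process_trace([FakeSpan("GET /users")])
dropped = f.process_trace([FakeSpan("GET /health")])
```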

class BaseContextProvider:
    def activate(self, span: Span) -> object: ...
    def active(self) -> Span: ...
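The activate/active contract above can be mimicked with a few lines of thread-local state. This is illustrative only; the real provider also handles async execution contexts:

```python
import threading

class LocalContextProvider:
    # Track the active "span" per thread, mirroring the
    # activate()/active() contract of BaseContextProvider.
    def __init__(self):
        self._local = threading.local()

    def activate(self, span):
        self._local.active = span
        return span

    def active(self):
        # Each thread sees only what it activated; other threads see None.
        return getattr(self._local, "active", None)

provider = LocalContextProvider()
provider.activate("parent-span")
```

Because the state is thread-local, spans started on one thread never leak into another, which is what makes concurrent request handling safe.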