
tessl/pypi-ddtrace

Datadog APM client library providing distributed tracing, continuous profiling, error tracking, test optimization, deployment tracking, code hotspots analysis, and dynamic instrumentation for Python applications.


docs/profiling.md

Profiling

Continuous profiling capabilities provide deep insights into application performance through CPU usage analysis, memory allocation tracking, lock contention monitoring, and wall time measurement. The profiler helps identify performance bottlenecks, memory leaks, and resource consumption patterns in production applications.

Capabilities

Profiler Management

The main Profiler class manages the lifecycle of continuous profiling and data collection across multiple profiling dimensions.

from typing import Dict, Optional

class Profiler:
    def __init__(
        self,
        service: Optional[str] = None,
        env: Optional[str] = None,
        version: Optional[str] = None,
        tags: Optional[Dict[str, str]] = None
    ):
        """
        Initialize a new profiler instance.

        Parameters:
        - service: Service name to profile (defaults to global service)
        - env: Environment name (e.g., 'production', 'staging')
        - version: Application version
        - tags: Additional tags to attach to profiles
        """

    def start(self, stop_on_exit: bool = True, profile_children: bool = True) -> None:
        """
        Start the profiler and begin collecting profile data.

        Parameters:
        - stop_on_exit: Whether to automatically stop profiling on program exit
        - profile_children: Whether to start a profiler in child processes
        """

    def stop(self, flush: bool = True) -> None:
        """
        Stop the profiler and optionally flush remaining data.

        Parameters:
        - flush: Whether to upload any remaining profile data
        """

Usage examples:

from ddtrace.profiling import Profiler

# Basic profiler setup
profiler = Profiler(
    service="web-service",
    env="production",
    version="2.1.0",
    tags={"team": "backend", "region": "us-east-1"}
)

# Start profiling
profiler.start()

# Your application code runs here
run_application()

# Stop profiling (optional - automatically stops on exit)
profiler.stop()

Profile Collection Types

The profiler automatically collects multiple types of performance data:

CPU Profiling

Tracks CPU usage and call stack samples to identify performance hotspots:

# CPU profiling is enabled by default
profiler = Profiler()
profiler.start()

# The profiler will sample call stacks periodically
# and identify functions consuming the most CPU time
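Datadog's CPU profiler works by statistical sampling, but the kind of hotspot attribution it produces can be previewed with the stdlib's deterministic `cProfile` (a stand-in for illustration, not the profiler's actual mechanism):

```python
import cProfile
import io
import pstats

def hot():
    # Deliberately CPU-heavy: this should dominate the profile
    return sum(i * i for i in range(200_000))

def cold():
    return 42

def main():
    hot()
    cold()

pr = cProfile.Profile()
pr.enable()
main()
pr.disable()

# Render the top entries by cumulative time, as a hotspot view would
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
print(report)
```

In the report, `hot` accounts for nearly all cumulative time, which is exactly the signal a CPU profile surfaces in the Datadog UI.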

Memory Profiling

Monitors memory allocation patterns and tracks memory usage over time:

# Memory profiling tracks allocations and can help identify memory leaks
profiler = Profiler()
profiler.start()

# Large memory allocations and patterns will be captured
large_data = allocate_large_dataset()
process_data(large_data)
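The allocation-tracking idea can be previewed locally with the stdlib's `tracemalloc`, which attributes allocations to source lines much like the allocation profile view does (a conceptual analogue, not ddtrace's implementation):

```python
import tracemalloc

tracemalloc.start()

# Allocate roughly 1 MB across many small objects
data = [bytes(1024) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
snapshot = tracemalloc.take_snapshot()
tracemalloc.stop()

# The snapshot attributes allocations to the lines that made them
top = snapshot.statistics("lineno")[0]
print(f"current={current} bytes, peak={peak} bytes, biggest allocator: {top}")
```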

Lock Contention Profiling

Identifies threading bottlenecks and synchronization issues:

import threading

profiler = Profiler()
profiler.start()

# Lock contention will be automatically detected
lock = threading.Lock()
with lock:
    # Critical section - contention will be measured
    shared_resource_operation()
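What the lock profiler measures, essentially, is how long threads spend waiting to acquire locks. A toy illustration of that quantity using only the stdlib:

```python
import threading
import time

lock = threading.Lock()
wait_times = []

def worker():
    t0 = time.perf_counter()
    with lock:  # blocks until the main thread releases the lock
        wait_times.append(time.perf_counter() - t0)

with lock:
    t = threading.Thread(target=worker)
    t.start()
    time.sleep(0.1)  # hold the lock so the worker has to wait

t.join()
print(f"worker waited {wait_times[0]:.3f}s to acquire the lock")
```

High acquire-wait times like this, aggregated across threads, are what show up as contention hotspots.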

Wall Time Profiling

Measures total elapsed time including I/O wait and blocking operations:

profiler = Profiler()
profiler.start()

# Wall time profiling captures total execution time
# including blocking I/O operations
with open('large-file.txt') as f:
    data = f.read()  # I/O wait time is captured
    process_file_data(data)
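The distinction between wall time and CPU time is easy to see with stdlib timers: a blocking operation consumes wall time while CPU time barely moves, which is why wall-time profiles reveal I/O-bound work that CPU profiles miss.

```python
import time

def blocking_work():
    time.sleep(0.2)  # stands in for blocking I/O

wall_start = time.perf_counter()
cpu_start = time.process_time()
blocking_work()
wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start

# Wall time includes the sleep; CPU time does not
print(f"wall={wall_elapsed:.3f}s cpu={cpu_elapsed:.3f}s")
```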

Advanced Configuration

Environment-based Configuration

The profiler can be configured through environment variables:

import os

# Configure via environment variables. These are read when ddtrace is
# imported, so set them in the process environment (or from Python
# before the first ddtrace import) for them to take effect.
os.environ['DD_PROFILING_ENABLED'] = 'true'
os.environ['DD_SERVICE'] = 'my-python-service'
os.environ['DD_ENV'] = 'production'
os.environ['DD_VERSION'] = '1.2.3'

# The profiler picks up the environment configuration automatically
profiler = Profiler()
profiler.start()

Custom Profiling Configuration

# Advanced profiler configuration
profiler = Profiler(
    service="api-server",
    env="staging",
    version="1.0.0-beta",
    tags={
        "datacenter": "us-west-2",
        "instance_type": "c5.large",
        "deployment": "canary"
    }
)

# Custom profiler configuration
profiler.start(stop_on_exit=False)

# Manual control over profiler lifecycle
import time
while application_running():
    time.sleep(60)  # Run for 1 minute
    # Profiles are automatically uploaded periodically

profiler.stop(flush=True)

Integration with Tracing

Profiling data is automatically correlated with distributed traces when both profiling and tracing are enabled:

from ddtrace import tracer
from ddtrace.profiling import Profiler

# Enable both tracing and profiling
profiler = Profiler()
profiler.start()

# Traces and profiles are automatically correlated
with tracer.trace("expensive-operation") as span:
    span.set_tag("operation.type", "data-processing")
    
    # This operation will appear in both:
    # 1. The distributed trace (timing and metadata)
    # 2. The profiling data (CPU/memory usage details)
    cpu_intensive_operation()
    memory_intensive_operation()

Profiling in Production

Performance Impact

The profiler is designed for production use with minimal overhead:

# Production-ready profiler setup
profiler = Profiler(
    service="production-api",
    env="production",
    version="2.3.1"
)

# Start with production-optimized settings
profiler.start()

# Profiler overhead is typically <2% CPU and <1% memory
# Safe to run continuously in production

Error Handling

from ddtrace.profiling import Profiler

profiler = None
try:
    profiler = Profiler()
    profiler.start()
    
    # Application code
    run_application()
    
except Exception as e:
    # Profiler errors don't affect application execution
    print(f"Profiler error (non-fatal): {e}")
    
finally:
    # Always try to stop cleanly, but only if construction succeeded
    if profiler is not None:
        try:
            profiler.stop()
        except Exception:
            pass  # Ignore cleanup errors

Profiling Best Practices

Service Identification

# Use descriptive service names for multi-service applications.
# Construct only the profiler this process needs, then start it:
if is_web_server():
    profiler = Profiler(service="web-frontend")
elif is_api_server():
    profiler = Profiler(service="api-backend")
elif is_worker():
    profiler = Profiler(service="background-worker")

profiler.start()

Contextual Tagging

# Add context-specific tags for better profiling insights
profiler = Profiler(
    service="payment-processor",
    tags={
        "payment_provider": "stripe",
        "region": get_deployment_region(),
        "instance_id": get_instance_id(),
        "version": get_application_version()
    }
)
profiler.start()
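Tag values like these usually come from deployment metadata. A small, hypothetical helper (the `DEPLOY_REGION` variable and hostname lookup are assumptions; substitute whatever metadata source your environment actually exposes):

```python
import os
import socket

def build_profile_tags():
    # Hypothetical helper: DEPLOY_REGION and the hostname stand in for
    # whatever deployment metadata your environment provides
    return {
        "region": os.environ.get("DEPLOY_REGION", "unknown"),
        "host": socket.gethostname(),
    }

tags = build_profile_tags()
print(tags)
```

The resulting dict can then be passed as `tags=tags` when constructing the `Profiler`.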

Development vs Production

import os

# Different configurations for different environments
if os.environ.get('ENVIRONMENT') == 'development':
    # Development configuration
    profiler = Profiler(env="development")
else:
    # Standard production configuration
    profiler = Profiler(env="production")

profiler.start()
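One way to keep this branching in one place is a small helper that derives `Profiler()` arguments from the `DD_*` variables the library already honors. This is a sketch, not part of ddtrace; the defaults chosen here are assumptions:

```python
import os

def profiler_kwargs(environ=None):
    # Hypothetical helper mapping DD_* variables to Profiler() arguments,
    # falling back to development-friendly defaults when nothing is set
    environ = os.environ if environ is None else environ
    return {
        "service": environ.get("DD_SERVICE", "unnamed-service"),
        "env": environ.get("DD_ENV", "development"),
        "version": environ.get("DD_VERSION"),
    }

print(profiler_kwargs({"DD_ENV": "production", "DD_SERVICE": "api"}))
```

Application code could then construct the profiler with `Profiler(**profiler_kwargs())`.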

Profile Data Analysis

The collected profiling data appears in the Datadog UI and provides insights into:

  • CPU Hotspots: Functions consuming the most CPU time
  • Memory Allocation Patterns: Which functions allocate the most memory
  • Lock Contention: Threading bottlenecks and synchronization issues
  • I/O Wait Times: Blocking operations and external service dependencies
  • Garbage Collection Impact: Memory management overhead

This data is automatically correlated with:

  • Distributed traces (when tracing is enabled)
  • Application logs (when log correlation is configured)
  • Infrastructure metrics (when Datadog Agent is deployed)

Troubleshooting

If profiles are not appearing in the Datadog UI, first verify that the profiler starts and stops cleanly:

# Basic profiler lifecycle
profiler = Profiler()
profiler.start()

# Your application code here
run_application()

# Stop profiler for debugging
try:
    profiler.stop(flush=True)
    print("Profiler stopped and flushed successfully")
except Exception as e:
    print(f"Profiler stop failed: {e}")
