tessl/pypi-openinference-instrumentation

OpenInference instrumentation utilities for tracking application metadata such as sessions, users, and custom metadata using Python context managers


docs/configuration.md

Configuration & Control

Configuration utilities for controlling tracing behavior, data privacy settings, and instrumentation suppression in OpenInference instrumentation.

Capabilities

Trace Configuration

Central configuration class for controlling tracing configurations including data privacy settings and payload size limits.

from dataclasses import dataclass
from typing import Callable, Optional, Union

from opentelemetry.util.types import AttributeValue

@dataclass(frozen=True)
class TraceConfig:
    """
    Configuration for controlling tracing behavior and data privacy.
    
    Args:
        hide_llm_invocation_parameters (Optional[bool]): Hide LLM invocation parameters
        hide_inputs (Optional[bool]): Hide input values and messages  
        hide_outputs (Optional[bool]): Hide output values and messages
        hide_input_messages (Optional[bool]): Hide all input messages
        hide_output_messages (Optional[bool]): Hide all output messages
        hide_input_images (Optional[bool]): Hide images from input messages
        hide_input_text (Optional[bool]): Hide text from input messages
        hide_output_text (Optional[bool]): Hide text from output messages
        hide_embedding_vectors (Optional[bool]): Hide embedding vectors
        hide_prompts (Optional[bool]): Hide LLM prompts
        base64_image_max_length (Optional[int]): Limit characters in base64 image encodings
    """
    hide_llm_invocation_parameters: Optional[bool] = None
    hide_inputs: Optional[bool] = None  
    hide_outputs: Optional[bool] = None
    hide_input_messages: Optional[bool] = None
    hide_output_messages: Optional[bool] = None
    hide_input_images: Optional[bool] = None
    hide_input_text: Optional[bool] = None
    hide_output_text: Optional[bool] = None
    hide_embedding_vectors: Optional[bool] = None
    hide_prompts: Optional[bool] = None
    base64_image_max_length: Optional[int] = None
    
    def mask(
        self, 
        key: str, 
        value: Union[AttributeValue, Callable[[], AttributeValue]]
    ) -> Optional[AttributeValue]:
        """
        Apply masking to attribute values based on configuration.
        
        Args:
            key (str): The attribute key
            value (Union[AttributeValue, Callable[[], AttributeValue]]): The attribute value or value factory
            
        Returns:
            Optional[AttributeValue]: Masked value, None to exclude, or original value
        """

Usage Example:

from openinference.instrumentation import TraceConfig, TracerProvider

# Create configuration with privacy settings
config = TraceConfig(
    hide_llm_invocation_parameters=True,
    hide_inputs=True,
    hide_embedding_vectors=True,
    base64_image_max_length=1000
)

# Use with TracerProvider
tracer_provider = TracerProvider(config=config)
tracer = tracer_provider.get_tracer(__name__)

# Configuration can also read from environment variables
# OPENINFERENCE_HIDE_INPUTS=true
# OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS=true
config_from_env = TraceConfig()  # Will use env vars

Environment Variable Configuration

TraceConfig automatically reads from environment variables when values are not explicitly provided:

  • OPENINFERENCE_HIDE_LLM_INVOCATION_PARAMETERS
  • OPENINFERENCE_HIDE_INPUTS
  • OPENINFERENCE_HIDE_OUTPUTS
  • OPENINFERENCE_HIDE_INPUT_MESSAGES
  • OPENINFERENCE_HIDE_OUTPUT_MESSAGES
  • OPENINFERENCE_HIDE_INPUT_IMAGES
  • OPENINFERENCE_HIDE_INPUT_TEXT
  • OPENINFERENCE_HIDE_OUTPUT_TEXT
  • OPENINFERENCE_HIDE_EMBEDDING_VECTORS
  • OPENINFERENCE_HIDE_PROMPTS
  • OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH

Usage Example:

# Set environment variables in the shell before starting the process:
#   export OPENINFERENCE_HIDE_INPUTS=true
#   export OPENINFERENCE_BASE64_IMAGE_MAX_LENGTH=2000

# TraceConfig will automatically pick up these environment variables
config = TraceConfig()
print(config.hide_inputs)  # True (from environment)
print(config.base64_image_max_length)  # 2000 (from environment)
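Boolean environment variables of this kind are typically parsed case-insensitively from the strings "true"/"false". The sketch below illustrates that parsing pattern in plain Python; the helper name and exact semantics are assumptions for illustration, not the library's code:

```python
import os
from typing import Optional


def read_bool_env(name: str, default: Optional[bool] = None) -> Optional[bool]:
    """Parse a 'true'/'false' environment variable, case-insensitively."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() == "true"


os.environ["OPENINFERENCE_HIDE_INPUTS"] = "TRUE"
print(read_bool_env("OPENINFERENCE_HIDE_INPUTS"))  # True
```

Explicit constructor arguments would take precedence over these variables, so the environment only supplies defaults.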

Tracing Suppression

Context manager to pause OpenTelemetry instrumentation within a specific scope.

from types import TracebackType
from typing import Optional, Type

class suppress_tracing:
    """
    Context manager to pause OpenTelemetry instrumentation.
    No spans will be created within this context.
    """
    def __enter__(self) -> "suppress_tracing": ...
    def __exit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc_value: Optional[BaseException],
        traceback: Optional[TracebackType],
    ) -> None: ...
    async def __aenter__(self) -> "suppress_tracing": ...
    async def __aexit__(
        self,
        exc_type: Optional[Type[BaseException]],
        exc_value: Optional[BaseException],
        traceback: Optional[TracebackType],
    ) -> None: ...

Usage Example:

from openinference.instrumentation import suppress_tracing

# Synchronous usage
with suppress_tracing():
    # No tracing will occur within this block
    sensitive_database_operation()
    internal_cache_update()

# Asynchronous usage  
async with suppress_tracing():
    # No tracing for async operations either
    await sensitive_async_operation()

# To suppress tracing for a whole function, wrap its body
def internal_helper_function():
    with suppress_tracing():
        # This function's work won't be traced
        ...
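Suppression of this kind is commonly implemented with a context variable that instrumentation code checks before creating spans. The sketch below illustrates the pattern in plain Python; it is not the library's implementation, and `maybe_start_span` is a made-up stand-in for real instrumentation:

```python
from contextvars import ContextVar

# Context-local flag consulted by instrumentation before creating spans
_suppress: ContextVar[bool] = ContextVar("suppress_tracing", default=False)


class suppress_tracing_sketch:
    def __enter__(self) -> "suppress_tracing_sketch":
        self._token = _suppress.set(True)
        return self

    def __exit__(self, exc_type, exc_value, traceback) -> None:
        # Restore the previous value, even if the block raised
        _suppress.reset(self._token)


def maybe_start_span(name: str):
    """Stand-in for instrumentation: skip span creation when suppressed."""
    if _suppress.get():
        return None
    return f"span:{name}"


print(maybe_start_span("outside"))  # span:outside
with suppress_tracing_sketch():
    print(maybe_start_span("inside"))  # None
```

Because the flag lives in a `ContextVar`, the suppression is scoped to the current execution context and composes correctly with async code.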

Redacted Value Constant

Constant value used when data is redacted due to privacy settings.

REDACTED_VALUE: str = "__REDACTED__"

Usage Example:

from openinference.instrumentation import REDACTED_VALUE, TraceConfig

config = TraceConfig(hide_inputs=True)

# When inputs are hidden, they will show as REDACTED_VALUE
# span.attributes["input.value"] = "__REDACTED__"

Data Privacy Features

The TraceConfig system provides comprehensive data privacy controls:

Input/Output Privacy

  • hide_inputs: Hides all input values and MIME types
  • hide_outputs: Hides all output values and MIME types
  • hide_input_messages: Hides all LLM input messages
  • hide_output_messages: Hides all LLM output messages

Content-Specific Privacy

  • hide_input_text: Hides text content from input messages only
  • hide_output_text: Hides text content from output messages only
  • hide_input_images: Hides image content from input messages
  • hide_embedding_vectors: Hides vector data from embeddings

System Privacy

  • hide_llm_invocation_parameters: Hides LLM configuration parameters
  • hide_prompts: Hides LLM prompt content
  • base64_image_max_length: Limits size of base64-encoded images

Masking Behavior

When privacy settings are enabled:

  1. Exclusion: Some attributes are completely excluded (return None)
  2. Redaction: Some attributes are replaced with REDACTED_VALUE
  3. Truncation: Long base64 images are truncated to the specified length

The mask() method applies these transformations consistently across all attribute generation functions.
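The three behaviors above can be sketched as a simplified masking function. This is an illustration only; the real mask() method consults the full configuration and the OpenInference attribute conventions, and the exact keys handled here are assumptions:

```python
from typing import Optional

REDACTED_VALUE = "__REDACTED__"


def mask_value(
    key: str,
    value: str,
    hide_inputs: bool = False,
    base64_image_max_length: Optional[int] = None,
) -> Optional[str]:
    # Redaction: replace hidden input values with the sentinel
    if hide_inputs and key == "input.value":
        return REDACTED_VALUE
    # Exclusion: drop input MIME types entirely when inputs are hidden
    if hide_inputs and key == "input.mime_type":
        return None
    # Truncation: cap oversized base64 image payloads
    if base64_image_max_length is not None and key.endswith("image.url"):
        return value[:base64_image_max_length]
    return value  # pass through unchanged


print(mask_value("input.value", "secret prompt", hide_inputs=True))  # __REDACTED__
```

Callers then either drop the attribute (on None) or record the returned value, so one code path handles exclusion, redaction, and truncation uniformly.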

Install with Tessl CLI

npx tessl i tessl/pypi-openinference-instrumentation
