Spring AI Chat Observation Auto-Configuration

Spring Boot auto-configuration module that provides observability for Spring AI chat model operations through Micrometer metrics collection and distributed tracing integration.

Package Information

  • Package Name: spring-ai-autoconfigure-model-chat-observation
  • Package Type: Maven
  • Maven Coordinates: org.springframework.ai:spring-ai-autoconfigure-model-chat-observation:1.1.2
  • Language: Java
  • Java Version: 17+
  • License: Apache-2.0
  • Installation: Add Maven dependency to your Spring Boot project

Overview

This auto-configuration module automatically configures observation handlers for monitoring and tracing Spring AI chat model interactions when added to a Spring Boot application. It seamlessly integrates with Spring Boot Actuator and Micrometer to provide:

  • Meter-based metrics collection for chat operations
  • Optional distributed tracing through Micrometer Tracing
  • Configurable logging of prompts, completions, and errors
  • Security-conscious defaults with warnings for sensitive data exposure

The module activates automatically when ChatModel is on the classpath and adapts its configuration based on available dependencies (MeterRegistry, Tracer).
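The classpath-based activation works like Spring's @ConditionalOnClass: try to load the class and react to the result. A minimal plain-Java sketch of that check (not Spring's actual implementation):

```java
public class ClasspathCheck {

    // Approximates the presence test behind @ConditionalOnClass:
    // attempt to load the class without initializing it.
    static boolean isPresent(String className) {
        try {
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException ex) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isPresent("java.util.List")); // true
        // True only when the spring-ai-client-chat jar is on the classpath:
        System.out.println(isPresent("org.springframework.ai.chat.model.ChatModel"));
    }
}
```

Because the check happens per condition at startup, adding or removing a single jar is enough to switch the module's behavior.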

Architecture

This module leverages Spring Boot's auto-configuration mechanism to provide zero-configuration observability for Spring AI chat operations. Understanding its architecture explains how it integrates into Spring Boot applications.

Auto-Configuration Discovery

Spring Boot automatically discovers this module through the standard auto-configuration mechanism:

  1. META-INF Registration: The module registers itself in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports:

    org.springframework.ai.model.chat.observation.autoconfigure.ChatObservationAutoConfiguration
  2. Conditional Activation: The auto-configuration activates only when ChatModel.class is on the classpath (from the spring-ai-client-chat module), and it is ordered to run after ObservationAutoConfiguration from Spring Boot Actuator.
  3. No Explicit Configuration Required: Developers simply add the Maven dependency; Spring Boot handles the rest.

Conditional Bean Registration Strategy

The module implements a two-path registration strategy based on whether distributed tracing is present:

Path 1: With Micrometer Tracing (Tracer Available)

When the io.micrometer.tracing.Tracer class is on the classpath and a Tracer bean is present:

ChatObservationAutoConfiguration
├── ChatModelMeterObservationHandler (always, if MeterRegistry present)
└── TracerPresentObservationConfiguration
    ├── TracingAwareLoggingObservationHandler<ChatModelObservationContext> (for prompts, if enabled)
    ├── TracingAwareLoggingObservationHandler<ChatModelObservationContext> (for completions, if enabled)
    └── ErrorLoggingObservationHandler (if enabled)

Benefits: Logs include trace IDs and span IDs for correlation in distributed systems.

Path 2: Without Micrometer Tracing (Tracer Not Available)

When io.micrometer.tracing.Tracer class is not on the classpath:

ChatObservationAutoConfiguration
├── ChatModelMeterObservationHandler (always, if MeterRegistry present)
└── TracerNotPresentObservationConfiguration
    ├── ChatModelPromptContentObservationHandler (for prompts, if enabled)
    └── ChatModelCompletionObservationHandler (for completions, if enabled)

Benefits: Simpler logging without tracing overhead for non-distributed applications.
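The two paths above amount to a conditional composition of handlers. A plain-Java sketch of the selection logic, using handler names as stand-ins for the real Spring beans:

```java
import java.util.ArrayList;
import java.util.List;

public class HandlerSelection {

    // Mirrors the conditional wiring: tracer availability chooses the path,
    // the log-prompt / log-completion flags choose the handlers within it.
    static List<String> select(boolean tracerPresent, boolean logPrompt, boolean logCompletion) {
        List<String> handlers = new ArrayList<>();
        handlers.add("ChatModelMeterObservationHandler"); // always, when a MeterRegistry is present
        if (tracerPresent) {
            if (logPrompt)     handlers.add("TracingAwareLoggingObservationHandler[prompt]");
            if (logCompletion) handlers.add("TracingAwareLoggingObservationHandler[completion]");
        } else {
            if (logPrompt)     handlers.add("ChatModelPromptContentObservationHandler");
            if (logCompletion) handlers.add("ChatModelCompletionObservationHandler");
        }
        return handlers;
    }

    public static void main(String[] args) {
        System.out.println(select(true, true, false));
        System.out.println(select(false, true, true));
    }
}
```

With all flags off, only the metrics handler remains, which is exactly the default configuration.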

Bean Creation Hierarchy

The module creates beans in this order:

  1. ChatObservationProperties: Configuration properties bound from spring.ai.chat.observations.*
  2. ChatModelMeterObservationHandler: Metrics collection (requires MeterRegistry)
  3. Logging Handlers (conditionally based on properties and tracer availability):
    • Prompt logging handler
    • Completion logging handler
    • Error logging handler (only with tracer)

All beans use @ConditionalOnMissingBean, allowing custom implementations to override defaults.

Integration Points

With Spring Boot Actuator

Spring Boot Application
└── Spring Boot Actuator
    ├── ObservationRegistry (core observation infrastructure)
    ├── MeterRegistry (metrics collection)
    └── Observation Handlers
        └── ChatModelMeterObservationHandler (registered by this module)
            └── Collects metrics to MeterRegistry
                └── Exposed via /actuator/metrics endpoints

With Micrometer Tracing

Spring Boot Application
└── Micrometer Tracing
    ├── Tracer (trace context propagation)
    └── Observation Handlers
        ├── TracingAwareLoggingObservationHandler (prompts)
        ├── TracingAwareLoggingObservationHandler (completions)
        └── ErrorLoggingObservationHandler
            └── All logs include trace/span IDs

With Spring AI Chat Models

ChatModel Implementation (e.g., OpenAiChatModel)
└── Executes chat operations
    └── Creates ChatModelObservationContext
        └── Triggers all registered ObservationHandlers
            ├── ChatModelMeterObservationHandler → Metrics
            ├── Prompt logging handler → Logs (if enabled)
            ├── Completion logging handler → Logs (if enabled)
            └── Error logging handler → Error logs (if enabled)
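The fan-out shown above (one observation context notifying every registered handler) can be sketched with minimal hypothetical types, not the real Spring AI classes:

```java
import java.util.ArrayList;
import java.util.List;

public class ObservationFanOut {

    // Hypothetical stand-ins for ChatModelObservationContext and the handler contract.
    record ChatContext(String operation) {}

    interface Handler {
        void onStop(ChatContext context);
    }

    // One completed chat operation notifies every registered handler in turn.
    static List<String> dispatch(String operation) {
        List<String> log = new ArrayList<>();
        List<Handler> handlers = List.of(
            ctx -> log.add("metrics:" + ctx.operation()),        // meter handler
            ctx -> log.add("prompt-log:" + ctx.operation()),     // prompt logging (if enabled)
            ctx -> log.add("completion-log:" + ctx.operation())  // completion logging (if enabled)
        );
        ChatContext ctx = new ChatContext(operation);
        handlers.forEach(h -> h.onStop(ctx));
        return log;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("chat")); // [metrics:chat, prompt-log:chat, completion-log:chat]
    }
}
```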

Design Principles

  1. Zero Configuration: Works out-of-the-box with sensible defaults
  2. Conditional Composition: Adapts to available dependencies
  3. Security First: All sensitive logging disabled by default
  4. Extensibility: All beans can be overridden with custom implementations
  5. Non-Intrusive: Leverages Spring's observation abstraction - no code changes needed

Configuration Properties Flow

application.properties
├── spring.ai.chat.observations.log-prompt=true
├── spring.ai.chat.observations.log-completion=true
└── spring.ai.chat.observations.include-error-logging=true
    ↓
ChatObservationProperties (bound by @ConfigurationProperties)
    ↓
@ConditionalOnProperty annotations on bean methods
    ↓
Selective bean creation based on property values
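Spring Boot's relaxed binding is what maps the kebab-case keys (log-prompt) to the camelCase fields (logPrompt). A minimal sketch of that name mapping for this simple case:

```java
public class RelaxedBinding {

    // Approximates Spring Boot's relaxed binding for the simple case of
    // mapping a kebab-case property name onto a camelCase field name.
    static String toCamelCase(String kebab) {
        StringBuilder result = new StringBuilder();
        boolean upperNext = false;
        for (char c : kebab.toCharArray()) {
            if (c == '-') {
                upperNext = true;
            } else {
                result.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamelCase("log-prompt"));            // logPrompt
        System.out.println(toCamelCase("include-error-logging")); // includeErrorLogging
    }
}
```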

Maven Dependency

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-autoconfigure-model-chat-observation</artifactId>
    <version>1.1.2</version>
</dependency>

Required Dependencies (typically provided by Spring Boot starters):

<!-- Core Spring AI chat client -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-client-chat</artifactId>
</dependency>

<!-- Spring Boot Actuator for observability infrastructure -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Optional Dependencies (enables additional features):

<!-- For distributed tracing support -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing</artifactId>
</dependency>

Core Imports

This module primarily works through Spring Boot auto-configuration and does not require explicit imports in most cases. However, for programmatic access to configuration or custom handlers:

// For programmatic access to observation properties
import org.springframework.ai.model.chat.observation.autoconfigure.ChatObservationProperties;

// For creating custom observation handlers (from spring-ai-client-chat module)
import org.springframework.ai.chat.observation.ChatModelMeterObservationHandler;
import org.springframework.ai.chat.observation.ChatModelPromptContentObservationHandler;
import org.springframework.ai.chat.observation.ChatModelCompletionObservationHandler;
import org.springframework.ai.model.observation.ErrorLoggingObservationHandler;
import org.springframework.ai.observation.TracingAwareLoggingObservationHandler;

Note: The auto-configuration class (ChatObservationAutoConfiguration) is automatically discovered and applied by Spring Boot. You do not need to import or reference it directly unless creating custom configurations.

Basic Usage

This module works through Spring Boot auto-configuration. Simply add the dependency and configure properties as needed.

Default Configuration (Metrics Only)

With just the module on the classpath and Spring Boot Actuator available:

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyAiApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyAiApplication.class, args);
    }
}

The auto-configuration will automatically register:

  • ChatModelMeterObservationHandler for metrics collection (if MeterRegistry is present)
  • Observation infrastructure for chat model operations

No explicit configuration required - metrics are collected automatically.

Enabling Prompt and Completion Logging

Security Warning: These settings may expose sensitive information in logs and traces. Use with caution.

# application.properties

# Enable logging of prompt content (security warning issued at startup)
spring.ai.chat.observations.log-prompt=true

# Enable logging of completion content (security warning issued at startup)
spring.ai.chat.observations.log-completion=true

# Enable error logging across multiple model contexts
spring.ai.chat.observations.include-error-logging=true

Or in YAML:

# application.yml
spring:
  ai:
    chat:
      observations:
        log-prompt: true        # Default: false
        log-completion: true    # Default: false
        include-error-logging: true  # Default: false

With Distributed Tracing

When micrometer-tracing is on the classpath and a Tracer bean is configured:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>

The module automatically registers tracing-aware observation handlers:

  • TracingAwareLoggingObservationHandler for prompt content (when enabled)
  • TracingAwareLoggingObservationHandler for completion content (when enabled)
  • ErrorLoggingObservationHandler for errors (when enabled)

Capabilities

Auto-Configuration

The module uses Spring Boot's conditional auto-configuration to adapt to the runtime environment.

ChatObservationAutoConfiguration

package org.springframework.ai.model.chat.observation.autoconfigure;

@AutoConfiguration(afterName = "org.springframework.boot.actuate.autoconfigure.observation.ObservationAutoConfiguration")
@ConditionalOnClass(ChatModel.class)
@EnableConfigurationProperties(ChatObservationProperties.class)
public class ChatObservationAutoConfiguration {
    // Bean definitions (see below)
}

Activation Conditions:

  • Runs after ObservationAutoConfiguration
  • Requires ChatModel.class on classpath
  • Enables ChatObservationProperties configuration binding

Bean Definitions:

Meter Observation Handler
@Bean
@ConditionalOnMissingBean
@ConditionalOnBean(MeterRegistry.class)
ChatModelMeterObservationHandler chatModelMeterObservationHandler(ObjectProvider<MeterRegistry> meterRegistry);

Creates a meter-based observation handler for collecting metrics about chat model operations (execution time, token usage, etc.). Only created when:

  • No existing ChatModelMeterObservationHandler bean
  • MeterRegistry bean is available

Tracing-Aware Prompt Logging Handler
@Bean
@ConditionalOnMissingBean(value = ChatModelPromptContentObservationHandler.class,
                         name = "chatModelPromptContentObservationHandler")
@ConditionalOnProperty(prefix = "spring.ai.chat.observations",
                      name = "log-prompt",
                      havingValue = "true")
TracingAwareLoggingObservationHandler<ChatModelObservationContext>
    chatModelPromptContentObservationHandler(Tracer tracer);

Available when Tracer is present. Wraps prompt logging with distributed tracing context. Only created when:

  • Property spring.ai.chat.observations.log-prompt=true
  • No existing bean with matching type or name
  • Tracer class and bean are available

Security: Logs warning at startup about potential sensitive data exposure.

Tracing-Aware Completion Logging Handler
@Bean
@ConditionalOnMissingBean(value = ChatModelCompletionObservationHandler.class,
                         name = "chatModelCompletionObservationHandler")
@ConditionalOnProperty(prefix = "spring.ai.chat.observations",
                      name = "log-completion",
                      havingValue = "true")
TracingAwareLoggingObservationHandler<ChatModelObservationContext>
    chatModelCompletionObservationHandler(Tracer tracer);

Available when Tracer is present. Wraps completion logging with distributed tracing context. Only created when:

  • Property spring.ai.chat.observations.log-completion=true
  • No existing bean with matching type or name
  • Tracer class and bean are available

Security: Logs warning at startup about potential sensitive data exposure.

Error Logging Handler
@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.ai.chat.observations",
                      name = "include-error-logging",
                      havingValue = "true")
ErrorLoggingObservationHandler errorLoggingObservationHandler(Tracer tracer);

Available when Tracer is present. Logs errors across multiple observation context types. Only created when:

  • Property spring.ai.chat.observations.include-error-logging=true
  • No existing ErrorLoggingObservationHandler bean
  • Tracer class and bean are available

Supported Context Types:

  • EmbeddingModelObservationContext - Embedding model operations
  • ImageModelObservationContext - Image model operations
  • ChatModelObservationContext - Chat model operations
  • ChatClientObservationContext - Chat client operations
  • AdvisorObservationContext - Advisor operations

Simple Prompt Logging Handler (No Tracing)
@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.ai.chat.observations",
                      name = "log-prompt",
                      havingValue = "true")
ChatModelPromptContentObservationHandler chatModelPromptContentObservationHandler();

Available when Tracer is NOT present. Provides basic prompt logging without tracing. Only created when:

  • Property spring.ai.chat.observations.log-prompt=true
  • No existing ChatModelPromptContentObservationHandler bean
  • Tracer class is NOT on classpath

Security: Logs warning at startup about potential sensitive data exposure.

Simple Completion Logging Handler (No Tracing)
@Bean
@ConditionalOnMissingBean
@ConditionalOnProperty(prefix = "spring.ai.chat.observations",
                      name = "log-completion",
                      havingValue = "true")
ChatModelCompletionObservationHandler chatModelCompletionObservationHandler();

Available when Tracer is NOT present. Provides basic completion logging without tracing. Only created when:

  • Property spring.ai.chat.observations.log-completion=true
  • No existing ChatModelCompletionObservationHandler bean
  • Tracer class is NOT on classpath

Security: Logs warning at startup about potential sensitive data exposure.

Configuration Properties

All observation behavior is controlled through configuration properties.

ChatObservationProperties

package org.springframework.ai.model.chat.observation.autoconfigure;

@ConfigurationProperties("spring.ai.chat.observations")
public class ChatObservationProperties {

    public static final String CONFIG_PREFIX = "spring.ai.chat.observations";

    private boolean logCompletion = false;
    private boolean logPrompt = false;
    private boolean includeErrorLogging = false;

    // Getters and setters
    public boolean isLogCompletion();
    public void setLogCompletion(boolean logCompletion);

    public boolean isLogPrompt();
    public void setLogPrompt(boolean logPrompt);

    public boolean isIncludeErrorLogging();
    public void setIncludeErrorLogging(boolean includeErrorLogging);
}

Property: logCompletion

  • Type: boolean
  • Default: false
  • Property Key: spring.ai.chat.observations.log-completion
  • Description: Whether to log completion content in observations
  • Security Impact: When enabled, completion content (AI responses) will be logged, potentially exposing sensitive information
  • Methods: isLogCompletion(), setLogCompletion(boolean)

Property: logPrompt

  • Type: boolean
  • Default: false
  • Property Key: spring.ai.chat.observations.log-prompt
  • Description: Whether to log prompt content in observations
  • Security Impact: When enabled, prompt content (user inputs) will be logged, potentially exposing sensitive information
  • Methods: isLogPrompt(), setLogPrompt(boolean)

Property: includeErrorLogging

  • Type: boolean
  • Default: false
  • Property Key: spring.ai.chat.observations.include-error-logging
  • Description: Whether to include error logging in observations across multiple model contexts
  • Methods: isIncludeErrorLogging(), setIncludeErrorLogging(boolean)
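All three flags default to false. A plain-Java mirror of the properties class (a sketch, not the real autoconfigure class) makes the security-first defaults explicit:

```java
public class ObservationDefaults {

    // Plain mirror of ChatObservationProperties with its documented defaults.
    static class Props {
        boolean logPrompt = false;           // spring.ai.chat.observations.log-prompt
        boolean logCompletion = false;       // spring.ai.chat.observations.log-completion
        boolean includeErrorLogging = false; // spring.ai.chat.observations.include-error-logging
    }

    public static void main(String[] args) {
        Props props = new Props();
        // Nothing sensitive is logged unless a flag is explicitly enabled.
        System.out.println(props.logPrompt || props.logCompletion || props.includeErrorLogging); // false
    }
}
```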

Package Annotations

The module enforces null-safety at the package level:

@NonNullApi
@NonNullFields
package org.springframework.ai.model.chat.observation.autoconfigure;

  • @NonNullApi: All methods and parameters are non-null by default unless explicitly annotated with @Nullable
  • @NonNullFields: All fields are non-null by default unless explicitly annotated with @Nullable

Usage Scenarios

Scenario 1: Basic Metrics Collection

Default setup with metrics only (no sensitive data logging):

# No configuration needed - metrics collection is automatic

Automatically collects:

  • Chat operation execution time
  • Token usage metrics
  • Operation counts

Access metrics via Spring Boot Actuator endpoints:

GET /actuator/metrics/gen.ai.client.operation
GET /actuator/metrics/gen.ai.client.token.usage

Scenario 2: Development Environment with Full Logging

Enable all logging for debugging (development only):

# application-dev.properties
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true
spring.ai.chat.observations.include-error-logging=true

Warning Messages logged at startup:

WARN ... You have enabled logging out the prompt content with the risk of exposing
         sensitive or private information. Please, be careful!
WARN ... You have enabled logging out the completion content with the risk of exposing
         sensitive or private information. Please, be careful!

Scenario 3: Production with Distributed Tracing

Production setup with tracing but no content logging:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>

# application-prod.properties
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
spring.ai.chat.observations.include-error-logging=true

Provides:

  • Distributed trace context propagation
  • Error logging with trace IDs
  • No sensitive content in logs

Scenario 4: Custom Observation Handler

Override default handlers with custom implementations:

import org.springframework.ai.chat.observation.ChatModelMeterObservationHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import io.micrometer.core.instrument.MeterRegistry;

@Configuration
public class CustomObservationConfig {

    @Bean
    public ChatModelMeterObservationHandler chatModelMeterObservationHandler(
            MeterRegistry meterRegistry) {
        // Custom meter handler with additional tags or behavior
        return new ChatModelMeterObservationHandler(meterRegistry) {
            // Override methods to customize behavior
        };
    }
}

The auto-configuration respects @ConditionalOnMissingBean, so your custom bean takes precedence.

Scenario 5: Programmatic Configuration Access

Access configuration properties programmatically:

import org.springframework.ai.model.chat.observation.autoconfigure.ChatObservationProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ObservationStatusChecker {

    @Autowired
    private ChatObservationProperties properties;

    public void checkObservationSettings() {
        if (properties.isLogPrompt()) {
            // Prompt logging is enabled
            System.out.println("Prompt logging: ENABLED (security risk)");
        }

        if (properties.isLogCompletion()) {
            // Completion logging is enabled
            System.out.println("Completion logging: ENABLED (security risk)");
        }

        if (properties.isIncludeErrorLogging()) {
            // Error logging is enabled
            System.out.println("Error logging: ENABLED");
        }
    }
}

Scenario 6: Custom Logging Handler with Filtering

Implement a custom logging handler that filters sensitive data:

import org.springframework.ai.chat.observation.ChatModelPromptContentObservationHandler;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import io.micrometer.observation.Observation;

@Configuration
public class FilteredLoggingConfig {

    @Bean
    public ChatModelPromptContentObservationHandler chatModelPromptContentObservationHandler() {
        return new ChatModelPromptContentObservationHandler() {
            @Override
            public void onStart(ChatModelObservationContext context) {
                // Filter or redact sensitive information before logging
                String prompt = extractPromptContent(context);
                String filtered = redactSensitiveData(prompt);
                logFiltered(filtered);
            }

            private String extractPromptContent(ChatModelObservationContext context) {
                if (context.getRequest() != null && 
                    context.getRequest().getInstructions() != null) {
                    return context.getRequest().getInstructions().toString();
                }
                return "";
            }

            private String redactSensitiveData(String content) {
                // Implement filtering logic (e.g., regex patterns for emails, SSNs, etc.)
                // Note: the character class is [A-Za-z]; a literal '|' inside it would also match pipes
                return content.replaceAll("\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\\b", "[EMAIL]")
                             .replaceAll("\\b\\d{3}-\\d{2}-\\d{4}\\b", "[SSN]");
            }

            private void logFiltered(String content) {
                // Custom logging implementation
            }
        };
    }
}
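The redaction patterns used in the handler above can be exercised in isolation. The sketch below reuses the same two regexes (emails and US-style SSNs) outside of Spring:

```java
public class RedactionDemo {

    // Same redaction rules as the handler sketch above: mask emails and SSNs.
    static String redact(String content) {
        return content
            .replaceAll("\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\\b", "[EMAIL]")
            .replaceAll("\\b\\d{3}-\\d{2}-\\d{4}\\b", "[SSN]");
    }

    public static void main(String[] args) {
        String prompt = "Contact alice@example.com, SSN 123-45-6789";
        System.out.println(redact(prompt)); // Contact [EMAIL], SSN [SSN]
    }
}
```

Keeping the redaction logic in a plain method like this makes it unit-testable independently of the observation infrastructure.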

Scenario 7: Metrics-Based Alerting Configuration

Configure custom meters with additional tags for alerting:

import org.springframework.ai.chat.observation.ChatModelMeterObservationHandler;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Tag;
import io.micrometer.core.instrument.Tags;

@Configuration
public class CustomMetricsConfig {

    @Bean
    public ChatModelMeterObservationHandler chatModelMeterObservationHandler(
            MeterRegistry meterRegistry) {
        return new ChatModelMeterObservationHandler(meterRegistry) {
            @Override
            public void onStop(ChatModelObservationContext context) {
                super.onStop(context);
                
                // Add custom metrics with additional tags
                if (context.getResponse() != null) {
                    Tags tags = Tags.of(
                        Tag.of("model", context.getModelName()),
                        Tag.of("provider", context.getModelProvider()),
                        Tag.of("status", "success")
                    );
                    
                    meterRegistry.counter("ai.chat.requests", tags).increment();
                }
            }

            @Override
            public void onError(ChatModelObservationContext context) {
                super.onError(context);
                
                // Track errors separately
                Tags tags = Tags.of(
                    Tag.of("model", context.getModelName()),
                    Tag.of("provider", context.getModelProvider()),
                    Tag.of("status", "error")
                );
                
                meterRegistry.counter("ai.chat.requests", tags).increment();
            }
        };
    }
}

Scenario 8: Conditional Logging Based on Environment

Enable different logging levels based on Spring profiles:

import org.springframework.ai.chat.observation.ChatModelPromptContentObservationHandler;
import org.springframework.ai.chat.observation.ChatModelCompletionObservationHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
public class ProfileBasedObservationConfig {

    @Bean
    @Profile("dev")
    public ChatModelPromptContentObservationHandler devPromptObservationHandler() {
        // Full logging in development
        return new ChatModelPromptContentObservationHandler();
    }

    @Bean
    @Profile("dev")
    public ChatModelCompletionObservationHandler devCompletionObservationHandler() {
        // Full logging in development
        return new ChatModelCompletionObservationHandler();
    }

    // No logging beans for production profile
    // Metrics-only configuration will be used
}

Corresponding application properties:

# application-dev.properties (used when spring.profiles.active=dev)
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true

# application-prod.properties (used when spring.profiles.active=prod)
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false

Note: spring.profiles.active cannot be set inside a profile-specific file; activate the profile via the base application.properties, an environment variable, or a command-line argument.

Configuration Examples

Complete application.properties Example

# Chat Observation Configuration
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
spring.ai.chat.observations.include-error-logging=true

# Actuator configuration for metrics exposure
management.endpoints.web.exposure.include=health,info,metrics,prometheus
management.prometheus.metrics.export.enabled=true

# Tracing configuration (if using Zipkin)
management.tracing.sampling.probability=1.0
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans

Complete application.yml Example

spring:
  ai:
    chat:
      observations:
        log-prompt: false           # Default: false
        log-completion: false       # Default: false
        include-error-logging: true # Default: false

management:
  endpoints:
    web:
      exposure:
        include:
          - health
          - info
          - metrics
          - prometheus
  prometheus:
    metrics:
      export:
        enabled: true
  tracing:
    sampling:
      probability: 1.0
  zipkin:
    tracing:
      endpoint: http://localhost:9411/api/v2/spans

Profile-Specific Configuration

Different settings for different environments:

# application.properties (common defaults)
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
spring.ai.chat.observations.include-error-logging=false

# application-dev.properties (development)
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true
spring.ai.chat.observations.include-error-logging=true

# application-prod.properties (production)
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
spring.ai.chat.observations.include-error-logging=true

Multi-Environment Configuration with Kubernetes ConfigMap

# configmap.yaml for Kubernetes deployment
apiVersion: v1
kind: ConfigMap
metadata:
  name: ai-app-config
  namespace: production
data:
  application.properties: |
    # Production observability settings
    spring.ai.chat.observations.log-prompt=false
    spring.ai.chat.observations.log-completion=false
    spring.ai.chat.observations.include-error-logging=true
    
    # Metrics export to Prometheus
    management.endpoints.web.exposure.include=health,metrics,prometheus
    management.prometheus.metrics.export.enabled=true
    
    # Distributed tracing to Jaeger
    management.tracing.sampling.probability=0.1
    management.otlp.tracing.endpoint=http://jaeger-collector:4318/v1/traces

Observable Metrics

When the module is active, the following metrics are automatically collected (provided by Spring AI core):

Meter-Based Metrics

gen.ai.client.operation

  • Type: Timer
  • Description: Measures execution time of chat model operations
  • Tags: model provider, operation name, model name, etc.
  • Unit: Seconds
  • Usage: Query execution time statistics for performance monitoring

gen.ai.client.token.usage

  • Type: Counter
  • Description: Tracks token consumption (input, output, total)
  • Tags: token type (input/output/total), model provider, model name
  • Unit: Tokens
  • Usage: Monitor token consumption for cost tracking and rate limiting
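Because token usage is exposed as counters, cost estimation is a simple weighted sum over the counter values. A sketch with placeholder prices (the per-1K-token rates below are illustrative, not real provider rates):

```java
public class TokenCostEstimator {

    // Estimates spend from the input/output token counters.
    // Prices are expressed per 1,000 tokens, as most providers quote them.
    static double estimateCost(double inputTokens, double outputTokens,
                               double inputPricePer1k, double outputPricePer1k) {
        return (inputTokens / 1000.0) * inputPricePer1k
             + (outputTokens / 1000.0) * outputPricePer1k;
    }

    public static void main(String[] args) {
        // 50k input tokens at $0.50/1k plus 10k output tokens at $1.50/1k
        double cost = estimateCost(50_000, 10_000, 0.50, 1.50);
        System.out.println(cost); // 40.0
    }
}
```

In practice the token counts would come from querying the counters in the MeterRegistry, split by their token-type tag.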

Prometheus Format Examples

When exported to Prometheus:

gen_ai_client_operation_seconds_count{...}
gen_ai_client_operation_seconds_sum{...}
gen_ai_client_operation_seconds_max{...}
gen_ai_client_token_usage_total{token_type="input",...}
gen_ai_client_token_usage_total{token_type="output",...}

Accessing Metrics Programmatically

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class MetricsAnalyzer {

    @Autowired
    private MeterRegistry meterRegistry;

    public void analyzeOperationMetrics() {
        Timer timer = meterRegistry.find("gen.ai.client.operation").timer();
        
        if (timer != null) {
            long count = timer.count();
            double meanMs = timer.mean(java.util.concurrent.TimeUnit.MILLISECONDS);
            double maxMs = timer.max(java.util.concurrent.TimeUnit.MILLISECONDS);
            
            System.out.println("Total operations: " + count);
            System.out.println("Average latency: " + meanMs + " ms");
            System.out.println("Max latency: " + maxMs + " ms");
        }
    }

    public void analyzeTokenUsage() {
        meterRegistry.find("gen.ai.client.token.usage")
            .counters()
            .forEach(counter -> {
                String tokenType = counter.getId().getTag("token_type");
                double totalTokens = counter.count();
                System.out.println("Token type: " + tokenType + 
                                   ", Total: " + totalTokens);
            });
    }
}

Observation Context Types

The error logging handler supports multiple observation context types from Spring AI:

ChatModelObservationContext

  • Observes direct chat model operations
  • Captured by chat model implementations

ChatClientObservationContext

  • Observes ChatClient API operations
  • Captured when using fluent ChatClient interface

AdvisorObservationContext

  • Observes advisor execution in chat client
  • Tracks advisor chain operations

EmbeddingModelObservationContext

  • Observes embedding model operations
  • Included in error logging support

ImageModelObservationContext

  • Observes image generation operations
  • Included in error logging support

Integration with Spring Boot Actuator

The module integrates seamlessly with Spring Boot Actuator:

Actuator Dependencies

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
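Note that by default Spring Boot exposes only the health endpoint over HTTP, and the Prometheus endpoint additionally requires the micrometer-registry-prometheus dependency on the classpath. A minimal exposure configuration (assuming the default management port) might look like:

```properties
# Expose the metrics and prometheus actuator endpoints over HTTP
management.endpoints.web.exposure.include=health,metrics,prometheus
```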

Exposed Endpoints

Metrics Endpoint: /actuator/metrics

Query specific metrics:

# Get chat operation metrics
curl http://localhost:8080/actuator/metrics/gen.ai.client.operation

# Get token usage metrics
curl http://localhost:8080/actuator/metrics/gen.ai.client.token.usage

Prometheus Endpoint: /actuator/prometheus

# Get all metrics in Prometheus format
curl http://localhost:8080/actuator/prometheus | grep gen_ai
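Meter names are dotted in Micrometer, but the Prometheus registry rewrites them by replacing non-alphanumeric characters with underscores — which is why the grep above matches gen_ai rather than gen.ai. A simplified sketch of that mapping (the real naming convention also handles unit and meter-type suffixes; the class and method names here are illustrative):

```java
public class PrometheusNameMapping {

    // Simplified version of Micrometer's Prometheus naming convention:
    // dots and other non-alphanumerics become underscores.
    static String toPrometheusName(String meterName) {
        return meterName.replaceAll("[^a-zA-Z0-9_]", "_");
    }

    public static void main(String[] args) {
        System.out.println(toPrometheusName("gen.ai.client.operation"));   // gen_ai_client_operation
        System.out.println(toPrometheusName("gen.ai.client.token.usage")); // gen_ai_client_token_usage
    }
}
```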

Health Indicators

While this module doesn't provide health indicators directly, the observation data can be used by custom health indicators:

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Component
public class ChatModelHealthIndicator implements HealthIndicator {

    private final MeterRegistry meterRegistry;

    public ChatModelHealthIndicator(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Override
    public Health health() {
        Timer timer = meterRegistry.find("gen.ai.client.operation").timer();
        if (timer != null && timer.count() > 0) {
            double avgLatencyMs = timer.mean(java.util.concurrent.TimeUnit.MILLISECONDS);
            if (avgLatencyMs < 1000) {
                return Health.up()
                    .withDetail("avgLatencyMs", avgLatencyMs)
                    .build();
            } else {
                return Health.status("DEGRADED")
                    .withDetail("avgLatencyMs", avgLatencyMs)
                    .withDetail("message", "High latency detected")
                    .build();
            }
        }
        return Health.unknown().build();
    }
}

Advanced Health Indicator with Token Usage Monitoring

import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

@Component
public class AdvancedChatModelHealthIndicator implements HealthIndicator {

    private final MeterRegistry meterRegistry;
    private static final double LATENCY_THRESHOLD_MS = 2000.0;

    public AdvancedChatModelHealthIndicator(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @Override
    public Health health() {
        Timer operationTimer = meterRegistry.find("gen.ai.client.operation").timer();
        Counter tokenCounter = meterRegistry.find("gen.ai.client.token.usage").counter();

        if (operationTimer == null) {
            return Health.unknown()
                .withDetail("message", "No chat operations observed yet")
                .build();
        }

        double avgLatencyMs = operationTimer.mean(java.util.concurrent.TimeUnit.MILLISECONDS);
        double maxLatencyMs = operationTimer.max(java.util.concurrent.TimeUnit.MILLISECONDS);
        long operationCount = operationTimer.count();

        Health.Builder healthBuilder = Health.up();
        healthBuilder.withDetail("operationCount", operationCount)
                    .withDetail("avgLatencyMs", avgLatencyMs)
                    .withDetail("maxLatencyMs", maxLatencyMs);

        if (tokenCounter != null) {
            double totalTokens = tokenCounter.count();
            healthBuilder.withDetail("totalTokensUsed", totalTokens);

            // Estimate average tokens per operation (rough approximation)
            if (operationCount > 0) {
                double avgTokensPerOperation = totalTokens / operationCount;
                healthBuilder.withDetail("avgTokensPerOperation", avgTokensPerOperation);
            }
        }

        // Determine health status
        if (avgLatencyMs > LATENCY_THRESHOLD_MS) {
            return healthBuilder.down()
                .withDetail("issue", "Average latency exceeds threshold")
                .withDetail("threshold", LATENCY_THRESHOLD_MS)
                .build();
        } else if (avgLatencyMs > LATENCY_THRESHOLD_MS * 0.7) {
            return healthBuilder.status("DEGRADED")
                .withDetail("warning", "Latency approaching threshold")
                .build();
        }

        return healthBuilder.build();
    }
}

Integration with Distributed Tracing

When Micrometer Tracing is available, the module automatically creates tracing-aware handlers.

Tracing Dependencies

<!-- Brave (Zipkin) tracing bridge -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>

<!-- Zipkin reporter -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>

Or for OpenTelemetry:

<!-- OpenTelemetry tracing bridge -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>

<!-- OpenTelemetry exporter -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-zipkin</artifactId>
</dependency>

Tracing Configuration

# Enable tracing
management.tracing.sampling.probability=1.0

# Zipkin endpoint
management.zipkin.tracing.endpoint=http://localhost:9411/api/v2/spans

# Propagation format
management.tracing.propagation.type=w3c

Trace Context Propagation

With tracing enabled, each chat operation creates a span with:

  • Span Name: Based on operation type
  • Trace ID: Unique identifier for the entire trace
  • Span ID: Unique identifier for this operation
  • Parent Span ID: If part of a larger operation chain

Logged content (when enabled) includes trace context:

TraceId: 5f3e8d9a7b2c1f4e SpanId: 9a7b2c1f Prompt: [prompt content]
TraceId: 5f3e8d9a7b2c1f4e SpanId: 9a7b2c1f Completion: [completion content]
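The correlation prefix in those log lines can be reproduced when writing custom handlers; a minimal sketch (the layout mirrors the format above, the helper name is illustrative):

```java
public class TraceLogFormat {

    // Builds the "TraceId: ... SpanId: ..." prefix used to correlate
    // logged content with a distributed trace.
    static String correlate(String traceId, String spanId, String label, String content) {
        return String.format("TraceId: %s SpanId: %s %s: %s", traceId, spanId, label, content);
    }

    public static void main(String[] args) {
        System.out.println(correlate("5f3e8d9a7b2c1f4e", "9a7b2c1f", "Prompt", "[prompt content]"));
    }
}
```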

Programmatic Trace Access

import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class TracingExample {

    @Autowired(required = false)
    private Tracer tracer;

    public void performTracedOperation() {
        if (tracer != null) {
            Span currentSpan = tracer.currentSpan();
            if (currentSpan != null) {
                String traceId = currentSpan.context().traceId();
                String spanId = currentSpan.context().spanId();
                
                System.out.println("Current trace: " + traceId);
                System.out.println("Current span: " + spanId);
                
                // Add custom tags to the span
                currentSpan.tag("custom.tag", "value");
                currentSpan.event("custom.event");
            }
        }
    }
}

Custom Span Creation for Chat Operations

import io.micrometer.tracing.Span;
import io.micrometer.tracing.Tracer;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class TracedChatService {

    @Autowired
    private ChatModel chatModel;

    @Autowired(required = false)
    private Tracer tracer;

    public ChatResponse callWithCustomSpan(Prompt prompt) {
        if (tracer != null) {
            Span customSpan = tracer.nextSpan().name("custom.chat.operation");
            try (Tracer.SpanInScope ws = tracer.withSpan(customSpan.start())) {
                // Add metadata to span
                customSpan.tag("prompt.size", String.valueOf(
                    prompt.getInstructions().size()));
                
                ChatResponse response = chatModel.call(prompt);
                
                // Add response metadata
                if (response.getResult() != null) {
                    customSpan.tag("response.generated", "true");
                }
                
                return response;
            } catch (Exception e) {
                customSpan.error(e);
                throw e;
            } finally {
                customSpan.end();
            }
        } else {
            return chatModel.call(prompt);
        }
    }
}

Security Considerations

Default Security Posture

The module is secure by default:

  • All logging options default to false
  • Metrics collection does NOT include sensitive content
  • Security warnings logged when sensitive logging is enabled

Security Warnings

When enabling content logging, warnings are logged at startup:

WARN o.s.a.m.c.o.a.ChatObservationAutoConfiguration : You have enabled logging out
     the prompt content with the risk of exposing sensitive or private information.
     Please, be careful!

WARN o.s.a.m.c.o.a.ChatObservationAutoConfiguration : You have enabled logging out
     the completion content with the risk of exposing sensitive or private information.
     Please, be careful!

Best Practices

  1. Never enable content logging in production unless you have specific compliance requirements and safeguards
  2. Use profile-specific configuration to enable content logging only in development/testing
  3. Secure actuator endpoints with Spring Security:
    @Configuration
    public class ActuatorSecurityConfig {
        
        @Bean
        public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
            http
                .securityMatcher(EndpointRequest.toAnyEndpoint())
                .authorizeHttpRequests(authorize -> authorize
                    .anyRequest().hasRole("ACTUATOR")
                )
                .httpBasic(withDefaults());
            return http.build();
        }
    }
  4. Monitor logs for sensitive data if content logging is enabled
  5. Rotate API keys if logs containing prompts/completions are exposed
  6. Implement log sanitization if you must log content in production:
    @Bean
    public ChatModelPromptContentObservationHandler chatModelPromptContentObservationHandler() {
        return new ChatModelPromptContentObservationHandler() {
            // Override to sanitize sensitive data before logging
        };
    }
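Best practice 2 above can be implemented with profile-specific property files, so content logging can never leak into production through a shared configuration (profile names here are illustrative):

```properties
# application-dev.properties — content logging enabled only under the "dev" profile
spring.ai.chat.observations.log-prompt=true
spring.ai.chat.observations.log-completion=true

# application-prod.properties — secure defaults in production
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
```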

Advanced Security: Log Redaction

import org.springframework.ai.chat.observation.ChatModelPromptContentObservationHandler;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SecureLoggingConfig {

    @Bean
    public ChatModelPromptContentObservationHandler chatModelPromptContentObservationHandler() {
        return new ChatModelPromptContentObservationHandler() {
            @Override
            public void onStart(ChatModelObservationContext context) {
                if (context.getRequest() != null) {
                    String content = extractAndRedact(context.getRequest().toString());
                    logRedacted(content);
                }
            }

            private String extractAndRedact(String content) {
                // Redact email addresses
                content = content.replaceAll(
                    "\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b", 
                    "[EMAIL_REDACTED]");
                
                // Redact phone numbers
                content = content.replaceAll(
                    "\\b\\d{3}[-.]?\\d{3}[-.]?\\d{4}\\b", 
                    "[PHONE_REDACTED]");
                
                // Redact SSN
                content = content.replaceAll(
                    "\\b\\d{3}-\\d{2}-\\d{4}\\b", 
                    "[SSN_REDACTED]");
                
                // Redact credit card numbers
                content = content.replaceAll(
                    "\\b\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}\\b", 
                    "[CC_REDACTED]");
                
                // Redact API keys (deliberately aggressive pattern: it also
                // matches any long alphanumeric token, so tune it to the key
                // formats your application actually uses)
                content = content.replaceAll(
                    "\\b[A-Za-z0-9_-]{20,}\\b", 
                    "[API_KEY_REDACTED]");
                
                return content;
            }

            private void logRedacted(String content) {
                // Use appropriate logger
                System.out.println("[REDACTED PROMPT] " + content);
            }
        };
    }
}
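The redaction patterns above can be exercised in isolation before wiring them into a handler. A standalone sketch using the same email and SSN regular expressions (the class name is illustrative):

```java
public class RedactionCheck {

    // Same email and SSN patterns as the handler above, applied in order.
    static String redact(String content) {
        content = content.replaceAll(
            "\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b",
            "[EMAIL_REDACTED]");
        content = content.replaceAll(
            "\\b\\d{3}-\\d{2}-\\d{4}\\b",
            "[SSN_REDACTED]");
        return content;
    }

    public static void main(String[] args) {
        // prints: Contact [EMAIL_REDACTED], SSN [SSN_REDACTED]
        System.out.println(redact("Contact alice@example.com, SSN 123-45-6789"));
    }
}
```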

Compliance Considerations

If your application processes regulated data (GDPR, HIPAA, PCI-DSS):

  • Disable all content logging in production
  • Review observation data for compliance requirements
  • Implement data retention policies for metrics and traces
  • Use encryption for trace/log storage
  • Audit access to observability data

GDPR Compliance Example

import org.springframework.ai.chat.observation.ChatModelCompletionObservationHandler;
import org.springframework.ai.chat.observation.ChatModelObservationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("gdpr-compliant")
public class GdprCompliantObservationConfig {

    // No prompt or completion logging beans
    // Only metrics without PII

    @Bean
    public ObservationAuditLogger observationAuditLogger() {
        return new ObservationAuditLogger();
    }

    public static class ObservationAuditLogger {
        public void logAccess(String userId, String action) {
            // Log access to observation data for audit trail
            // Store in append-only audit log
            System.out.println("AUDIT: User " + userId + 
                             " performed " + action + 
                             " at " + java.time.Instant.now());
        }
    }
}

Troubleshooting

Metrics Not Appearing

Problem: No metrics visible in actuator endpoints

Solutions:

  1. Verify spring-boot-starter-actuator is on classpath
  2. Check actuator endpoints are exposed:
    management.endpoints.web.exposure.include=metrics
  3. Verify MeterRegistry bean exists:
    @Autowired
    private MeterRegistry meterRegistry; // Should not be null
  4. Confirm ChatModel is on classpath and auto-configuration activated

Diagnostic Code:

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class MetricsDiagnostics implements CommandLineRunner {

    @Autowired(required = false)
    private MeterRegistry meterRegistry;

    @Override
    public void run(String... args) {
        if (meterRegistry == null) {
            System.err.println("ERROR: MeterRegistry not available. " +
                             "Add spring-boot-starter-actuator dependency.");
            return;
        }

        System.out.println("MeterRegistry available: " + 
                         meterRegistry.getClass().getName());
        
        // Check for AI metrics
        boolean hasAiMetrics = meterRegistry.find("gen.ai.client.operation")
                                           .timer() != null;
        
        if (!hasAiMetrics) {
            System.out.println("WARNING: No AI metrics found. " +
                             "Ensure ChatModel is being used.");
        } else {
            System.out.println("SUCCESS: AI metrics are being collected.");
        }
    }
}

Tracing Not Working

Problem: Trace context not propagated or handlers not created

Solutions:

  1. Verify micrometer-tracing dependency is present
  2. Check Tracer bean is configured:
    @Autowired
    private Tracer tracer; // Should not be null
  3. Verify tracing configuration:
    management.tracing.sampling.probability=1.0
  4. Check logs for auto-configuration report:
    logging.level.org.springframework.boot.autoconfigure=DEBUG

Diagnostic Code:

import io.micrometer.tracing.Tracer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class TracingDiagnostics implements CommandLineRunner {

    @Autowired(required = false)
    private Tracer tracer;

    @Override
    public void run(String... args) {
        if (tracer == null) {
            System.out.println("INFO: Tracer not available. " +
                             "Tracing is disabled. " +
                             "Add micrometer-tracing dependency if needed.");
            return;
        }

        System.out.println("Tracer available: " + tracer.getClass().getName());
        
        // Test span creation
        try {
            var span = tracer.nextSpan().name("test").start();
            var traceId = span.context().traceId();
            span.end();
            
            System.out.println("SUCCESS: Tracing is working. Test trace ID: " + 
                             traceId);
        } catch (Exception e) {
            System.err.println("ERROR: Tracing failed: " + e.getMessage());
        }
    }
}

Content Logging Not Appearing

Problem: Prompts/completions not logged despite enabled properties

Solutions:

  1. Verify properties are correctly set:
    spring.ai.chat.observations.log-prompt=true
    spring.ai.chat.observations.log-completion=true
  2. Check for typos in property names (common: log-prompts vs log-prompt)
  3. Verify logging level allows output:
    logging.level.org.springframework.ai=DEBUG
  4. Confirm security warnings appear at startup (indicates handlers created)
  5. Check for custom bean definitions that may override auto-configured beans

Diagnostic Code:

import org.springframework.ai.model.chat.observation.autoconfigure.ChatObservationProperties;
import org.springframework.ai.chat.observation.ChatModelPromptContentObservationHandler;
import org.springframework.ai.chat.observation.ChatModelCompletionObservationHandler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class LoggingDiagnostics implements CommandLineRunner {

    @Autowired
    private ChatObservationProperties properties;

    @Autowired
    private ApplicationContext context;

    @Override
    public void run(String... args) {
        System.out.println("=== Chat Observation Configuration ===");
        System.out.println("Log Prompt: " + properties.isLogPrompt());
        System.out.println("Log Completion: " + properties.isLogCompletion());
        System.out.println("Include Error Logging: " + 
                         properties.isIncludeErrorLogging());

        // Check for handler beans
        boolean hasPromptHandler = context.containsBean(
            "chatModelPromptContentObservationHandler");
        boolean hasCompletionHandler = context.containsBean(
            "chatModelCompletionObservationHandler");

        System.out.println("Prompt Handler Bean: " + hasPromptHandler);
        System.out.println("Completion Handler Bean: " + hasCompletionHandler);

        if (properties.isLogPrompt() && !hasPromptHandler) {
            System.err.println("WARNING: Prompt logging enabled but handler " +
                             "bean not found!");
        }

        if (properties.isLogCompletion() && !hasCompletionHandler) {
            System.err.println("WARNING: Completion logging enabled but " +
                             "handler bean not found!");
        }
    }
}

Auto-Configuration Not Activating

Problem: Module present but no beans created

Solutions:

  1. Verify ChatModel is on classpath:
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-client-chat</artifactId>
    </dependency>
  2. Enable auto-configuration debugging:
    debug=true
  3. Check auto-configuration report in logs for condition match failures
  4. Verify no @EnableAutoConfiguration exclusions:
    @SpringBootApplication(exclude = {ChatObservationAutoConfiguration.class}) // Don't do this

Diagnostic Code:

import org.springframework.ai.model.chat.observation.autoconfigure.ChatObservationAutoConfiguration;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class AutoConfigurationDiagnostics implements CommandLineRunner {

    private final ApplicationContext context;

    public AutoConfigurationDiagnostics(ApplicationContext context) {
        this.context = context;
    }

    @Override
    public void run(String... args) {
        // Auto-configuration classes are registered under their fully
        // qualified class name, not a camel-cased short name
        boolean autoConfigPresent = context.containsBean(
            ChatObservationAutoConfiguration.class.getName());
        
        System.out.println("ChatObservationAutoConfiguration present: " + 
                         autoConfigPresent);

        // List all observation-related beans
        System.out.println("\n=== Observation Beans ===");
        String[] beanNames = context.getBeanNamesForType(
            io.micrometer.observation.ObservationHandler.class);
        
        for (String name : beanNames) {
            System.out.println("- " + name + ": " + 
                             context.getBean(name).getClass().getName());
        }

        if (beanNames.length == 0) {
            System.err.println("WARNING: No ObservationHandler beans found. " +
                             "Auto-configuration may not be active.");
        }
    }
}

Bean Conflicts and Overrides

Problem: Custom beans not taking effect or unexpected beans created

Solution: Verify bean naming and conditions

import org.springframework.ai.chat.observation.ChatModelMeterObservationHandler;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import io.micrometer.core.instrument.MeterRegistry;

@Configuration
public class CustomObservationOverrideConfig {

    // This bean will override the auto-configured one
    @Bean
    @Primary  // Mark as primary if there are multiple candidates
    public ChatModelMeterObservationHandler customChatModelMeterObservationHandler(
            MeterRegistry meterRegistry) {
        return new ChatModelMeterObservationHandler(meterRegistry) {
            // Custom implementation
        };
    }

    // Diagnostic bean to verify override
    @Bean
    public CommandLineRunner verifyBeanOverride(
            ChatModelMeterObservationHandler handler) {
        return args -> {
            System.out.println("Active ChatModelMeterObservationHandler: " + 
                             handler.getClass().getName());
        };
    }
}

Types

This section defines the key types used in this module's API. Most of these types are defined in other Spring AI modules (spring-ai-client-chat, spring-ai-model) and are provided here for reference to ensure complete understanding of the API surface.

Observation Handler Types

These types are defined in org.springframework.ai:spring-ai-client-chat and org.springframework.ai:spring-ai-model modules:

ChatModelMeterObservationHandler

package org.springframework.ai.chat.observation;

/**
 * Observation handler that collects meter-based metrics for chat model operations.
 * Automatically tracks execution time, token usage, and operation counts.
 * Integrates with Micrometer's MeterRegistry for metrics collection.
 */
public class ChatModelMeterObservationHandler
    implements ObservationHandler<ChatModelObservationContext> {

    /**
     * Creates a meter observation handler with the provided meter registry.
     *
     * @param meterRegistry the Micrometer meter registry for collecting metrics
     */
    public ChatModelMeterObservationHandler(MeterRegistry meterRegistry);

    // ObservationHandler methods for lifecycle management
    public boolean supportsContext(Observation.Context context);
    public void onStart(ChatModelObservationContext context);
    public void onStop(ChatModelObservationContext context);
    public void onError(ChatModelObservationContext context);
}

Method Details:

  • supportsContext(Observation.Context context): Returns true if this handler supports the given context type (ChatModelObservationContext)
  • onStart(ChatModelObservationContext context): Called when chat operation starts. Initializes timing measurement.
  • onStop(ChatModelObservationContext context): Called when chat operation completes successfully. Records execution time and token usage metrics.
  • onError(ChatModelObservationContext context): Called when chat operation fails. Records error metrics.

ChatModelPromptContentObservationHandler

package org.springframework.ai.chat.observation;

/**
 * Observation handler that logs prompt content during chat model operations.
 * WARNING: Enabling this handler may expose sensitive information in logs.
 */
public class ChatModelPromptContentObservationHandler
    implements ObservationHandler<ChatModelObservationContext> {

    /**
     * Creates a prompt content logging handler.
     */
    public ChatModelPromptContentObservationHandler();

    // ObservationHandler methods for lifecycle management
    public boolean supportsContext(Observation.Context context);
    public void onStart(ChatModelObservationContext context);
}

Method Details:

  • supportsContext(Observation.Context context): Returns true for ChatModelObservationContext instances
  • onStart(ChatModelObservationContext context): Called at operation start. Extracts and logs prompt content from context.getRequest().

Usage Pattern:

// Access prompt content in custom handler
@Override
public void onStart(ChatModelObservationContext context) {
    if (context.getRequest() != null) {
        List<Message> messages = context.getRequest().getInstructions();
        for (Message msg : messages) {
            String content = msg.getText();
            // Log or process prompt content
        }
    }
}

ChatModelCompletionObservationHandler

package org.springframework.ai.chat.observation;

/**
 * Observation handler that logs completion content during chat model operations.
 * WARNING: Enabling this handler may expose sensitive information in logs.
 */
public class ChatModelCompletionObservationHandler
    implements ObservationHandler<ChatModelObservationContext> {

    /**
     * Creates a completion content logging handler.
     */
    public ChatModelCompletionObservationHandler();

    // ObservationHandler methods for lifecycle management
    public boolean supportsContext(Observation.Context context);
    public void onStop(ChatModelObservationContext context);
}

Method Details:

  • supportsContext(Observation.Context context): Returns true for ChatModelObservationContext instances
  • onStop(ChatModelObservationContext context): Called when operation completes. Extracts and logs completion content from context.getResponse().

Usage Pattern:

// Access completion content in custom handler
@Override
public void onStop(ChatModelObservationContext context) {
    if (context.getResponse() != null && 
        context.getResponse().getResult() != null) {
        String completion = context.getResponse().getResult()
                                  .getOutput().getText();
        // Log or process completion content
    }
}

ErrorLoggingObservationHandler

package org.springframework.ai.model.observation;

/**
 * Observation handler that logs errors across multiple observation context types.
 * Supports ChatModelObservationContext, EmbeddingModelObservationContext,
 * ImageModelObservationContext, ChatClientObservationContext, and AdvisorObservationContext.
 */
public class ErrorLoggingObservationHandler implements ObservationHandler<Observation.Context> {

    /**
     * Creates an error logging handler with the provided tracer.
     *
     * @param tracer the Micrometer tracing tracer for trace correlation
     */
    public ErrorLoggingObservationHandler(Tracer tracer);

    // ObservationHandler methods for lifecycle management
    public boolean supportsContext(Observation.Context context);
    public void onError(Observation.Context context);
}

Method Details:

  • supportsContext(Observation.Context context): Returns true for supported AI model context types
  • onError(Observation.Context context): Called when operation fails. Logs error with trace context if available.

Supported Context Type Check Pattern:

@Override
public boolean supportsContext(Observation.Context context) {
    return context instanceof ChatModelObservationContext ||
           context instanceof EmbeddingModelObservationContext ||
           context instanceof ImageModelObservationContext ||
           context instanceof ChatClientObservationContext ||
           context instanceof AdvisorObservationContext;
}

TracingAwareLoggingObservationHandler

package org.springframework.ai.observation;

/**
 * Generic observation handler that wraps logging functionality with distributed tracing context.
 * Logs content with trace IDs and span IDs for correlation in distributed systems.
 *
 * @param <T> the observation context type
 */
public class TracingAwareLoggingObservationHandler<T extends Observation.Context>
    implements ObservationHandler<T> {

    /**
     * Creates a tracing-aware logging handler that wraps a delegate logging handler.
     *
     * @param delegate the logging observation handler whose output should be
     *                 correlated with the current trace
     * @param tracer the Micrometer tracing tracer
     */
    public TracingAwareLoggingObservationHandler(
        ObservationHandler<T> delegate,
        Tracer tracer
    );

    // ObservationHandler methods for lifecycle management
    public boolean supportsContext(Observation.Context context);
    public void onStart(T context);
    public void onStop(T context);
}

Method Details:

  • constructor: Accepts the delegate logging handler to wrap and the tracer used to resolve the current trace context
  • supportsContext(Observation.Context context): Delegates the check to the wrapped handler
  • onStart(T context) / onStop(T context): Invoke the corresponding delegate method within the current trace context, so that logged content carries trace and span IDs for correlation

Wrapping Pattern for Creating Tracing Handlers:

// Example of wrapping the prompt content handler so that its log output
// is correlated with the active trace — the same pattern the
// auto-configuration applies when a Tracer is present
TracingAwareLoggingObservationHandler<ChatModelObservationContext> promptHandler = 
    new TracingAwareLoggingObservationHandler<>(
        new ChatModelPromptContentObservationHandler(), tracer);

Observation Context Types

These context types carry observation metadata and are defined in various Spring AI modules:

ChatModelObservationContext

package org.springframework.ai.chat.observation;

/**
 * Observation context for chat model operations.
 * Contains metadata about the chat request, response, and model configuration.
 */
public class ChatModelObservationContext extends Observation.Context {

    /**
     * Gets the chat request prompt.
     * 
     * @return the Prompt containing user messages and instructions
     */
    public Prompt getRequest();

    /**
     * Gets the chat response.
     * 
     * @return the ChatResponse containing AI-generated content and metadata
     */
    public ChatResponse getResponse();

    /**
     * Gets the chat options/configuration.
     * 
     * @return ChatOptions with model parameters (temperature, max tokens, etc.)
     */
    public ChatOptions getChatOptions();

    // Additional metadata methods
    
    /**
     * Gets the AI model provider name.
     * 
     * @return provider identifier (e.g., "openai", "anthropic", "azure")
     */
    public String getModelProvider();

    /**
     * Gets the specific model name being used.
     * 
     * @return model identifier (e.g., "gpt-4", "claude-3-opus")
     */
    public String getModelName();

    /**
     * Sets the request prompt.
     * 
     * @param request the Prompt to set
     */
    public void setRequest(Prompt request);

    /**
     * Sets the response.
     * 
     * @param response the ChatResponse to set
     */
    public void setResponse(ChatResponse response);
}

Usage Pattern:

// Accessing context data in custom handler
@Override
public void onStop(ChatModelObservationContext context) {
    Prompt request = context.getRequest();
    ChatResponse response = context.getResponse();
    
    if (request != null && response != null) {
        int inputMessages = request.getInstructions().size();
        String output = response.getResult().getOutput().getContent();
        
        System.out.println("Model: " + context.getModelName());
        System.out.println("Provider: " + context.getModelProvider());
        System.out.println("Input messages: " + inputMessages);
        System.out.println("Output length: " + output.length());
    }
}

ChatClientObservationContext

package org.springframework.ai.chat.client.observation;

/**
 * Observation context for ChatClient API operations.
 * Captured when using the fluent ChatClient interface.
 */
public class ChatClientObservationContext extends Observation.Context {

    /**
     * Gets the chat client request.
     * 
     * @return the ChatClient.Request with call configuration
     */
    public ChatClient.Request getRequest();

    /**
     * Gets the chat client response.
     * Response type varies based on call method (ChatResponse, String, Entity, etc.)
     * 
     * @return the response object from the chat client call
     */
    public Object getResponse();

    /**
     * Sets the request.
     * 
     * @param request the ChatClient.Request to set
     */
    public void setRequest(ChatClient.Request request);

    /**
     * Sets the response.
     * 
     * @param response the response object to set
     */
    public void setResponse(Object response);
}

Usage Pattern:

// Handling ChatClient observations
@Override
public void onStop(ChatClientObservationContext context) {
    Object response = context.getResponse();
    
    // Response type depends on ChatClient call method
    if (response instanceof ChatResponse) {
        ChatResponse chatResponse = (ChatResponse) response;
        // Process ChatResponse
    } else if (response instanceof String) {
        String stringResponse = (String) response;
        // Process String response
    }
}

AdvisorObservationContext

package org.springframework.ai.chat.client.advisor.observation;

/**
 * Observation context for advisor execution in the chat client.
 * Tracks advisor chain operations and transformations.
 */
public class AdvisorObservationContext extends Observation.Context {

    /**
     * Gets the advisor name.
     * 
     * @return identifier for the advisor being executed
     */
    public String getAdvisorName();

    /**
     * Gets the advisor type.
     * 
     * @return type classification of the advisor (e.g., "request", "response")
     */
    public String getAdvisorType();

    /**
     * Sets the advisor name.
     * 
     * @param advisorName the advisor identifier
     */
    public void setAdvisorName(String advisorName);

    /**
     * Sets the advisor type.
     * 
     * @param advisorType the advisor type classification
     */
    public void setAdvisorType(String advisorType);
}

Usage Pattern:

// Tracking advisor execution
@Override
public void onStop(AdvisorObservationContext context) {
    String advisorName = context.getAdvisorName();
    String advisorType = context.getAdvisorType();
    
    System.out.println("Advisor executed: " + advisorName + 
                       " (type: " + advisorType + ")");
}

EmbeddingModelObservationContext

package org.springframework.ai.embedding.observation;

/**
 * Observation context for embedding model operations.
 * Contains metadata about embedding requests and responses.
 */
public class EmbeddingModelObservationContext extends Observation.Context {

    /**
     * Gets the embedding request.
     * 
     * @return EmbeddingRequest with text inputs and options
     */
    public EmbeddingRequest getRequest();

    /**
     * Gets the embedding response.
     * 
     * @return EmbeddingResponse with generated embeddings
     */
    public EmbeddingResponse getResponse();

    /**
     * Sets the embedding request.
     * 
     * @param request the EmbeddingRequest to set
     */
    public void setRequest(EmbeddingRequest request);

    /**
     * Sets the embedding response.
     * 
     * @param response the EmbeddingResponse to set
     */
    public void setResponse(EmbeddingResponse response);

    /**
     * Gets the model provider name.
     * 
     * @return provider identifier
     */
    public String getModelProvider();

    /**
     * Gets the model name.
     * 
     * @return model identifier
     */
    public String getModelName();
}
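
Usage Pattern:

As with the chat context above, a custom handler can inspect the embedding request and response. This is a sketch assuming the accessors listed above, where EmbeddingRequest.getInstructions() returns the input texts and EmbeddingResponse.getResults() returns the generated embeddings:

```java
// Accessing embedding context data in a custom handler
@Override
public void onStop(EmbeddingModelObservationContext context) {
    EmbeddingRequest request = context.getRequest();
    EmbeddingResponse response = context.getResponse();

    if (request != null && response != null) {
        int inputCount = request.getInstructions().size();
        int vectorCount = response.getResults().size();

        System.out.println("Model: " + context.getModelName());
        System.out.println("Embedded " + inputCount
            + " inputs into " + vectorCount + " vectors");
    }
}
```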

ImageModelObservationContext

package org.springframework.ai.image.observation;

/**
 * Observation context for image generation model operations.
 * Contains metadata about image generation requests and responses.
 */
public class ImageModelObservationContext extends Observation.Context {

    /**
     * Gets the image generation request.
     * 
     * @return ImageRequest with prompts and generation options
     */
    public ImageRequest getRequest();

    /**
     * Gets the image generation response.
     * 
     * @return ImageResponse with generated images
     */
    public ImageResponse getResponse();

    /**
     * Sets the image request.
     * 
     * @param request the ImageRequest to set
     */
    public void setRequest(ImageRequest request);

    /**
     * Sets the image response.
     * 
     * @param response the ImageResponse to set
     */
    public void setResponse(ImageResponse response);

    /**
     * Gets the model provider name.
     * 
     * @return provider identifier
     */
    public String getModelProvider();

    /**
     * Gets the model name.
     * 
     * @return model identifier
     */
    public String getModelName();
}
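
Usage Pattern:

The same pattern applies to image contexts. This sketch assumes the accessors listed above and that the generated Image exposes its location via getUrl():

```java
// Accessing image context data in a custom handler
@Override
public void onStop(ImageModelObservationContext context) {
    ImageResponse response = context.getResponse();

    if (response != null && response.getResult() != null) {
        System.out.println("Model: " + context.getModelName());
        System.out.println("Image URL: "
            + response.getResult().getOutput().getUrl());
    }
}
```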

Micrometer Types

These types are from the Micrometer observability library:

MeterRegistry

package io.micrometer.core.instrument;

/**
 * Registry for creating and managing meters (timers, counters, gauges).
 * Part of Micrometer Core - the metrics collection library.
 * Typically auto-configured by Spring Boot Actuator.
 */
public interface MeterRegistry {

    /**
     * Creates or retrieves a timer for measuring operation duration.
     * 
     * @param name the metric name
     * @param tags optional tags for categorization
     * @return Timer instance for recording time measurements
     */
    Timer timer(String name, String... tags);

    /**
     * Creates or retrieves a counter for counting events.
     * 
     * @param name the metric name
     * @param tags optional tags for categorization
     * @return Counter instance for incrementing counts
     */
    Counter counter(String name, String... tags);

    /**
     * Searches for meters by name.
     * 
     * @param name the metric name to search for
     * @return Search instance for querying meters
     */
    Search find(String name);

    /**
     * Creates or retrieves a gauge for measuring current value.
     * 
     * @param name the metric name
     * @param obj the object to observe
     * @param valueFunction function to extract gauge value
     * @return the observed object
     */
    <T> T gauge(String name, T obj, ToDoubleFunction<T> valueFunction);

    /**
     * Returns all registered meters.
     * 
     * @return list of all Meter instances
     */
    List<Meter> getMeters();
}

Common Usage Patterns:

// Recording timer measurements
Timer timer = meterRegistry.timer("operation.duration", 
    "operation", "chat", 
    "model", "gpt-4");
timer.record(() -> {
    // Operation to time
});

// Incrementing counters
Counter counter = meterRegistry.counter("operation.count", 
    "operation", "chat",
    "status", "success");
counter.increment();

// Creating gauges
meterRegistry.gauge("queue.size", 
    Tags.of("queue", "requests"),
    myQueue,
    Queue::size);

Tracer

package io.micrometer.tracing;

/**
 * Interface for distributed tracing systems.
 * Part of Micrometer Tracing - enables distributed trace context propagation.
 * Available when micrometer-tracing dependency is present.
 */
public interface Tracer {

    /**
     * Gets the current span in the trace.
     * 
     * @return current Span or null if no span is active
     */
    Span currentSpan();

    /**
     * Creates a new span. If a span is currently active, the new span is
     * created as its child; otherwise it starts a new trace. The caller must
     * still name and start the span.
     * 
     * @return a new Span
     */
    Span nextSpan();

    /**
     * Creates a new child span of the given parent span.
     * 
     * @param parent the parent span
     * @return a new Span carrying the parent's trace context
     */
    Span nextSpan(Span parent);

    /**
     * Gets the managed current trace context abstraction.
     * 
     * @return the CurrentTraceContext, or a no-op instance if unsupported
     */
    CurrentTraceContext currentTraceContext();

    /**
     * Creates a scope that makes the given span current.
     * Must be closed to restore previous span.
     * 
     * @param span the span to make current
     * @return SpanInScope that must be closed
     */
    SpanInScope withSpan(Span span);
}

Common Usage Patterns:

// Creating and using spans
Span span = tracer.nextSpan().name("my.operation");
try (Tracer.SpanInScope ws = tracer.withSpan(span.start())) {
    // Operation code here
    span.tag("custom.tag", "value");
    span.event("milestone.reached");
    
    // Get trace identifiers
    String traceId = span.context().traceId();
    String spanId = span.context().spanId();
} catch (Exception e) {
    span.error(e);
    throw e;
} finally {
    span.end();
}

// Accessing current span
Span current = tracer.currentSpan();
if (current != null) {
    current.tag("operation", "chat");
}

Spring Framework Types

These types are from the Spring Framework:

ObjectProvider

package org.springframework.beans.factory;

/**
 * A variant of ObjectFactory designed for injection points that allow
 * for optional dependency injection and lazy access.
 * Part of Spring Framework's dependency injection system.
 *
 * @param <T> the type of object provided
 */
public interface ObjectProvider<T> extends ObjectFactory<T>, Iterable<T> {

    /**
     * Returns an instance (possibly shared or independent) of the object.
     * Throws NoSuchBeanDefinitionException if not available.
     * 
     * @return an instance of the bean
     * @throws BeansException if the bean could not be created
     */
    T getObject() throws BeansException;

    /**
     * Returns an instance if available, or null otherwise.
     * 
     * @return an instance or null
     */
    T getIfAvailable();

    /**
     * Returns an instance if available, or the provided default otherwise.
     * 
     * @param defaultSupplier supplier for default value
     * @return an instance or default value
     */
    T getIfAvailable(Supplier<T> defaultSupplier);

    /**
     * Returns an instance if exactly one matching bean exists,
     * or null if there are zero or multiple candidates.
     * 
     * @return the unique instance or null
     */
    T getIfUnique();

    /**
     * Executes action if an instance is available.
     * 
     * @param action the action to perform
     */
    void ifAvailable(Consumer<T> action);

    /**
     * Returns stream of all matching beans.
     * 
     * @return stream of instances
     */
    Stream<T> stream();

    /**
     * Returns stream of all matching beans sorted by order.
     * 
     * @return ordered stream of instances
     */
    Stream<T> orderedStream();
}

Usage in Auto-Configuration:

@Bean
public ChatModelMeterObservationHandler handler(
        ObjectProvider<MeterRegistry> meterRegistryProvider) {
    // Get registry if available, or throw exception
    MeterRegistry registry = meterRegistryProvider.getObject();
    
    // Or get if available with default
    MeterRegistry registryOrDefault = meterRegistryProvider.getIfAvailable(
        () -> new SimpleMeterRegistry());
    
    // Or perform action only if available
    meterRegistryProvider.ifAvailable(registry -> {
        // Configure registry
    });
    
    return new ChatModelMeterObservationHandler(registry);
}

ChatModel

package org.springframework.ai.chat.model;

/**
 * Core interface for chat/completion models in Spring AI.
 * Implemented by various AI model providers (OpenAI, Anthropic, etc.).
 * Presence of this class on the classpath triggers auto-configuration activation.
 */
public interface ChatModel extends Model<Prompt, ChatResponse>, StreamingModel<Prompt, ChatResponse> {

    /**
     * Generates a chat response for the given prompt.
     * Blocking synchronous operation.
     * 
     * @param prompt the input prompt with messages and options
     * @return ChatResponse containing generated text and metadata
     */
    ChatResponse call(Prompt prompt);

    /**
     * Generates a streaming chat response for the given prompt.
     * Returns a reactive stream of response chunks.
     * 
     * @param prompt the input prompt with messages and options
     * @return Flux of ChatResponse chunks
     */
    Flux<ChatResponse> stream(Prompt prompt);

    /**
     * Gets the default options for this chat model.
     * 
     * @return ChatOptions with model-specific defaults
     */
    default ChatOptions getDefaultOptions() {
        return ChatOptions.builder().build();
    }
}

Implementation Example:

@Service
public class MyChatService {

    private final ChatModel chatModel;

    public MyChatService(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    public String chat(String userMessage) {
        Prompt prompt = new Prompt(userMessage);
        ChatResponse response = chatModel.call(prompt);
        return response.getResult().getOutput().getContent();
    }

    public Flux<String> chatStream(String userMessage) {
        Prompt prompt = new Prompt(userMessage);
        return chatModel.stream(prompt)
            .map(response -> response.getResult().getOutput().getContent());
    }
}

Related Types and Dependencies

This module creates beans of the following types, which are defined in other Spring AI modules:

From spring-ai-client-chat

ChatModel

  • Core chat model interface
  • Trigger for auto-configuration activation
  • Package: org.springframework.ai.chat.model

ChatModelMeterObservationHandler

  • Meters handler created by this module
  • Collects metrics for chat operations
  • Package: org.springframework.ai.chat.observation

ChatModelPromptContentObservationHandler

  • Handler for logging prompt content
  • Created when log-prompt=true and tracing not available
  • Package: org.springframework.ai.chat.observation

ChatModelCompletionObservationHandler

  • Handler for logging completion content
  • Created when log-completion=true and tracing not available
  • Package: org.springframework.ai.chat.observation

ChatModelObservationContext

  • Observation context for chat operations
  • Contains operation metadata
  • Package: org.springframework.ai.chat.observation

ErrorLoggingObservationHandler

  • Handler for logging errors across contexts
  • Created when include-error-logging=true
  • Package: org.springframework.ai.model.observation

TracingAwareLoggingObservationHandler

  • Wrapper that adds trace context to logging handlers
  • Created when tracing is available
  • Package: org.springframework.ai.observation
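
The logging handlers above are switched on through configuration properties. A typical application.properties sketch follows; the property names match the Spring AI chat observation documentation, but confirm them against your Spring AI version:

```properties
# Log full prompt content (sensitive data - disabled by default)
spring.ai.chat.observations.log-prompt=true
# Log full completion content (sensitive data - disabled by default)
spring.ai.chat.observations.log-completion=true
# Register the error logging handler
spring.ai.chat.observations.include-error-logging=true
```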

From Micrometer

MeterRegistry

  • Registry for metrics collection
  • Required for meter observation handler
  • Package: io.micrometer.core.instrument

Tracer

  • Distributed tracing tracer
  • Determines which configuration path activates
  • Package: io.micrometer.tracing

Version Compatibility

This module version 1.1.2 is compatible with:

  • Spring Boot: 3.5.x
  • Spring AI: 1.1.x
  • Java: 17+
  • Micrometer: version managed by Spring Boot
  • Micrometer Tracing: version managed by Spring Boot (optional)

For Spring Boot 4.x compatibility, use Spring AI 2.x.

Version-Specific Considerations

Java 17 Requirements:

  • Records support (if used in custom handlers)
  • Pattern matching for instanceof
  • Sealed classes support
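
For example, pattern matching for instanceof removes the cast-after-check boilerplate seen in the ChatClient handler example earlier. A minimal, self-contained sketch (the ResponseDescriber class is hypothetical, for illustration only):

```java
// Hypothetical helper showing Java 17 pattern matching for instanceof,
// in the style of branching on an observation's response type.
public class ResponseDescriber {

    public static String describe(Object response) {
        // The binding variable (s, n) is in scope only when the test succeeds
        if (response instanceof String s) {
            return "text response, length " + s.length();
        }
        if (response instanceof Number n) {
            return "numeric response: " + n;
        }
        return "unhandled response type: " + response.getClass().getSimpleName();
    }
}
```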

Spring Boot 3.x Features:

  • Native ahead-of-time (AOT) compilation support
  • Observability API improvements
  • Micrometer 1.10+ with enhanced tags

Additional Resources

  • Spring AI Observability Documentation
  • Spring Boot Actuator Reference
  • Micrometer Documentation
  • Micrometer Tracing
  • Spring AI Reference Documentation
  • OpenTelemetry Java
  • Zipkin Documentation

Edge Cases and Advanced Patterns

Handling Null Context Values

When implementing custom observation handlers, always check for null values:

@Override
public void onStop(ChatModelObservationContext context) {
    // Defensive null checking pattern
    if (context == null) {
        return;
    }

    ChatResponse response = context.getResponse();
    if (response == null) {
        logger.warn("Chat operation completed with null response");
        return;
    }

    Generation result = response.getResult();
    if (result == null || result.getOutput() == null) {
        logger.warn("Chat response has no result or output");
        return;
    }

    String content = result.getOutput().getContent();
    if (content != null && !content.isEmpty()) {
        // Process content
    }
}

Concurrent Request Handling

When dealing with high concurrency, ensure thread-safe metrics collection:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

@Component
public class ConcurrentMetricsHandler {

    private final ConcurrentHashMap<String, AtomicLong> requestCounts = 
        new ConcurrentHashMap<>();

    public void recordRequest(String modelName) {
        requestCounts
            .computeIfAbsent(modelName, k -> new AtomicLong(0))
            .incrementAndGet();
    }

    public long getRequestCount(String modelName) {
        AtomicLong count = requestCounts.get(modelName);
        return count != null ? count.get() : 0;
    }
}

Streaming Response Observation

For streaming responses, observation handlers need special consideration:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.prompt.Prompt;
import reactor.core.publisher.Flux;

@Service
public class StreamingObservationService {

    private final ChatModel chatModel;
    private final MeterRegistry meterRegistry;

    public StreamingObservationService(ChatModel chatModel, MeterRegistry meterRegistry) {
        this.chatModel = chatModel;
        this.meterRegistry = meterRegistry;
    }

    public Flux<String> chatWithObservation(Prompt prompt) {
        Timer.Sample sample = Timer.start(meterRegistry);
        AtomicLong tokenCount = new AtomicLong(0);

        return chatModel.stream(prompt)
            .doOnNext(response -> {
                // Count tokens in each chunk
                if (response.getResult() != null) {
                    tokenCount.incrementAndGet();
                }
            })
            .doOnComplete(() -> {
                // Record metrics when stream completes
                sample.stop(meterRegistry.timer("ai.stream.duration"));
                meterRegistry.counter("ai.stream.tokens").increment(tokenCount.get());
            })
            .doOnError(error -> {
                sample.stop(meterRegistry.timer("ai.stream.duration", 
                    "status", "error"));
            })
            .map(response -> response.getResult().getOutput().getContent());
    }
}

Rate Limiting Based on Metrics

Use observed metrics to implement rate limiting:

import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class MetricsBasedRateLimiter {

    private static final double MAX_REQUESTS_PER_MINUTE = 60;

    private final MeterRegistry meterRegistry;
    private final long startTimeMillis = System.currentTimeMillis();

    public MetricsBasedRateLimiter(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    public boolean allowRequest() {
        var counter = meterRegistry.find("gen_ai.client.operation").counter();
        if (counter == null) {
            return true; // No metrics recorded yet, allow
        }

        // Average rate since startup; production code would use a sliding window
        double elapsedMinutes = Math.max(
            (System.currentTimeMillis() - startTimeMillis) / 60_000.0, 1.0 / 60);
        double ratePerMinute = counter.count() / elapsedMinutes;

        return ratePerMinute < MAX_REQUESTS_PER_MINUTE;
    }
}

Fallback Configuration When Dependencies Missing

Handle graceful degradation when optional dependencies are absent:

@Configuration
public class GracefulObservationConfig {

    @Bean
    @ConditionalOnMissingBean(MeterRegistry.class)
    public MeterRegistry fallbackMeterRegistry() {
        return new SimpleMeterRegistry();
    }

    @Bean
    @ConditionalOnMissingBean(name = "chatModelMeterObservationHandler")
    public ObservationHandler<ChatModelObservationContext> noOpHandler() {
        return new ObservationHandler<ChatModelObservationContext>() {
            @Override
            public boolean supportsContext(Observation.Context context) {
                return context instanceof ChatModelObservationContext;
            }
            
            @Override
            public void onStart(ChatModelObservationContext context) {
                // No-op when metrics infrastructure unavailable
            }
        };
    }
}