tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-openai

The Quarkus LangChain4j OpenAI extension provides seamless integration between Quarkus and OpenAI's large language models, enabling developers to incorporate LLMs into their applications with support for chat, streaming, embeddings, moderation, and image generation.


docs/moderation-models.md

Moderation Models

Comprehensive API reference for OpenAI moderation models in Quarkus, providing content safety and policy enforcement capabilities. The extension enables automatic content filtering, policy violation detection, and safe AI application development through OpenAI's moderation API.

Introduction to Content Moderation

Content moderation is essential for building safe AI applications that prevent harmful content from being processed or generated. OpenAI's moderation API provides real-time content policy violation detection across multiple categories including hate speech, self-harm, sexual content, and violence.

The Quarkus LangChain4j OpenAI extension provides seamless integration with OpenAI's moderation capabilities through:

  • Automatic moderation of user inputs before processing with the @Moderate annotation
  • Manual moderation via direct ModerationModel API calls
  • Category-based scoring for fine-grained content analysis
  • Configurable thresholds for custom policy enforcement
  • Batch moderation for processing multiple texts efficiently

Use Cases

  • User-generated content filtering - Screen user inputs before processing
  • Pre-generation moderation - Validate prompts before sending to chat models
  • Post-generation validation - Verify generated content meets safety standards
  • Compliance enforcement - Ensure content meets regulatory requirements
  • Risk assessment - Monitor and log potentially problematic content

Architecture Overview

The moderation models implementation uses an SPI-based pattern where Quarkus-enhanced builders are automatically used when creating OpenAI moderation models. The builders extend LangChain4j's base builders to add:

  • Named configurations for managing multiple moderation policies
  • TLS configuration for secure enterprise communications
  • HTTP proxy support for corporate network environments
  • Enhanced logging with curl-style request debugging
  • Configuration-driven development with automatic CDI integration
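
Concretely, the SPI registration is a single provider-configuration file on the classpath. A sketch of that file is shown below; the interface path comes from this document, while the implementation's fully qualified class name is assumed for illustration:

```text
# src/main/resources/META-INF/services/dev.langchain4j.model.openai.spi.OpenAiModerationModelBuilderFactory
io.quarkiverse.langchain4j.openai.QuarkusOpenAiModerationModelBuilderFactory
```

Java's ServiceLoader mechanism reads this file when OpenAiModerationModel.builder() is called, so the Quarkus-enhanced builder is returned without any change to application code.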

Capabilities

QuarkusOpenAiModerationModelBuilderFactory

Factory class implementing the Service Provider Interface for creating Quarkus-enhanced OpenAI moderation model builders.

/**
 * SPI factory for creating OpenAI moderation models with Quarkus extensions.
 *
 * Registered via: META-INF/services/dev.langchain4j.model.openai.spi.OpenAiModerationModelBuilderFactory
 *
 * This factory is automatically discovered and used when calling
 * OpenAiModerationModel.builder(), providing Quarkus-specific functionality
 * transparently through Java's Service Provider Interface mechanism.
 *
 * The factory pattern enables seamless integration with Quarkus features
 * like named configurations, TLS management, and HTTP proxies without
 * modifying LangChain4j code.
 */
public class QuarkusOpenAiModerationModelBuilderFactory
    implements OpenAiModerationModelBuilderFactory {

    /**
     * Creates a new Quarkus-enhanced moderation model builder instance.
     *
     * Returns:
     *     Builder instance with both Quarkus-specific and LangChain4j methods
     *
     * This method is automatically invoked by LangChain4j's SPI discovery
     * mechanism when OpenAiModerationModel.builder() is called, ensuring
     * Quarkus enhancements are available without explicit API changes.
     */
    @Override
    public OpenAiModerationModel.OpenAiModerationModelBuilder get();
}

QuarkusOpenAiModerationModelBuilderFactory.Builder

Enhanced builder class extending LangChain4j's OpenAiModerationModelBuilder with Quarkus-specific methods.

/**
 * Enhanced builder for OpenAI moderation models with Quarkus features.
 *
 * Extends: dev.langchain4j.model.openai.OpenAiModerationModel.OpenAiModerationModelBuilder
 *
 * Provides all standard LangChain4j builder methods plus Quarkus-specific
 * enhancements for enterprise integration. This builder enables both
 * programmatic configuration and integration with Quarkus configuration
 * system.
 *
 * Usage:
 *     ModerationModel model = OpenAiModerationModel.builder()
 *         .configName("strict-policy")        // Quarkus-specific
 *         .tlsConfigurationName("custom-tls") // Quarkus-specific
 *         .proxy(proxy)                       // Quarkus-specific
 *         .logCurl(true)                      // Quarkus-specific
 *         .apiKey("sk-...")                   // LangChain4j inherited
 *         .modelName("omni-moderation-latest") // LangChain4j inherited
 *         .build();
 */
public static class Builder extends OpenAiModerationModel.OpenAiModerationModelBuilder {

    /**
     * Set the named configuration to use for this moderation model.
     *
     * Parameters:
     *     configName - Name of configuration defined in application.properties
     *
     * Returns:
     *     This builder for method chaining
     *
     * When specified, the builder loads settings from the named configuration
     * instead of the default configuration. This enables multiple moderation
     * models with different policies or API keys.
     *
     * For example, configName("strict-policy") loads configuration from
     * quarkus.langchain4j.openai.strict-policy.* properties, allowing
     * different moderation thresholds for different content types.
     *
     * Example:
     *     .configName("strict-policy")  // Uses quarkus.langchain4j.openai.strict-policy.*
     *     .configName("lenient")        // Uses quarkus.langchain4j.openai.lenient.*
     */
    public Builder configName(String configName);

    /**
     * Set the named TLS configuration for secure HTTPS connections.
     *
     * Parameters:
     *     tlsConfigurationName - Name of Quarkus TLS configuration
     *
     * Returns:
     *     This builder for method chaining
     *
     * References a Quarkus named TLS configuration defined via
     * quarkus.tls.{name}.* properties. This enables custom certificates,
     * mutual TLS authentication, custom trust stores, or specific cipher
     * suites required in enterprise environments.
     *
     * Essential for organizations requiring custom certificate authorities,
     * client certificate authentication, or specific security policies.
     *
     * Example:
     *     .tlsConfigurationName("enterprise-ca")  // Custom CA certificates
     *     .tlsConfigurationName("mtls")           // Mutual TLS authentication
     */
    public Builder tlsConfigurationName(String tlsConfigurationName);

    /**
     * Set HTTP proxy for network traffic routing.
     *
     * Parameters:
     *     proxy - Java Proxy instance (HTTP or SOCKS)
     *
     * Returns:
     *     This builder for method chaining
     *
     * Configures HTTP or SOCKS proxy for moderation API requests. Required
     * in corporate environments where direct internet access is restricted
     * and all HTTP traffic must route through corporate proxies.
     *
     * The proxy configuration applies to all requests made by this
     * moderation model instance, including retries and error scenarios.
     *
     * Example:
     *     Proxy proxy = new Proxy(Proxy.Type.HTTP,
     *         new InetSocketAddress("proxy.company.com", 8080));
     *     .proxy(proxy)
     */
    public Builder proxy(Proxy proxy);

    /**
     * Enable curl-style request logging for debugging.
     *
     * Parameters:
     *     logCurl - true to enable curl logging, false to disable
     *
     * Returns:
     *     This builder for method chaining
     *
     * When enabled, logs each API request in curl command format, allowing
     * developers to reproduce requests manually for debugging. The curl
     * commands include all headers, body content, and can be executed
     * directly in a terminal.
     *
     * WARNING: Curl logs include API keys and request content. Only enable
     * in development environments or ensure logs are properly secured.
     *
     * Example:
     *     .logCurl(true)  // Logs: curl -X POST https://api.openai.com/v1/moderations ...
     */
    public Builder logCurl(boolean logCurl);

    /**
     * Build the OpenAI moderation model instance.
     *
     * Returns:
     *     Configured OpenAiModerationModel ready for content moderation
     *
     * Constructs the moderation model with all specified configurations,
     * validates required parameters (API key, base URL), and prepares the
     * HTTP client with all configured properties including TLS, proxy,
     * and logging settings.
     *
     * Throws:
     *     IllegalArgumentException if required configuration is missing
     *     ConfigValidationException if configuration is invalid
     */
    @Override
    public OpenAiModerationModel build();

    /**
     * Public fields (direct access, though builder methods are recommended).
     */
    public String configName;               // Named configuration reference
    public String tlsConfigurationName;     // Named TLS configuration
    public boolean logCurl;                 // Curl logging flag
    public Proxy proxy;                     // HTTP proxy configuration
}

Inherited LangChain4j Builder Methods

The Builder class inherits the following methods from OpenAiModerationModel.OpenAiModerationModelBuilder:

/**
 * Set the OpenAI API base URL.
 *
 * Parameters:
 *     baseUrl - Base URL for OpenAI API (default: "https://api.openai.com/v1/")
 *
 * Returns:
 *     This builder for method chaining
 *
 * Override the default OpenAI API endpoint. Useful for:
 * - Using OpenAI-compatible services (LocalAI, vLLM, etc.)
 * - Routing through API gateways
 * - Testing with mock servers
 * - Using regional OpenAI endpoints
 *
 * Example:
 *     .baseUrl("https://api.openai.com/v1/")     // Default OpenAI
 *     .baseUrl("https://api-gateway.company.com/openai/") // API gateway
 */
public Builder baseUrl(String baseUrl);

/**
 * Set the OpenAI API key for authentication.
 *
 * Parameters:
 *     apiKey - OpenAI API key (format: sk-...)
 *
 * Returns:
 *     This builder for method chaining
 *
 * Required for authenticating with OpenAI API. API keys are obtained from
 * the OpenAI dashboard. Different keys can be used for different
 * environments (development, staging, production).
 *
 * SECURITY: Never hardcode API keys. Use environment variables or
 * secure configuration management.
 *
 * Example:
 *     .apiKey(System.getenv("OPENAI_API_KEY"))
 *     .apiKey("sk-proj-...")  // Only for demos, never in production
 */
public Builder apiKey(String apiKey);

/**
 * Set the OpenAI organization ID for multi-organization accounts.
 *
 * Parameters:
 *     organizationId - Organization identifier (format: org-...)
 *
 * Returns:
 *     This builder for method chaining
 *
 * Required for users belonging to multiple organizations. The organization
 * ID determines which organization's usage quota is consumed and which
 * organization receives usage analytics.
 *
 * Example:
 *     .organizationId("org-...")
 */
public Builder organizationId(String organizationId);

/**
 * Set the moderation model name to use.
 *
 * Parameters:
 *     modelName - Model identifier (default: "omni-moderation-latest")
 *
 * Returns:
 *     This builder for method chaining
 *
 * Available OpenAI moderation models:
 * - omni-moderation-latest: Latest multi-modal moderation (text and images)
 * - omni-moderation-2024-09-26: Specific omni-moderation version
 * - text-moderation-latest: Latest text-only moderation
 * - text-moderation-stable: Stable text moderation for consistent results
 * - text-moderation-007: Specific text moderation version
 *
 * The "latest" variants automatically use the newest model version,
 * while specific versions ensure consistent behavior over time.
 *
 * Example:
 *     .modelName("omni-moderation-latest")    // Best quality, auto-updates
 *     .modelName("text-moderation-stable")    // Consistent results
 */
public Builder modelName(String modelName);

/**
 * Set request timeout duration.
 *
 * Parameters:
 *     timeout - Maximum time to wait for API response
 *
 * Returns:
 *     This builder for method chaining
 *
 * Controls how long to wait for moderation API responses. Moderation
 * requests typically complete quickly (< 1 second), but timeout should
 * account for network latency and rate limiting.
 *
 * Example:
 *     .timeout(Duration.ofSeconds(10))   // Standard timeout
 *     .timeout(Duration.ofSeconds(30))   // More tolerant for slow networks
 */
public Builder timeout(Duration timeout);

/**
 * Set maximum number of retry attempts for failed requests.
 *
 * Parameters:
 *     maxRetries - Maximum retry attempts (minimum: 1)
 *
 * Returns:
 *     This builder for method chaining
 *
 * Configures automatic retry behavior for transient failures (network
 * errors, rate limiting, server errors). The client uses exponential
 * backoff between retries to avoid overwhelming the API.
 *
 * Example:
 *     .maxRetries(3)  // Retry up to 3 times on failure
 */
public Builder maxRetries(Integer maxRetries);

/**
 * Enable logging of moderation API requests.
 *
 * Parameters:
 *     logRequests - true to log requests, false to disable
 *
 * Returns:
 *     This builder for method chaining
 *
 * Logs complete API requests including headers and body content. Useful
 * for debugging integration issues and monitoring API usage patterns.
 *
 * WARNING: Request logs include content being moderated. Ensure logs
 * are secured and comply with privacy policies.
 *
 * Example:
 *     .logRequests(true)
 */
public Builder logRequests(Boolean logRequests);

/**
 * Enable logging of moderation API responses.
 *
 * Parameters:
 *     logResponses - true to log responses, false to disable
 *
 * Returns:
 *     This builder for method chaining
 *
 * Logs complete API responses including moderation results, category
 * scores, and flags. Essential for auditing moderation decisions and
 * tuning moderation policies.
 *
 * Example:
 *     .logResponses(true)
 */
public Builder logResponses(Boolean logResponses);

Configuration Reference

ModerationModelConfig

Configuration interface for moderation models, used with Quarkus SmallRye Config for declarative configuration.

/**
 * Configuration group for OpenAI moderation models.
 *
 * ConfigRoot: quarkus.langchain4j.openai.moderation-model
 * Named configs: quarkus.langchain4j.openai.{name}.moderation-model
 *
 * Enables declarative configuration of moderation models through
 * application.properties or application.yaml without programmatic setup.
 * All moderation model instances created via CDI injection automatically
 * use these configuration values.
 *
 * Example configuration:
 *     quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
 *     quarkus.langchain4j.openai.moderation-model.log-requests=false
 */
@ConfigGroup
public interface ModerationModelConfig {

    /**
     * The OpenAI moderation model name to use.
     *
     * Returns:
     *     Model identifier string
     *
     * Default: "omni-moderation-latest"
     *
     * Determines which OpenAI moderation model processes content. The
     * "latest" variants automatically use the newest model version with
     * improved accuracy and coverage, while specific versions ensure
     * consistent behavior for compliance requirements.
     *
     * Configuration:
     *     quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
     *     quarkus.langchain4j.openai.strict.moderation-model.model-name=text-moderation-stable
     */
    @WithDefault("omni-moderation-latest")
    String modelName();

    /**
     * Whether to log moderation API requests.
     *
     * Returns:
     *     Optional Boolean, empty uses parent logRequests setting
     *
     * Default: false (via ConfigDocDefault)
     *
     * Enables request logging for debugging and monitoring. When not set,
     * inherits from parent quarkus.langchain4j.openai.log-requests setting.
     * Request logs include content being moderated and API parameters.
     *
     * Configuration:
     *     quarkus.langchain4j.openai.moderation-model.log-requests=true
     */
    @ConfigDocDefault("false")
    Optional<Boolean> logRequests();

    /**
     * Whether to log moderation API responses.
     *
     * Returns:
     *     Optional Boolean, empty uses parent logResponses setting
     *
     * Default: false (via ConfigDocDefault)
     *
     * Enables response logging for audit trails and policy tuning. When
     * not set, inherits from parent quarkus.langchain4j.openai.log-responses
     * setting. Response logs include category scores and flagging decisions.
     *
     * Configuration:
     *     quarkus.langchain4j.openai.moderation-model.log-responses=true
     */
    @ConfigDocDefault("false")
    Optional<Boolean> logResponses();
}
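
The same settings can be expressed in application.yaml, equivalent to the properties form shown in the javadoc above:

```yaml
quarkus:
  langchain4j:
    openai:
      moderation-model:
        model-name: omni-moderation-latest
        log-requests: false
        log-responses: false
```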

Available Moderation Models

OpenAI provides several moderation models optimized for different use cases:

omni-moderation-latest

The latest multi-modal moderation model supporting both text and image content. Automatically updated to the newest version for best accuracy.

  • Capabilities: Text and image moderation
  • Categories: All categories including visual content violations
  • Use case: Modern applications with multi-modal content
  • Updates: Automatic, may change behavior

omni-moderation-2024-09-26

Specific version of the omni-moderation model, fixed at September 2024 release.

  • Capabilities: Text and image moderation
  • Categories: All categories including visual content violations
  • Use case: Applications requiring consistent behavior
  • Updates: None, frozen at specific version

text-moderation-latest

The latest text-only moderation model with automatic updates to newest version.

  • Capabilities: Text-only moderation
  • Categories: All text-based categories
  • Use case: Text-only applications wanting best accuracy
  • Updates: Automatic, may change behavior

text-moderation-stable

Stable text moderation model optimized for consistent results over time.

  • Capabilities: Text-only moderation
  • Categories: All text-based categories
  • Use case: Applications requiring stable, predictable results
  • Updates: Rare, only for critical fixes

text-moderation-007

Specific version of text moderation model, frozen at version 007.

  • Capabilities: Text-only moderation
  • Categories: All text-based categories
  • Use case: Compliance scenarios requiring version pinning
  • Updates: None, frozen at specific version
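
The auto-updating versus pinned trade-off described above can be captured with named configurations, so different parts of an application use different model versions. A sketch, where the config name compliance is illustrative:

```text
# Default model auto-updates for best accuracy
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest

# Pinned model for workloads requiring reproducible moderation decisions
quarkus.langchain4j.openai.compliance.moderation-model.model-name=text-moderation-007
```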

Moderation Categories

OpenAI's moderation API evaluates content across multiple policy categories, each with a score (0.0 to 1.0) and flag (true/false):

Primary Categories

hate

  • Content expressing, inciting, or promoting hate based on identity
  • Includes: racism, sexism, religious discrimination
  • Example violations: slurs, dehumanizing language, calls for segregation

harassment

  • Content intended to torment, humiliate, or intimidate individuals
  • Includes: bullying, threatening statements, coordinated harassment
  • Example violations: personal attacks, doxxing, revenge content

self-harm

  • Content encouraging, promoting, or depicting self-injury
  • Includes: suicide encouragement, eating disorder promotion, cutting
  • Example violations: suicide instructions, pro-ana content

sexual

  • Sexually explicit or suggestive content
  • Includes: sexual acts, erotic content, sexual services
  • Example violations: explicit descriptions, sexual solicitation

violence

  • Content depicting, glorifying, or encouraging violence
  • Includes: physical violence, weapons, gore
  • Example violations: violence instructions, graphic descriptions, threats

Sub-Categories

hate/threatening

  • Hateful content that includes violence or serious harm threats
  • More severe than general hate category
  • Example: "We should eliminate all [protected group]"

harassment/threatening

  • Harassment including credible threats
  • More severe than general harassment
  • Example: "I know where you live and I'm coming for you"

self-harm/intent

  • Content expressing personal intent to self-harm
  • Indicates immediate risk vs. general discussion
  • Example: "I'm going to hurt myself tonight"

self-harm/instructions

  • Detailed instructions or encouragement for self-harm
  • More severe than general discussion
  • Example: Step-by-step self-injury guides

sexual/minors

  • Sexual content involving or suggesting minors
  • Zero tolerance, highest severity
  • Example: Any sexualization of children

violence/graphic

  • Extremely graphic violence descriptions or depictions
  • More severe than general violence
  • Example: Detailed descriptions of gore or injury

Category Scores and Flagging

Each moderation result includes:

  1. Category Scores - Confidence scores (0.0 to 1.0) for each category

    • Higher scores indicate stronger policy violations
    • Scores below OpenAI's thresholds don't trigger flags
    • Useful for custom policy enforcement
  2. Category Flags - Boolean flags indicating policy violations

    • true means content violates OpenAI's policies for that category
    • Based on OpenAI's internal thresholds
    • Used by @Moderate annotation to block content
  3. Overall Flagged - Boolean indicating if any category was flagged

    • true if one or more categories were flagged
    • Used for quick violation detection
    • Triggers ModerationException in AI services

Example moderation result:

{
  "flagged": true,
  "categories": {
    "hate": true,
    "violence": false,
    "sexual": false
  },
  "category_scores": {
    "hate": 0.95,        // High score, flagged
    "violence": 0.12,    // Low score, not flagged
    "sexual": 0.03       // Very low score, not flagged
  }
}
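
Because scores are returned alongside flags, applications can enforce stricter thresholds than OpenAI's defaults. A minimal, self-contained sketch of such a check (the ThresholdPolicy class and the hard-coded category names are illustrative, not part of the extension):

```java
import java.util.Map;

public class ThresholdPolicy {

    /**
     * Returns true when any category score exceeds its configured threshold.
     * Categories absent from the scores map are treated as 0.0.
     */
    public static boolean violates(Map<String, Double> scores,
                                   Map<String, Double> thresholds) {
        return thresholds.entrySet().stream()
                .anyMatch(e -> scores.getOrDefault(e.getKey(), 0.0) > e.getValue());
    }

    public static void main(String[] args) {
        Map<String, Double> scores = Map.of("hate", 0.95, "violence", 0.12, "sexual", 0.03);
        Map<String, Double> thresholds = Map.of("hate", 0.7, "violence", 0.7, "sexual", 0.7);
        System.out.println(violates(scores, thresholds)); // prints "true": hate 0.95 exceeds 0.7
    }
}
```

Lowering a threshold below OpenAI's internal flagging point lets a policy reject content that the API scored but did not flag.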

Usage Examples

Example 1: Basic Content Moderation with CDI

Simplest approach using CDI injection with default configuration:

import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;

public class ContentModerationService {

    @Inject
    ModerationModel moderationModel;

    public void validateUserComment(String comment) {
        // Moderate user-generated content
        Response<Moderation> response = moderationModel.moderate(comment);
        Moderation moderation = response.content();

        if (moderation.flagged()) {
            // Content violates policies
            throw new IllegalArgumentException(
                "Comment violates content policy: " +
                moderation.flaggedText()
            );
        }

        // Safe to process comment
        processComment(comment);
    }

    private void processComment(String comment) {
        // Store comment in database, etc.
    }
}

Configuration in application.properties:

quarkus.langchain4j.openai.api-key=sk-...
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest

Example 2: Moderating User-Generated Content with Detailed Analysis

Advanced moderation with category-specific handling. (Note: the per-category accessors used below, such as hateSpeech() and selfHarm(), are illustrative; the base LangChain4j Moderation type exposes flagged() and flaggedText(), so per-category scores may require a custom wrapper or direct access to the raw API response.)

import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import org.jboss.logging.Logger;

public class ContentModerationService {

    private static final Logger LOG = Logger.getLogger(ContentModerationService.class);

    @Inject
    ModerationModel moderationModel;

    public ModerationResult moderateUserContent(String content) {
        Response<Moderation> response = moderationModel.moderate(content);
        Moderation moderation = response.content();

        ModerationResult result = new ModerationResult();
        result.setContent(content);
        result.setFlagged(moderation.flagged());

        // Analyze specific categories for detailed reporting
        if (moderation.flagged()) {
            if (moderation.hateSpeech() > 0.7) {
                result.addViolation("hate", moderation.hateSpeech(),
                    "Content contains hate speech");
                LOG.warnf("Hate speech detected: score=%.2f",
                    moderation.hateSpeech());
            }

            if (moderation.sexualContent() > 0.7) {
                result.addViolation("sexual", moderation.sexualContent(),
                    "Content contains sexual material");
            }

            if (moderation.violence() > 0.7) {
                result.addViolation("violence", moderation.violence(),
                    "Content contains violent material");
            }

            if (moderation.selfHarm() > 0.7) {
                result.addViolation("self-harm", moderation.selfHarm(),
                    "Content discusses self-harm");
                // Escalate to support team
                notifySupportTeam(content, moderation);
            }
        }

        return result;
    }

    private void notifySupportTeam(String content, Moderation moderation) {
        // Send alert to human moderators for high-risk content
    }

    public static class ModerationResult {
        private String content;
        private boolean flagged;
        private List<Violation> violations = new ArrayList<>();

        public void addViolation(String category, double score, String message) {
            violations.add(new Violation(category, score, message));
        }

        // Getters and setters
    }

    public static class Violation {
        private String category;
        private double score;
        private String message;

        public Violation(String category, double score, String message) {
            this.category = category;
            this.score = score;
            this.message = message;
        }

        // Getters
    }
}

Example 3: Moderating Chat Messages with @Moderate Annotation

Automatic moderation integrated into AI services:

import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.Moderate;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;

@RegisterAiService(
    moderationModelSupplier = RegisterAiService.BeanIfExistsModerationModelSupplier.class
)
public interface SafeChatService {

    /**
     * Chat method with automatic input moderation.
     *
     * The @Moderate annotation automatically screens user input before
     * sending to the chat model. If content is flagged, a
     * ModerationException is thrown, preventing unsafe content from
     * reaching the model.
     */
    @SystemMessage("You are a helpful assistant for customer support.")
    @Moderate
    String chat(@UserMessage String userMessage);
}

Usage with exception handling:

import jakarta.inject.Inject;
import dev.langchain4j.service.ModerationException;

public class CustomerSupportController {

    @Inject
    SafeChatService chatService;

    public String handleUserQuery(String query) {
        try {
            // Input is automatically moderated before processing
            String response = chatService.chat(query);
            return response;

        } catch (ModerationException e) {
            // User input violated content policy
            return "Your message contains inappropriate content and " +
                   "cannot be processed. Please revise your message.";
        }
    }
}

Configuration for automatic moderation:

quarkus.langchain4j.openai.api-key=sk-...
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest

Example 4: Custom Threshold Handling with Multiple Policies

Implementing custom moderation policies with different thresholds (as in Example 2, the per-category score accessors shown here are illustrative):

import jakarta.enterprise.context.ApplicationScoped;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.openai.OpenAiModerationModel;

@ApplicationScoped
public class PolicyEnforcementService {

    private final ModerationModel strictModel;
    private final ModerationModel lenientModel;

    public PolicyEnforcementService() {
        // Strict policy for public content
        this.strictModel = OpenAiModerationModel.builder()
            .configName("strict")
            .modelName("text-moderation-stable")
            .build();

        // Lenient policy for internal content
        this.lenientModel = OpenAiModerationModel.builder()
            .configName("lenient")
            .modelName("text-moderation-stable")
            .build();
    }

    /**
     * Enforce strict policy with custom thresholds.
     * Used for public-facing content like comments and posts.
     */
    public boolean isPublicContentSafe(String content) {
        Response<Moderation> response = strictModel.moderate(content);
        Moderation moderation = response.content();

        // Custom strict thresholds for public content
        if (moderation.hateSpeech() > 0.3) return false;
        if (moderation.violence() > 0.3) return false;
        if (moderation.sexualContent() > 0.2) return false;
        if (moderation.selfHarm() > 0.1) return false;  // Zero tolerance

        return true;
    }

    /**
     * Enforce lenient policy for internal communications.
     * Used for private messages between verified users.
     */
    public boolean isInternalContentSafe(String content) {
        Response<Moderation> response = lenientModel.moderate(content);
        Moderation moderation = response.content();

        // More lenient thresholds for internal content
        if (moderation.hateSpeech() > 0.8) return false;
        if (moderation.violence() > 0.8) return false;
        if (moderation.selfHarm() > 0.5) return false;

        return true;
    }

    /**
     * Get detailed moderation scores for risk assessment.
     */
    public ContentRisk assessContentRisk(String content) {
        Response<Moderation> response = strictModel.moderate(content);
        Moderation moderation = response.content();

        ContentRisk risk = new ContentRisk();
        risk.setHateScore(moderation.hateSpeech());
        risk.setViolenceScore(moderation.violence());
        risk.setSexualScore(moderation.sexualContent());
        risk.setSelfHarmScore(moderation.selfHarm());

        // Calculate overall risk level
        double maxScore = Math.max(
            Math.max(moderation.hateSpeech(), moderation.violence()),
            Math.max(moderation.sexualContent(), moderation.selfHarm())
        );

        if (maxScore > 0.8) risk.setLevel(RiskLevel.HIGH);
        else if (maxScore > 0.5) risk.setLevel(RiskLevel.MEDIUM);
        else if (maxScore > 0.3) risk.setLevel(RiskLevel.LOW);
        else risk.setLevel(RiskLevel.MINIMAL);

        return risk;
    }

    public enum RiskLevel {
        MINIMAL, LOW, MEDIUM, HIGH
    }

    public static class ContentRisk {
        private double hateScore;
        private double violenceScore;
        private double sexualScore;
        private double selfHarmScore;
        private RiskLevel level;

        // Getters and setters
    }
}

Configuration in application.properties:

# Strict policy configuration
quarkus.langchain4j.openai.strict.api-key=sk-...
quarkus.langchain4j.openai.strict.moderation-model.model-name=text-moderation-stable
quarkus.langchain4j.openai.strict.moderation-model.log-responses=true

# Lenient policy configuration
quarkus.langchain4j.openai.lenient.api-key=sk-...
quarkus.langchain4j.openai.lenient.moderation-model.model-name=text-moderation-stable

Example 5: Batch Moderation for Multiple Texts

Efficient moderation of multiple content items:

import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class BatchModerationService {

    @Inject
    ModerationModel moderationModel;

    private final ExecutorService executor =
        Executors.newFixedThreadPool(10);

    @jakarta.annotation.PreDestroy
    void shutdown() {
        // Release the pool's threads when the bean is disposed
        executor.shutdown();
    }

    /**
     * Moderate multiple content items in parallel.
     * Returns one moderation result per input, in input order.
     */
    public List<ModerationResult> moderateBatch(List<String> contents) {
        // Create futures for parallel moderation
        List<CompletableFuture<ModerationResult>> futures = contents.stream()
            .map(content -> CompletableFuture.supplyAsync(() -> {
                Response<Moderation> response = moderationModel.moderate(content);
                Moderation moderation = response.content();
                return new ModerationResult(content, moderation);
            }, executor))
            .collect(Collectors.toList());

        // Wait for all moderations to complete
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .join();

        // Collect results
        return futures.stream()
            .map(CompletableFuture::join)
            .collect(Collectors.toList());
    }

    /**
     * Moderate batch and filter out flagged content.
     */
    public List<String> filterSafeContent(List<String> contents) {
        List<ModerationResult> results = moderateBatch(contents);

        return results.stream()
            .filter(result -> !result.getModeration().flagged())
            .map(ModerationResult::getContent)
            .collect(Collectors.toList());
    }

    /**
     * Moderate batch and get statistics.
     */
    public ModerationStatistics getBatchStatistics(List<String> contents) {
        List<ModerationResult> results = moderateBatch(contents);

        ModerationStatistics stats = new ModerationStatistics();
        stats.setTotal(results.size());
        stats.setFlagged(results.stream()
            .filter(r -> r.getModeration().flagged())
            .count());
        stats.setSafe(stats.getTotal() - stats.getFlagged());

        // Category breakdown
        for (ModerationResult result : results) {
            Moderation mod = result.getModeration();
            if (mod.hateSpeech() > 0.7) stats.incrementHate();
            if (mod.violence() > 0.7) stats.incrementViolence();
            if (mod.sexualContent() > 0.7) stats.incrementSexual();
            if (mod.selfHarm() > 0.7) stats.incrementSelfHarm();
        }

        return stats;
    }

    public static class ModerationResult {
        private String content;
        private Moderation moderation;

        public ModerationResult(String content, Moderation moderation) {
            this.content = content;
            this.moderation = moderation;
        }

        public String getContent() { return content; }
        public Moderation getModeration() { return moderation; }
    }

    public static class ModerationStatistics {
        private int total;
        private long flagged;
        private long safe;
        private int hate;
        private int violence;
        private int sexual;
        private int selfHarm;

        public void incrementHate() { hate++; }
        public void incrementViolence() { violence++; }
        public void incrementSexual() { sexual++; }
        public void incrementSelfHarm() { selfHarm++; }

        // Getters and setters
    }
}

Usage example:

import jakarta.inject.Inject;
import java.util.List;
import java.util.stream.Collectors;
import org.jboss.logging.Logger;

public class CommentModerationService {

    private static final Logger LOG =
        Logger.getLogger(CommentModerationService.class);

    @Inject
    BatchModerationService batchModeration;

    public void moderateNewComments(List<Comment> comments) {
        // Extract comment text
        List<String> texts = comments.stream()
            .map(Comment::getText)
            .collect(Collectors.toList());

        // Moderate all comments
        List<String> safeTexts = batchModeration.filterSafeContent(texts);

        // Get statistics
        BatchModerationService.ModerationStatistics stats =
            batchModeration.getBatchStatistics(texts);
        LOG.infof("Moderated %d comments: %d safe, %d flagged",
            stats.getTotal(), stats.getSafe(), stats.getFlagged());

        // Approve safe comments
        comments.stream()
            .filter(c -> safeTexts.contains(c.getText()))
            .forEach(Comment::approve);
    }
}
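
For large comment batches, note that the safeTexts.contains(...) lookup above is linear per comment. The same filter can be kept O(1) per item by copying the safe texts into a Set first. A minimal standalone sketch of that variation (the class and method names here are illustrative, not part of the extension):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SafeFilter {

    /**
     * Returns the subset of texts that appear in safeTexts,
     * preserving the original order of the input list.
     */
    public static List<String> keepSafe(List<String> texts, List<String> safeTexts) {
        // O(1) membership checks instead of List.contains per element
        Set<String> safe = new HashSet<>(safeTexts);
        return texts.stream()
            .filter(safe::contains)
            .collect(Collectors.toList());
    }
}
```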

Integration Patterns

Pre-Generation Moderation

Validate user prompts before sending to chat models:

@RegisterAiService
public interface SafeAssistant {
    @Moderate  // Moderates input before generation
    String assist(String userPrompt);
}

Post-Generation Moderation

Validate generated content before returning to users:

public class SafeGenerationService {
    @Inject ModerationModel moderationModel;
    @Inject ChatModel chatModel;

    public String generateSafeResponse(String prompt) {
        String response = chatModel.chat(prompt);

        // Moderate generated content
        Response<Moderation> modResult =
            moderationModel.moderate(response);

        if (modResult.content().flagged()) {
            return "I cannot provide that response due to content policies.";
        }

        return response;
    }
}

Combined Moderation

Moderate both input and output for maximum safety:

@RegisterAiService(
    moderationModelSupplier = BeanIfExistsModerationModelSupplier.class
)
public interface DoublySafeAssistant {
    @Moderate  // Moderates input
    String assist(String userPrompt);
}

public class FullSafetyService {
    @Inject DoublySafeAssistant assistant;
    @Inject ModerationModel moderationModel;

    public String safeInteraction(String prompt) {
        try {
            // Input automatically moderated via @Moderate
            String response = assistant.assist(prompt);

            // Also moderate output
            Response<Moderation> outputCheck =
                moderationModel.moderate(response);

            if (outputCheck.content().flagged()) {
                return "Response was filtered due to content policies.";
            }

            return response;

        } catch (ModerationException e) {
            return "Input was filtered due to content policies.";
        }
    }
}

Best Practices

Security

  1. Never log flagged content without sanitization - Persisting raw policy-violating text can itself create compliance issues
  2. Use environment variables for API keys - Never hardcode credentials
  3. Implement rate limiting - Prevent abuse of moderation API
  4. Audit moderation decisions - Log moderation results for review
  5. Handle edge cases - Network failures, API errors, timeout scenarios

Performance

  1. Batch moderation when possible - More efficient for multiple items
  2. Cache moderation results - Avoid re-moderating identical content
  3. Use async moderation - Don't block user interactions
  4. Set appropriate timeouts - Balance responsiveness with reliability
  5. Monitor API usage - Track costs and rate limit compliance
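
Caching (item 2 above) needs no framework wiring to sketch. In the snippet below, the moderate function parameter stands in for a call to an injected ModerationModel; the class name and that indirection are assumptions of this sketch, not part of the extension:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Caches moderation results by content so identical text is
 * moderated at most once. ConcurrentHashMap.computeIfAbsent
 * makes the cache safe for concurrent callers.
 */
public class CachingModerator<R> {

    private final Map<String, R> cache = new ConcurrentHashMap<>();
    private final Function<String, R> moderate;

    public CachingModerator(Function<String, R> moderate) {
        this.moderate = moderate;
    }

    /** Returns the cached result, invoking the underlying moderator only on a cache miss. */
    public R moderate(String content) {
        return cache.computeIfAbsent(content, moderate);
    }
}
```

In a real service, consider bounding the cache (size or TTL) so long-running applications do not retain every piece of content ever moderated.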

Policy Design

  1. Start with OpenAI's flags - Use default thresholds initially
  2. Tune based on your use case - Adjust thresholds for your audience
  3. Different policies for different contexts - Public vs. private content
  4. Human review for edge cases - Escalate unclear violations
  5. Regular policy review - Update as models improve

User Experience

  1. Clear error messages - Explain why content was rejected
  2. Provide guidance - Help users understand policies
  3. Allow appeals - Enable human review of automated decisions
  4. Fast feedback - Moderate before expensive operations
  5. Graceful degradation - Have fallback when moderation fails
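
Graceful degradation (item 5 above) usually comes down to deciding whether a moderation failure fails open (content passes) or fails closed (content is blocked). A minimal fail-closed sketch, using a Predicate stand-in for the actual moderation call; the class name and the fail-closed policy choice are assumptions here:

```java
import java.util.function.Predicate;

public class SafeModeration {

    /**
     * Returns true only when the moderation check both succeeds
     * and reports the content as safe. If moderation itself fails
     * (network error, timeout), the content is treated as unsafe.
     */
    public static boolean isSafe(String content, Predicate<String> check) {
        try {
            return check.test(content);
        } catch (RuntimeException e) {
            // Moderation unavailable: fail closed rather than
            // let unchecked content through
            return false;
        }
    }
}
```

Whether failing closed is appropriate depends on the application: a public chat product may prefer it, while an internal tool might fail open and log the gap for later review.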

Troubleshooting

Common Issues

ModerationException thrown unexpectedly

  • Check if content actually violates policies
  • Review category scores for borderline cases
  • Consider custom thresholds instead of default flags
  • Test with different model versions

High false positive rate

  • Use custom thresholds instead of boolean flags
  • Switch to text-moderation-stable for consistency
  • Review specific category scores causing issues
  • Consider human review workflow for edge cases

Slow moderation performance

  • Implement batch moderation for multiple items
  • Use async moderation with CompletableFuture
  • Cache results for identical content
  • Check network latency and timeout settings

API rate limiting

  • Implement exponential backoff retry logic
  • Use batch endpoints when available
  • Cache moderation results
  • Consider upgrading API tier
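
The exponential backoff suggestion above can be sketched as a small framework-free helper. Everything here (class name, delay formula, cap) is an illustrative assumption; in practice a library such as SmallRye Fault Tolerance's @Retry would typically be used instead:

```java
import java.util.concurrent.Callable;

public class Backoff {

    /** Delay before attempt n (0-based): baseMillis * 2^n, capped at maxMillis. */
    public static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        long delay = baseMillis << Math.min(attempt, 20);
        return Math.min(delay, maxMillis);
    }

    /**
     * Runs the call, retrying with exponential backoff on failure
     * (e.g. an HTTP 429 from the moderation API).
     * Assumes maxAttempts > 0; rethrows the last failure when exhausted.
     */
    public static <T> T withRetry(Callable<T> call, int maxAttempts, long baseMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(delayMillis(attempt, baseMillis, 30_000));
            }
        }
        throw last;
    }
}
```

Production code would usually retry only on retryable failures (rate limits, transient network errors) and add jitter to the delay so concurrent clients do not retry in lockstep.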

Configuration Examples

Development configuration with verbose logging:

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
quarkus.langchain4j.openai.moderation-model.log-requests=true
quarkus.langchain4j.openai.moderation-model.log-responses=true
quarkus.langchain4j.openai.log-requests-curl=true

Production configuration with security:

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=text-moderation-stable
quarkus.langchain4j.openai.moderation-model.log-requests=false
quarkus.langchain4j.openai.moderation-model.log-responses=false
quarkus.langchain4j.openai.timeout=30s

Multiple policy configuration:

# Default strict policy
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=text-moderation-stable

# Lenient policy for internal use
quarkus.langchain4j.openai.lenient.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.lenient.moderation-model.model-name=text-moderation-stable

# Strict policy for public content
quarkus.langchain4j.openai.strict.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.strict.moderation-model.model-name=omni-moderation-latest

Related Documentation

  • Chat Models - For integrating moderation with chat models
  • Configuration - For detailed configuration options
  • OpenAI Moderation API - Official OpenAI documentation
