The Quarkus LangChain4j OpenAI extension provides seamless integration between Quarkus and OpenAI's Large Language Models, enabling developers to incorporate LLMs into their applications with support for chat, streaming, embeddings, moderation, and image generation.
Comprehensive API reference for OpenAI moderation models in Quarkus, providing content safety and policy enforcement capabilities. The extension enables automatic content filtering, policy violation detection, and safe AI application development through OpenAI's moderation API.
Content moderation is essential for building safe AI applications that prevent harmful content from being processed or generated. OpenAI's moderation API provides real-time content policy violation detection across multiple categories including hate speech, self-harm, sexual content, and violence.
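As a quick orientation before the detailed reference, here is a minimal sketch of that flow, assuming the CDI-injected default ModerationModel covered in the usage examples later in this document (the class and method names are illustrative):
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.output.Response;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
@ApplicationScoped
public class PolicyCheck {
    @Inject
    ModerationModel moderationModel; // default model configured via quarkus.langchain4j.openai.*
    // Returns true if OpenAI flags the text for any policy category.
    public boolean violatesPolicy(String text) {
        Response<Moderation> response = moderationModel.moderate(text);
        return response.content().flagged();
    }
}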
The Quarkus LangChain4j OpenAI extension integrates with OpenAI's moderation capabilities through CDI injection of ModerationModel beans, declarative configuration properties, and the @Moderate annotation for AI services.
The moderation model implementation uses an SPI-based pattern in which Quarkus-enhanced builders are used automatically when creating OpenAI moderation models. These builders extend LangChain4j's base builders to add named configurations, TLS configuration management, HTTP proxy support, and curl-style request logging.
Factory class implementing the Service Provider Interface for creating Quarkus-enhanced OpenAI moderation model builders.
/**
* SPI factory for creating OpenAI moderation models with Quarkus extensions.
*
* Registered via: META-INF/services/dev.langchain4j.model.openai.spi.OpenAiModerationModelBuilderFactory
*
* This factory is automatically discovered and used when calling
* OpenAiModerationModel.builder(), providing Quarkus-specific functionality
* transparently through Java's Service Provider Interface mechanism.
*
* The factory pattern enables seamless integration with Quarkus features
* like named configurations, TLS management, and HTTP proxies without
* modifying LangChain4j code.
*/
public class QuarkusOpenAiModerationModelBuilderFactory
implements OpenAiModerationModelBuilderFactory {
/**
* Creates a new Quarkus-enhanced moderation model builder instance.
*
* Returns:
* Builder instance with both Quarkus-specific and LangChain4j methods
*
* This method is automatically invoked by LangChain4j's SPI discovery
* mechanism when OpenAiModerationModel.builder() is called, ensuring
* Quarkus enhancements are available without explicit API changes.
*/
@Override
public OpenAiModerationModel.OpenAiModerationModelBuilder get();
}
Enhanced builder class extending LangChain4j's OpenAiModerationModelBuilder with Quarkus-specific methods.
/**
* Enhanced builder for OpenAI moderation models with Quarkus features.
*
* Extends: dev.langchain4j.model.openai.OpenAiModerationModel.OpenAiModerationModelBuilder
*
* Provides all standard LangChain4j builder methods plus Quarkus-specific
* enhancements for enterprise integration. This builder enables both
* programmatic configuration and integration with Quarkus configuration
* system.
*
* Usage:
* ModerationModel model = OpenAiModerationModel.builder()
* .configName("strict-policy") // Quarkus-specific
* .tlsConfigurationName("custom-tls") // Quarkus-specific
* .proxy(proxy) // Quarkus-specific
* .logCurl(true) // Quarkus-specific
* .apiKey("sk-...") // LangChain4j inherited
* .modelName("omni-moderation-latest") // LangChain4j inherited
* .build();
*/
public static class Builder extends OpenAiModerationModel.OpenAiModerationModelBuilder {
/**
* Set the named configuration to use for this moderation model.
*
* Parameters:
* configName - Name of configuration defined in application.properties
*
* Returns:
* This builder for method chaining
*
* When specified, the builder loads settings from the named configuration
* instead of the default configuration. This enables multiple moderation
* models with different policies or API keys.
*
* For example, configName("strict-policy") loads configuration from
* quarkus.langchain4j.openai.strict-policy.* properties, allowing
* different moderation thresholds for different content types.
*
* Example:
* .configName("strict-policy") // Uses quarkus.langchain4j.openai.strict-policy.*
* .configName("lenient") // Uses quarkus.langchain4j.openai.lenient.*
*/
public Builder configName(String configName);
/**
* Set the named TLS configuration for secure HTTPS connections.
*
* Parameters:
* tlsConfigurationName - Name of Quarkus TLS configuration
*
* Returns:
* This builder for method chaining
*
* References a Quarkus named TLS configuration defined via
* quarkus.tls.{name}.* properties. This enables custom certificates,
* mutual TLS authentication, custom trust stores, or specific cipher
* suites required in enterprise environments.
*
* Essential for organizations requiring custom certificate authorities,
* client certificate authentication, or specific security policies.
*
* Example:
* .tlsConfigurationName("enterprise-ca") // Custom CA certificates
* .tlsConfigurationName("mtls") // Mutual TLS authentication
*/
public Builder tlsConfigurationName(String tlsConfigurationName);
/**
* Set HTTP proxy for network traffic routing.
*
* Parameters:
* proxy - Java Proxy instance (HTTP or SOCKS)
*
* Returns:
* This builder for method chaining
*
* Configures HTTP or SOCKS proxy for moderation API requests. Required
* in corporate environments where direct internet access is restricted
* and all HTTP traffic must route through corporate proxies.
*
* The proxy configuration applies to all requests made by this
* moderation model instance, including retries and error scenarios.
*
* Example:
* Proxy proxy = new Proxy(Proxy.Type.HTTP,
* new InetSocketAddress("proxy.company.com", 8080));
* .proxy(proxy)
*/
public Builder proxy(Proxy proxy);
/**
* Enable curl-style request logging for debugging.
*
* Parameters:
* logCurl - true to enable curl logging, false to disable
*
* Returns:
* This builder for method chaining
*
* When enabled, logs each API request in curl command format, allowing
* developers to reproduce requests manually for debugging. The curl
* commands include all headers, body content, and can be executed
* directly in a terminal.
*
* WARNING: Curl logs include API keys and request content. Only enable
* in development environments or ensure logs are properly secured.
*
* Example:
* .logCurl(true) // Logs: curl -X POST https://api.openai.com/v1/moderations ...
*/
public Builder logCurl(boolean logCurl);
/**
* Build the OpenAI moderation model instance.
*
* Returns:
* Configured OpenAiModerationModel ready for content moderation
*
* Constructs the moderation model with all specified configurations,
* validates required parameters (API key, base URL), and prepares the
* HTTP client with all configured properties including TLS, proxy,
* and logging settings.
*
* Throws:
* IllegalArgumentException if required configuration is missing
* ConfigValidationException if configuration is invalid
*/
@Override
public OpenAiModerationModel build();
/**
* Public fields (direct access, though builder methods are recommended).
*/
public String configName; // Named configuration reference
public String tlsConfigurationName; // Named TLS configuration
public boolean logCurl; // Curl logging flag
public Proxy proxy; // HTTP proxy configuration
}
The Builder class inherits the following methods from OpenAiModerationModel.OpenAiModerationModelBuilder:
/**
* Set the OpenAI API base URL.
*
* Parameters:
* baseUrl - Base URL for OpenAI API (default: "https://api.openai.com/v1/")
*
* Returns:
* This builder for method chaining
*
* Override the default OpenAI API endpoint. Useful for:
* - Using OpenAI-compatible services (LocalAI, vLLM, etc.)
* - Routing through API gateways
* - Testing with mock servers
* - Using regional OpenAI endpoints
*
* Example:
* .baseUrl("https://api.openai.com/v1/") // Default OpenAI
* .baseUrl("https://api-gateway.company.com/openai/") // API gateway
*/
public Builder baseUrl(String baseUrl);
/**
* Set the OpenAI API key for authentication.
*
* Parameters:
* apiKey - OpenAI API key (format: sk-...)
*
* Returns:
* This builder for method chaining
*
* Required for authenticating with OpenAI API. API keys are obtained from
* the OpenAI dashboard. Different keys can be used for different
* environments (development, staging, production).
*
* SECURITY: Never hardcode API keys. Use environment variables or
* secure configuration management.
*
* Example:
* .apiKey(System.getenv("OPENAI_API_KEY"))
* .apiKey("sk-proj-...") // Only for demos, never in production
*/
public Builder apiKey(String apiKey);
/**
* Set the OpenAI organization ID for multi-organization accounts.
*
* Parameters:
* organizationId - Organization identifier (format: org-...)
*
* Returns:
* This builder for method chaining
*
* Required for users belonging to multiple organizations. The organization
* ID determines which organization's usage quota is consumed and which
* organization receives usage analytics.
*
* Example:
* .organizationId("org-...")
*/
public Builder organizationId(String organizationId);
/**
* Set the moderation model name to use.
*
* Parameters:
* modelName - Model identifier (default: "omni-moderation-latest")
*
* Returns:
* This builder for method chaining
*
* Available OpenAI moderation models:
* - omni-moderation-latest: Latest multi-modal moderation (text and images)
* - omni-moderation-2024-09-26: Specific omni-moderation version
* - text-moderation-latest: Latest text-only moderation
* - text-moderation-stable: Stable text moderation for consistent results
* - text-moderation-007: Specific text moderation version
*
* The "latest" variants automatically use the newest model version,
* while specific versions ensure consistent behavior over time.
*
* Example:
* .modelName("omni-moderation-latest") // Best quality, auto-updates
* .modelName("text-moderation-stable") // Consistent results
*/
public Builder modelName(String modelName);
/**
* Set request timeout duration.
*
* Parameters:
* timeout - Maximum time to wait for API response
*
* Returns:
* This builder for method chaining
*
* Controls how long to wait for moderation API responses. Moderation
* requests typically complete quickly (< 1 second), but timeout should
* account for network latency and rate limiting.
*
* Example:
* .timeout(Duration.ofSeconds(10)) // Standard timeout
* .timeout(Duration.ofSeconds(30)) // More tolerant for slow networks
*/
public Builder timeout(Duration timeout);
/**
* Set maximum number of retry attempts for failed requests.
*
* Parameters:
* maxRetries - Maximum retry attempts (minimum: 1)
*
* Returns:
* This builder for method chaining
*
* Configures automatic retry behavior for transient failures (network
* errors, rate limiting, server errors). The client uses exponential
* backoff between retries to avoid overwhelming the API.
*
* Example:
* .maxRetries(3) // Retry up to 3 times on failure
*/
public Builder maxRetries(Integer maxRetries);
/**
* Enable logging of moderation API requests.
*
* Parameters:
* logRequests - true to log requests, false to disable
*
* Returns:
* This builder for method chaining
*
* Logs complete API requests including headers and body content. Useful
* for debugging integration issues and monitoring API usage patterns.
*
* WARNING: Request logs include content being moderated. Ensure logs
* are secured and comply with privacy policies.
*
* Example:
* .logRequests(true)
*/
public Builder logRequests(Boolean logRequests);
/**
* Enable logging of moderation API responses.
*
* Parameters:
* logResponses - true to log responses, false to disable
*
* Returns:
* This builder for method chaining
*
* Logs complete API responses including moderation results, category
* scores, and flags. Essential for auditing moderation decisions and
* tuning moderation policies.
*
* Example:
* .logResponses(true)
*/
public Builder logResponses(Boolean logResponses);
Configuration interface for moderation models, used with Quarkus SmallRye Config for declarative configuration.
/**
* Configuration group for OpenAI moderation models.
*
* ConfigRoot: quarkus.langchain4j.openai.moderation-model
* Named configs: quarkus.langchain4j.openai.{name}.moderation-model
*
* Enables declarative configuration of moderation models through
* application.properties or application.yaml without programmatic setup.
* All moderation model instances created via CDI injection automatically
* use these configuration values.
*
* Example configuration:
* quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
* quarkus.langchain4j.openai.moderation-model.log-requests=false
*/
@ConfigGroup
public interface ModerationModelConfig {
/**
* The OpenAI moderation model name to use.
*
* Returns:
* Model identifier string
*
* Default: "omni-moderation-latest"
*
* Determines which OpenAI moderation model processes content. The
* "latest" variants automatically use the newest model version with
* improved accuracy and coverage, while specific versions ensure
* consistent behavior for compliance requirements.
*
* Configuration:
* quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
* quarkus.langchain4j.openai.strict.moderation-model.model-name=text-moderation-stable
*/
@WithDefault("omni-moderation-latest")
String modelName();
/**
* Whether to log moderation API requests.
*
* Returns:
* Optional Boolean, empty uses parent logRequests setting
*
* Default: false (via ConfigDocDefault)
*
* Enables request logging for debugging and monitoring. When not set,
* inherits from parent quarkus.langchain4j.openai.log-requests setting.
* Request logs include content being moderated and API parameters.
*
* Configuration:
* quarkus.langchain4j.openai.moderation-model.log-requests=true
*/
@ConfigDocDefault("false")
Optional<Boolean> logRequests();
/**
* Whether to log moderation API responses.
*
* Returns:
* Optional Boolean, empty uses parent logResponses setting
*
* Default: false (via ConfigDocDefault)
*
* Enables response logging for audit trails and policy tuning. When
* not set, inherits from parent quarkus.langchain4j.openai.log-responses
* setting. Response logs include category scores and flagging decisions.
*
* Configuration:
* quarkus.langchain4j.openai.moderation-model.log-responses=true
*/
@ConfigDocDefault("false")
Optional<Boolean> logResponses();
}
OpenAI provides several moderation models optimized for different use cases:
omni-moderation-latest - The latest multi-modal moderation model supporting both text and image content. Automatically updated to the newest version for best accuracy.
omni-moderation-2024-09-26 - Specific version of the omni-moderation model, fixed at the September 2024 release.
text-moderation-latest - The latest text-only moderation model, automatically updated to the newest version.
text-moderation-stable - Stable text moderation model optimized for consistent results over time.
text-moderation-007 - Specific version of the text moderation model, frozen at version 007.
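To make the trade-off concrete, here is a minimal builder sketch contrasting the auto-updating model with a pinned version; the class and method names are illustrative, and the API key is assumed to come from the OPENAI_API_KEY environment variable:
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.openai.OpenAiModerationModel;
public class ModerationModelChoice {
    // Auto-updating variant: best accuracy, but behavior may change as OpenAI updates the model.
    static ModerationModel latest() {
        return OpenAiModerationModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("omni-moderation-latest")
                .build();
    }
    // Pinned variant: consistent results over time, useful for compliance or regression testing.
    static ModerationModel pinned() {
        return OpenAiModerationModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("omni-moderation-2024-09-26")
                .build();
    }
}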
OpenAI's moderation API evaluates content across multiple policy categories, each with a score (0.0 to 1.0) and flag (true/false):
hate
harassment
self-harm
sexual
violence
hate/threatening
harassment/threatening
self-harm/intent
self-harm/instructions
sexual/minors
violence/graphic
Each moderation result includes:
Category Scores - Confidence scores (0.0 to 1.0) for each category
Category Flags - Boolean flags indicating policy violations
true means content violates OpenAI's policies for that category; the @Moderate annotation can be used to block such content automatically
Overall Flagged - Boolean indicating if any category was flagged
true if one or more categories were flagged
Example moderation result:
{
"flagged": true,
"categories": {
"hate": true,
"violence": false,
"sexual": false
},
"category_scores": {
"hate": 0.95, // High score, flagged
"violence": 0.12, // Low score, not flagged
"sexual": 0.03 // Very low score, not flagged
}
}
Simplest approach using CDI injection with default configuration:
import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class ContentModerationService {
@Inject
ModerationModel moderationModel;
public void validateUserComment(String comment) {
// Moderate user-generated content
Response<Moderation> response = moderationModel.moderate(comment);
Moderation moderation = response.content();
if (moderation.flagged()) {
// Content violates policies
throw new IllegalArgumentException(
"Comment violates content policy: " +
moderation.flaggedText()
);
}
// Safe to process comment
processComment(comment);
}
private void processComment(String comment) {
// Store comment in database, etc.
}
}
Configuration in application.properties:
quarkus.langchain4j.openai.api-key=sk-...
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
Advanced moderation with category-specific handling:
import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import org.jboss.logging.Logger;
import jakarta.enterprise.context.ApplicationScoped;
import java.util.ArrayList;
import java.util.List;
@ApplicationScoped
public class ContentModerationService {
private static final Logger LOG = Logger.getLogger(ContentModerationService.class);
@Inject
ModerationModel moderationModel;
public ModerationResult moderateUserContent(String content) {
Response<Moderation> response = moderationModel.moderate(content);
Moderation moderation = response.content();
ModerationResult result = new ModerationResult();
result.setContent(content);
result.setFlagged(moderation.flagged());
// Analyze specific categories for detailed reporting
if (moderation.flagged()) {
if (moderation.hateSpeech() > 0.7) {
result.addViolation("hate", moderation.hateSpeech(),
"Content contains hate speech");
LOG.warnf("Hate speech detected: score=%.2f",
moderation.hateSpeech());
}
if (moderation.sexualContent() > 0.7) {
result.addViolation("sexual", moderation.sexualContent(),
"Content contains sexual material");
}
if (moderation.violence() > 0.7) {
result.addViolation("violence", moderation.violence(),
"Content contains violent material");
}
if (moderation.selfHarm() > 0.7) {
result.addViolation("self-harm", moderation.selfHarm(),
"Content discusses self-harm");
// Escalate to support team
notifySupportTeam(content, moderation);
}
}
return result;
}
private void notifySupportTeam(String content, Moderation moderation) {
// Send alert to human moderators for high-risk content
}
public static class ModerationResult {
private String content;
private boolean flagged;
private List<Violation> violations = new ArrayList<>();
public void addViolation(String category, double score, String message) {
violations.add(new Violation(category, score, message));
}
// Getters and setters
}
public static class Violation {
private String category;
private double score;
private String message;
public Violation(String category, double score, String message) {
this.category = category;
this.score = score;
this.message = message;
}
// Getters
}
}
Automatic moderation integrated into AI services:
import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.Moderate;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
@RegisterAiService(
moderationModelSupplier = RegisterAiService.BeanIfExistsModerationModelSupplier.class
)
public interface SafeChatService {
/**
* Chat method with automatic input moderation.
*
* The @Moderate annotation automatically screens user input before
* sending to the chat model. If content is flagged, a
* ModerationException is thrown, preventing unsafe content from
* reaching the model.
*/
@SystemMessage("You are a helpful assistant for customer support.")
@Moderate
String chat(@UserMessage String userMessage);
}
Usage with exception handling:
import jakarta.inject.Inject;
import dev.langchain4j.service.ModerationException;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class CustomerSupportController {
@Inject
SafeChatService chatService;
public String handleUserQuery(String query) {
try {
// Input is automatically moderated before processing
String response = chatService.chat(query);
return response;
} catch (ModerationException e) {
// User input violated content policy
return "Your message contains inappropriate content and " +
"cannot be processed. Please revise your message.";
}
}
}
Configuration for automatic moderation:
quarkus.langchain4j.openai.api-key=sk-...
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
Implementing custom moderation policies with different thresholds:
import jakarta.enterprise.context.ApplicationScoped;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.openai.OpenAiModerationModel;
@ApplicationScoped
public class PolicyEnforcementService {
private final ModerationModel strictModel;
private final ModerationModel lenientModel;
public PolicyEnforcementService() {
// Strict policy for public content
this.strictModel = OpenAiModerationModel.builder()
.configName("strict")
.modelName("text-moderation-stable")
.build();
// Lenient policy for internal content
this.lenientModel = OpenAiModerationModel.builder()
.configName("lenient")
.modelName("text-moderation-stable")
.build();
}
/**
* Enforce strict policy with custom thresholds.
* Used for public-facing content like comments and posts.
*/
public boolean isPublicContentSafe(String content) {
Response<Moderation> response = strictModel.moderate(content);
Moderation moderation = response.content();
// Custom strict thresholds for public content
if (moderation.hateSpeech() > 0.3) return false;
if (moderation.violence() > 0.3) return false;
if (moderation.sexualContent() > 0.2) return false;
if (moderation.selfHarm() > 0.1) return false; // Zero tolerance
return true;
}
/**
* Enforce lenient policy for internal communications.
* Used for private messages between verified users.
*/
public boolean isInternalContentSafe(String content) {
Response<Moderation> response = lenientModel.moderate(content);
Moderation moderation = response.content();
// More lenient thresholds for internal content
if (moderation.hateSpeech() > 0.8) return false;
if (moderation.violence() > 0.8) return false;
if (moderation.selfHarm() > 0.5) return false;
return true;
}
/**
* Get detailed moderation scores for risk assessment.
*/
public ContentRisk assessContentRisk(String content) {
Response<Moderation> response = strictModel.moderate(content);
Moderation moderation = response.content();
ContentRisk risk = new ContentRisk();
risk.setHateScore(moderation.hateSpeech());
risk.setViolenceScore(moderation.violence());
risk.setSexualScore(moderation.sexualContent());
risk.setSelfHarmScore(moderation.selfHarm());
// Calculate overall risk level
double maxScore = Math.max(
Math.max(moderation.hateSpeech(), moderation.violence()),
Math.max(moderation.sexualContent(), moderation.selfHarm())
);
if (maxScore > 0.8) risk.setLevel(RiskLevel.HIGH);
else if (maxScore > 0.5) risk.setLevel(RiskLevel.MEDIUM);
else if (maxScore > 0.3) risk.setLevel(RiskLevel.LOW);
else risk.setLevel(RiskLevel.MINIMAL);
return risk;
}
public enum RiskLevel {
MINIMAL, LOW, MEDIUM, HIGH
}
public static class ContentRisk {
private double hateScore;
private double violenceScore;
private double sexualScore;
private double selfHarmScore;
private RiskLevel level;
// Getters and setters
}
}
Configuration in application.properties:
# Strict policy configuration
quarkus.langchain4j.openai.strict.api-key=sk-...
quarkus.langchain4j.openai.strict.moderation-model.model-name=text-moderation-stable
quarkus.langchain4j.openai.strict.moderation-model.log-responses=true
# Lenient policy configuration
quarkus.langchain4j.openai.lenient.api-key=sk-...
quarkus.langchain4j.openai.lenient.moderation-model.model-name=text-moderation-stable
Efficient moderation of multiple content items:
import jakarta.inject.Inject;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.output.Response;
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class BatchModerationService {
@Inject
ModerationModel moderationModel;
private final ExecutorService executor =
Executors.newFixedThreadPool(10);
/**
* Moderate multiple content items in parallel.
* Returns map of content to moderation results.
*/
public List<ModerationResult> moderateBatch(List<String> contents) {
// Create futures for parallel moderation
List<CompletableFuture<ModerationResult>> futures = contents.stream()
.map(content -> CompletableFuture.supplyAsync(() -> {
Response<Moderation> response = moderationModel.moderate(content);
Moderation moderation = response.content();
return new ModerationResult(content, moderation);
}, executor))
.collect(Collectors.toList());
// Wait for all moderations to complete
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
.join();
// Collect results
return futures.stream()
.map(CompletableFuture::join)
.collect(Collectors.toList());
}
/**
* Moderate batch and filter out flagged content.
*/
public List<String> filterSafeContent(List<String> contents) {
List<ModerationResult> results = moderateBatch(contents);
return results.stream()
.filter(result -> !result.getModeration().flagged())
.map(ModerationResult::getContent)
.collect(Collectors.toList());
}
/**
* Moderate batch and get statistics.
*/
public ModerationStatistics getBatchStatistics(List<String> contents) {
List<ModerationResult> results = moderateBatch(contents);
ModerationStatistics stats = new ModerationStatistics();
stats.setTotal(results.size());
stats.setFlagged(results.stream()
.filter(r -> r.getModeration().flagged())
.count());
stats.setSafe(stats.getTotal() - stats.getFlagged());
// Category breakdown
for (ModerationResult result : results) {
Moderation mod = result.getModeration();
if (mod.hateSpeech() > 0.7) stats.incrementHate();
if (mod.violence() > 0.7) stats.incrementViolence();
if (mod.sexualContent() > 0.7) stats.incrementSexual();
if (mod.selfHarm() > 0.7) stats.incrementSelfHarm();
}
return stats;
}
public static class ModerationResult {
private String content;
private Moderation moderation;
public ModerationResult(String content, Moderation moderation) {
this.content = content;
this.moderation = moderation;
}
public String getContent() { return content; }
public Moderation getModeration() { return moderation; }
}
public static class ModerationStatistics {
private int total;
private long flagged;
private long safe;
private int hate;
private int violence;
private int sexual;
private int selfHarm;
public void incrementHate() { hate++; }
public void incrementViolence() { violence++; }
public void incrementSexual() { sexual++; }
public void incrementSelfHarm() { selfHarm++; }
// Getters and setters
}
}
Usage example:
import jakarta.inject.Inject;
import java.util.List;
import java.util.stream.Collectors;
import org.jboss.logging.Logger;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class CommentModerationService {
private static final Logger LOG = Logger.getLogger(CommentModerationService.class);
@Inject
BatchModerationService batchModeration;
public void moderateNewComments(List<Comment> comments) {
// Extract comment text
List<String> texts = comments.stream()
.map(Comment::getText)
.collect(Collectors.toList());
// Moderate all comments
List<String> safeTexts = batchModeration.filterSafeContent(texts);
// Get statistics
BatchModerationService.ModerationStatistics stats = batchModeration.getBatchStatistics(texts);
LOG.infof("Moderated %d comments: %d safe, %d flagged",
stats.getTotal(), stats.getSafe(), stats.getFlagged());
// Approve safe comments
comments.stream()
.filter(c -> safeTexts.contains(c.getText()))
.forEach(Comment::approve);
}
}
Validate user prompts before sending to chat models:
@RegisterAiService
public interface SafeAssistant {
@Moderate // Moderates input before generation
String assist(String userPrompt);
}
Validate generated content before returning to users:
public class SafeGenerationService {
@Inject ModerationModel moderationModel;
@Inject ChatModel chatModel;
public String generateSafeResponse(String prompt) {
String response = chatModel.chat(prompt);
// Moderate generated content
Response<Moderation> modResult =
moderationModel.moderate(response);
if (modResult.content().flagged()) {
return "I cannot provide that response due to content policies.";
}
return response;
}
}
Moderate both input and output for maximum safety:
@RegisterAiService(
moderationModelSupplier = RegisterAiService.BeanIfExistsModerationModelSupplier.class
)
public interface DoublySafeAssistant {
@Moderate // Moderates input
String assist(String userPrompt);
}
public class FullSafetyService {
@Inject DoublySafeAssistant assistant;
@Inject ModerationModel moderationModel;
public String safeInteraction(String prompt) {
try {
// Input automatically moderated via @Moderate
String response = assistant.assist(prompt);
// Also moderate output
Response<Moderation> outputCheck =
moderationModel.moderate(response);
if (outputCheck.content().flagged()) {
return "Response was filtered due to content policies.";
}
return response;
} catch (ModerationException e) {
return "Input was filtered due to content policies.";
}
}
}
Common issues when working with moderation include:
ModerationException thrown unexpectedly
High false positive rate
Slow moderation performance
API rate limiting
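For slow moderation calls or API rate limiting, the timeout and maxRetries builder methods documented above can be tuned. A minimal sketch follows, with an illustrative class name and values, assuming the API key is supplied via the OPENAI_API_KEY environment variable:
import java.time.Duration;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.openai.OpenAiModerationModel;
public class ResilientModerationFactory {
    // Builds a moderation model tuned for slow networks and transient failures.
    public static ModerationModel create() {
        return OpenAiModerationModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("omni-moderation-latest")
                .timeout(Duration.ofSeconds(30)) // tolerate slow networks
                .maxRetries(3)                   // retry transient failures and rate-limit errors
                .build();
    }
}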
Development configuration with verbose logging:
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
quarkus.langchain4j.openai.moderation-model.log-requests=true
quarkus.langchain4j.openai.moderation-model.log-responses=true
quarkus.langchain4j.openai.log-requests-curl=true
Production configuration with logging disabled for security:
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=text-moderation-stable
quarkus.langchain4j.openai.moderation-model.log-requests=false
quarkus.langchain4j.openai.moderation-model.log-responses=false
quarkus.langchain4j.openai.timeout=30s
Multiple policy configuration:
# Default strict policy
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.moderation-model.model-name=text-moderation-stable
# Lenient policy for internal use
quarkus.langchain4j.openai.lenient.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.lenient.moderation-model.model-name=text-moderation-stable
# Strict policy for public content
quarkus.langchain4j.openai.strict.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.strict.moderation-model.model-name=omni-moderation-latest
Install with Tessl CLI:
npx tessl i tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-openai@1.7.0