Build LLM-powered applications in Java with support for chatbots, agents, RAG, tools, and much more
Input and output validation and filtering for AI services. Guardrails provide a way to validate, filter, or transform inputs before they reach the LLM and outputs before they are returned to the caller.
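The flow described above can be sketched with plain JDK types. This is a hypothetical illustration of the concept only; none of the names below are LangChain4j APIs:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Conceptual sketch: each input guardrail runs in order before the
// (stubbed) model call, each output guardrail runs after it.
public class GuardrailFlowSketch {

    static String invoke(List<UnaryOperator<String>> inputGuardrails,
                         UnaryOperator<String> model,
                         List<UnaryOperator<String>> outputGuardrails,
                         String userMessage) {
        String input = userMessage;
        for (UnaryOperator<String> g : inputGuardrails) {
            input = g.apply(input);          // validate/transform before the LLM
        }
        String output = model.apply(input);  // stand-in for the chat model call
        for (UnaryOperator<String> g : outputGuardrails) {
            output = g.apply(output);        // validate/transform before returning
        }
        return output;
    }

    public static void main(String[] args) {
        String response = invoke(
                List.of(String::trim, s -> s.toLowerCase()),
                s -> "echo: " + s,
                List.of(s -> s.replace("secret", "[REDACTED]")),
                "  Tell me a SECRET  ");
        System.out.println(response); // echo: tell me a [REDACTED]
    }
}
```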
Annotations for configuring guardrails at the class level.
package dev.langchain4j.service.guardrail;
/**
* Annotation for configuring input guardrails at the class level.
* Input guardrails are applied before the input reaches the LLM.
*/
@Target(TYPE)
@Retention(RUNTIME)
public @interface InputGuardrails {
/**
* Array of guardrail class types to apply
* @return Guardrail classes
*/
Class<?>[] value() default {};
}
/**
* Annotation for configuring output guardrails at the class level.
* Output guardrails are applied after receiving output from the LLM.
*/
@Target(TYPE)
@Retention(RUNTIME)
public @interface OutputGuardrails {
/**
* Array of guardrail class types to apply
* @return Guardrail classes
*/
Class<?>[] value() default {};
}
Thread Safety: Annotations themselves are thread-safe as they are processed at compile/load time. However, the guardrail classes referenced by these annotations must be thread-safe since they may be invoked concurrently by multiple threads using the same AI service instance.
Common Pitfalls:
Edge Cases:
Empty guardrail arrays (e.g. @InputGuardrails({})) are valid but have no effect
Performance Notes:
Exception Handling:
Exceptions thrown by guardrails propagate to callers of the service created via AiServices.create() or .build()
Related APIs: See AiServices Builder Methods, GuardrailService
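The thread-safety requirement above — guardrail classes referenced from annotations may be invoked concurrently — is easiest to satisfy with immutable state. A minimal sketch (the class name and `apply` method are illustrative, not a LangChain4j contract):

```java
import java.util.regex.Pattern;

// All state is immutable and set at construction time, so a single
// instance can safely serve concurrent AI-service calls.
public class DigitsRedactingGuardrail {
    // A compiled Pattern is immutable and thread-safe; matchers are per call.
    private final Pattern digits = Pattern.compile("\\d+");

    public String apply(String input) {
        return input == null ? null : digits.matcher(input).replaceAll("#");
    }

    public static void main(String[] args) throws Exception {
        DigitsRedactingGuardrail g = new DigitsRedactingGuardrail();
        // Exercise the same instance from several threads.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    if (!g.apply("pin 1234").equals("pin #")) {
                        throw new AssertionError("unexpected result");
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("ok");
    }
}
```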
Service interface for guardrails.
package dev.langchain4j.service.guardrail;
/**
* Service interface for guardrails
* Guardrails can inspect, validate, or transform data
*/
public interface GuardrailService {
// Service methods for guardrail processing
}
Thread Safety: Implementations of GuardrailService must be thread-safe. The same instance may be invoked concurrently by multiple threads. Use immutable state or proper synchronization for mutable state.
Common Pitfalls:
Edge Cases:
Performance Notes:
Exception Handling:
Throw IllegalArgumentException or IllegalStateException for validation failures
Related APIs: See InputGuardrails, OutputGuardrails, AiServices Builder Methods
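When a guardrail does need mutable state, the "proper synchronization" advice above can be met with atomic types instead of locks. A sketch (illustrative names, not the actual GuardrailService contract):

```java
import java.util.concurrent.atomic.AtomicLong;

// A guardrail-style service that tracks how often it ran. AtomicLong,
// rather than a plain long, ensures concurrent calls never lose updates.
public class CountingGuardrailService {
    private final AtomicLong invocations = new AtomicLong();

    public String validate(String input) {
        invocations.incrementAndGet(); // lock-free, safe under concurrency
        if (input == null || input.isBlank()) {
            throw new IllegalArgumentException("Input must not be blank");
        }
        return input;
    }

    public long invocationCount() {
        return invocations.get();
    }

    public static void main(String[] args) {
        CountingGuardrailService service = new CountingGuardrailService();
        service.validate("hello");
        System.out.println("invocations: " + service.invocationCount()); // invocations: 1
    }
}
```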
Methods for configuring guardrails programmatically.
package dev.langchain4j.service;
/**
* Builder methods for configuring guardrails
*/
public class Builder<T> { // builder obtained via AiServices.builder(...)
/**
* Set the input guardrails configuration object
* @param inputGuardrailsConfig Input guardrails configuration
* @return Builder instance
*/
public Builder<T> inputGuardrailsConfig(InputGuardrailsConfig inputGuardrailsConfig);
/**
* Set the output guardrails configuration object
* @param outputGuardrailsConfig Output guardrails configuration
* @return Builder instance
*/
public Builder<T> outputGuardrailsConfig(OutputGuardrailsConfig outputGuardrailsConfig);
/**
* Set input guardrail classes
* @param guardrailClasses List of guardrail classes
* @return Builder instance
*/
public <I> Builder<T> inputGuardrailClasses(List<Class<? extends I>> guardrailClasses);
/**
* Set input guardrail classes (varargs)
* @param guardrailClasses Guardrail classes
* @return Builder instance
*/
public <I> Builder<T> inputGuardrailClasses(Class<? extends I>... guardrailClasses);
/**
* Set input guardrails instances
* @param guardrails List of guardrail instances
* @return Builder instance
*/
public <I> Builder<T> inputGuardrails(List<I> guardrails);
/**
* Set input guardrails instances (varargs)
* @param guardrails Guardrail instances
* @return Builder instance
*/
public <I> Builder<T> inputGuardrails(I... guardrails);
/**
* Set output guardrail classes
* @param guardrailClasses List of guardrail classes
* @return Builder instance
*/
public <O> Builder<T> outputGuardrailClasses(List<Class<? extends O>> guardrailClasses);
/**
* Set output guardrail classes (varargs)
* @param guardrailClasses Guardrail classes
* @return Builder instance
*/
public <O> Builder<T> outputGuardrailClasses(Class<? extends O>... guardrailClasses);
/**
* Set output guardrails instances
* @param guardrails List of guardrail instances
* @return Builder instance
*/
public <O> Builder<T> outputGuardrails(List<O> guardrails);
/**
* Set output guardrails instances (varargs)
* @param guardrails Guardrail instances
* @return Builder instance
*/
public <O> Builder<T> outputGuardrails(O... guardrails);
}
Thread Safety: The builder itself is not thread-safe and should be used by a single thread. However, the built AI service instance is thread-safe and can be used concurrently by multiple threads. Guardrail instances provided to the builder must be thread-safe.
Common Pitfalls:
Mixing both configuration styles (e.g. inputGuardrails() and inputGuardrailClasses()) - the last call wins, potentially overwriting previous configuration
Edge Cases:
Performance Notes:
Providing instances (inputGuardrails()) allows you to share state and avoid repeated initialization overhead
Providing classes (inputGuardrailClasses()) creates new instances, which is safer for isolation but may be less efficient.
Guardrails are configured once at build(), not per request
Exception Handling:
Related APIs: See InputGuardrails, OutputGuardrails, GuardrailService, InputGuardrailsConfig, OutputGuardrailsConfig
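Class-based configuration implies the guardrail classes must be instantiable by the framework, typically via a no-arg constructor, with instantiation happening once at build time. A rough sketch of that mechanism (an assumption about how it works, not LangChain4j source):

```java
import java.util.ArrayList;
import java.util.List;

public class ClassBasedConfigSketch {

    // A trivial guardrail with the required no-arg constructor.
    public static class UppercaseGuardrail {
        public String apply(String s) { return s.toUpperCase(); }
    }

    // Conceptually what inputGuardrailClasses(...) implies: reflective
    // instantiation of each class, failing fast at build time.
    static <T> List<T> instantiateAll(List<Class<? extends T>> classes) {
        List<T> instances = new ArrayList<>();
        for (Class<? extends T> c : classes) {
            try {
                instances.add(c.getDeclaredConstructor().newInstance());
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("Cannot instantiate " + c, e);
            }
        }
        return instances;
    }

    public static void main(String[] args) {
        List<UppercaseGuardrail> guardrails =
                instantiateAll(List.of(UppercaseGuardrail.class));
        System.out.println(guardrails.get(0).apply("hi")); // HI
    }
}
```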
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.guardrail.InputGuardrails;
import dev.langchain4j.service.guardrail.OutputGuardrails;
// Define guardrail classes (implementation not shown)
class ProfanityFilter { /* ... */ }
class PIIFilter { /* ... */ }
class ContentValidator { /* ... */ }
@InputGuardrails({ProfanityFilter.class, PIIFilter.class})
@OutputGuardrails({ContentValidator.class})
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.create(Assistant.class, chatModel);
// Input will be filtered for profanity and PII before reaching LLM
// Output will be validated before being returned
String response = assistant.chat("Some user input");
import dev.langchain4j.service.AiServices;
// Create guardrail instances
ProfanityFilter profanityFilter = new ProfanityFilter();
PIIFilter piiFilter = new PIIFilter();
ContentValidator validator = new ContentValidator();
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(profanityFilter, piiFilter)
.outputGuardrails(validator)
.build();
String response = assistant.chat("Some user input");
import dev.langchain4j.service.AiServices;
interface Assistant {
String chat(String message);
}
// Configure using classes (will be instantiated automatically)
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrailClasses(ProfanityFilter.class, PIIFilter.class)
.outputGuardrailClasses(ContentValidator.class)
.build();
String response = assistant.chat("Some user input");
// Example guardrail implementation (conceptual)
class InputLengthGuardrail {
private final int maxLength;
public InputLengthGuardrail(int maxLength) {
this.maxLength = maxLength;
}
public String validate(String input) {
if (input.length() > maxLength) {
throw new IllegalArgumentException(
"Input exceeds maximum length of " + maxLength
);
}
return input;
}
}
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(new InputLengthGuardrail(1000))
.build();
try {
String response = assistant.chat("User input...");
} catch (IllegalArgumentException e) {
System.err.println("Input validation failed: " + e.getMessage());
}
// Example guardrail implementation (conceptual)
class OutputSanitizer {
public String sanitize(String output) {
// Remove sensitive patterns
output = output.replaceAll("\\b\\d{3}-\\d{2}-\\d{4}\\b", "[SSN REDACTED]");
output = output.replaceAll("\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\\b",
"[EMAIL REDACTED]");
return output;
}
}
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.outputGuardrails(new OutputSanitizer())
.build();
// Output will have sensitive information redacted
String response = assistant.chat("What is your email address?");
import dev.langchain4j.service.AiServices;
// Multiple guardrails are applied in order
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(
new InputLengthGuardrail(1000),
new ProfanityFilter(),
new PIIFilter()
)
.outputGuardrails(
new OutputSanitizer(),
new ContentValidator(),
new FormattingGuardrail()
)
.build();
String response = assistant.chat("User input");
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.Moderate;
interface Assistant {
@Moderate // Built-in moderation check
String chat(String message);
}
// Combine moderation with custom guardrails
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.moderationModel(moderationModel)
.inputGuardrails(new CustomInputGuardrail())
.outputGuardrails(new CustomOutputGuardrail())
.build();
// Moderation happens in parallel with LLM call
// Guardrails are applied before/after as configured
String response = assistant.chat("User input");
import java.util.ArrayList;
import java.util.List;
// Composing guardrails dynamically based on context
class GuardrailFactory {
public static List<Object> createInputGuardrails(SecurityLevel level) {
List<Object> guardrails = new ArrayList<>();
// Always include basic validation
guardrails.add(new InputLengthGuardrail(5000));
guardrails.add(new NullInputGuardrail());
if (level == SecurityLevel.STANDARD || level == SecurityLevel.HIGH) {
guardrails.add(new ProfanityFilter());
guardrails.add(new PIIFilter());
}
if (level == SecurityLevel.HIGH) {
guardrails.add(new SQLInjectionFilter());
guardrails.add(new XSSFilter());
guardrails.add(new CommandInjectionFilter());
}
return guardrails;
}
public static List<Object> createOutputGuardrails(SecurityLevel level) {
List<Object> guardrails = new ArrayList<>();
// Always sanitize output
guardrails.add(new OutputSanitizer());
if (level == SecurityLevel.STANDARD || level == SecurityLevel.HIGH) {
guardrails.add(new PIIRedactor());
}
if (level == SecurityLevel.HIGH) {
guardrails.add(new CredentialLeakDetector());
guardrails.add(new InternalPathRedactor());
}
return guardrails;
}
}
// Usage
interface SecureAssistant {
String chat(String message);
}
SecureAssistant assistant = AiServices.builder(SecureAssistant.class)
.chatModel(chatModel)
.inputGuardrails(GuardrailFactory.createInputGuardrails(SecurityLevel.HIGH))
.outputGuardrails(GuardrailFactory.createOutputGuardrails(SecurityLevel.HIGH))
.build();
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
// Guardrails that pass context to subsequent guardrails
class GuardrailContext {
private final Map<String, Object> data = new ConcurrentHashMap<>();
public void set(String key, Object value) {
data.put(key, value);
}
public <T> T get(String key, Class<T> type) {
return type.cast(data.get(key));
}
}
class ContextAwareInputGuardrail {
public String process(String input, GuardrailContext context) {
// Store detected metadata for later guardrails
context.set("inputLength", input.length());
context.set("containsPII", detectPII(input));
return input;
}
private boolean detectPII(String input) {
// PII detection logic
return input.matches(".*\\b\\d{3}-\\d{2}-\\d{4}\\b.*");
}
}
class ContextConsumingGuardrail {
public String process(String input, GuardrailContext context) {
Boolean containsPII = context.get("containsPII", Boolean.class);
if (Boolean.TRUE.equals(containsPII)) {
// Apply stricter validation if PII detected
return applyStrictValidation(input);
}
return input;
}
private String applyStrictValidation(String input) {
// Enhanced validation logic
return input;
}
}
// Guardrail that conditionally applies based on input characteristics
class ConditionalGuardrail {
private final Guardrail strictGuardrail;
private final Guardrail lenientGuardrail;
public ConditionalGuardrail(Guardrail strict, Guardrail lenient) {
this.strictGuardrail = strict;
this.lenientGuardrail = lenient;
}
public String process(String input) {
// Apply different guardrails based on input
if (requiresStrictValidation(input)) {
return strictGuardrail.process(input);
} else {
return lenientGuardrail.process(input);
}
}
private boolean requiresStrictValidation(String input) {
return input.contains("payment")
|| input.contains("password")
|| input.contains("ssn");
}
}
interface Assistant {
String chat(String message);
}
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(new ConditionalGuardrail(
new StrictSecurityGuardrail(),
new BasicValidationGuardrail()
))
.build();
import java.util.Set;
import java.util.regex.Pattern;
class ProfanityFilter {
private final Set<String> profanityList;
private final Pattern profanityPattern;
private final boolean strictMode;
public ProfanityFilter(Set<String> profanityList, boolean strictMode) {
this.profanityList = profanityList;
this.strictMode = strictMode;
// Build regex pattern from profanity list
String patternStr = String.join("|", profanityList);
this.profanityPattern = Pattern.compile(
"\\b(" + patternStr + ")\\b",
Pattern.CASE_INSENSITIVE
);
}
public String filter(String input) {
if (input == null) {
return null;
}
// Check for profanity
if (profanityPattern.matcher(input).find()) {
if (strictMode) {
throw new IllegalArgumentException(
"Input contains prohibited content"
);
} else {
// Replace profanity with asterisks
return profanityPattern.matcher(input)
.replaceAll(match -> "*".repeat(match.group().length()));
}
}
return input;
}
}
// Usage
ProfanityFilter filter = new ProfanityFilter(
Set.of("badword1", "badword2", "badword3"),
false // lenient mode - replace instead of reject
);
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(filter)
.build();
import java.util.regex.Matcher;
import java.util.regex.Pattern;
class PIIFilter {
private static final Pattern SSN_PATTERN =
Pattern.compile("\\b\\d{3}-\\d{2}-\\d{4}\\b");
private static final Pattern CREDIT_CARD_PATTERN =
Pattern.compile("\\b\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}\\b");
private static final Pattern EMAIL_PATTERN =
Pattern.compile("\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}\\b");
private static final Pattern PHONE_PATTERN =
Pattern.compile("\\b\\d{3}[-.]?\\d{3}[-.]?\\d{4}\\b");
private final boolean strict;
public PIIFilter(boolean strict) {
this.strict = strict;
}
public String filter(String input) {
if (input == null) {
return null;
}
boolean containsPII = false;
String result = input;
// Check for SSN
if (SSN_PATTERN.matcher(result).find()) {
containsPII = true;
result = SSN_PATTERN.matcher(result).replaceAll("[SSN REDACTED]");
}
// Check for credit card numbers
if (CREDIT_CARD_PATTERN.matcher(result).find()) {
containsPII = true;
result = CREDIT_CARD_PATTERN.matcher(result).replaceAll("[CARD REDACTED]");
}
// Check for email addresses
if (EMAIL_PATTERN.matcher(result).find()) {
containsPII = true;
result = EMAIL_PATTERN.matcher(result).replaceAll("[EMAIL REDACTED]");
}
// Check for phone numbers
if (PHONE_PATTERN.matcher(result).find()) {
containsPII = true;
result = PHONE_PATTERN.matcher(result).replaceAll("[PHONE REDACTED]");
}
if (strict && containsPII) {
throw new IllegalArgumentException(
"Input contains personally identifiable information"
);
}
return result;
}
}
// Usage - strict mode rejects, lenient mode redacts
PIIFilter strictFilter = new PIIFilter(true);
PIIFilter lenientFilter = new PIIFilter(false);
Assistant strictAssistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(strictFilter)
.outputGuardrails(lenientFilter) // Redact PII in output
.build();
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.openai.OpenAiModerationModel;
import dev.langchain4j.model.output.Response;
class ContentSafetyGuardrail {
private final OpenAiModerationModel moderationModel;
private final boolean blockOnFlag;
public ContentSafetyGuardrail(
OpenAiModerationModel moderationModel,
boolean blockOnFlag
) {
this.moderationModel = moderationModel;
this.blockOnFlag = blockOnFlag;
}
public String filter(String input) {
if (input == null || input.isEmpty()) {
return input;
}
Response<Moderation> response = moderationModel.moderate(input);
Moderation moderation = response.content();
if (moderation.flagged()) {
if (blockOnFlag) {
throw new ContentViolationException(
"Content violates safety policies",
moderation.flaggedText()
);
} else {
// Log the violation but allow it
logViolation(input, moderation);
}
}
return input;
}
private void logViolation(String input, Moderation moderation) {
// Logging implementation
System.err.println("Content flagged: " + moderation.flaggedText());
}
}
class ContentViolationException extends RuntimeException {
private final String flaggedContent;
public ContentViolationException(String message, String flaggedContent) {
super(message);
this.flaggedContent = flaggedContent;
}
public String getFlaggedContent() {
return flaggedContent;
}
}
// Usage
OpenAiModerationModel moderationModel = OpenAiModerationModel.builder()
.apiKey(System.getenv("OPENAI_API_KEY"))
.build();
ContentSafetyGuardrail inputSafety =
new ContentSafetyGuardrail(moderationModel, true);
ContentSafetyGuardrail outputSafety =
new ContentSafetyGuardrail(moderationModel, false);
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(inputSafety)
.outputGuardrails(outputSafety)
.build();
try {
String response = assistant.chat("User input");
} catch (ContentViolationException e) {
System.err.println("Blocked: " + e.getMessage());
System.err.println("Flagged: " + e.getFlaggedContent());
}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
// Filter content specific to business domain
class BusinessDomainFilter {
private final Set<String> allowedTopics;
private final Set<String> prohibitedTopics;
private final TopicClassifier classifier;
public BusinessDomainFilter(
Set<String> allowedTopics,
Set<String> prohibitedTopics,
TopicClassifier classifier
) {
this.allowedTopics = allowedTopics;
this.prohibitedTopics = prohibitedTopics;
this.classifier = classifier;
}
public String filter(String input) {
if (input == null || input.isEmpty()) {
return input;
}
Set<String> detectedTopics = classifier.classify(input);
// Check for prohibited topics
Set<String> violations = new HashSet<>(detectedTopics);
violations.retainAll(prohibitedTopics);
if (!violations.isEmpty()) {
throw new IllegalArgumentException(
"Input contains prohibited topics: " + violations
);
}
// Check if at least one allowed topic is present
if (!allowedTopics.isEmpty()) {
Set<String> allowed = new HashSet<>(detectedTopics);
allowed.retainAll(allowedTopics);
if (allowed.isEmpty()) {
throw new IllegalArgumentException(
"Input must relate to allowed topics: " + allowedTopics
);
}
}
return input;
}
}
// Simple keyword-based classifier
class TopicClassifier {
private final Map<String, Set<String>> topicKeywords;
public TopicClassifier(Map<String, Set<String>> topicKeywords) {
this.topicKeywords = topicKeywords;
}
public Set<String> classify(String text) {
Set<String> topics = new HashSet<>();
String lowerText = text.toLowerCase();
for (Map.Entry<String, Set<String>> entry : topicKeywords.entrySet()) {
for (String keyword : entry.getValue()) {
if (lowerText.contains(keyword.toLowerCase())) {
topics.add(entry.getKey());
break;
}
}
}
return topics;
}
}
// Usage - customer service bot that only handles specific topics
TopicClassifier classifier = new TopicClassifier(Map.of(
"billing", Set.of("invoice", "payment", "charge", "bill"),
"technical", Set.of("error", "bug", "issue", "problem"),
"account", Set.of("login", "password", "username", "profile"),
"political", Set.of("election", "politics", "government")
));
BusinessDomainFilter domainFilter = new BusinessDomainFilter(
Set.of("billing", "technical", "account"), // allowed
Set.of("political"), // prohibited
classifier
);
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(domainFilter)
.build();
import java.util.regex.Pattern;
class InputValidationGuardrail {
private final int minLength;
private final int maxLength;
private final Pattern allowedPattern;
private final boolean trimWhitespace;
public InputValidationGuardrail(
int minLength,
int maxLength,
Pattern allowedPattern,
boolean trimWhitespace
) {
this.minLength = minLength;
this.maxLength = maxLength;
this.allowedPattern = allowedPattern;
this.trimWhitespace = trimWhitespace;
}
public String validate(String input) {
if (input == null) {
throw new IllegalArgumentException("Input cannot be null");
}
String processed = trimWhitespace ? input.trim() : input;
if (processed.isEmpty()) {
throw new IllegalArgumentException("Input cannot be empty");
}
if (processed.length() < minLength) {
throw new IllegalArgumentException(
String.format("Input too short (min: %d, actual: %d)",
minLength, processed.length())
);
}
if (processed.length() > maxLength) {
throw new IllegalArgumentException(
String.format("Input too long (max: %d, actual: %d)",
maxLength, processed.length())
);
}
if (allowedPattern != null && !allowedPattern.matcher(processed).matches()) {
throw new IllegalArgumentException(
"Input does not match required format"
);
}
return processed;
}
}
// Usage
InputValidationGuardrail validator = new InputValidationGuardrail(
10, // min length
1000, // max length
Pattern.compile("^[a-zA-Z0-9\\s.,!?'-]+$"), // allowed chars
true // trim whitespace
);
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(chatModel)
.inputGuardrails(validator)
.build();
import org.junit.jupiter.api.Test;
import java.util.ArrayList;
import java.util.List;
import static org.junit.jupiter.api.Assertions.*;
class GuardrailTests {
@Test
void testInputLengthGuardrail_acceptsValidInput() {
InputLengthGuardrail guardrail = new InputLengthGuardrail(100);
String input = "This is a valid input";
String result = guardrail.validate(input);
assertEquals(input, result);
}
@Test
void testInputLengthGuardrail_rejectsLongInput() {
InputLengthGuardrail guardrail = new InputLengthGuardrail(10);
String input = "This input is too long";
assertThrows(IllegalArgumentException.class, () -> {
guardrail.validate(input);
});
}
@Test
void testPIIFilter_redactsSensitiveData() {
PIIFilter filter = new PIIFilter(false);
String input = "My SSN is 123-45-6789";
String result = filter.filter(input);
assertFalse(result.contains("123-45-6789"));
assertTrue(result.contains("[SSN REDACTED]"));
}
@Test
void testPIIFilter_strictMode_throwsOnPII() {
PIIFilter filter = new PIIFilter(true);
String input = "Contact me at user@example.com";
assertThrows(IllegalArgumentException.class, () -> {
filter.filter(input);
});
}
@Test
void testGuardrail_handlesNullInput() {
PIIFilter filter = new PIIFilter(false);
String result = filter.filter(null);
assertNull(result);
}
@Test
void testGuardrail_handlesEmptyInput() {
PIIFilter filter = new PIIFilter(false);
String input = "";
String result = filter.filter(input);
assertEquals("", result);
}
@Test
void testGuardrailChain_appliesInOrder() {
List<String> executionOrder = new ArrayList<>();
Guardrail first = input -> {
executionOrder.add("first");
return input;
};
Guardrail second = input -> {
executionOrder.add("second");
return input;
};
Guardrail third = input -> {
executionOrder.add("third");
return input;
};
// Apply guardrails
String input = "test";
third.process(second.process(first.process(input)));
assertEquals(List.of("first", "second", "third"), executionOrder);
}
}
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
class GuardrailIntegrationTests {
private ChatLanguageModel mockModel;
private Assistant assistant;
@BeforeEach
void setup() {
mockModel = new MockChatLanguageModel("Mock response");
assistant = AiServices.builder(Assistant.class)
.chatModel(mockModel)
.inputGuardrails(
new InputLengthGuardrail(1000),
new PIIFilter(true)
)
.outputGuardrails(new OutputSanitizer())
.build();
}
@Test
void testInputGuardrail_blocksInvalidInput() {
String inputWithPII = "My email is user@example.com";
assertThrows(IllegalArgumentException.class, () -> {
assistant.chat(inputWithPII);
});
}
@Test
void testInputGuardrail_allowsValidInput() {
String validInput = "What is the weather today?";
String response = assistant.chat(validInput);
assertNotNull(response);
assertEquals("Mock response", response);
}
@Test
void testOutputGuardrail_sanitizesResponse() {
MockChatLanguageModel modelWithPII = new MockChatLanguageModel(
"Contact us at support@company.com"
);
Assistant assistantWithSanitizer = AiServices.builder(Assistant.class)
.chatModel(modelWithPII)
.outputGuardrails(new PIIFilter(false))
.build();
String response = assistantWithSanitizer.chat("How to contact?");
assertFalse(response.contains("support@company.com"));
assertTrue(response.contains("[EMAIL REDACTED]"));
}
interface Assistant {
String chat(String message);
}
static class MockChatLanguageModel implements ChatLanguageModel {
private final String response;
MockChatLanguageModel(String response) {
this.response = response;
}
@Override
public Response<AiMessage> generate(List<ChatMessage> messages) {
return Response.from(AiMessage.from(response));
}
}
}
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import java.util.List;
import java.util.Set;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
class GuardrailPerformanceTests {
@Test
void testGuardrail_latencyImpact() {
PIIFilter filter = new PIIFilter(false);
String input = "This is a test input without PII";
// Warm up
for (int i = 0; i < 1000; i++) {
filter.filter(input);
}
// Measure
long start = System.nanoTime();
for (int i = 0; i < 10000; i++) {
filter.filter(input);
}
long end = System.nanoTime();
double avgLatencyMs = (end - start) / 10000.0 / 1_000_000.0;
System.out.println("Average latency: " + avgLatencyMs + " ms");
// Assert reasonable performance (< 1ms per call)
assertTrue(avgLatencyMs < 1.0,
"Guardrail latency too high: " + avgLatencyMs + " ms");
}
@Test
void testGuardrail_concurrentAccess() throws InterruptedException {
PIIFilter filter = new PIIFilter(false);
String input = "Test input";
int numThreads = 10;
int callsPerThread = 1000;
ExecutorService executor = Executors.newFixedThreadPool(numThreads);
CountDownLatch latch = new CountDownLatch(numThreads);
AtomicInteger successCount = new AtomicInteger(0);
AtomicInteger errorCount = new AtomicInteger(0);
for (int i = 0; i < numThreads; i++) {
executor.submit(() -> {
try {
for (int j = 0; j < callsPerThread; j++) {
filter.filter(input);
successCount.incrementAndGet();
}
} catch (Exception e) {
errorCount.incrementAndGet();
} finally {
latch.countDown();
}
});
}
latch.await(30, TimeUnit.SECONDS);
executor.shutdown();
assertEquals(numThreads * callsPerThread, successCount.get());
assertEquals(0, errorCount.get());
}
@Test
void testGuardrailChain_cumulativeLatency() {
List<Object> guardrails = List.of(
new InputLengthGuardrail(5000),
new PIIFilter(false),
new ProfanityFilter(Set.of("badword"), false),
new InputValidationGuardrail(1, 1000, null, true)
);
String input = "This is a test input for measuring latency";
long start = System.nanoTime();
String result = input;
for (Object guardrail : guardrails) {
// Apply each guardrail
if (guardrail instanceof InputLengthGuardrail) {
result = ((InputLengthGuardrail) guardrail).validate(result);
} else if (guardrail instanceof PIIFilter) {
result = ((PIIFilter) guardrail).filter(result);
}
// ... other guardrails
}
long end = System.nanoTime();
double totalLatencyMs = (end - start) / 1_000_000.0;
System.out.println("Total chain latency: " + totalLatencyMs + " ms");
// Assert total latency is reasonable
assertTrue(totalLatencyMs < 5.0,
"Guardrail chain latency too high: " + totalLatencyMs + " ms");
}
}
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import java.util.concurrent.atomic.AtomicBoolean;
class ExceptionHandlingTests {
@Test
void testGuardrail_throwsExpectedException() {
InputLengthGuardrail guardrail = new InputLengthGuardrail(10);
String longInput = "This input is way too long for the limit";
IllegalArgumentException exception = assertThrows(
IllegalArgumentException.class,
() -> guardrail.validate(longInput)
);
assertTrue(exception.getMessage().contains("exceeds maximum length"));
}
@Test
void testGuardrail_gracefullyHandlesInternalError() {
Guardrail faultyGuardrail = input -> {
throw new RuntimeException("Internal error");
};
Assistant assistant = AiServices.builder(Assistant.class)
.chatModel(mockModel)
.inputGuardrails(faultyGuardrail)
.build();
assertThrows(RuntimeException.class, () -> {
assistant.chat("test");
});
}
@Test
void testGuardrailChain_shortCircuitsOnException() {
AtomicBoolean secondGuardrailCalled = new AtomicBoolean(false);
Guardrail firstGuardrail = input -> {
throw new IllegalArgumentException("First guardrail failed");
};
Guardrail secondGuardrail = input -> {
secondGuardrailCalled.set(true);
return input;
};
assertThrows(IllegalArgumentException.class, () -> {
secondGuardrail.process(firstGuardrail.process("test"));
});
assertFalse(secondGuardrailCalled.get(),
"Second guardrail should not be called after first fails");
}
}
The guardrails API provides extension points for implementing custom validation, filtering, and transformation logic. The exact implementation details of guardrails depend on the specific integration framework being used (e.g., Quarkus, Spring Boot) or custom implementations through the SPI (Service Provider Interface).
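The SPI route can be sketched with the JDK's standard ServiceLoader mechanism. `MyGuardrail` below is a hypothetical interface, not a LangChain4j type; real providers would be listed in a `META-INF/services` resource file, and since none is registered here the loader simply finds nothing:

```java
import java.util.ServiceLoader;

public class GuardrailSpiSketch {

    // Hypothetical provider interface that implementations would register
    // under META-INF/services/GuardrailSpiSketch$MyGuardrail.
    public interface MyGuardrail {
        String apply(String input);
    }

    public static void main(String[] args) {
        ServiceLoader<MyGuardrail> loader = ServiceLoader.load(MyGuardrail.class);
        long discovered = loader.stream().count();
        // With no provider registered, discovery yields zero implementations.
        System.out.println("Discovered guardrail providers: " + discovered);
    }
}
```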
Key concepts:
OpenAiModerationModel for content safety checking that can be integrated with guardrails
AiServices for the main service builder that accepts guardrail configuration
ChatLanguageModel implementation
Install with Tessl CLI
npx tessl i tessl/maven-dev-langchain4j--langchain4j@1.11.0