
tessl/maven-dev-langchain4j--langchain4j-bedrock

AWS Bedrock integration for LangChain4j enabling Java applications to interact with various LLM providers through a unified interface

docs/features/guardrails/overview.md

Guardrails

AWS Bedrock Guardrails provide content filtering, safety checks, and policy enforcement.

Capabilities

  • Filter harmful content: Block toxic, violent, sexual, or hateful content
  • Detect PII: Identify and anonymize personally identifiable information
  • Block topics: Prevent discussions of specific unwanted topics
  • Filter words: Block profanity and custom word lists
  • Ensure grounding: Verify responses are grounded in provided context

How It Works

  1. Create guardrail in AWS Console with policies
  2. Reference guardrail by ID and version in requests
  3. Receive assessments in response metadata

AWS Bedrock Guardrails Documentation

Quick Example

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.bedrock.BedrockChatModel;
import dev.langchain4j.model.bedrock.BedrockChatRequestParameters;
import dev.langchain4j.model.bedrock.BedrockGuardrailConfiguration;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

// Configure guardrail (use the ID and version of a guardrail created in the AWS Console)
BedrockGuardrailConfiguration guardrail = BedrockGuardrailConfiguration.builder()
    .guardrailIdentifier("my-guardrail-id")
    .guardrailVersion("1")
    .build();

BedrockChatRequestParameters params = BedrockChatRequestParameters.builder()
    .guardrailConfiguration(guardrail)
    .build();

BedrockChatModel model = BedrockChatModel.builder()
    .modelId("anthropic.claude-3-5-sonnet-20241022-v2:0")
    .defaultRequestParameters(params)
    .build();

// Chat with guardrail protection
ChatRequest request = ChatRequest.builder()
    .messages(UserMessage.from("Hello"))
    .build();
ChatResponse response = model.chat(request);

// Check for violations
if (response.metadata() instanceof BedrockChatResponseMetadata metadata) {
    GuardrailAssessmentSummary summary = metadata.guardrailAssessmentSummary();
    if (summary != null) {
        handleAssessments(summary); // e.g. log or react to each assessment
    }
}

Policy Types

public enum Policy {
    TOPIC,      // Blocks unwanted topics
    CONTENT,    // Filters harmful content (hate, violence, sexual, etc.)
    WORD,       // Blocks profanity and custom word lists
    SENSITIVE,  // Detects and anonymizes PII
    CONTEXT     // Verifies responses are grounded in provided context
}

Actions

public enum Action {
    ANONYMIZED,  // PII was masked/removed
    BLOCKED,     // Content was blocked
    NONE,        // No action (passed policy check)
    UNKNOWN      // Unrecognized action
}
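A typical `handleAssessments` implementation switches on the action to decide how to react to each (policy, action) pair. The sketch below is self-contained: it declares local copies of the two enums as stand-ins for the library types above so it compiles on its own, and the handling strings are illustrative.

```java
// Local stand-ins for the library enums shown above, so this sketch is self-contained.
enum Policy { TOPIC, CONTENT, WORD, SENSITIVE, CONTEXT }
enum Action { ANONYMIZED, BLOCKED, NONE, UNKNOWN }

public class GuardrailHandling {

    /** Decide how to react to a single (policy, action) pair. */
    static String describe(Policy policy, Action action) {
        return switch (action) {
            case BLOCKED    -> "Request rejected by " + policy + " policy";
            case ANONYMIZED -> "PII masked by " + policy + " policy";
            case NONE       -> "Passed " + policy + " policy";
            case UNKNOWN    -> "Unrecognized action for " + policy + " policy";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(Policy.CONTENT, Action.BLOCKED));
        // prints: Request rejected by CONTENT policy
        System.out.println(describe(Policy.SENSITIVE, Action.ANONYMIZED));
        // prints: PII masked by SENSITIVE policy
    }
}
```

Using an exhaustive switch over the action means the compiler flags any handling gap if a new action value is ever added.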

API Classes

BedrockGuardrailConfiguration

public class BedrockGuardrailConfiguration {
    public BedrockGuardrailConfiguration(String guardrailIdentifier, String guardrailVersion);

    public String guardrailIdentifier();
    public String guardrailVersion();

    public static Builder builder();
}

GuardrailAssessmentSummary

public class GuardrailAssessmentSummary {
    public List<GuardrailAssessment> inputAssessments();
    public List<GuardrailAssessment> outputAssessments();

    public static Builder builder();
}

GuardrailAssessment

public class GuardrailAssessment {
    public Action action();
    public Policy policy();
    public String name();

    public static Builder<?> builder();
}
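Because the summary reports input and output assessments in separate lists, a common pattern is to merge both and keep only the entries where the guardrail actually intervened. The sketch below is self-contained: it uses a local record and local enum copies in place of the library types above (the field names mirror the accessors shown, but the record itself is a hypothetical stand-in).

```java
import java.util.List;
import java.util.stream.Stream;

public class AssessmentFilter {

    // Local stand-ins mirroring the library types above, so the sketch compiles on its own.
    enum Policy { TOPIC, CONTENT, WORD, SENSITIVE, CONTEXT }
    enum Action { ANONYMIZED, BLOCKED, NONE, UNKNOWN }
    record Assessment(Policy policy, Action action, String name) {}

    /** Merge input and output assessments, keeping only those where the guardrail intervened. */
    static List<Assessment> violations(List<Assessment> input, List<Assessment> output) {
        return Stream.concat(input.stream(), output.stream())
                .filter(a -> a.action() == Action.BLOCKED || a.action() == Action.ANONYMIZED)
                .toList();
    }

    public static void main(String[] args) {
        List<Assessment> in = List.of(
                new Assessment(Policy.CONTENT, Action.NONE, "hate"),
                new Assessment(Policy.SENSITIVE, Action.ANONYMIZED, "EMAIL"));
        List<Assessment> out = List.of(
                new Assessment(Policy.TOPIC, Action.BLOCKED, "investment-advice"));
        System.out.println(violations(in, out).size()); // prints 2
    }
}
```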

IAM Permissions

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ApplyGuardrail"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/*",
        "arn:aws:bedrock:*:*:guardrail/*"
      ]
    }
  ]
}
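In production you will usually want to narrow the wildcard guardrail resource to the specific guardrail in use. The fragment below is illustrative only; the region, account ID, and guardrail ID are placeholders to replace with your own values:

```json
"Resource": [
  "arn:aws:bedrock:*::foundation-model/*",
  "arn:aws:bedrock:us-east-1:123456789012:guardrail/my-guardrail-id"
]
```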

Next Steps

  • Configuration Guide - Set up guardrails
  • Handling Assessments - Process violations
  • Use Cases - Common scenarios

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j-bedrock
