LangChain AWS integration providing chat models, embeddings, and retrievers for seamless AWS service connections.
Conversational AI built on AWS Bedrock's Converse API, with support for streaming, function calling, multimodal input, structured output, reasoning content, and prompt caching.
Primary chat model class for AWS Bedrock Converse API integration, extending LangChain's BaseChatModel with AWS-specific capabilities.
/**
* AWS Bedrock Converse chat model integration with streaming and tool support
*/
class ChatBedrockConverse extends BaseChatModel<ChatBedrockConverseCallOptions, AIMessageChunk> {
constructor(fields?: ChatBedrockConverseInput);
/** Generate a single response from the model */
invoke(messages: BaseMessage[], options?: ChatBedrockConverseCallOptions): Promise<AIMessage>;
/** Stream response chunks for real-time output */
stream(messages: BaseMessage[], options?: ChatBedrockConverseCallOptions): Promise<IterableReadableStream<AIMessageChunk>>;
/** Bind tools for function calling capabilities */
bindTools(tools: ChatBedrockConverseToolType[], kwargs?: Partial<ChatBedrockConverseCallOptions>): Runnable<BaseLanguageModelInput, AIMessageChunk, ChatBedrockConverseCallOptions>;
/** Force structured JSON output using schema validation */
withStructuredOutput<T>(outputSchema: InteropZodType<T> | Record<string, any>, config?: StructuredOutputMethodOptions<boolean>): Runnable<BaseLanguageModelInput, T> | Runnable<BaseLanguageModelInput, { raw: BaseMessage; parsed: T }>;
/** Get LangSmith parameters for tracing */
getLsParams(options: ChatBedrockConverseCallOptions): LangSmithParams;
/** Get invocation parameters for Bedrock API */
invocationParams(options?: ChatBedrockConverseCallOptions): Partial<ConverseCommandParams>;
}
Usage Examples:
import { ChatBedrockConverse } from "@langchain/aws";
import { HumanMessage } from "@langchain/core/messages";
// Basic initialization
const model = new ChatBedrockConverse({
region: "us-east-1",
model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
temperature: 0.7,
maxTokens: 1000,
credentials: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!
}
});
// Simple conversation
const response = await model.invoke([
new HumanMessage("Explain quantum computing in simple terms")
]);
// Streaming conversation
const stream = await model.stream([
new HumanMessage("Write a short story about a robot")
]);
for await (const chunk of stream) {
process.stdout.write(chunk.content);
}
Comprehensive configuration options for initializing ChatBedrockConverse instances.
interface ChatBedrockConverseInput extends BaseChatModelParams, Partial<DefaultProviderInit> {
/** Custom BedrockRuntimeClient instance */
client?: BedrockRuntimeClient;
/** Configuration options for BedrockRuntimeClient */
clientOptions?: BedrockRuntimeClientConfig;
/** Enable streaming responses by default */
streaming?: boolean;
/** Model ID to use (default: "anthropic.claude-3-haiku-20240307-v1:0") */
model?: string;
/** AWS region for API calls */
region?: string;
/** AWS credentials for authentication */
credentials?: CredentialType;
/** Temperature for response randomness (0.0-1.0) */
temperature?: number;
/** Maximum tokens to generate */
maxTokens?: number;
/** Custom endpoint hostname override */
endpointHost?: string;
/** Top-p sampling parameter (0.0-1.0) */
topP?: number;
/** Additional model-specific request fields */
additionalModelRequestFields?: __DocumentType;
/** Include usage metadata in streaming responses */
streamUsage?: boolean;
/** Guardrail configuration for content filtering */
guardrailConfig?: GuardrailConfiguration;
/** Performance configuration for latency optimization */
performanceConfig?: PerformanceConfiguration;
/** Supported tool choice values for this model */
supportsToolChoiceValues?: Array<"auto" | "any" | "tool">;
}
Runtime options that can be passed to model methods for per-request customization.
interface ChatBedrockConverseCallOptions extends BaseChatModelCallOptions {
/** Stop sequences to halt generation */
stop?: string[];
/** Tools available for function calling */
tools?: ChatBedrockConverseToolType[];
/** Tool choice strategy ("auto", "any", tool name, or BedrockToolChoice object) */
tool_choice?: BedrockConverseToolChoice;
/** Additional model-specific fields for this request */
additionalModelRequestFields?: __DocumentType;
/** Include usage metadata in streaming for this request */
streamUsage?: boolean;
/** Guardrail configuration for this request */
guardrailConfig?: GuardrailConfiguration;
/** Performance configuration for this request */
performanceConfig?: PerformanceConfiguration;
}
Tool binding and function calling with configurable tool choice.
type ChatBedrockConverseToolType = BindToolsInput | BedrockTool;
type BedrockConverseToolChoice = "auto" | "any" | string | BedrockToolChoice;
type BedrockToolChoice = ToolChoice.AnyMember | ToolChoice.AutoMember | ToolChoice.ToolMember;
Usage Examples:
import { z } from "zod";
// Define tools with Zod schemas
const weatherTool = {
name: "get_weather",
description: "Get current weather information",
schema: z.object({
location: z.string().describe("City and state, e.g. San Francisco, CA"),
unit: z.enum(["celsius", "fahrenheit"]).optional()
})
};
const calculatorTool = {
name: "calculate",
description: "Perform mathematical calculations",
schema: z.object({
expression: z.string().describe("Mathematical expression to evaluate")
})
};
// Bind tools with automatic tool choice
const modelWithTools = model.bindTools([weatherTool, calculatorTool], {
tool_choice: "auto" // Let model decide when to use tools
});
// Force tool usage
const modelForceWeather = model.bindTools([weatherTool], {
tool_choice: "get_weather" // Always use weather tool
});
// Require any tool
const modelRequireTool = model.bindTools([weatherTool, calculatorTool], {
tool_choice: "any" // Must use at least one tool
});
const result = await modelWithTools.invoke([
new HumanMessage("What's the weather in Paris and what's 2 + 2?")
]);
// Access tool calls from response
if (result.tool_calls && result.tool_calls.length > 0) {
result.tool_calls.forEach(call => {
console.log(`Tool: ${call.name}, Args:`, call.args);
});
}
Force models to return structured JSON data using schema validation.
Usage Examples:
import { z } from "zod";
// Define response structure
const JokeSchema = z.object({
setup: z.string().describe("The setup of the joke"),
punchline: z.string().describe("The punchline"),
rating: z.number().min(1).max(10).optional().describe("Humor rating 1-10")
});
// Create structured output model
const structuredModel = model.withStructuredOutput(JokeSchema, {
name: "generate_joke"
});
// Get structured response
const joke = await structuredModel.invoke([
new HumanMessage("Tell me a joke about programming")
]);
console.log(joke); // { setup: "Why do programmers...", punchline: "...", rating: 8 }
// With raw response included
const modelWithRaw = model.withStructuredOutput(JokeSchema, {
name: "generate_joke",
includeRaw: true
});
const result = await modelWithRaw.invoke([
new HumanMessage("Tell me a joke about cats")
]);
console.log(result.parsed); // Structured joke object
console.log(result.raw); // Original AIMessage
Support for text, images, and documents in conversations.
Usage Examples:
import { HumanMessage } from "@langchain/core/messages";
// Image analysis
const imageMessage = new HumanMessage({
content: [
{ type: "text", text: "Describe what you see in this image" },
{
type: "image_url",
image_url: {
url: "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..."
}
}
]
});
const imageResponse = await model.invoke([imageMessage]);
// Document analysis (pdfBuffer is assumed to be a Buffer/Uint8Array holding the PDF bytes)
const documentMessage = new HumanMessage({
content: [
{ type: "text", text: "Summarize this document" },
{
type: "document",
document: {
format: "pdf",
name: "report.pdf",
source: {
bytes: pdfBuffer
}
}
}
]
});
const docResponse = await model.invoke([documentMessage]);
Claude models can produce reasoning blocks showing their thought process.
// Reasoning content is automatically handled in streaming
const stream = await model.stream([
new HumanMessage("Solve this step by step: What is 15% of 240?")
]);
for await (const chunk of stream) {
// Reasoning content appears in response_metadata
if (chunk.response_metadata?.reasoning) {
console.log("Reasoning:", chunk.response_metadata.reasoning);
}
console.log("Content:", chunk.content);
}
Optimize costs and latency by caching frequently used prompts.
import { SystemMessage } from "@langchain/core/messages";
const systemMessage = new SystemMessage({
content: [
{ type: "text", text: "You are a helpful coding assistant with expertise in Python and JavaScript." },
{ cachePoint: { type: "default" } }, // Cache point marker
{ type: "text", text: "Always provide working code examples and explain your reasoning." }
]
});
const response = await model.invoke([
systemMessage,
new HumanMessage("How do I implement a binary search in Python?")
]);
Built-in support for AWS Bedrock Guardrails for content filtering.
const guardedModel = new ChatBedrockConverse({
region: "us-east-1",
model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
guardrailConfig: {
guardrailIdentifier: "your-guardrail-id",
guardrailVersion: "1",
trace: "enabled"
}
});
// Guardrails are automatically applied to requests and responses
const response = await guardedModel.invoke([
new HumanMessage("Your message here")
]);
Optimize for reduced latency using performance settings.
const optimizedModel = new ChatBedrockConverse({
region: "us-east-1",
model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
performanceConfig: {
latency: "optimized" // or "standard"
}
});
Install with Tessl CLI
npx tessl i tessl/npm-langchain--aws