
tessl/maven-dev-langchain4j--langchain4j-bedrock

AWS Bedrock integration for LangChain4j enabling Java applications to interact with various LLM providers through a unified interface


docs/guides/thinking-mode.md

Thinking Mode (Extended Reasoning)

Enable extended reasoning for complex problem-solving with Claude 3.5 Sonnet v2.

Overview

Thinking mode (extended reasoning) allows the model to expose its step-by-step reasoning process alongside the final answer. It is supported only on Claude 3.5 Sonnet v2 and newer models.

Enabling Thinking Mode

import dev.langchain4j.model.bedrock.BedrockChatRequestParameters;
import dev.langchain4j.model.bedrock.BedrockChatModel;

BedrockChatRequestParameters params = BedrockChatRequestParameters.builder()
    .enableReasoning(10000)  // Token budget for reasoning
    .build();

BedrockChatModel model = BedrockChatModel.builder()
    .modelId("anthropic.claude-3-5-sonnet-20241022-v2:0")
    .defaultRequestParameters(params)
    .returnThinking(true)  // Return thinking content in responses
    .sendThinking(true)    // Send thinking in conversation history
    .build();

Parameters

  • enableReasoning(Integer tokenBudget): Maximum tokens allocated for reasoning
  • returnThinking(Boolean): Include thinking content in responses (default: false)
  • sendThinking(Boolean): Send thinking in follow-up messages (default: true)
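The builder defaults above apply to every call, but langchain4j's ChatRequest also accepts per-request parameters, so a larger reasoning budget can be supplied for a single hard problem. A sketch, assuming the usual langchain4j request API; the budget value is illustrative:

```java
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.bedrock.BedrockChatRequestParameters;
import dev.langchain4j.model.chat.request.ChatRequest;

// Per-request parameters override the model's defaultRequestParameters
BedrockChatRequestParameters hardProblemParams = BedrockChatRequestParameters.builder()
        .enableReasoning(20000)  // larger budget for this request only
        .build();

ChatRequest request = ChatRequest.builder()
        .messages(UserMessage.from("Solve this complex problem: ..."))
        .parameters(hardProblemParams)
        .build();

// model.chat(request) then reasons with the 20000-token budget
```

This keeps a modest default budget for routine calls while paying for deeper reasoning only where it is needed.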

Accessing Thinking Content

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;

ChatResponse response = model.chat(ChatRequest.builder()
    .messages(UserMessage.from("Solve this complex problem: ..."))
    .build());

AiMessage aiMessage = response.aiMessage();

String thinking = aiMessage.thinking();  // null when returnThinking is not enabled

if (thinking != null) {
    String answer = aiMessage.text();

    System.out.println("Thinking: " + thinking);
    System.out.println("Answer: " + answer);
}

Complete Example

import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.bedrock.BedrockChatModel;
import dev.langchain4j.model.bedrock.BedrockChatRequestParameters;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import software.amazon.awssdk.regions.Region;

BedrockChatRequestParameters params = BedrockChatRequestParameters.builder()
    .enableReasoning(10000)
    // extended thinking is incompatible with a custom temperature, so none is set
    .maxOutputTokens(16384)  // must be larger than the reasoning budget
    .build();

BedrockChatModel model = BedrockChatModel.builder()
    .region(Region.US_EAST_1)
    .modelId("anthropic.claude-3-5-sonnet-20241022-v2:0")
    .defaultRequestParameters(params)
    .returnThinking(true)
    .sendThinking(true)
    .build();

ChatRequest request = ChatRequest.builder()
    .messages(UserMessage.from(
        "A farmer has 17 sheep. All but 9 die. How many are left?"
    ))
    .build();

ChatResponse response = model.chat(request);
AiMessage aiMessage = response.aiMessage();

String thinking = aiMessage.thinking();

if (thinking != null) {
    System.out.println("=== Thinking Process ===");
    System.out.println(thinking);
    System.out.println("\n=== Answer ===");
    System.out.println(aiMessage.text());
}

Important Notes

  • Only Claude 3.5 Sonnet v2+: older models don't support extended reasoning
  • Token budget: reasoning tokens count toward maxOutputTokens, so keep the budget below that limit
  • Costs: thinking tokens are billed as output tokens
  • returnThinking: controls whether thinking content appears in responses
  • sendThinking: controls whether thinking content is sent back in subsequent requests
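Because the reasoning budget shares the output-token limit, it is worth sanity-checking the two values together before building the parameters. A minimal sketch; the helper below is illustrative and not part of the library:

```java
// Validate a reasoning budget against the output-token limit.
// Mirrors the constraint that the thinking budget must fit inside
// maxOutputTokens, leaving room for the final answer.
static int validatedReasoningBudget(int reasoningBudget, int maxOutputTokens) {
    if (reasoningBudget >= maxOutputTokens) {
        throw new IllegalArgumentException(
            "Reasoning budget " + reasoningBudget
            + " must be smaller than maxOutputTokens " + maxOutputTokens);
    }
    return reasoningBudget;
}
```

For example, a 10000-token budget passes with maxOutputTokens of 16384 but fails with 8192, where the budget would leave no room for the answer.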

Supported Models

  • anthropic.claude-3-5-sonnet-20241022-v2:0 - Claude 3.5 Sonnet v2

Related:

  • Chat Models API
  • Parameters API
  • Model Reference

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j-bedrock@1.11.0
