LangChain4j Anthropic Integration

This package provides integration between the LangChain4j framework and Anthropic's Claude language models, enabling Java developers to incorporate Claude's AI capabilities into their applications with full support for advanced features including streaming responses, tool use, prompt caching, extended thinking, and PDF processing.

Package Information

  • Package Name: langchain4j-anthropic
  • Package Type: maven
  • Group ID: dev.langchain4j
  • Artifact ID: langchain4j-anthropic
  • Version: 1.11.0
  • Language: Java
  • Framework: LangChain4j
  • License: Apache-2.0

Installation

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-anthropic</artifactId>
    <version>1.11.0</version>
</dependency>

Core Imports

import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.anthropic.AnthropicStreamingChatModel;
import dev.langchain4j.model.anthropic.AnthropicChatModelName;
import dev.langchain4j.model.anthropic.AnthropicTokenCountEstimator;
import dev.langchain4j.model.anthropic.AnthropicModelCatalog;
import dev.langchain4j.model.anthropic.AnthropicChatResponseMetadata;
import dev.langchain4j.model.anthropic.AnthropicTokenUsage;
import dev.langchain4j.model.anthropic.AnthropicServerTool;
import dev.langchain4j.model.anthropic.AnthropicServerToolResult;

Basic Usage

import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.anthropic.AnthropicChatModelName;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.data.message.UserMessage;

// Create a synchronous chat model
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))  // Required, must not be null
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .maxTokens(1024)  // Optional, default: 1024
    .temperature(0.7)  // Optional, default: null (Anthropic's default)
    .build();

// Send a chat request
ChatRequest request = ChatRequest.builder()
    .messages(UserMessage.from("What is the capital of France?"))
    .build();

ChatResponse response = model.chat(request);  // May throw RuntimeException on API errors
System.out.println(response.aiMessage().text());

Architecture

The langchain4j-anthropic integration follows LangChain4j's standard patterns:

  • ChatModel Interface: Synchronous chat via AnthropicChatModel
  • StreamingChatModel Interface: Token-by-token streaming via AnthropicStreamingChatModel
  • Builder Pattern: All models use fluent builders for configuration (thread-safe after build())
  • Tool Support: Both user-defined tools and Anthropic server tools
  • LangChain4j Integration: Seamless integration with LangChain4j's AI services, memory, and chains

Thread Safety: Built model instances are thread-safe and can be shared across threads. Builders are not thread-safe.

Capabilities

Synchronous Chat Model

Core synchronous chat interface for Claude models with comprehensive configuration options.

package dev.langchain4j.model.anthropic;

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import dev.langchain4j.model.chat.listener.ChatModelListener;
import dev.langchain4j.model.chat.request.ChatRequestParameters;
import dev.langchain4j.model.output.ModelProvider;
import dev.langchain4j.model.Capability;
import java.util.List;
import java.util.Set;

public class AnthropicChatModel implements ChatModel {
    public static AnthropicChatModelBuilder builder();

    // Primary method for synchronous chat; callers typically use the inherited
    // chat(ChatRequest) default method, which delegates to doChat
    // @throws RuntimeException if API call fails (network, auth, rate limit, etc.)
    // @return ChatResponse containing AiMessage with text and/or tool calls, never null
    public ChatResponse doChat(ChatRequest chatRequest);

    // Model metadata
    public List<ChatModelListener> listeners();
    public ModelProvider provider();  // Returns ModelProvider.ANTHROPIC
    public ChatRequestParameters defaultRequestParameters();
    public Set<Capability> supportedCapabilities();  // Returns CHAT, TOOLS, VISION, MULTIMODALITY
}


Streaming Chat Model

Token-by-token streaming interface for real-time responses.

package dev.langchain4j.model.anthropic;

import dev.langchain4j.model.chat.StreamingChatModel;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import dev.langchain4j.model.chat.listener.ChatModelListener;
import dev.langchain4j.model.chat.request.ChatRequestParameters;
import dev.langchain4j.model.output.ModelProvider;
import dev.langchain4j.model.Capability;
import java.util.List;
import java.util.Set;

public class AnthropicStreamingChatModel implements StreamingChatModel {
    public static AnthropicStreamingChatModelBuilder builder();

    // Primary method for streaming chat; callers typically use the inherited
    // chat(ChatRequest, handler) default method, which delegates to doChat
    // Handler methods called asynchronously on IO thread
    // @throws RuntimeException if API call fails immediately (auth, validation)
    // Handler.onError() called for streaming errors
    public void doChat(ChatRequest chatRequest, StreamingChatResponseHandler handler);

    // Model metadata
    public List<ChatModelListener> listeners();
    public ModelProvider provider();  // Returns ModelProvider.ANTHROPIC
    public ChatRequestParameters defaultRequestParameters();
    public Set<Capability> supportedCapabilities();  // Returns CHAT, STREAMING, TOOLS, VISION, MULTIMODALITY
}


Model Names

Predefined Claude model identifiers.

package dev.langchain4j.model.anthropic;

public enum AnthropicChatModelName {
    CLAUDE_OPUS_4_5_20251101,       // "claude-opus-4-5-20251101" - Most powerful, supports thinking
    CLAUDE_SONNET_4_5_20250929,     // "claude-sonnet-4-5-20250929" - Balanced, recommended for general use
    CLAUDE_HAIKU_4_5_20251001,      // "claude-haiku-4-5-20251001" - Fastest, lowest cost
    CLAUDE_OPUS_4_1_20250805,       // "claude-opus-4-1-20250805" - Previous gen Opus
    CLAUDE_OPUS_4_20250514,         // "claude-opus-4-20250514" - Claude 4 Opus
    CLAUDE_SONNET_4_20250514,       // "claude-sonnet-4-20250514" - Claude 4 Sonnet
    CLAUDE_3_5_HAIKU_20241022,      // "claude-3-5-haiku-20241022" - Claude 3.5 Haiku
    CLAUDE_3_HAIKU_20240307;        // "claude-3-haiku-20240307" - Claude 3 Haiku

    public String toString();  // Returns model ID string (e.g., "claude-sonnet-4-5-20250929")
}


Token Count Estimation

Estimate token counts for text and messages using Anthropic's API.

package dev.langchain4j.model.anthropic;

import dev.langchain4j.model.TokenCountEstimator;
import dev.langchain4j.data.message.ChatMessage;

// Experimental API - may change in future versions
public class AnthropicTokenCountEstimator implements TokenCountEstimator {
    public static Builder builder();

    // Estimate tokens for plain text
    // @throws RuntimeException on API errors
    // @return Token count, always >= 0
    public int estimateTokenCountInText(String text);

    // Estimate tokens for single message
    // @throws RuntimeException on API errors
    // @return Token count, always >= 0
    public int estimateTokenCountInMessage(ChatMessage message);

    // Estimate tokens for message list
    // @throws RuntimeException on API errors or if no user messages and no dummy message configured
    // @return Token count, always >= 0
    public int estimateTokenCountInMessages(Iterable<ChatMessage> messages);
}


Model Discovery

Dynamically discover available Claude models via the Anthropic Models API.

package dev.langchain4j.model.anthropic;

import dev.langchain4j.model.catalog.ModelCatalog;
import dev.langchain4j.model.catalog.ModelDescription;
import dev.langchain4j.model.output.ModelProvider;
import java.util.List;

public class AnthropicModelCatalog implements ModelCatalog {
    public static Builder builder();

    // List all models accessible with provided API key
    // @throws RuntimeException on API errors (network, auth, etc.)
    // @return List of model descriptions, never null, may be empty
    public List<ModelDescription> listModels();

    public ModelProvider provider();  // Returns ModelProvider.ANTHROPIC
}
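
A minimal usage sketch (the `apiKey` setter on this builder is an assumption, mirroring the other builders in this package):

```java
import dev.langchain4j.model.anthropic.AnthropicModelCatalog;
import dev.langchain4j.model.catalog.ModelDescription;

AnthropicModelCatalog catalog = AnthropicModelCatalog.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))  // assumed setter, mirroring other builders
    .build();

// Print every model the provided API key can access
for (ModelDescription description : catalog.listModels()) {
    System.out.println(description);
}
```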


Tool Support

User-defined tools are configured via ToolSpecification from langchain4j-core. Server-side tools executed by Anthropic (e.g., web search) use dedicated classes.

package dev.langchain4j.model.anthropic;

import java.util.Map;

// Server-side tools executed by Anthropic (e.g., web_search, code_execution)
// Experimental API
public class AnthropicServerTool {
    public static Builder builder();

    public String type();  // Tool type identifier (e.g., "web_search_20250305"), never null
    public String name();  // Tool name (e.g., "web_search"), never null
    public Map<String, Object> attributes();  // Tool configuration attributes, never null, may be empty

    public boolean equals(Object o);
    public int hashCode();
    public String toString();
}

// Results from server-side tool execution
public class AnthropicServerToolResult {
    public static Builder builder();

    public String type();  // Result type (e.g., "web_search_tool_result"), never null
    public String toolUseId();  // ID linking to tool use, never null
    public Object content();  // Result content (structure depends on tool type), may be null

    public boolean equals(Object o);
    public int hashCode();
    public String toString();
}
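
As a sketch, configuring a web-search server tool might look like this (builder setter names are assumed to mirror the accessors above; the `max_uses` attribute key is illustrative, not confirmed by this document):

```java
import dev.langchain4j.model.anthropic.AnthropicServerTool;
import java.util.Map;

// Setter names assumed to mirror the accessors; attribute keys are illustrative
AnthropicServerTool webSearch = AnthropicServerTool.builder()
    .type("web_search_20250305")        // tool type identifier
    .name("web_search")                 // tool name
    .attributes(Map.of("max_uses", 3))  // hypothetical configuration attribute
    .build();
```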


Content Types

Multimodal content support for rich interactions with text, images, PDFs, tools, and thinking.

package dev.langchain4j.data.message;

// Base interface for all content types
public interface Content {
    ContentType type();  // Returns content type enum, never null
}

// Text content
public class TextContent implements Content {
    public static TextContent from(String text);  // text must not be null
    public String text();  // Never null
}

// Image content (JPEG, PNG, GIF, WebP)
public class ImageContent implements Content {
    public static ImageContent from(Image image);  // image must not be null
    public Image image();  // Never null
}

// PDF document content
public class PdfFileContent implements Content {
    // From Base64-encoded data
    public static PdfFileContent from(String base64Data, String mimeType);  // Both must not be null
    // From URL
    public static PdfFileContent from(String url);  // url must not be null
}


Response Metadata

Detailed response metadata including token usage with caching information.

package dev.langchain4j.model.anthropic;

import dev.langchain4j.model.chat.response.ChatResponseMetadata;
import dev.langchain4j.model.output.TokenUsage;
import dev.langchain4j.model.output.FinishReason;
import dev.langchain4j.http.client.SuccessfulHttpResponse;
import dev.langchain4j.http.client.sse.ServerSentEvent;
import java.util.List;

public class AnthropicChatResponseMetadata extends ChatResponseMetadata {
    public static Builder builder();

    // Anthropic-specific token usage with cache metrics
    public AnthropicTokenUsage tokenUsage();  // Never null

    // Raw HTTP response for debugging
    public SuccessfulHttpResponse rawHttpResponse();  // May be null

    // Raw SSE events (streaming only)
    public List<ServerSentEvent> rawServerSentEvents();  // May be null, non-null only for streaming

    public Builder toBuilder();
}

// Token usage with cache-specific metrics
public class AnthropicTokenUsage extends TokenUsage {
    public static Builder builder();

    // Standard token counts (inherited from TokenUsage)
    public Integer inputTokenCount();  // Never null, >= 0
    public Integer outputTokenCount();  // Never null, >= 0
    public Integer totalTokenCount();  // Never null, >= 0

    // Cache-specific metrics (null if caching not used)
    public Integer cacheCreationInputTokens();  // Null if no cache created, >= 0 otherwise
    public Integer cacheReadInputTokens();  // Null if no cache read, >= 0 otherwise

    // Add token usage from another response
    public AnthropicTokenUsage add(TokenUsage that);  // that may be null, returns new instance

    public String toString();
}


Key Features

Advanced Chat Features

  • Synchronous and streaming modes
  • Temperature (0.0-1.0), topP (0.0-1.0), topK (1+) sampling control
  • Stop sequences (max 4)
  • Response format control (JSON, text)
  • Custom request parameters for experimental features
  • Request/response logging with SLF4J
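
A sketch of these options on the builder (setter names follow the builder conventions used elsewhere in this document; the values shown are illustrative):

```java
import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.anthropic.AnthropicChatModelName;
import java.util.List;

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .temperature(0.2)               // 0.0-1.0; lower is more deterministic
    .topP(0.9)                      // nucleus sampling, 0.0-1.0
    .topK(40)                       // sample only from the top K tokens
    .stopSequences(List.of("END"))  // up to 4 stop sequences
    .logRequests(true)              // SLF4J request logging
    .logResponses(true)             // SLF4J response logging
    .build();
```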

Tool Integration

  • User-defined tools via ToolSpecification
  • Tool choice strategies (AUTO, REQUIRED, NONE, specific tool)
  • Parallel and sequential tool execution
  • Strict tool schema validation
  • Server-side tools: web_search_20250305, code_execution_20250305
  • Tool metadata passing for context
  • Tool results with error handling

Prompt Caching

  • Cache system messages for cost optimization (90% discount on cache reads)
  • Cache tool definitions (reuse across requests)
  • Track cache creation and read tokens in metadata
  • Automatic cache management with 5-minute TTL
  • Requires minimum 1024 tokens to cache

Extended Thinking

  • Enable Claude's reasoning mode (Opus 4.5+ only)
  • Configure thinking token budget (recommended: 5000-10000)
  • Return thinking text in responses
  • Control thinking in follow-up requests
  • Thinking text streaming support
  • Cryptographic signatures for thinking verification (model-specific)

Media Support

  • Images: JPEG, PNG, GIF, WebP (Base64-encoded with MIME type, max 5MB recommended)
  • PDFs: Document processing via URL or Base64 (max 32MB recommended)
  • Multimodal message content (text + images/PDFs)
  • Multiple images per message (up to model limits)
  • OCR and text extraction from images
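
A sketch of a multimodal message mixing text and an image (the `photo.jpg` path is a placeholder):

```java
import dev.langchain4j.data.image.Image;
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

// Read and Base64-encode a local image ("photo.jpg" is a placeholder path)
byte[] imageBytes = Files.readAllBytes(Path.of("photo.jpg"));
String base64Jpeg = Base64.getEncoder().encodeToString(imageBytes);

Image image = Image.builder()
    .base64Data(base64Jpeg)
    .mimeType("image/jpeg")  // MIME type must match the encoded data
    .build();

// A single user message can combine text and image content
UserMessage message = UserMessage.from(
    TextContent.from("What is shown in this image?"),
    ImageContent.from(image)
);
```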

Integration Features

  • LangChain4j AI Services integration (@Tool annotations)
  • Chat memory support (MessageWindowChatMemory, TokenWindowChatMemory)
  • Model listeners for events (request/response/error)
  • Custom HTTP client support (timeouts, proxies, retry logic)
  • Request/response logging with SLF4J
  • Retry mechanism with exponential backoff (default: 2 retries)
  • User ID tracking for abuse detection

Common Patterns

Using with AI Services

import dev.langchain4j.service.AiServices;

interface Assistant {
    String chat(String message);
}

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .build();

Assistant assistant = AiServices.create(Assistant.class, model);
String response = assistant.chat("Hello!");  // May throw RuntimeException on API errors

Using with Tools

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.AiServices;

class WeatherService {
    @Tool("Get current weather for a location")
    public String getWeather(String location) {  // Method must be public
        return "Sunny, 72°F in " + location;
    }
}

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .build();

// Reuses the Assistant interface defined in the previous example
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(model)  // chatLanguageModel(...) was the pre-1.0 name
    .tools(new WeatherService())  // Tool methods discovered via reflection
    .build();

Streaming Responses

import dev.langchain4j.model.chat.response.StreamingChatResponseHandler;
import dev.langchain4j.model.chat.response.ChatResponse;

AnthropicStreamingChatModel model = AnthropicStreamingChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .build();

model.chat(
    ChatRequest.builder()
        .messages(UserMessage.from("Tell me a story"))
        .build(),
    new StreamingChatResponseHandler() {
        @Override
        public void onPartialResponse(String token) {
            // Called on IO thread for each token
            System.out.print(token);
        }

        @Override
        public void onCompleteResponse(ChatResponse completeResponse) {
            // Called on IO thread when streaming completes
            System.out.println("\nDone!");
        }

        @Override
        public void onError(Throwable error) {
            // Called on IO thread if streaming fails
            error.printStackTrace();
        }
    }
);

Enabling Prompt Caching

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .cacheSystemMessages(true)  // Cache system messages (90% cost reduction on reads)
    .cacheTools(true)            // Cache tool definitions
    .build();

// First request: creates cache (slightly higher cost)
ChatResponse response = model.chat(request);
AnthropicTokenUsage usage = ((AnthropicChatResponseMetadata) response.metadata()).tokenUsage();
System.out.println("Cache created: " + usage.cacheCreationInputTokens() + " tokens");

// Subsequent requests within 5 minutes: reads from cache
ChatResponse response2 = model.chat(request2);
AnthropicTokenUsage usage2 = ((AnthropicChatResponseMetadata) response2.metadata()).tokenUsage();
System.out.println("Cache read: " + usage2.cacheReadInputTokens() + " tokens (90% discount)");

Extended Thinking

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .modelName(AnthropicChatModelName.CLAUDE_OPUS_4_5_20251101)  // Requires Opus 4.5+
    .thinkingType("enabled")  // Enable thinking mode
    .thinkingBudgetTokens(5000)  // Token budget for reasoning (recommended: 5000-10000)
    .returnThinking(true)  // Required to receive thinking text
    .build();

ChatResponse response = model.chat(request);
String thinking = response.aiMessage().thinking();  // May be null if thinking not used
String answer = response.aiMessage().text();  // Never null

if (thinking != null) {
    System.out.println("Reasoning: " + thinking);
}
System.out.println("Answer: " + answer);

Error Handling

Common Exceptions

import dev.langchain4j.model.anthropic.AnthropicChatModel;

try {
    ChatResponse response = model.chat(request);
} catch (RuntimeException e) {
    // API errors wrapped in RuntimeException
    // Common causes:
    // - Invalid API key (401)
    // - Rate limit exceeded (429)
    // - Network timeout
    // - Invalid model name
    // - Invalid request parameters
    // - Server errors (500+)
    System.err.println("API error: " + e.getMessage());

    // Check if retries exhausted
    if (e.getMessage().contains("retries")) {
        // All retries failed
    }
}

Validation Errors

// These throw IllegalArgumentException at build time:
try {
    AnthropicChatModel model = AnthropicChatModel.builder()
        .apiKey(null)  // IllegalArgumentException: apiKey is required
        .build();
} catch (IllegalArgumentException e) {
    // Handle validation error
}

try {
    AnthropicChatModel model = AnthropicChatModel.builder()
        .apiKey(apiKey)
        .temperature(1.5)  // IllegalArgumentException: temperature must be 0.0-1.0
        .build();
} catch (IllegalArgumentException e) {
    // Handle validation error
}

Streaming Errors

model.chat(request, new StreamingChatResponseHandler() {
    @Override
    public void onPartialResponse(String token) {
        System.out.print(token);
    }

    @Override
    public void onCompleteResponse(ChatResponse response) {
        System.out.println("\nDone");
    }

    @Override
    public void onError(Throwable error) {
        // Called for streaming errors:
        // - Connection lost
        // - Invalid SSE format
        // - Server errors mid-stream
        System.err.println("Streaming error: " + error.getMessage());
    }
});

Performance Considerations

Model Selection Impact

  • Haiku: 2-3x faster, 10x cheaper than Sonnet
  • Sonnet: Balanced speed/quality, recommended for production
  • Opus: 2-3x slower, highest quality, supports thinking

Caching for Cost Optimization

  • Cache hit: 90% cost reduction on cached tokens
  • Cache TTL: 5 minutes
  • Minimum cacheable size: 1024 tokens
  • Best for: repeated system prompts, tool definitions, long context
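
To see why caching pays off, a back-of-the-envelope calculation using the 90% read discount (the per-token price and the ~25% cache-write premium are illustrative assumptions, not Anthropic's actual rates):

```java
import java.util.Locale;

// Illustrative input price; not Anthropic's actual rate
double pricePerMTok = 3.00;   // $ per million input tokens
int cachedTokens = 50_000;    // e.g., a large system prompt plus tool definitions
int requests = 100;

// Without caching: the full prompt is billed on every request
double withoutCache = requests * (cachedTokens / 1_000_000.0) * pricePerMTok;

// With caching: one cache write (assumed ~25% premium), then 90%-discounted reads
double cacheWrite = (cachedTokens / 1_000_000.0) * pricePerMTok * 1.25;
double cacheReads = (requests - 1) * (cachedTokens / 1_000_000.0) * pricePerMTok * 0.10;
double withCache = cacheWrite + cacheReads;

System.out.printf(Locale.US, "without cache: $%.2f, with cache: $%.2f%n", withoutCache, withCache);
// → without cache: $15.00, with cache: $1.67
```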

Token Limits

  • Context window: varies by model (200K for Sonnet 4.5)
  • Max output tokens: configurable, default 1024
  • Thinking tokens: separate budget (not counted against output)

Retry Configuration

AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(apiKey)
    .maxRetries(3)  // Default: 2, recommended: 2-3
    .timeout(Duration.ofSeconds(60))  // Default: 60s, adjust based on maxTokens
    .build();
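
Retries use exponential backoff, so the wait roughly doubles with each attempt. An illustrative sketch of such a delay schedule (not the library's exact timings):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative exponential backoff: delay = baseDelay * 2^attempt
long baseDelayMillis = 1_000;
int maxRetries = 3;

List<Long> delays = new ArrayList<>();
for (int attempt = 0; attempt < maxRetries; attempt++) {
    delays.add(baseDelayMillis * (1L << attempt));
}
System.out.println(delays);
// → [1000, 2000, 4000]
```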

Connection Pooling

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Reuse model instances to benefit from connection pooling
// Built models are thread-safe
AnthropicChatModel sharedModel = AnthropicChatModel.builder()
    .apiKey(apiKey)
    .modelName(AnthropicChatModelName.CLAUDE_SONNET_4_5_20250929)
    .build();

// Use across multiple threads
ExecutorService executor = Executors.newFixedThreadPool(10);
for (int i = 0; i < 100; i++) {
    executor.submit(() -> {
        ChatResponse response = sharedModel.chat(request);  // Thread-safe
    });
}

LangChain4j Core Types

This package integrates with standard LangChain4j interfaces and classes:

  • ChatModel, StreamingChatModel - Model interfaces (dev.langchain4j.model.chat)
  • ChatRequest, ChatResponse - Request/response wrappers (dev.langchain4j.model.chat.request/response)
  • UserMessage, SystemMessage, AiMessage - Message types (dev.langchain4j.data.message)
  • ToolSpecification - Tool definition (dev.langchain4j.agent.tool)
  • TokenCountEstimator - Token counting interface (dev.langchain4j.model)
  • ModelCatalog - Model discovery interface (dev.langchain4j.model.catalog)
  • ChatModelListener - Event listeners (dev.langchain4j.model.chat.listener)

Refer to the langchain4j-core documentation for details on these types.

Common Pitfalls

API Key Security

// ❌ DON'T: Hardcode API keys
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey("sk-ant-...")  // Security risk!
    .build();

// ✅ DO: Use environment variables or secure config
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(System.getenv("ANTHROPIC_API_KEY"))
    .build();

Token Limit Errors

// ❌ DON'T: Ignore token limits
model.chat(hugeRequest);  // May fail with 400 error

// ✅ DO: Check token counts beforehand
AnthropicTokenCountEstimator estimator = AnthropicTokenCountEstimator.builder()
    .apiKey(apiKey)
    .modelName(modelName)
    .build();

int tokens = estimator.estimateTokenCountInMessages(messages);
if (tokens + maxOutputTokens > contextWindow) {
    // Trim messages or use summarization
}

Missing Tool Results

// ❌ DON'T: Ignore tool calls
ChatResponse response = model.chat(request);
// If model wants to use a tool, you must send back the result

// ✅ DO: Handle tool execution loop
ChatResponse response = model.chat(request);
if (response.aiMessage().hasToolExecutionRequests()) {
    List<ChatMessage> messages = new ArrayList<>(request.messages());
    messages.add(response.aiMessage());
    for (ToolExecutionRequest toolCall : response.aiMessage().toolExecutionRequests()) {
        String result = executeTool(toolCall);
        messages.add(ToolExecutionResultMessage.from(toolCall, result));
    }
    // Send the tool results back to the model in a follow-up request
    ChatResponse finalResponse = model.chat(ChatRequest.builder()
        .messages(messages)
        .build());
}

Caching Without Persistence

// ❌ DON'T: Rebuild model for each request (loses cache)
for (ChatRequest request : requests) {
    AnthropicChatModel model = AnthropicChatModel.builder()
        .apiKey(apiKey)
        .cacheSystemMessages(true)
        .build();
    model.chat(request);  // Cache never reused!
}

// ✅ DO: Reuse model instance
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(apiKey)
    .cacheSystemMessages(true)
    .build();

for (ChatRequest request : requests) {
    model.chat(request);  // Cache reused within 5-minute TTL
}

Thinking Without Configuration

// ❌ DON'T: Expect thinking without enabling it
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(apiKey)
    .modelName(AnthropicChatModelName.CLAUDE_OPUS_4_5_20251101)
    .build();
ChatResponse response = model.chat(request);
String thinking = response.aiMessage().thinking();  // Always null!

// ✅ DO: Enable and configure thinking
AnthropicChatModel model = AnthropicChatModel.builder()
    .apiKey(apiKey)
    .modelName(AnthropicChatModelName.CLAUDE_OPUS_4_5_20251101)
    .thinkingType("enabled")
    .returnThinking(true)  // Required!
    .build();
ChatResponse response = model.chat(request);
String thinking = response.aiMessage().thinking();  // May contain thinking text

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j-anthropic@1.11.0