
tessl/maven-dev-langchain4j--langchain4j

Build LLM-powered applications in Java with support for chatbots, agents, RAG, tools, and much more


docs/tools.md

Tools

Framework for function calling and tool execution. Allows LLMs to call Java methods as tools with automatic JSON argument parsing and result handling.

Capabilities

Tool Annotation

Annotation for marking methods as tools that can be called by LLMs.

package dev.langchain4j.agent.tool;

/**
 * Annotation for marking methods as tools callable by LLMs
 * Tool methods can be in any class and will be discovered automatically
 */
@Target(METHOD)
@Retention(RUNTIME)
public @interface Tool {
    /**
     * Tool description for the LLM
     * Should clearly explain what the tool does
     * @return Tool description
     */
    String value() default "";

    /**
     * Tool name (optional, defaults to method name)
     * @return Tool name
     */
    String name() default "";
}

Thread Safety: The @Tool annotation itself is thread-safe as it's processed at startup. However, the tool method implementations must be thread-safe as they may be invoked concurrently when executeToolsConcurrently() is enabled. Use synchronization, thread-safe data structures, or stateless implementations.

Common Pitfalls:

  • Vague descriptions: Tool descriptions like "Does something useful" confuse the LLM. Be specific: "Calculates the sum of two numbers and returns the result as a decimal"
  • Missing parameter descriptions: Use @P annotation to describe each parameter
  • Long-running operations: Tools that take >30 seconds may timeout or block other tool executions
  • Hallucinated tool names: LLMs may invent tool names if descriptions are unclear

Edge Cases:

  • Null parameters: Tool methods receive null if LLM omits optional parameters. Always validate inputs.
  • Type mismatches: If LLM provides incompatible types (string instead of int), argument parsing fails and triggers ToolArgumentsErrorHandler
  • Method overloading: Multiple @Tool methods with same name will conflict. Use explicit name attribute to disambiguate.
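To illustrate the overloading point, a minimal sketch (assuming the standard `@Tool` and `@P` annotations from `dev.langchain4j.agent.tool`; the class and tool names are hypothetical):

```java
import dev.langchain4j.agent.tool.P;
import dev.langchain4j.agent.tool.Tool;

// Sketch: overloaded methods disambiguated with explicit tool names
class ProductSearchTools {

    // Without explicit names, both overloads would register under "search" and conflict
    @Tool(name = "searchByKeyword", value = "Searches products by a free-text keyword")
    public String search(@P("Search keyword") String keyword) {
        return "Results for keyword: " + keyword;
    }

    @Tool(name = "searchByCategory", value = "Searches products by numeric category ID")
    public String search(@P("Category ID") int categoryId) {
        return "Results for category: " + categoryId;
    }
}
```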

Performance Notes:

  • Tool discovery happens once at service initialization
  • Reflection overhead is minimal (methods are cached)
  • Consider marking fast tools (<100ms) separately from slow tools for prioritization

Cost Considerations:

  • Each tool invocation adds a round-trip: user message → LLM → tool call → result → LLM → final response
  • Multiple sequential tool calls multiply costs (3 tools = 3x round-trips)
  • Use maxSequentialToolsInvocations to cap costs
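As a cost-control sketch (assuming the AiServices builder exposes `maxSequentialToolsInvocations` as named above; `Assistant`, `chatModel`, and `CalculatorTools` are placeholder types):

```java
// Sketch: capping sequential tool round-trips to bound cost
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .maxSequentialToolsInvocations(3) // stop after 3 consecutive tool calls
    .build();
```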

Exception Handling:

  • Uncaught exceptions in tool methods trigger ToolExecutionErrorHandler
  • Return error messages as strings instead of throwing for recoverable errors
  • Use propagateToolExecutionExceptions(true) to fail fast on critical errors

Related APIs: AiServices.Builder, ToolSpecification, ToolExecutor, ToolProvider


ToolExecutor

Interface for executing tools programmatically.

package dev.langchain4j.service.tool;

/**
 * Interface for executing tools
 */
public interface ToolExecutor {
    /**
     * Execute tool with given request
     * @param toolExecutionRequest Request containing tool name and arguments
     * @param memoryId Memory ID for context
     * @return Result string from tool execution
     */
    String execute(ToolExecutionRequest toolExecutionRequest, Object memoryId);
}

/**
 * Default implementation of ToolExecutor
 */
public class DefaultToolExecutor implements ToolExecutor {
    /**
     * Create builder
     * @return Builder instance
     */
    public static Builder builder();

    /**
     * Execute tool
     * @param toolExecutionRequest Request containing tool name and arguments
     * @param memoryId Memory ID for context
     * @return Result string
     */
    public String execute(ToolExecutionRequest toolExecutionRequest, Object memoryId);
}

Thread Safety: DefaultToolExecutor is thread-safe and can be shared across multiple requests. The underlying tool method invocations follow the thread-safety guarantees of the tool implementation classes.

Common Pitfalls:

  • Ignoring memoryId: The memory ID carries user/session context; ignoring it can cause cross-user data leaks.
  • Large return strings: Tool results >10KB may exceed token limits or increase costs. Summarize or truncate.
  • Blocking operations: Synchronous tool execution blocks the entire request. Use async patterns or timeouts.

Edge Cases:

  • Null memoryId: Valid when tools don't need context. Check for null before using.
  • Empty argument map: Happens for parameterless tools. Validate before accessing arguments.
  • Tool not found: DefaultToolExecutor throws exception if tool name doesn't match registered tools.
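Since `ToolExecutor` declares a single abstract method, it can be implemented as a lambda. A sketch (the echo behavior and session handling are illustrative, not part of the API):

```java
import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.service.tool.ToolExecutor;

// Sketch: a hand-rolled ToolExecutor; the single abstract method allows a lambda
class EchoExecutors {

    static final ToolExecutor ECHO = (ToolExecutionRequest request, Object memoryId) -> {
        // arguments() carries the raw JSON produced by the LLM, e.g. {"text":"hi"}
        String args = request.arguments();
        // memoryId may be null for context-free tools (see edge cases above)
        String session = memoryId == null ? "anonymous" : memoryId.toString();
        return "echo[" + session + "]: " + args;
    };
}
```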

Performance Notes:

  • Tool execution time appears in ToolExecution.durationMs() - monitor for slow tools
  • Consider caching tool results for idempotent operations
  • Use executeToolsConcurrently() for independent tools called in parallel

Cost Considerations:

  • Each tool execution adds inference cost for the result processing
  • Verbose tool results increase token costs - aim for concise outputs
  • Failed tool calls that retry double the cost

Exception Handling:

  • Argument parsing errors throw exceptions caught by ToolArgumentsErrorHandler
  • Tool method exceptions caught by ToolExecutionErrorHandler
  • Use wrapToolArgumentsExceptions(true) to convert exceptions to error strings

Related APIs: ToolExecutionRequest, ToolSpecification, ToolProvider, AiServices.Builder


DefaultToolExecutor Builder

/**
 * Builder for DefaultToolExecutor
 */
public class Builder {
    /**
     * Configure wrapping of tool argument exceptions
     * @param wrapToolArgumentsExceptions Whether to wrap exceptions
     * @return Builder instance
     */
    public Builder wrapToolArgumentsExceptions(Boolean wrapToolArgumentsExceptions);

    /**
     * Configure propagation of tool execution exceptions
     * @param propagateToolExecutionExceptions Whether to propagate exceptions
     * @return Builder instance
     */
    public Builder propagateToolExecutionExceptions(Boolean propagateToolExecutionExceptions);

    /**
     * Build executor
     * @return DefaultToolExecutor instance
     */
    public DefaultToolExecutor build();
}

Thread Safety: Builder instances are not thread-safe. Build the executor once and reuse the immutable executor instance.

Common Pitfalls:

  • Default exception behavior: By default, exceptions are wrapped in error messages sent to LLM. Set propagateToolExecutionExceptions(true) for critical failures that should stop execution.
  • Swallowing errors: wrapToolArgumentsExceptions(false) may hide parsing errors from LLM, preventing self-correction.

Edge Cases:

  • Null boolean values: Defaults to false for both settings
  • Multiple builds: Each build() call creates a new executor instance
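A minimal wiring sketch of this builder, with both settings set explicitly:

```java
// Sketch: configuring exception behavior on DefaultToolExecutor
DefaultToolExecutor executor = DefaultToolExecutor.builder()
    .wrapToolArgumentsExceptions(true)       // parsing errors become error strings for the LLM
    .propagateToolExecutionExceptions(false) // tool exceptions reported, not rethrown
    .build();
```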

Performance Notes:

  • Builder configuration has no runtime performance impact
  • Exception wrapping adds minimal overhead (string formatting)

Cost Considerations:

  • Wrapped exceptions sent to LLM add token costs
  • propagateToolExecutionExceptions(true) saves costs by failing fast instead of retrying

Exception Handling:

  • wrapToolArgumentsExceptions(true): JSON parsing errors → error string → sent to LLM
  • propagateToolExecutionExceptions(true): Tool exceptions → thrown to caller
  • Both false: Errors logged but execution continues

Related APIs: DefaultToolExecutor, ToolArgumentsErrorHandler, ToolExecutionErrorHandler


ToolProvider

Interface for providing tools dynamically based on context.

package dev.langchain4j.service.tool;

/**
 * Interface for providing tools dynamically
 * Allows selection of tools based on request context
 */
public interface ToolProvider {
    /**
     * Provide tools for the given request
     * @param request Request containing context for tool selection
     * @return Result with tools to make available
     */
    ToolProviderResult provideTools(ToolProviderRequest request);
}

/**
 * Request object for tool provider
 * Contains context information for tool selection
 */
public class ToolProviderRequest {
    /**
     * Get user message
     * @return User message from request
     */
    public UserMessage userMessage();

    /**
     * Get memory ID
     * @return Memory ID for context
     */
    public Object memoryId();
}

/**
 * Result from tool provider
 * Contains tools to make available to LLM
 */
public class ToolProviderResult {
    /**
     * Create result with tools
     * @param tools Map of tool specifications to executors
     * @return ToolProviderResult instance
     */
    public static ToolProviderResult of(Map<ToolSpecification, ToolExecutor> tools);

    /**
     * Get tools
     * @return Map of tool specifications to executors
     */
    public Map<ToolSpecification, ToolExecutor> tools();
}

Thread Safety: ToolProvider.provideTools() may be called concurrently for multiple requests. Implementations must be thread-safe. Use immutable tool maps or synchronization.

Common Pitfalls:

  • Providing too many tools: >20 tools confuse LLMs and increase costs. Filter based on context.
  • Inconsistent tools per request: Changing tools mid-conversation breaks context. Cache tool set per session.
  • Expensive tool selection: provideTools() is called for every request. Keep logic lightweight (<10ms).

Edge Cases:

  • Empty tool map: Valid if no tools apply. LLM proceeds without function calling.
  • Null memoryId: Handle gracefully - provide default tools for anonymous sessions.
  • Tool specification conflicts: Multiple tools with same name cause undefined behavior.
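A sketch of context-based filtering (the keyword routing, constructor, and prebuilt `(ToolSpecification, ToolExecutor)` maps are hypothetical; `UserMessage.singleText()` is assumed to return the message text):

```java
import java.util.Map;

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.service.tool.ToolExecutor;
import dev.langchain4j.service.tool.ToolProvider;
import dev.langchain4j.service.tool.ToolProviderRequest;
import dev.langchain4j.service.tool.ToolProviderResult;

// Sketch: select a small tool set per request instead of registering everything
class KeywordToolProvider implements ToolProvider {

    private final Map<ToolSpecification, ToolExecutor> mathTools;
    private final Map<ToolSpecification, ToolExecutor> weatherTools;

    KeywordToolProvider(Map<ToolSpecification, ToolExecutor> mathTools,
                        Map<ToolSpecification, ToolExecutor> weatherTools) {
        this.mathTools = mathTools;
        this.weatherTools = weatherTools;
    }

    @Override
    public ToolProviderResult provideTools(ToolProviderRequest request) {
        String text = request.userMessage().singleText().toLowerCase();
        if (text.contains("weather")) {
            return ToolProviderResult.of(weatherTools);
        }
        if (text.contains("calculate") || text.contains("sum")) {
            return ToolProviderResult.of(mathTools);
        }
        return ToolProviderResult.of(Map.of()); // empty map: LLM proceeds without tools
    }
}
```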

Performance Notes:

  • provideTools() called synchronously before each LLM request - optimize aggressively
  • Cache tool specifications if selection logic is deterministic
  • Consider pre-computing tool sets for common scenarios

Cost Considerations:

  • Fewer tools = smaller prompt = lower costs
  • Dynamic filtering can reduce tokens by 50%+ compared to registering all tools
  • Balance between tool availability and cost efficiency

Exception Handling:

  • Exceptions in provideTools() fail the entire request
  • Return empty map instead of throwing for recoverable errors
  • Log errors for debugging tool selection logic

Related APIs: ToolSpecification, ToolExecutor, AiServices.Builder.toolProvider()


Tool Execution Context

Context objects for tool execution lifecycle.

package dev.langchain4j.service.tool;

/**
 * Context object passed before tool execution
 * Contains information about the upcoming tool execution
 */
public class BeforeToolExecution {
    /**
     * Get tool execution request
     * @return Tool execution request
     */
    public ToolExecutionRequest toolExecutionRequest();

    /**
     * Get memory ID
     * @return Memory ID for context
     */
    public Object memoryId();
}

/**
 * Represents a tool execution with request and result
 * Passed to callbacks after tool execution
 */
public class ToolExecution {
    /**
     * Get tool execution request
     * @return Request that was executed
     */
    public ToolExecutionRequest request();

    /**
     * Get tool execution result
     * @return Result from execution
     */
    public String result();

    /**
     * Get execution duration
     * @return Duration in milliseconds
     */
    public long durationMs();
}

/**
 * Result of tool execution
 */
public class ToolExecutionResult {
    /**
     * Get result text
     * @return Result text
     */
    public String text();

    /**
     * Check if execution was successful
     * @return true if successful
     */
    public boolean isSuccess();

    /**
     * Get error if execution failed
     * @return Error throwable, or null if successful
     */
    public Throwable error();
}

Thread Safety: Context objects are immutable and thread-safe. Safe to pass between threads or store for later analysis.

Common Pitfalls:

  • Blocking in callbacks: beforeToolExecution runs synchronously - don't perform slow operations
  • Modifying request: ToolExecutionRequest is immutable. Create new request if modification needed.
  • Ignoring durationMs: Critical for identifying slow tools. Always monitor in production.

Edge Cases:

  • Null result: Possible if tool method returns null. Check isSuccess() first.
  • Negative durationMs: Wall-clock adjustments can produce negative durations. Treat them as zero, or measure with a monotonic clock (System.nanoTime()).
  • Empty result string: Valid for tools that perform actions without returning data.

Performance Notes:

  • Callbacks add <1ms overhead per tool execution
  • Avoid complex logging in callbacks - use async logging
  • durationMs includes argument parsing and result serialization

Cost Considerations:

  • Callbacks don't add token costs
  • Use beforeToolExecution to implement cost tracking per tool
  • Monitor high-frequency tools for optimization opportunities
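The per-tool cost-tracking idea can be sketched with a plain thread-safe counter; invoking `record(...)` from a `beforeToolExecution` callback is assumed to follow the callback shape described in this section:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: thread-safe per-tool invocation counter for cost tracking.
// Call record(toolName) from a beforeToolExecution callback.
class ToolCallCounter {

    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    void record(String toolName) {
        counts.computeIfAbsent(toolName, k -> new LongAdder()).increment();
    }

    long callsFor(String toolName) {
        LongAdder adder = counts.get(toolName);
        return adder == null ? 0 : adder.sum();
    }
}
```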

Exception Handling:

  • Exceptions in beforeToolExecution abort tool execution
  • Exceptions in afterToolExecution are logged but don't affect response
  • ToolExecutionResult.error() contains original exception from tool method

Related APIs: ToolExecutionRequest, ToolExecutor, AiServices.Builder callbacks


Error Handling

Error handlers for tool argument and execution errors.

package dev.langchain4j.service.tool;

/**
 * Handler interface for tool argument errors (JSON parsing failures, type mismatches, etc.)
 */
public interface ToolArgumentsErrorHandler {
    /**
     * Handle tool arguments error
     * @param context Error context
     * @return Result indicating how to proceed
     */
    ToolErrorHandlerResult handle(ToolErrorContext context);
}

/**
 * Handler interface for tool execution errors
 */
public interface ToolExecutionErrorHandler {
    /**
     * Handle tool execution error
     * @param context Error context
     * @return Result indicating how to proceed
     */
    ToolErrorHandlerResult handle(ToolErrorContext context);
}

/**
 * Context object for tool errors
 */
public class ToolErrorContext {
    /**
     * Get tool execution request
     * @return Tool execution request
     */
    public ToolExecutionRequest toolExecutionRequest();

    /**
     * Get error
     * @return Error throwable
     */
    public Throwable error();

    /**
     * Get memory ID
     * @return Memory ID for context
     */
    public Object memoryId();
}

/**
 * Result from tool error handlers
 */
public class ToolErrorHandlerResult {
    /**
     * Create result to retry with modified request
     * @param modifiedRequest Modified tool execution request
     * @return Handler result
     */
    public static ToolErrorHandlerResult retry(ToolExecutionRequest modifiedRequest);

    /**
     * Create result to continue with error message
     * @param errorMessage Error message to send to LLM
     * @return Handler result
     */
    public static ToolErrorHandlerResult continueWithError(String errorMessage);

    /**
     * Create result to stop execution
     * @return Handler result
     */
    public static ToolErrorHandlerResult stop();
}

/**
 * Enum implementing strategies for handling hallucinated tool names
 */
public enum HallucinatedToolNameStrategy implements Function<ToolExecutionRequest, ToolExecutionResultMessage> {
    /**
     * Fail immediately when LLM hallucinates a tool name
     */
    FAIL,

    /**
     * Send error message to LLM about non-existent tool
     */
    SEND_ERROR_MESSAGE;

    /**
     * Apply strategy to request
     * @param request Tool execution request with hallucinated name
     * @return Tool execution result message
     */
    public ToolExecutionResultMessage apply(ToolExecutionRequest request);
}

Thread Safety: Error handlers must be thread-safe as they may be invoked concurrently when tools execute in parallel. Use stateless implementations or proper synchronization.

Common Pitfalls:

  • Infinite retry loops: retry() without fixing the request causes endless loops. Always modify the request or limit retries.
  • Generic error messages: "Error occurred" doesn't help LLM. Include specific details: "Expected number but received string 'abc' for parameter 'amount'"
  • Stopping prematurely: stop() aborts the entire conversation. Use continueWithError() for recoverable errors.

Edge Cases:

  • Null error message: continueWithError(null) treated as empty string
  • Retry limit: No built-in retry limit. Implement counter in handler to prevent loops.
  • Handler exceptions: Exceptions thrown from handler propagate to caller, aborting request.

Performance Notes:

  • Error handlers add minimal overhead (<1ms)
  • Avoid expensive operations like database lookups in handlers
  • Cache common error responses

Cost Considerations:

  • continueWithError() adds round-trip cost (error sent to LLM for retry)
  • retry() may add multiple round-trips if request still fails
  • stop() is cheapest - no additional LLM calls

Exception Handling:

  • ToolArgumentsErrorHandler: Handles JSON parsing errors, type mismatches, missing required parameters
  • ToolExecutionErrorHandler: Handles exceptions thrown by tool methods
  • HallucinatedToolNameStrategy.FAIL: Throws exception immediately
  • HallucinatedToolNameStrategy.SEND_ERROR_MESSAGE: Continues with error, allowing LLM to recover
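A wiring sketch (the `hallucinatedToolNameStrategy` builder method name is an assumption based on the strategy type above; `Assistant`, `chatModel`, and `CalculatorTools` are placeholders):

```java
// Sketch: letting the LLM recover from invented tool names
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .hallucinatedToolNameStrategy(HallucinatedToolNameStrategy.SEND_ERROR_MESSAGE)
    .build();
```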

Related APIs: ToolExecutionRequest, DefaultToolExecutor.Builder, ToolErrorContext


Tool Best Practices

Design Guidelines

Clear Descriptions:

// BAD - vague
@Tool("Gets data")
public String getData() { }

// GOOD - specific
@Tool("Retrieves user profile including name, email, and registration date from database by user ID")
public String getUserProfile(String userId) { }

Parameter Documentation:

@Tool("Searches products by criteria")
public List<Product> searchProducts(
    @P("Search keyword for product name or description") String keyword,
    @P("Minimum price in USD, inclusive") double minPrice,
    @P("Maximum price in USD, inclusive") double maxPrice,
    @P("Maximum number of results to return, 1-100") int limit
) { }

Return Concise Results:

// BAD - returns entire JSON blob
@Tool("Get user details")
public String getUser(String id) {
    return database.getUser(id).toJson(); // 5KB response
}

// GOOD - returns summary
@Tool("Get user details")
public String getUser(String id) {
    User user = database.getUser(id);
    return String.format("User: %s, Email: %s, Status: %s",
        user.name(), user.email(), user.status());
}

Handle Errors Gracefully:

@Tool("Get product price")
public String getPrice(String productId) {
    try {
        Product product = catalog.findById(productId);
        if (product == null) {
            return "Product not found: " + productId;
        }
        return String.format("$%.2f", product.price());
    } catch (DatabaseException e) {
        return "Unable to retrieve price: database unavailable";
    }
}

Performance Optimization

Fast vs Slow Tool Separation:

// Fast tools - complete in <100ms
class QuickTools {
    @Tool("Calculate sum") double add(double a, double b) { }
    @Tool("Convert currency") double convert(double amount) { }
}

// Slow tools - may take seconds
class SlowTools {
    @Tool("Generate report") String generateReport() { }
    @Tool("Fetch external API") String fetchData() { }
}

// Register separately for better control
Assistant assistant = AiServices.builder(Assistant.class)
    .tools(new QuickTools())
    .tools(new SlowTools())
    .executeToolsConcurrently() // Parallel execution for slow tools
    .build();

Timeout Protection:

class TimeoutTools {
    private static final Duration TIMEOUT = Duration.ofSeconds(30);

    @Tool("Query external API")
    public String queryApi(String endpoint) {
        CompletableFuture<String> future = CompletableFuture.supplyAsync(() -> {
            return httpClient.get(endpoint);
        });

        try {
            return future.get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "Request timed out after 30 seconds";
        } catch (Exception e) {
            return "Request failed: " + e.getMessage();
        }
    }
}

Caching Results:

class CachedTools {
    private final LoadingCache<String, String> cache = Caffeine.newBuilder()
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .maximumSize(1000)
        .build(this::fetchExpensiveData);

    @Tool("Get weather data (cached 5 minutes)")
    public String getWeather(String location) {
        return cache.get(location);
    }

    private String fetchExpensiveData(String location) {
        // Expensive API call
        return weatherApi.fetch(location);
    }
}

Security Considerations

Input Validation:

@Tool("Execute SQL query")
public String query(String sql) {
    // Naive pattern check for illustration only — prefer parameterized
    // queries or a real SQL parser in production
    if (!sql.matches("(?is)^SELECT .* FROM .*$")) {
        return "Only SELECT queries allowed";
    }

    // Whitelist allowed tables (also illustrative)
    if (!sql.contains("FROM users") && !sql.contains("FROM products")) {
        return "Access denied to table";
    }

    return database.executeQuery(sql);
}

Rate Limiting:

class RateLimitedTools {
    private final RateLimiter limiter = RateLimiter.create(10.0); // 10 requests/sec

    @Tool("Send email")
    public String sendEmail(String to, String subject) {
        if (!limiter.tryAcquire(1, Duration.ofSeconds(1))) {
            return "Rate limit exceeded. Try again later.";
        }
        emailService.send(to, subject);
        return "Email sent";
    }
}

Access Control:

@Tool("Delete user account")
public String deleteUser(@ToolMemoryId String sessionId, String userId) {
    // Verify session has admin privileges
    if (!authService.isAdmin(sessionId)) {
        return "Permission denied: admin access required";
    }

    // Prevent self-deletion
    String currentUserId = authService.getUserId(sessionId);
    if (userId.equals(currentUserId)) {
        return "Cannot delete your own account";
    }

    userService.delete(userId);
    return "User deleted: " + userId;
}

Testing Patterns

Mock Tools for Testing

Simple Mock Tool:

class MockWeatherTools {
    @Tool("Get weather")
    public String getWeather(String location) {
        return switch(location.toLowerCase()) {
            case "nyc" -> "Sunny, 72°F";
            case "london" -> "Rainy, 55°F";
            case "tokyo" -> "Cloudy, 68°F";
            default -> "Weather data unavailable for " + location;
        };
    }
}

@Test
void testWeatherQuery() {
    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(new MockWeatherTools())
        .build();

    String response = assistant.chat("What's the weather in NYC?");
    assertTrue(response.contains("72°F") || response.contains("Sunny"));
}

Recording Tool Invocations:

class RecordingToolWrapper {
    private final CalculatorTools actualTools;
    private final List<ToolInvocation> invocations = new ArrayList<>();

    RecordingToolWrapper(CalculatorTools actualTools) {
        this.actualTools = actualTools;
    }

    record ToolInvocation(String toolName, Object[] args, String result) {}

    public List<ToolInvocation> getInvocations() {
        return List.copyOf(invocations);
    }

    @Tool("Add numbers")
    public double add(double a, double b) {
        double result = actualTools.add(a, b);
        invocations.add(new ToolInvocation("add", new Object[]{a, b}, String.valueOf(result)));
        return result;
    }

    @Tool("Multiply numbers")
    public double multiply(double a, double b) {
        double result = actualTools.multiply(a, b);
        invocations.add(new ToolInvocation("multiply", new Object[]{a, b}, String.valueOf(result)));
        return result;
    }
}

@Test
void testToolInvocationOrder() {
    RecordingToolWrapper wrapper = new RecordingToolWrapper(new CalculatorTools());
    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(wrapper)
        .build();

    assistant.chat("Calculate 5 + 3, then multiply result by 2");

    List<ToolInvocation> invocations = wrapper.getInvocations();
    assertEquals(2, invocations.size());
    assertEquals("add", invocations.get(0).toolName());
    assertEquals("multiply", invocations.get(1).toolName());
}

Failing Tool Simulation:

class FailingToolsSimulator {
    private int callCount = 0;

    @Tool("Flaky API call")
    public String flakyApi() {
        callCount++;
        if (callCount < 3) {
            throw new RuntimeException("Temporary failure");
        }
        return "Success on attempt " + callCount;
    }
}

@Test
void testToolErrorRecovery() {
    AtomicInteger errorCount = new AtomicInteger(0);

    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(new FailingToolsSimulator())
        .toolExecutionErrorHandler(context -> {
            errorCount.incrementAndGet();
            return ToolErrorHandlerResult.continueWithError(
                "Tool failed, please try again"
            );
        })
        .build();

    String response = assistant.chat("Call the flaky API");
    assertTrue(errorCount.get() >= 2);
}

Verifying Tool Arguments:

@Test
void testToolArgumentParsing() {
    ArgumentCaptor captor = new ArgumentCaptor();

    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(captor)
        .build();

    assistant.chat("Search for 'java' with max 5 results");

    assertEquals("java", captor.lastKeyword);
    assertEquals(5, captor.lastMaxResults);
}

class ArgumentCaptor {
    String lastKeyword;
    int lastMaxResults;

    @Tool("Search documents")
    public String search(
        @P("Search keyword") String keyword,
        @P("Max results") int maxResults
    ) {
        this.lastKeyword = keyword;
        this.lastMaxResults = maxResults;
        return "Found 5 results";
    }
}

Integration Testing

Testing with Real LLM:

@Test
void testMultiToolWorkflow() {
    var weatherTools = spy(new WeatherTools());  // Mockito spies so verify() works on real objects
    var dbTools = spy(new DatabaseTools());

    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(OpenAiChatModel.builder()
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .modelName("gpt-4")
            .build())
        .tools(weatherTools, dbTools)
        .build();

    String response = assistant.chat(
        "Get weather for NYC and save it to database"
    );

    // Verify both tools were called
    verify(weatherTools).getWeather("NYC");
    verify(dbTools).saveWeather(anyString(), anyString());
}

Testing Concurrent Execution:

@Test
void testParallelToolExecution() {
    var startTimes = new ConcurrentHashMap<String, Long>();

    class SlowTools {
        @Tool("Slow operation 1")
        public String slow1() throws InterruptedException {
            startTimes.put("slow1", System.currentTimeMillis());
            Thread.sleep(1000);
            return "Result 1";
        }

        @Tool("Slow operation 2")
        public String slow2() throws InterruptedException {
            startTimes.put("slow2", System.currentTimeMillis());
            Thread.sleep(1000);
            return "Result 2";
        }
    }

    Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel)
        .tools(new SlowTools())
        .executeToolsConcurrently()
        .build();

    assistant.chat("Execute slow1 and slow2");

    // Total wall-clock time includes LLM round-trips, so compare tool start
    // times instead: parallel execution starts both tools within the 1s sleep window
    long gap = Math.abs(startTimes.get("slow1") - startTimes.get("slow2"));
    assertTrue(gap < 1000, "Tools should execute in parallel");
}

Error Recovery Patterns

Retry with Backoff

class RetryingErrorHandler implements ToolExecutionErrorHandler {
    private final Map<String, Integer> retryCount = new ConcurrentHashMap<>();
    private static final int MAX_RETRIES = 3;

    @Override
    public ToolErrorHandlerResult handle(ToolErrorContext context) {
        String toolName = context.toolExecutionRequest().name();
        int attempts = retryCount.merge(toolName, 1, Integer::sum);

        if (attempts <= MAX_RETRIES) {
            String message = String.format(
                "Tool execution failed (attempt %d/%d): %s. Retrying...",
                attempts, MAX_RETRIES, context.error().getMessage()
            );

            // Exponential backoff
            try {
                Thread.sleep(100L * (1 << attempts));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }

            return ToolErrorHandlerResult.continueWithError(message);
        }

        retryCount.remove(toolName);
        return ToolErrorHandlerResult.stop();
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new FlakyApiTools())
    .toolExecutionErrorHandler(new RetryingErrorHandler())
    .build();

Fallback Tools

class FallbackToolProvider implements ToolProvider {
    private final WeatherApi primaryApi;
    private final WeatherApi fallbackApi;

    @Override
    public ToolProviderResult provideTools(ToolProviderRequest request) {
        Map<ToolSpecification, ToolExecutor> tools = new HashMap<>();

        // Primary tool (parameters described via JsonObjectSchema from
        // dev.langchain4j.model.chat.request.json)
        ToolSpecification weatherSpec = ToolSpecification.builder()
            .name("get_weather")
            .description("Get current weather")
            .parameters(JsonObjectSchema.builder()
                .addStringProperty("location")
                .required("location")
                .build())
            .build();

        ToolExecutor weatherExecutor = (req, memoryId) -> {
            // req.arguments() is the raw JSON string, e.g. {"location":"NYC"};
            // extractLocation is a placeholder for parsing with your JSON library
            String location = extractLocation(req.arguments());
            try {
                return primaryApi.getWeather(location);
            } catch (Exception e) {
                // Fallback to secondary API
                try {
                    return fallbackApi.getWeather(location);
                } catch (Exception e2) {
                    return "Weather data unavailable";
                }
            }
        };

        tools.put(weatherSpec, weatherExecutor);
        return ToolProviderResult.of(tools);
    }
}

Circuit Breaker Pattern

class CircuitBreakerTools {
    private final CircuitBreaker breaker = new CircuitBreaker(5, Duration.ofMinutes(1));

    @Tool("Call external API")
    public String callApi(String endpoint) {
        if (breaker.isOpen()) {
            return "Service temporarily unavailable. Please try again later.";
        }

        try {
            String result = externalApi.call(endpoint);
            breaker.recordSuccess();
            return result;
        } catch (Exception e) {
            breaker.recordFailure();
            return "API call failed: " + e.getMessage();
        }
    }

    static class CircuitBreaker {
        private final int threshold;
        private final Duration resetTimeout;
        private int failureCount = 0;
        private Instant openedAt;

        CircuitBreaker(int threshold, Duration resetTimeout) {
            this.threshold = threshold;
            this.resetTimeout = resetTimeout;
        }

        // synchronized: tool methods may be invoked concurrently
        synchronized boolean isOpen() {
            if (openedAt != null) {
                if (Duration.between(openedAt, Instant.now()).compareTo(resetTimeout) > 0) {
                    openedAt = null;
                    failureCount = 0;
                    return false;
                }
                return true;
            }
            return false;
        }

        synchronized void recordSuccess() {
            failureCount = 0;
            openedAt = null;
        }

        synchronized void recordFailure() {
            failureCount++;
            if (failureCount >= threshold) {
                openedAt = Instant.now();
            }
        }
    }
}

Graceful Degradation

class DegradedModeTools {
    private final DatabaseService database;
    private final CacheService cache;

    @Tool("Get user profile")
    public String getUserProfile(String userId) {
        try {
            // Try database first
            User user = database.getUser(userId);
            cache.put(userId, user); // Update cache
            return formatUser(user);
        } catch (DatabaseException e) {
            // Fallback to cache
            User cached = cache.get(userId);
            if (cached != null) {
                return formatUser(cached) + " (from cache)";
            }

            // Last resort: minimal info
            return "User profile temporarily unavailable";
        }
    }

    private String formatUser(User user) {
        return String.format("Name: %s, Email: %s", user.name(), user.email());
    }
}
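The database-then-cache fallback generalizes to trying an ordered list of sources until one succeeds. A small pure-Java sketch of that pattern (names are illustrative, not part of langchain4j):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

// Generic tiered fallback: try each source in order and return the first
// value produced without throwing; empty if every source fails
class Fallback {
    static <T> Optional<T> firstAvailable(List<Supplier<T>> sources) {
        for (Supplier<T> source : sources) {
            try {
                T value = source.get();
                if (value != null) {
                    return Optional.of(value);
                }
            } catch (RuntimeException e) {
                // This source is unavailable; fall through to the next one
            }
        }
        return Optional.empty();
    }
}
```

A tool method can then express "database, then cache, then a fixed unavailable message" as one ordered list of suppliers instead of nested try/catch blocks.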

Debugging Tools

Tool Execution Logger

import dev.langchain4j.service.tool.ToolExecution;
import java.util.function.Consumer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class ToolExecutionLogger implements Consumer<ToolExecution> {
    private static final Logger log = LoggerFactory.getLogger(ToolExecutionLogger.class);

    @Override
    public void accept(ToolExecution execution) {
        log.info("Tool: {} | Duration: {}ms | Result length: {} chars",
            execution.request().name(),
            execution.durationMs(),
            execution.result().length()
        );

        log.debug("Arguments: {}", execution.request().arguments());
        log.debug("Result: {}", execution.result());
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(tools)
    .afterToolExecution(new ToolExecutionLogger())
    .build();

Tool Performance Monitor

import dev.langchain4j.service.tool.ToolExecution;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

class ToolPerformanceMonitor implements Consumer<ToolExecution> {
    private final Map<String, ToolStats> stats = new ConcurrentHashMap<>();

    static class ToolStats {
        long totalCalls = 0;
        long totalDuration = 0;
        long maxDuration = 0;
        long minDuration = Long.MAX_VALUE;

        synchronized void record(long durationMs) {
            totalCalls++;
            totalDuration += durationMs;
            maxDuration = Math.max(maxDuration, durationMs);
            minDuration = Math.min(minDuration, durationMs);
        }

        double avgDuration() {
            return totalCalls > 0 ? (double) totalDuration / totalCalls : 0;
        }
    }

    @Override
    public void accept(ToolExecution execution) {
        stats.computeIfAbsent(execution.request().name(), k -> new ToolStats())
            .record(execution.durationMs());
    }

    public void printReport() {
        System.out.println("\n=== Tool Performance Report ===");
        stats.forEach((name, stat) -> {
            System.out.printf(
                "Tool: %s | Calls: %d | Avg: %.1fms | Min: %dms | Max: %dms%n",
                name, stat.totalCalls, stat.avgDuration(), stat.minDuration, stat.maxDuration
            );
        });
    }
}

ToolPerformanceMonitor monitor = new ToolPerformanceMonitor();
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(tools)
    .afterToolExecution(monitor)
    .build();

// After conversations
monitor.printReport();

Tool Request Inspector

import dev.langchain4j.agent.tool.ToolExecutionRequest;
import dev.langchain4j.service.tool.BeforeToolExecution;
import java.util.function.Consumer;

class ToolRequestInspector implements Consumer<BeforeToolExecution> {
    @Override
    public void accept(BeforeToolExecution context) {
        ToolExecutionRequest request = context.toolExecutionRequest();

        System.out.println("\n=== Tool Execution Request ===");
        System.out.println("Tool Name: " + request.name());
        System.out.println("Memory ID: " + context.memoryId());
        // arguments() returns the raw JSON arguments string
        System.out.println("Arguments: " + request.arguments());
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(tools)
    .beforeToolExecution(new ToolRequestInspector())
    .build();

Comprehensive Debug Tool

import dev.langchain4j.agent.tool.Tool;
import java.util.List;
import java.util.Map;

class DebugTools {
    @Tool("Debug: List available tools")
    public String listTools() {
        // Return the registered tools (hardcoded here for illustration)
        return "Available tools: add, multiply, getWeather, searchDatabase";
    }

    @Tool("Debug: Echo arguments")
    public String echo(String message) {
        return "You said: " + message;
    }

    @Tool("Debug: Test argument parsing")
    public String testParsing(
        String str,
        int num,
        boolean flag,
        List<String> list,
        Map<String, Object> map
    ) {
        return String.format(
            "Received: str=%s, num=%d, flag=%b, list=%s, map=%s",
            str, num, flag, list, map
        );
    }
}

// Add to assistant for debugging
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new ProductionTools(), new DebugTools()) // Include debug tools
    .build();

Usage Examples

Basic Tool Usage

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.service.AiServices;

class CalculatorTools {
    @Tool("Add two numbers")
    public double add(double a, double b) {
        return a + b;
    }

    @Tool("Multiply two numbers")
    public double multiply(double a, double b) {
        return a * b;
    }
}

interface Assistant {
    String chat(String message);
}

// Register tools with AI service
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .build();

// LLM can now call calculator tools
String response = assistant.chat("What is 25 multiplied by 4?");
// Tool will be called automatically: multiply(25, 4)
// Response: "25 multiplied by 4 equals 100"

Tool with Complex Parameters

import dev.langchain4j.agent.tool.Tool;
import java.util.List;

record SearchQuery(String keyword, int maxResults, boolean includeArchived) {}

class SearchTools {
    @Tool("Search documents with the given criteria")
    public List<String> searchDocuments(SearchQuery query) {
        // Implementation
        return List.of("doc1", "doc2", "doc3");
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new SearchTools())
    .build();

String response = assistant.chat("Find 5 documents about 'Java' including archived ones");
// Tool will be called with proper SearchQuery object

Multiple Tool Classes

import dev.langchain4j.agent.tool.Tool;

class WeatherTools {
    @Tool("Get current weather for a location")
    public String getWeather(String location) {
        return "Sunny, 72°F";
    }

    @Tool("Get weather forecast for next N days")
    public String getForecast(String location, int days) {
        return "Next " + days + " days: mostly sunny";
    }
}

class DatabaseTools {
    @Tool("Query database and return results")
    public String queryDatabase(String sql) {
        // Execute query
        return "Query results...";
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new WeatherTools(), new DatabaseTools())
    .build();

// LLM can use tools from both classes
String response = assistant.chat("What's the weather in NYC and how many users do we have?");

Tool Execution Callbacks

import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.BeforeToolExecution;
import dev.langchain4j.service.tool.ToolExecution;

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .beforeToolExecution(ctx -> {
        System.out.println("About to execute: " + ctx.toolExecutionRequest().name());
        System.out.println("Arguments: " + ctx.toolExecutionRequest().arguments());
    })
    .afterToolExecution(execution -> {
        System.out.println("Executed: " + execution.request().name());
        System.out.println("Result: " + execution.result());
        System.out.println("Duration: " + execution.durationMs() + "ms");
    })
    .build();

String response = assistant.chat("Calculate 10 + 5");
// Callbacks will be invoked before and after tool execution

Concurrent Tool Execution

import dev.langchain4j.service.AiServices;
import java.util.concurrent.Executors;

// Enable concurrent tool execution
Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new WeatherTools(), new DatabaseTools())
    .executeToolsConcurrently() // Use default executor
    .build();

// Or with custom executor
Assistant assistant2 = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new WeatherTools(), new DatabaseTools())
    .executeToolsConcurrently(Executors.newFixedThreadPool(4))
    .build();

// If LLM requests multiple tools at once, they execute in parallel
String response = assistant.chat(
    "What's the weather in NYC, London, and Tokyo?"
);
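Conceptually, concurrent tool execution behaves like submitting each requested tool call to an executor and collecting the results. langchain4j handles this internally; the pure-Java sketch below only illustrates the behavior:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ConcurrentToolSketch {
    // Run all tool calls in parallel and collect their results in request order
    static List<String> runAll(List<Callable<String>> toolCalls) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        try {
            List<Future<String>> futures = executor.invokeAll(toolCalls);
            return futures.stream()
                .map(f -> {
                    try {
                        return f.get();
                    } catch (Exception e) {
                        // A failed call becomes an error string, similar to how
                        // tool errors can be surfaced back to the LLM
                        return "error: " + e.getMessage();
                    }
                })
                .toList();
        } finally {
            executor.shutdown();
        }
    }
}
```

This is why tool implementations must be thread-safe when concurrent execution is enabled: several tool methods may run on different threads at the same time.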

Tool Error Handling

import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.ToolArgumentsErrorHandler;
import dev.langchain4j.service.tool.ToolExecutionErrorHandler;
import dev.langchain4j.service.tool.ToolErrorHandlerResult;

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new DatabaseTools())
    .toolArgumentsErrorHandler(context -> {
        System.err.println("Failed to parse arguments: " + context.error().getMessage());
        // Send error back to LLM so it can try again
        return ToolErrorHandlerResult.continueWithError(
            "Invalid arguments format. Please provide valid JSON."
        );
    })
    .toolExecutionErrorHandler(context -> {
        System.err.println("Tool execution failed: " + context.error().getMessage());
        // Continue with error message
        return ToolErrorHandlerResult.continueWithError(
            "Tool execution failed: " + context.error().getMessage()
        );
    })
    .build();

Dynamic Tool Provider

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.ToolExecutor;
import dev.langchain4j.service.tool.ToolProvider;
import dev.langchain4j.service.tool.ToolProviderRequest;
import dev.langchain4j.service.tool.ToolProviderResult;
import java.util.HashMap;
import java.util.Map;

class DynamicToolProvider implements ToolProvider {
    @Override
    public ToolProviderResult provideTools(ToolProviderRequest request) {
        // Select tools based on user message or memory ID
        String userMessage = request.userMessage().text();

        Map<ToolSpecification, ToolExecutor> tools = new HashMap<>();

        if (userMessage.contains("weather")) {
            // Add weather tools
            // ...
        }

        if (userMessage.contains("database")) {
            // Add database tools
            // ...
        }

        return ToolProviderResult.of(tools);
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .toolProvider(new DynamicToolProvider())
    .build();

// Only relevant tools are provided to the LLM based on context

Limiting Tool Invocations

import dev.langchain4j.service.AiServices;

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .maxSequentialToolsInvocations(5) // Limit to 5 sequential tool calls
    .build();

// Prevents infinite loops or excessive tool chaining
String response = assistant.chat("Calculate 1+1, then add 1, then add 1...");

Hallucinated Tool Name Handling

import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.tool.HallucinatedToolNameStrategy;

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(new CalculatorTools())
    .hallucinatedToolNameStrategy(HallucinatedToolNameStrategy.SEND_ERROR_MESSAGE)
    .build();

// If LLM tries to call non-existent tool, error message is sent back
// allowing LLM to correct and try again

Tool with Context Access

import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.agent.tool.ToolMemoryId;
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.MemoryId;

class UserTools {
    @Tool("Get user information")
    public String getUserInfo(@ToolMemoryId String userId) {
        // Use memory ID to fetch user-specific data
        return "User information for " + userId;
    }

    @Tool("Update user preferences")
    public String updatePreferences(@ToolMemoryId String userId, String preferences) {
        return "Updated preferences for " + userId;
    }
}

interface Assistant {
    String chat(@MemoryId String userId, String message);
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .chatMemoryProvider(memoryId -> MessageWindowChatMemory.withMaxMessages(10))
    .tools(new UserTools())
    .build();

// Memory ID is automatically passed to tools
String response = assistant.chat("user123", "What are my preferences?");

Programmatic Tool Registration

import dev.langchain4j.agent.tool.ToolSpecification;
import dev.langchain4j.model.chat.request.json.JsonObjectSchema;
import dev.langchain4j.service.tool.ToolExecutor;
import java.util.HashMap;
import java.util.Map;

// Create tool specification
ToolSpecification weatherSpec = ToolSpecification.builder()
    .name("get_weather")
    .description("Get current weather for a location")
    .parameters(JsonObjectSchema.builder()
        .addStringProperty("location", "The location to get weather for")
        .required("location")
        .build())
    .build();

// Create tool executor; request.arguments() is the raw JSON string,
// e.g. {"location": "NYC"} — parse it with a JSON library
ToolExecutor weatherExecutor = (request, memoryId) -> {
    String location = parseLocation(request.arguments()); // hypothetical parsing helper
    return "Weather in " + location + ": Sunny, 72°F";
};

Map<ToolSpecification, ToolExecutor> tools = new HashMap<>();
tools.put(weatherSpec, weatherExecutor);

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(tools)
    .build();

Immediate Return Tools

import dev.langchain4j.agent.tool.Tool;
import java.util.List;
import java.util.Set;

class ActionTools {
    @Tool("Send email to user")
    public String sendEmail(String to, String subject, String body) {
        // Start async email sending
        return "Email queued for sending";
    }
}

Assistant assistant = AiServices.builder(Assistant.class)
    .chatModel(chatModel)
    .tools(List.of(new ActionTools()), Set.of("sendEmail")) // "sendEmail" returns immediately
    .build();
    .build();

// Tool returns immediately without waiting for completion
String response = assistant.chat("Send an email to john@example.com");

Related APIs

  • AiServices - Service builder that registers tools and configures execution
  • ChatMemory - Memory management for tool context via @ToolMemoryId
  • ChatModel - Language models that invoke tools via function calling
  • StreamingChatModel - Streaming models with tool execution support
  • RAG - Retrieval-Augmented Generation can provide data for tools

See Also

  • ToolSpecification - Programmatic tool specification builder
  • ToolExecutionRequest - Request object containing tool name and arguments
  • JsonSchema - JSON schema generation for tool parameters
  • @P annotation - Parameter description annotation for tool method parameters

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j@1.11.0
