Quarkus LangChain4j Core

Quarkus LangChain4j Core provides runtime integration for LangChain4j with the Quarkus framework, enabling seamless incorporation of Large Language Models (LLMs) into Quarkus applications through declarative CDI annotations.

Package Information

  • Group ID: io.quarkiverse.langchain4j
  • Artifact ID: quarkus-langchain4j-core
  • Version: 1.5.0
  • Language: Java
  • Minimum Java: 17
  • Minimum Quarkus: 3.2.0

Installation:

<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-core</artifactId>
    <version>1.5.0</version>
</dependency>
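For Gradle builds (Kotlin DSL), the equivalent dependency declaration would use the same coordinates:

```kotlin
// build.gradle.kts — same coordinates as the Maven dependency above
implementation("io.quarkiverse.langchain4j:quarkus-langchain4j-core:1.5.0")
```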

Quick Start

Create an AI service interface:

import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.UserMessage;
import jakarta.inject.Inject;

@RegisterAiService
public interface AssistantService {
    @UserMessage("What is the capital of {country}?")
    String chat(String country);
}

// Inject and use
@Inject
AssistantService assistant;

String result = assistant.chat("France");

Configure in application.properties:

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}

Complete Quick Start Guide →

Core Concepts

Declarative AI Services

Create AI service implementations automatically with the @RegisterAiService annotation. No boilerplate code required.

CDI Integration

Full Jakarta CDI support for dependency injection, lifecycle management, and integration with Quarkus ecosystem.

Tool System

Enable LLM function calling with Java methods annotated with @Tool. Includes input/output guardrails for validation.

Chat Memory

Pluggable conversation history management with automatic seeding for few-shot learning.

Observability

Built-in CDI events, metrics, and OpenTelemetry tracing for monitoring AI service interactions.

Core Capabilities

| Capability | Description | Reference |
| --- | --- | --- |
| AI Service Creation | Declarative service interfaces with automatic implementation | Reference → |
| Tool Guardrails | Input/output validation for tool execution | Reference → |
| Chat Memory | Conversation history management and seeding | Reference → |
| Model Selection | CDI qualifiers for fine-grained model control | Reference → |
| Authentication | Custom authentication providers for API calls | Reference → |
| Response Augmentation | Transform and enhance AI responses | Reference → |
| Observability | CDI events for monitoring interactions | Reference → |
| Cost Estimation | Track API costs based on token usage | Reference → |
| Error Handling | Custom error handlers for tool failures | Reference → |
| Content Annotations | Multimodal support (images, audio, video, PDF) | Reference → |
| Configuration | MicroProfile Config for behavior customization | Reference → |

Key Annotations

Service Definition

@RegisterAiService                    // Create AI service as CDI bean
@RegisterAiService(modelName = "...")  // Use specific model
@RegisterAiService(tools = {...})     // Register tool classes

Message Templates

@UserMessage("...")           // User message template
@SystemMessage("...")         // System message template
@MemoryId                     // Mark parameter as memory ID
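Template placeholders such as {country} are filled in from the annotated method's parameters by name. A minimal plain-Java sketch of that substitution behavior (an illustration only, not the extension's actual implementation):

```java
import java.util.Map;

public class TemplateSketch {
    // Replace each {name} placeholder with the matching parameter value.
    static String render(String template, Map<String, String> params) {
        String result = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String prompt = render("What is the capital of {country}?", Map.of("country", "France"));
        System.out.println(prompt); // What is the capital of France?
    }
}
```

This is why parameter names must match the placeholder names exactly (see Quick Troubleshooting below).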

Tool System

@Tool("description")                    // Mark method as tool
@ToolInputGuardrails({...})             // Validate inputs
@ToolOutputGuardrails({...})            // Validate outputs
@HandleToolExecutionError               // Handle tool errors
@HandleToolArgumentError                // Handle argument errors

Content & Augmentation

@ImageUrl, @AudioUrl, @VideoUrl, @PdfUrl  // Content type markers
@ResponseAugmenter(...)                    // Transform responses

Observability & Config

@AiServiceSelector(MyService.class)  // Filter events by service
@ModelName("...")                     // Select specific model

Package Structure

Main Packages

  • io.quarkiverse.langchain4j - Core annotations and utilities
  • io.quarkiverse.langchain4j.guardrails - Tool guardrails framework
  • io.quarkiverse.langchain4j.response - Response augmentation
  • io.quarkiverse.langchain4j.auth - Model authentication
  • io.quarkiverse.langchain4j.cost - Cost estimation
  • io.quarkiverse.langchain4j.observability - Events and monitoring
  • io.quarkiverse.langchain4j.runtime.config - Configuration interfaces
  • io.quarkiverse.langchain4j.runtime.aiservice - Runtime support

External Dependencies

Requires LangChain4j 1.9.1 or a compatible version:

<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>1.9.1</version>
</dependency>

Common Patterns

Multi-Model Application

@RegisterAiService(modelName = "gpt-4")
public interface AdvancedAssistant { String chat(String message); }

@RegisterAiService(modelName = "gpt-3.5-turbo")
public interface BasicAssistant { String chat(String message); }

Tool with Guardrails

@ApplicationScoped
public class SecureTool {
    @Tool("Fetch data")
    @ToolInputGuardrails(AuthGuardrail.class)
    @ToolOutputGuardrails({PiiRedactionGuardrail.class})
    public String fetch(String userId) {
        return lookupData(userId); // lookupData: application-specific (hypothetical) helper
    }
}

Per-User Memory

@RegisterAiService
public interface PersonalAssistant {
    String chat(@MemoryId String userId, @UserMessage String message);
}
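Conceptually, each distinct @MemoryId value keys its own conversation history, so users' sessions stay isolated. A hedged plain-Java sketch of that keying scheme (hypothetical class, not the extension's internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MemoryStoreSketch {
    // One message list per memory ID, so conversations never mix.
    private final Map<String, List<String>> store = new HashMap<>();

    List<String> memoryFor(String memoryId) {
        return store.computeIfAbsent(memoryId, id -> new ArrayList<>());
    }

    public static void main(String[] args) {
        MemoryStoreSketch memories = new MemoryStoreSketch();
        memories.memoryFor("alice").add("Hello");
        memories.memoryFor("bob").add("Bonjour");
        System.out.println(memories.memoryFor("alice")); // [Hello]
        System.out.println(memories.memoryFor("bob"));   // [Bonjour]
    }
}
```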

Streaming with Augmentation

@RegisterAiService
public interface StreamingAssistant {
    @ResponseAugmenter(CitationAugmenter.class)
    Multi<String> chatStreaming(String message);
}

More Real-World Scenarios →

Architecture

Build-Time Processing:

  • Interface scanning and validation
  • Bytecode generation for implementations
  • Tool specification extraction
  • CDI bean metadata creation

Runtime Execution:

  • AI service method invocations
  • Model API calls
  • Tool execution and guardrails
  • Memory management
  • Event firing and metrics

Threading Model

  • AI Service Methods: Execute on worker threads (automatic offload)
  • Tools: Run on worker threads (use @Blocking if needed)
  • Guardrails: Always on worker threads
  • Streaming: Uses Mutiny Multi<T> with backpressure
  • Event Loop: Blocking operations not allowed

Error Propagation

  1. Tool Errors → @HandleToolExecutionError or error message to LLM
  2. Argument Errors → @HandleToolArgumentError or error message to LLM
  3. Fatal Guardrails → ToolGuardrailException terminates execution immediately
  4. Non-Fatal Guardrails → Error message to LLM for retry
  5. Model API Errors → Propagate to caller
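The non-fatal guardrail path amounts to a bounded retry loop: the error message goes back to the model until validation passes or the retry budget is exhausted. A simplified sketch with a stand-in model and validator (not the extension's real retry machinery):

```java
import java.util.function.IntFunction;
import java.util.function.Predicate;

public class GuardrailRetrySketch {
    // Retry a call until its output passes validation or maxRetries is exceeded.
    static String callWithRetries(IntFunction<String> model,
                                  Predicate<String> guardrail,
                                  int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String output = model.apply(attempt);
            if (guardrail.test(output)) {
                return output;
            }
            // Non-fatal failure: here the error message would be fed back to the LLM.
        }
        throw new IllegalStateException("guardrail failed after " + maxRetries + " retries");
    }

    public static void main(String[] args) {
        // Stand-in model: produces a valid answer only on the second attempt.
        String result = callWithRetries(
                attempt -> attempt == 0 ? "INVALID" : "42",
                output -> !output.equals("INVALID"),
                3);
        System.out.println(result); // 42
    }
}
```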

Configuration

Configure in application.properties:

# Global settings
quarkus.langchain4j.log-requests=true
quarkus.langchain4j.timeout=60s
quarkus.langchain4j.temperature=0.7

# Model-specific
quarkus.langchain4j.openai.gpt-4.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.gpt-4.model-name=gpt-4
quarkus.langchain4j.openai.gpt-4.temperature=0.7

# Guardrails
quarkus.langchain4j.guardrails.max-retries=3

# Tracing
quarkus.langchain4j.tracing.include-prompt=false
quarkus.langchain4j.tracing.include-completion=false

Complete Configuration Reference →

Dependency Injection Scopes

| Component | Recommended Scope | Notes |
| --- | --- | --- |
| AI Services | @ApplicationScoped | Singleton per service interface |
| Tools | @ApplicationScoped | Stateless recommended |
| Guardrails | @ApplicationScoped | Must be thread-safe |
| Models | @ApplicationScoped | Singleton per model name |
| Auth Providers | @ApplicationScoped | Must be thread-safe |
| Memory Providers | @ApplicationScoped | Manages all user memories |

Performance & Optimization

Startup:

  • Build-time processing for fast startup
  • Native image support (GraalVM)
  • Lazy model initialization

Runtime:

  • Connection pooling (automatic)
  • Model caching (singleton)
  • Memory caching (avoid repeated lookups)
  • Bytecode generation (no reflection)

Memory:

  • Limit message windows: MessageWindowChatMemory.withMaxMessages(N)
  • Token-based windows: withMaxTokens(N, tokenizer)
  • Implement eviction policies
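A message-window policy such as MessageWindowChatMemory.withMaxMessages(N) simply evicts the oldest messages once the window is full. A plain-Java sketch of that eviction behavior (illustrative, not the library's implementation):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class WindowMemorySketch {
    private final Deque<String> messages = new ArrayDeque<>();
    private final int maxMessages;

    WindowMemorySketch(int maxMessages) { this.maxMessages = maxMessages; }

    // Append a message, evicting the oldest once the limit is exceeded.
    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst();
        }
    }

    public static void main(String[] args) {
        WindowMemorySketch memory = new WindowMemorySketch(2);
        memory.add("m1");
        memory.add("m2");
        memory.add("m3"); // evicts m1
        System.out.println(memory.messages); // [m2, m3]
    }
}
```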

Cost:

  • Use cheaper models for simple tasks
  • Set token limits
  • Cache common responses
  • Optimize prompts
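Token-based cost tracking is straightforward arithmetic: multiply input and output token counts by the provider's per-token rates. A sketch with illustrative (not real) prices:

```java
import java.math.BigDecimal;

public class CostSketch {
    // Estimate call cost from token counts and per-1K-token prices.
    static BigDecimal estimate(long inputTokens, long outputTokens,
                               BigDecimal inputPricePer1K, BigDecimal outputPricePer1K) {
        BigDecimal thousand = BigDecimal.valueOf(1000);
        return inputPricePer1K.multiply(BigDecimal.valueOf(inputTokens)).divide(thousand)
                .add(outputPricePer1K.multiply(BigDecimal.valueOf(outputTokens)).divide(thousand));
    }

    public static void main(String[] args) {
        // Hypothetical rates: $0.01 per 1K input tokens, $0.03 per 1K output tokens.
        BigDecimal cost = estimate(500, 200,
                new BigDecimal("0.01"), new BigDecimal("0.03"));
        System.out.println(cost); // 0.011
    }
}
```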

Security Best Practices

  1. Never hardcode API keys - use environment variables
  2. Validate all user inputs - implement input guardrails
  3. Redact PII - use output guardrails
  4. Rate limiting - protect against abuse
  5. Audit logging - log all tool executions
  6. Use HTTPS - for all model API calls
  7. Principle of least privilege - minimal permissions
  8. Regular dependency updates - security patches

Version Compatibility

| Quarkus LangChain4j | LangChain4j | Quarkus | Java |
| --- | --- | --- | --- |
| 1.5.0 | 1.9.1 | 3.2.0+ | 17+ |
| 1.4.x | 1.8.x | 3.1.0+ | 17+ |
| 1.3.x | 1.7.x | 3.0.0+ | 17+ |

Quick Troubleshooting

| Issue | Solution |
| --- | --- |
| AI Service not injected | Check the interface is public, in a scanned package, and has @RegisterAiService |
| Model not found | Verify configuration matches the model name exactly (case-sensitive) |
| BlockingToolNotAllowedException | Add @Blocking to the tool method |
| Tool not found | Ensure the tool class has a CDI scope annotation |
| Memory not persisting | Implement caching in the ChatMemoryProvider |
| Template variables not substituted | Match parameter names exactly (case-sensitive) |

Resources

Getting Started

Detailed Reference

External Documentation

  • LangChain4j Documentation
  • Quarkus Documentation
  • Quarkus LangChain4j Extension Guide
  • GitHub Repository

Migration

From Plain LangChain4j

Replace manual service creation:

// Before
ChatLanguageModel model = OpenAiChatModel.builder()
    .apiKey(System.getenv("OPENAI_API_KEY")).build();
MyAssistant assistant = AiServices.builder(MyAssistant.class)
    .chatLanguageModel(model).build();

// After
@RegisterAiService
public interface MyAssistant { String chat(String message); }

@Inject MyAssistant assistant;

Configure in properties:

quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}

From Earlier Versions

  1. Check deprecation warnings in build logs
  2. Update annotation imports if packages changed
  3. Review configuration property changes
  4. Test guardrails if upgrading from pre-guardrails version

Glossary

  • AI Service: CDI bean created from @RegisterAiService interface
  • Tool: Java method annotated with @Tool that LLM can invoke
  • Guardrail: Validation logic for tool inputs/outputs
  • Memory: Conversation history storage
  • Memory ID: Identifier for isolating conversations (e.g., user ID)
  • RAG: Retrieval Augmented Generation
  • Streaming: Incremental response delivery via reactive streams
  • Augmenter: Post-processor for AI responses
  • CDI: Contexts and Dependency Injection (Jakarta EE)
  • SPI: Service Provider Interface

Need more details? See the guides, examples, and reference documentation for comprehensive information.