```shell
tessl install tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-core@1.5.0
```

Quarkus LangChain4j Core provides runtime integration for LangChain4j with the Quarkus framework, enabling seamless incorporation of Large Language Models (LLMs) into Quarkus applications through declarative CDI annotations.
Installation:

```xml
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-core</artifactId>
    <version>1.5.0</version>
</dependency>
```

Create an AI service interface:
```java
import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.UserMessage;

@RegisterAiService
public interface AssistantService {

    @UserMessage("What is the capital of {country}?")
    String chat(String country);
}
```

Inject and use it:

```java
import jakarta.inject.Inject;

@Inject
AssistantService assistant;

String result = assistant.chat("France");
```

Configure in application.properties:
```properties
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

- Declarative AI services: implementations are generated automatically from interfaces annotated with @RegisterAiService; no boilerplate code required.
- Full Jakarta CDI support for dependency injection, lifecycle management, and integration with the Quarkus ecosystem.
- LLM function calling via Java methods annotated with @Tool, including input/output guardrails for validation.
- Pluggable conversation history management with automatic seeding for few-shot learning.
- Built-in CDI events, metrics, and OpenTelemetry tracing for monitoring AI service interactions.
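The windowed conversation history mentioned above can be pictured as a sliding message window. Below is a plain-Java sketch of that eviction policy only, not LangChain4j's actual MessageWindowChatMemory implementation; the class name here is invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Keeps only the most recent N messages, evicting the oldest first --
// the same policy a message-window chat memory applies to history.
class WindowMemory {
    private final int maxMessages;
    private final Deque<String> messages = new ArrayDeque<>();

    WindowMemory(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    void add(String message) {
        messages.addLast(message);
        while (messages.size() > maxMessages) {
            messages.removeFirst(); // evict oldest message
        }
    }

    List<String> messages() {
        return List.copyOf(messages);
    }
}
```

Seeding for few-shot learning amounts to pre-populating such a window with example exchanges before the first real user message arrives.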
| Capability | Description | Reference |
|---|---|---|
| AI Service Creation | Declarative service interfaces with automatic implementation | Reference → |
| Tool Guardrails | Input/output validation for tool execution | Reference → |
| Chat Memory | Conversation history management and seeding | Reference → |
| Model Selection | CDI qualifiers for fine-grained model control | Reference → |
| Authentication | Custom authentication providers for API calls | Reference → |
| Response Augmentation | Transform and enhance AI responses | Reference → |
| Observability | CDI events for monitoring interactions | Reference → |
| Cost Estimation | Track API costs based on token usage | Reference → |
| Error Handling | Custom error handlers for tool failures | Reference → |
| Content Annotations | Multimodal support (images, audio, video, PDF) | Reference → |
| Configuration | MicroProfile Config for behavior customization | Reference → |
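The "automatic implementation" row above can be demystified with a plain-Java sketch: conceptually, the extension generates an implementation of the annotated interface that expands the message template with the method arguments. This illustration uses a hypothetical @Template annotation and a runtime dynamic proxy; the real extension generates bytecode at build time and sends the prompt to the model:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Proxy;

// Stand-in for dev.langchain4j.service.UserMessage (illustration only).
@Retention(RetentionPolicy.RUNTIME)
@interface Template {
    String value();
}

interface Assistant {
    @Template("What is the capital of {it}?")
    String chat(String country);
}

class AiProxy {
    // Builds an implementation that fills the template with the argument,
    // where the real framework would send the rendered prompt to the LLM.
    static Assistant create() {
        return (Assistant) Proxy.newProxyInstance(
                Assistant.class.getClassLoader(),
                new Class<?>[] { Assistant.class },
                (proxy, method, args) -> {
                    String template = method.getAnnotation(Template.class).value();
                    return template.replace("{it}", String.valueOf(args[0]));
                });
    }
}
```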
```java
@RegisterAiService                       // Create AI service as CDI bean
@RegisterAiService(modelName = "...")    // Use specific model
@RegisterAiService(tools = {...})        // Register tool classes

@UserMessage("...")                      // User message template
@SystemMessage("...")                    // System message template
@MemoryId                                // Mark parameter as memory ID

@Tool("description")                     // Mark method as tool
@ToolInputGuardrails({...})              // Validate inputs
@ToolOutputGuardrails({...})             // Validate outputs
@HandleToolExecutionError                // Handle tool errors
@HandleToolArgumentError                 // Handle argument errors

@ImageUrl, @AudioUrl, @VideoUrl, @PdfUrl // Content type markers
@ResponseAugmenter(...)                  // Transform responses
@AiServiceSelector(MyService.class)      // Filter events by service
@ModelName("...")                        // Select specific model
```

Packages:

- io.quarkiverse.langchain4j - Core annotations and utilities
- io.quarkiverse.langchain4j.guardrails - Tool guardrails framework
- io.quarkiverse.langchain4j.response - Response augmentation
- io.quarkiverse.langchain4j.auth - Model authentication
- io.quarkiverse.langchain4j.cost - Cost estimation
- io.quarkiverse.langchain4j.observability - Events and monitoring
- io.quarkiverse.langchain4j.runtime.config - Configuration interfaces
- io.quarkiverse.langchain4j.runtime.aiservice - Runtime support

Requires LangChain4j 1.9.1 or compatible:
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>1.9.1</version>
</dependency>
```

Model selection:

```java
@RegisterAiService(modelName = "gpt-4")
public interface AdvancedAssistant {
    String chat(String message);
}

@RegisterAiService(modelName = "gpt-3.5-turbo")
public interface BasicAssistant {
    String chat(String message);
}
```

Tool with guardrails:

```java
@ApplicationScoped
public class SecureTool {

    @Tool("Fetch data")
    @ToolInputGuardrails(AuthGuardrail.class)
    @ToolOutputGuardrails({PiiRedactionGuardrail.class})
    public String fetch(String userId) {
        return data;
    }
}
```

Per-user memory:

```java
@RegisterAiService
public interface PersonalAssistant {
    String chat(@MemoryId String userId, @UserMessage String message);
}
```

Streaming with response augmentation:

```java
@RegisterAiService
public interface StreamingAssistant {

    @ResponseAugmenter(CitationAugmenter.class)
    Multi<String> chatStreaming(String message);
}
```

Build-Time Processing:

Runtime Execution:

- Tool methods may block the caller; mark them with @Blocking if needed
- Streaming responses are delivered as Multi<T> with backpressure
- Tool execution errors are routed to @HandleToolExecutionError or returned as a message to the LLM
- Tool argument errors are routed to @HandleToolArgumentError or returned as a message to the LLM
- A ToolGuardrailException terminates execution immediately

Configure in application.properties:
```properties
# Global settings
quarkus.langchain4j.log-requests=true
quarkus.langchain4j.timeout=60s
quarkus.langchain4j.temperature=0.7

# Model-specific
quarkus.langchain4j.openai.gpt-4.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.gpt-4.model-name=gpt-4
quarkus.langchain4j.openai.gpt-4.temperature=0.7

# Guardrails
quarkus.langchain4j.guardrails.max-retries=3

# Tracing
quarkus.langchain4j.tracing.include-prompt=false
quarkus.langchain4j.tracing.include-completion=false
```

Complete Configuration Reference →
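The guardrails.max-retries setting above bounds how often a failed guardrail validation triggers another model call before giving up. A plain-Java sketch of that retry semantic, under the assumption that a failed validation simply re-invokes the call (the helper class, validator, and exception message here are illustrative, not the framework's API):

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

class GuardrailRetry {
    // Re-invokes the supplier until the validator passes, allowing one
    // initial attempt plus up to maxRetries retries -- mirroring the
    // intent of quarkus.langchain4j.guardrails.max-retries.
    static String callWithRetries(Supplier<String> llmCall,
                                  Predicate<String> validator,
                                  int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            String response = llmCall.get();
            if (validator.test(response)) {
                return response;
            }
        }
        throw new IllegalStateException(
                "Guardrail still failing after " + maxRetries + " retries");
    }
}
```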
| Component | Recommended Scope | Notes |
|---|---|---|
| AI Services | @ApplicationScoped | Singleton per service interface |
| Tools | @ApplicationScoped | Stateless recommended |
| Guardrails | @ApplicationScoped | Must be thread-safe |
| Models | @ApplicationScoped | Singleton per model name |
| Auth Providers | @ApplicationScoped | Must be thread-safe |
| Memory Providers | @ApplicationScoped | Manages all user memories |
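Because guardrails and auth providers in the table above are @ApplicationScoped singletons shared across all request threads, any mutable state they keep must be thread-safe. A minimal plain-Java sketch of the concurrency requirement, using a hypothetical invocation counter (not a real framework class):

```java
import java.util.concurrent.atomic.AtomicLong;

// A singleton-scoped component may be invoked from many request threads
// at once, so mutable state must use concurrent primitives rather than
// plain fields.
class RateState {
    private final AtomicLong invocations = new AtomicLong();

    long record() {
        return invocations.incrementAndGet();
    }

    long total() {
        return invocations.get();
    }
}
```

With a plain `long` field and `invocations++`, concurrent requests would lose increments; the atomic counter keeps the total exact.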
Startup:

Runtime:

Memory:

- MessageWindowChatMemory.withMaxMessages(N) or withMaxTokens(N, tokenizer)

Cost:
| Quarkus LangChain4j | LangChain4j | Quarkus | Java |
|---|---|---|---|
| 1.5.0 | 1.9.1 | 3.2.0+ | 17+ |
| 1.4.x | 1.8.x | 3.1.0+ | 17+ |
| 1.3.x | 1.7.x | 3.0.0+ | 17+ |
| Issue | Solution |
|---|---|
| AI Service not injected | Check interface is public, in scanned package, has @RegisterAiService |
| Model not found | Verify configuration matches model name exactly (case-sensitive) |
| BlockingToolNotAllowedException | Add @Blocking to tool method |
| Tool not found | Ensure tool class has CDI scope annotation |
| Memory not persisting | Implement caching in ChatMemoryProvider |
| Template variables not substituted | Match parameter names exactly (case-sensitive) |
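The last troubleshooting row is worth illustrating: template variables are matched to parameter names case-sensitively, so a {Country} placeholder is not filled by a parameter named country and stays in the prompt verbatim. A plain-Java sketch of that substitution rule (illustrative only, not the framework's actual template engine):

```java
import java.util.Map;

class PromptTemplate {
    // Replaces {name} placeholders using an exact, case-sensitive lookup.
    // Unknown placeholders are left untouched -- the symptom seen when a
    // template variable does not match any parameter name.
    static String render(String template, Map<String, String> params) {
        String result = template;
        for (Map.Entry<String, String> e : params.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }
}
```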
Replace manual service creation:
```java
// Before
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .build();
MyAssistant assistant = AiServices.builder(MyAssistant.class)
        .chatLanguageModel(model)
        .build();

// After
@RegisterAiService
public interface MyAssistant {
    String chat(String message);
}

@Inject
MyAssistant assistant;
```

Configure in properties:

```properties
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

In short: declare a @RegisterAiService interface, annotate any methods the LLM can invoke with @Tool, and move configuration into application.properties.

Need more details? See the guides, examples, and reference documentation for comprehensive information.