```shell
tessl install tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-core@1.5.0
```

Quarkus LangChain4j Core provides runtime integration for LangChain4j with the Quarkus framework, enabling declarative AI service creation through CDI annotations.
This guide walks you through creating your first AI-powered Quarkus application using LangChain4j.
Add Quarkus LangChain4j Core to your `pom.xml`:

```xml
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-core</artifactId>
    <version>1.5.0</version>
</dependency>

<!-- Add a model provider (e.g., OpenAI) -->
<dependency>
    <groupId>io.quarkiverse.langchain4j</groupId>
    <artifactId>quarkus-langchain4j-openai</artifactId>
    <version>1.5.0</version>
</dependency>
```

Add your API key to `application.properties`:

```properties
quarkus.langchain4j.openai.api-key=${OPENAI_API_KEY}
```

Set the environment variable:

```shell
export OPENAI_API_KEY=your-api-key-here
```

Create a simple AI service interface:
```java
package com.example;

import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.UserMessage;

@RegisterAiService
public interface AssistantService {

    @UserMessage("What is the capital of {country}?")
    String getCapital(String country);

    @UserMessage("Tell me a joke about {topic}")
    String tellJoke(String topic);
}
```

What's happening here:

- `@RegisterAiService` creates a CDI bean automatically
- `@UserMessage` defines the prompt template
- `{country}` and `{topic}` are template variables that match the parameter names
- A `String` return type means you get the AI response as plain text

Inject your AI service into any CDI bean:
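Before wiring things up, a brief aside on template binding: conceptually, the substitution in `@UserMessage` behaves like the toy helper below. This is purely our illustration of the mechanism, not the library's actual implementation.

```java
import java.util.Map;

public class TemplateDemo {

    // Toy stand-in for prompt templating: replaces each {name}
    // placeholder with the value bound to that parameter name.
    static String render(String template, Map<String, String> vars) {
        String result = template;
        for (Map.Entry<String, String> e : vars.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        // The value of the country parameter fills the {country} placeholder
        System.out.println(render("What is the capital of {country}?",
                Map.of("country", "France")));
        // prints: What is the capital of France?
    }
}
```

In the real API the templating is handled by LangChain4j; you never call anything like `render` yourself.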
```java
package com.example;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.QueryParam;

@Path("/assistant")
public class AssistantResource {

    @Inject
    AssistantService assistant;

    @GET
    @Path("/capital")
    public String getCapital(@QueryParam("country") String country) {
        return assistant.getCapital(country);
    }

    @GET
    @Path("/joke")
    public String getJoke(@QueryParam("topic") String topic) {
        return assistant.tellJoke(topic);
    }
}
```

Start your application:

```shell
mvn quarkus:dev
```

Test the endpoints:

```shell
curl "http://localhost:8080/assistant/capital?country=France"
# Response: "The capital of France is Paris."

curl "http://localhost:8080/assistant/joke?topic=programming"
# Response: "Why do programmers prefer dark mode?..."
```

Provide context to guide the AI's behavior with `@SystemMessage`:
```java
@RegisterAiService
public interface CodeReviewer {

    @SystemMessage("You are an expert code reviewer. Provide constructive feedback.")
    @UserMessage("Review this code: {code}")
    String reviewCode(String code);
}
```

Enable the AI to remember previous messages:
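Conceptually, chat memory is a bounded, per-conversation window of past messages (its size corresponds to the `quarkus.langchain4j.chat-memory.max-messages` setting shown in the configuration section). A toy sketch of that idea, not the library's implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MemoryDemo {

    private final int maxMessages;
    private final Map<String, Deque<String>> memories = new HashMap<>();

    public MemoryDemo(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    // Append a message to one conversation's window,
    // evicting the oldest entry once the window is full.
    public void add(String conversationId, String message) {
        Deque<String> window =
                memories.computeIfAbsent(conversationId, id -> new ArrayDeque<>());
        if (window.size() == maxMessages) {
            window.removeFirst();
        }
        window.addLast(message);
    }

    // Oldest-first history for one conversation.
    public List<String> history(String conversationId) {
        return List.copyOf(memories.getOrDefault(conversationId, new ArrayDeque<>()));
    }
}
```

With `@MemoryId`, the framework maintains such a window for you: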
```java
@RegisterAiService
public interface ChatBot {

    String chat(@MemoryId String userId, @UserMessage String message);
}
```

Each user gets their own conversation history:

```java
@Inject
ChatBot bot;

bot.chat("user123", "My name is Alice");
bot.chat("user123", "What's my name?"); // Response: "Your name is Alice"
```

Let the AI call Java methods:
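Mechanically, function calling means the model picks a named tool and the runtime dispatches to the matching Java method with the model-supplied arguments, then feeds the result back into the conversation. A toy dispatcher illustrating the idea (ours, not the framework's):

```java
import java.util.Map;
import java.util.function.Function;

public class ToolDispatchDemo {

    // Registry of tool name -> implementation, similar in spirit
    // to what the runtime builds from @Tool-annotated methods.
    static final Map<String, Function<String, String>> TOOLS = Map.of(
            "getWeather", city -> "Current weather in " + city + ": Sunny, 22°C"
    );

    // The model's "decision" arrives as a tool name plus an argument;
    // the dispatcher invokes the matching method and returns its result.
    static String dispatch(String toolName, String argument) {
        Function<String, String> tool = TOOLS.get(toolName);
        if (tool == null) {
            throw new IllegalArgumentException("Unknown tool: " + toolName);
        }
        return tool.apply(argument);
    }
}
```

In the real API you only declare the tool; the runtime handles selection and dispatch: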
```java
import dev.langchain4j.agent.tool.Tool;
import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class WeatherTool {

    @Tool("Get current weather for a city")
    public String getWeather(String city) {
        // Call a real weather API here
        return "Current weather in " + city + ": Sunny, 22°C";
    }
}

@RegisterAiService(tools = WeatherTool.class)
public interface WeatherAssistant {
    String chat(String message);
}
```

Usage:

```java
@Inject
WeatherAssistant assistant;

assistant.chat("What's the weather in Paris?");
// The AI automatically calls getWeather("Paris") and uses the result in its answer
```

Use multiple models in the same application:
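A common reason to mix models is cost/latency routing: send simple requests to a fast, cheap model and harder ones to a stronger model. A hypothetical routing heuristic (the threshold, keyword, and model names here are illustrative, not part of this guide):

```java
public class ModelRouter {

    // Naive heuristic: long or review-style prompts go to the
    // stronger (slower, costlier) model; everything else stays cheap.
    static String chooseModel(String message) {
        boolean complex = message.length() > 200 || message.contains("review");
        return complex ? "gpt-4" : "gpt-3.5-turbo";
    }

    public static void main(String[] args) {
        System.out.println(chooseModel("What is the capital of France?")); // gpt-3.5-turbo
        System.out.println(chooseModel("Please review this 500-line class")); // gpt-4
    }
}
```

The named services below can then be injected and selected accordingly.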
```java
@RegisterAiService(modelName = "gpt-4")
public interface AdvancedAssistant {
    String chat(String message);
}

@RegisterAiService(modelName = "gpt-3-5-turbo")
public interface QuickAssistant {
    String chat(String message);
}
```

Configure each named model in `application.properties`; the `modelName` value must match the key segment in the configuration (here `gpt-4` and `gpt-3-5-turbo`):

```properties
quarkus.langchain4j.openai.gpt-4.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.gpt-4.model-name=gpt-4
quarkus.langchain4j.openai.gpt-4.temperature=0.7

quarkus.langchain4j.openai.gpt-3-5-turbo.api-key=${OPENAI_API_KEY}
quarkus.langchain4j.openai.gpt-3-5-turbo.model-name=gpt-3.5-turbo
quarkus.langchain4j.openai.gpt-3-5-turbo.temperature=0.3
```

Stream responses for a better user experience:
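With streaming, the reply arrives as token chunks that the consumer handles incrementally and that concatenate to the full response. A plain-Java toy consumer standing in for the reactive pipeline (our sketch, not the Mutiny API):

```java
import java.util.List;
import java.util.function.Consumer;

public class StreamingDemo {

    // Feed each chunk to the consumer as it "arrives" and return the
    // assembled full response, mirroring how a stream of chunks is consumed.
    static String consume(List<String> chunks, Consumer<String> onChunk) {
        StringBuilder full = new StringBuilder();
        for (String chunk : chunks) {
            onChunk.accept(chunk); // e.g. render partial output in the UI
            full.append(chunk);
        }
        return full.toString();
    }

    public static void main(String[] args) {
        String reply = consume(List.of("Once", " upon", " a time"), System.out::print);
        System.out.println();
        System.out.println(reply); // Once upon a time
    }
}
```

The Mutiny-based API below delivers chunks the same way, but asynchronously.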
```java
import io.smallrye.mutiny.Multi;

@RegisterAiService
public interface StreamingAssistant {
    Multi<String> chatStreaming(String message);
}
```

Usage:

```java
@Inject
StreamingAssistant assistant;

assistant.chatStreaming("Tell me a long story")
    .subscribe().with(
        chunk -> System.out.print(chunk),               // print each chunk as it arrives
        error -> System.err.println("Error: " + error),
        () -> System.out.println("\nComplete")
    );
```

Common configuration options:

```properties
# Logging
quarkus.langchain4j.log-requests=true
quarkus.langchain4j.log-responses=true

# Timeouts
quarkus.langchain4j.timeout=60s

# Model parameters
quarkus.langchain4j.temperature=0.7
quarkus.langchain4j.openai.gpt-4.max-tokens=2000

# Chat memory
quarkus.langchain4j.chat-memory.type=in-memory
quarkus.langchain4j.chat-memory.max-messages=100
```

A note on prompt templates: each `{variableName}` in a `@RegisterAiService` method must match a parameter name exactly, and parameter names are only retained at runtime when the code is compiled with the `-parameters` flag (Quarkus project templates enable this by default).

What you learned:

✓ How to add Quarkus LangChain4j to your project
✓ How to create declarative AI services
✓ How to inject and use AI services
✓ How to add conversation memory
✓ How to enable function calling with tools
✓ How to configure multiple models
✓ How to stream responses