Quarkus extension that integrates Hugging Face language models with Quarkus applications through LangChain4j.

The QuarkusHuggingFaceChatModel class provides text generation using Hugging Face inference endpoints. It implements the LangChain4j ChatModel interface and supports comprehensive configuration options for controlling generation behavior.

Main class for Hugging Face chat model integration:
package io.quarkiverse.langchain4j.huggingface;
/**
* Quarkus-specific implementation of Hugging Face chat model.
* Implements dev.langchain4j.model.chat.ChatModel interface.
*/
public class QuarkusHuggingFaceChatModel implements dev.langchain4j.model.chat.ChatModel {
/**
* Shared client factory instance for creating Hugging Face REST clients.
*/
public static final QuarkusHuggingFaceClientFactory CLIENT_FACTORY;
/**
* Creates a new builder for configuring the chat model.
*
* @return A new Builder instance
*/
public static QuarkusHuggingFaceChatModel.Builder builder();
/**
* Executes a chat request and returns a chat response.
*
* @param chatRequest Chat request containing messages and optional parameters
* @return Chat response containing the generated AI message
*/
public dev.langchain4j.model.chat.response.ChatResponse doChat(
dev.langchain4j.model.chat.request.ChatRequest chatRequest
);
}

The Builder class provides a fluent API for configuring chat model instances programmatically.
/**
* Builder for creating QuarkusHuggingFaceChatModel instances with custom configuration.
*/
public static class Builder {
/**
* Enable/disable request logging.
*/
public boolean logRequests;
/**
* Enable/disable response logging.
*/
public boolean logResponses;
/**
* Sets the Hugging Face API access token.
* Required when using Hugging Face hosted inference API.
*
* @param accessToken The Hugging Face API token (starts with "hf_")
* @return This builder instance
*/
public Builder accessToken(String accessToken);
/**
* Sets the inference endpoint URL.
* Can be Hugging Face Hub API, private endpoint, or local deployment.
*
* @param url The endpoint URL
* @return This builder instance
*/
public Builder url(java.net.URL url);
/**
* Sets the timeout duration for API calls.
*
* @param timeout Timeout duration (default: 15 seconds)
* @return This builder instance
*/
public Builder timeout(java.time.Duration timeout);
/**
* Sets the sampling temperature.
* Controls randomness in generation. Higher values (e.g., 1.0) make output more random,
* lower values (e.g., 0.1) make it more deterministic.
*
* @param temperature Temperature value (the API accepts 0.0-100.0; 0.1-2.0 is typical in practice)
* @return This builder instance
*/
public Builder temperature(Double temperature);
/**
* Sets the maximum number of new tokens to generate.
*
* @param maxNewTokens Maximum number of tokens (typically 0-250 depending on model)
* @return This builder instance
*/
public Builder maxNewTokens(Integer maxNewTokens);
/**
* Sets whether to return the full text including the input prompt.
*
* @param returnFullText true to include input in response, false to return only generated text
* @return This builder instance
*/
public Builder returnFullText(Boolean returnFullText);
/**
* Sets whether to wait for the model to be ready.
* If true, the request waits for the model to load. If false, the request may fail with a 503 error if the model is not yet loaded.
*
* @param waitForModel true to wait for model (default), false to fail fast
* @return This builder instance
*/
public Builder waitForModel(Boolean waitForModel);
/**
* Sets whether to use sampling or greedy decoding.
* When true, uses probabilistic sampling. When false, uses greedy decoding (always picks most likely token).
*
* @param doSample Optional boolean for sampling strategy
* @return This builder instance
*/
public Builder doSample(java.util.Optional<Boolean> doSample);
/**
* Sets the top-K filtering value.
* Limits sampling to the K most likely next tokens. Lower values make output more focused.
*
* @param topK Number of top tokens to consider (e.g., 50)
* @return This builder instance
*/
public Builder topK(java.util.OptionalInt topK);
/**
* Sets the top-P (nucleus sampling) value.
* Limits sampling to the smallest set of tokens whose cumulative probability exceeds P.
* Common values: 0.9-0.95 for balanced results.
*
* @param topP Cumulative probability threshold (0.0-1.0)
* @return This builder instance
*/
public Builder topP(java.util.OptionalDouble topP);
/**
* Sets the repetition penalty.
* Penalizes repeated tokens to reduce repetitive output.
* Values > 1.0 penalize repetition, 1.0 is neutral, < 1.0 encourages repetition.
*
* @param repetitionPenalty Penalty value (typically 1.0-1.5)
* @return This builder instance
*/
public Builder repetitionPenalty(java.util.OptionalDouble repetitionPenalty);
/**
* Enables or disables request logging.
* When enabled, logs outgoing requests (with API key masked).
*
* @param logRequests true to enable logging, false to disable
* @return This builder instance
*/
public Builder logRequests(boolean logRequests);
/**
* Enables or disables response logging.
* When enabled, logs incoming responses.
*
* @param logResponses true to enable logging, false to disable
* @return This builder instance
*/
public Builder logResponses(boolean logResponses);
/**
* Builds and returns the configured chat model instance.
*
* @return Configured QuarkusHuggingFaceChatModel instance
* @throws IllegalArgumentException if required configuration is missing
*/
public QuarkusHuggingFaceChatModel build();
}

Basic usage:

import io.quarkiverse.langchain4j.huggingface.QuarkusHuggingFaceChatModel;
import java.net.URL;
QuarkusHuggingFaceChatModel chatModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("hf_your_token_here")
.url(new URL("https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"))
.build();

Full configuration:

import io.quarkiverse.langchain4j.huggingface.QuarkusHuggingFaceChatModel;
import java.net.URL;
import java.time.Duration;
import java.util.Optional;
import java.util.OptionalInt;
import java.util.OptionalDouble;
QuarkusHuggingFaceChatModel chatModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("hf_your_token_here")
.url(new URL("https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"))
.timeout(Duration.ofSeconds(30))
.temperature(0.7)
.maxNewTokens(150)
.returnFullText(false)
.waitForModel(true)
.doSample(Optional.of(true))
.topK(OptionalInt.of(50))
.topP(OptionalDouble.of(0.95))
.repetitionPenalty(OptionalDouble.of(1.1))
.logRequests(true)
.logResponses(true)
.build();

Executing a chat request:

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.chat.request.ChatRequest;
import dev.langchain4j.model.chat.response.ChatResponse;
import java.util.List;
// Create user message
UserMessage userMessage = UserMessage.from("Explain quantum computing in simple terms");
// Create chat request
ChatRequest chatRequest = ChatRequest.builder()
.messages(List.of(userMessage))
.build();
// Execute chat request
ChatResponse chatResponse = chatModel.doChat(chatRequest);
// Extract generated text
String generatedText = chatResponse.aiMessage().text();
System.out.println(generatedText);

// Use a different Hugging Face model
QuarkusHuggingFaceChatModel customModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("hf_your_token_here")
.url(new URL("https://api-inference.huggingface.co/models/google/flan-t5-small"))
.temperature(0.8)
.build();

// Use a locally deployed model
QuarkusHuggingFaceChatModel localModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("dummy") // May not need real token for local deployment
.url(new URL("http://localhost:8085"))
.temperature(0.7)
.build();
// Use a dedicated Hugging Face Inference Endpoint (e.g., AWS-hosted)
QuarkusHuggingFaceChatModel awsModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("your_endpoint_token")
.url(new URL("https://your-endpoint.endpoints.huggingface.cloud"))
.temperature(0.7)
.build();

Tuning generation parameters:

import java.net.URL;
import java.util.Optional;
import java.util.OptionalInt;
import java.util.OptionalDouble;
// Conservative, focused generation
QuarkusHuggingFaceChatModel conservativeModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("hf_your_token_here")
.url(new URL("https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"))
.temperature(0.3)
.topP(OptionalDouble.of(0.85))
.topK(OptionalInt.of(30))
.repetitionPenalty(OptionalDouble.of(1.2))
.build();
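These sampling parameters act directly on the model's next-token distribution: temperature rescales it before sampling, and top-P trims it to a small nucleus of likely candidates. The following toy sketch illustrates that arithmetic; it is illustrative only, and the `SamplingSketch` class is not part of this extension or of LangChain4j:

```java
import java.util.Arrays;

// Toy illustration of how temperature and top-P (nucleus) sampling shape a
// token distribution. Not the extension's internals.
public class SamplingSketch {

    // Softmax with temperature: lower values sharpen the distribution
    // (more deterministic), higher values flatten it (more random).
    public static double[] softmax(double[] logits, double temperature) {
        double[] probs = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp(logits[i] / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) {
            probs[i] /= sum;
        }
        return probs;
    }

    // Top-P: size of the smallest candidate set (taken in descending
    // probability order) whose cumulative probability reaches p.
    public static int nucleusSize(double[] probs, double p) {
        double[] sorted = probs.clone();
        Arrays.sort(sorted); // ascending; iterate from the end for descending
        double cum = 0.0;
        int keep = 0;
        for (int i = sorted.length - 1; i >= 0 && cum < p; i--) {
            cum += sorted[i];
            keep++;
        }
        return keep;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.0};
        System.out.printf("P(top token) at T=0.5: %.2f%n", softmax(logits, 0.5)[0]);
        System.out.printf("P(top token) at T=2.0: %.2f%n", softmax(logits, 2.0)[0]);

        double[] probs = {0.5, 0.25, 0.15, 0.07, 0.03};
        System.out.println("top-P=0.9 keeps " + nucleusSize(probs, 0.9) + " candidates");
    }
}
```

Lower temperature concentrates probability on the top token and a smaller top-P keeps fewer candidates, which is why the conservative profile above pairs temperature 0.3 with topP 0.85.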
// Creative, diverse generation
QuarkusHuggingFaceChatModel creativeModel = QuarkusHuggingFaceChatModel.builder()
.accessToken("hf_your_token_here")
.url(new URL("https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct"))
.temperature(1.2)
.topP(OptionalDouble.of(0.95))
.topK(OptionalInt.of(80))
.doSample(Optional.of(true))
.build();

When using declarative configuration, the following properties are available:
# API Key (required for Hugging Face Hub API)
quarkus.langchain4j.huggingface.api-key=hf_your_token_here
# Inference endpoint URL
quarkus.langchain4j.huggingface.chat-model.inference-endpoint-url=https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct
# Timeout (default: 10s)
quarkus.langchain4j.huggingface.timeout=30s
# Sampling temperature (default: 1.0)
quarkus.langchain4j.huggingface.chat-model.temperature=0.7
# Maximum new tokens (no default, model-dependent)
quarkus.langchain4j.huggingface.chat-model.max-new-tokens=100
# Return full text including input (default: false)
quarkus.langchain4j.huggingface.chat-model.return-full-text=false
# Wait for model to be ready (default: true)
quarkus.langchain4j.huggingface.chat-model.wait-for-model=true
# Use sampling vs greedy decoding (optional)
quarkus.langchain4j.huggingface.chat-model.do-sample=true
# Top-K filtering (optional)
quarkus.langchain4j.huggingface.chat-model.top-k=50
# Top-P nucleus sampling (optional)
quarkus.langchain4j.huggingface.chat-model.top-p=0.95
# Repetition penalty (optional, default: 1.0)
quarkus.langchain4j.huggingface.chat-model.repetition-penalty=1.1
# Logging
quarkus.langchain4j.huggingface.log-requests=true
quarkus.langchain4j.huggingface.log-responses=true
quarkus.langchain4j.huggingface.chat-model.log-requests=true
quarkus.langchain4j.huggingface.chat-model.log-responses=true

When using LangChain4j AI Services in Quarkus, the configured chat model is automatically injected:
import io.quarkiverse.langchain4j.RegisterAiService;
import dev.langchain4j.service.UserMessage;
import jakarta.inject.Inject;
@RegisterAiService
public interface AssistantService {
@UserMessage("Tell me a joke about {topic}")
String tellJoke(String topic);
}
// Quarkus automatically injects configured Hugging Face chat model
@Inject
AssistantService assistant;
String joke = assistant.tellJoke("programming");

The default chat model when using the Hugging Face Hub API is tiiuae/falcon-7b-instruct, served at:

https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct

Install with Tessl CLI:
npx tessl i tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-hugging-face