Quarkus extension deployment module for integrating Ollama LLM models with Quarkus applications through the LangChain4j framework
The Quarkus LangChain4j Ollama Deployment module provides build-time configuration interfaces that control model enablement and DevServices behavior. These configuration properties are processed during the application build phase.
The root configuration interface defines the top-level configuration structure for Ollama integration.
```java
package io.quarkiverse.langchain4j.ollama.deployment;

import static io.quarkus.runtime.annotations.ConfigPhase.BUILD_TIME;

import io.quarkiverse.langchain4j.ollama.deployment.devservices.OllamaDevServicesBuildConfig;
import io.quarkus.runtime.annotations.ConfigRoot;
import io.smallrye.config.ConfigMapping;

@ConfigRoot(phase = BUILD_TIME)
@ConfigMapping(prefix = "quarkus.langchain4j.ollama")
public interface LangChain4jOllamaOpenAiBuildConfig {

    /**
     * Chat model related settings
     */
    ChatModelBuildConfig chatModel();

    /**
     * Embedding model related settings
     */
    EmbeddingModelBuildConfig embeddingModel();

    /**
     * Dev services related settings
     */
    OllamaDevServicesBuildConfig devservices();
}
```

Configuration Prefix: `quarkus.langchain4j.ollama`
Phase: `BUILD_TIME` - these properties are read and processed during the application build

Methods:

- `chatModel()` - Returns the chat model build configuration
- `embeddingModel()` - Returns the embedding model build configuration
- `devservices()` - Returns the DevServices build configuration
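The property keys follow from the `@ConfigMapping` prefix combined with the kebab-cased method names: `chatModel().enabled()`, for example, resolves to `quarkus.langchain4j.ollama.chat-model.enabled`.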
Controls whether the chat model should be enabled at build time.

```java
package io.quarkiverse.langchain4j.ollama.deployment;

import java.util.Optional;

import io.quarkus.runtime.annotations.ConfigDocDefault;
import io.quarkus.runtime.annotations.ConfigGroup;

@ConfigGroup
public interface ChatModelBuildConfig {

    /**
     * Whether the model should be enabled
     */
    @ConfigDocDefault("true")
    Optional<Boolean> enabled();
}
```

Configuration Property: `quarkus.langchain4j.ollama.chat-model.enabled`
- Type: `Optional<Boolean>`
- Default: `true` (chat model is enabled by default)
- Description: When set to `false`, the chat model provider candidate will not be registered, and no `ChatModel` or `StreamingChatModel` beans will be created.

Disable the chat model in `application.properties`:
```properties
quarkus.langchain4j.ollama.chat-model.enabled=false
```
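Since no `ChatModel` bean is created, any remaining injection point for it becomes unsatisfied unless another provider supplies one. A hypothetical sketch (package and class names are illustrative) of code that would then fail Quarkus' build-time CDI validation:

```java
package org.acme; // illustrative application package

import dev.langchain4j.model.chat.ChatModel;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class BrokenChatService {

    // With quarkus.langchain4j.ollama.chat-model.enabled=false and no other
    // chat-capable provider configured, this injection point is unsatisfied
    // and the application fails at build time during CDI validation.
    @Inject
    ChatModel chatModel;
}
```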
Controls whether the embedding model should be enabled at build time.

```java
package io.quarkiverse.langchain4j.ollama.deployment;

import java.util.Optional;

import io.quarkus.runtime.annotations.ConfigDocDefault;
import io.quarkus.runtime.annotations.ConfigGroup;

@ConfigGroup
public interface EmbeddingModelBuildConfig {

    /**
     * Whether the model should be enabled
     */
    @ConfigDocDefault("true")
    Optional<Boolean> enabled();
}
```

Configuration Property: `quarkus.langchain4j.ollama.embedding-model.enabled`
- Type: `Optional<Boolean>`
- Default: `true` (embedding model is enabled by default)
- Description: When set to `false`, the embedding model provider candidate will not be registered, and no `EmbeddingModel` bean will be created.

Disable the embedding model in `application.properties`:
```properties
quarkus.langchain4j.ollama.embedding-model.enabled=false
```

All build-time configuration properties support named instances, allowing multiple Ollama configurations with different settings.
Named configurations use the pattern:

```
quarkus.langchain4j.ollama.<config-name>.<property>
```

Configure multiple Ollama instances:
```properties
# Default configuration
quarkus.langchain4j.ollama.chat-model.enabled=true
quarkus.langchain4j.ollama.embedding-model.enabled=true
# Named configuration "instance1"
quarkus.langchain4j.ollama.instance1.chat-model.enabled=true
quarkus.langchain4j.ollama.instance1.embedding-model.enabled=false
# Named configuration "instance2"
quarkus.langchain4j.ollama.instance2.chat-model.enabled=false
quarkus.langchain4j.ollama.instance2.embedding-model.enabled=true
```

Named configurations are automatically detected by the `implicitlyConfiguredProviders` build step and registered as `ImplicitlyUserConfiguredChatProviderBuildItem` instances.
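At runtime, a named configuration is typically selected with the `@ModelName` qualifier. A minimal sketch, assuming the `instance1` configuration above, the `io.quarkiverse.langchain4j.ModelName` annotation from the core runtime artifact, and an illustrative application class:

```java
package org.acme; // illustrative application package

import dev.langchain4j.model.chat.ChatModel;
import io.quarkiverse.langchain4j.ModelName;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class MultiModelService {

    // Bound to the default configuration (quarkus.langchain4j.ollama.*)
    @Inject
    ChatModel defaultModel;

    // Bound to the named configuration (quarkus.langchain4j.ollama.instance1.*)
    @Inject
    @ModelName("instance1")
    ChatModel instance1Model;

    public String ask(String question) {
        return instance1Model.chat(question);
    }
}
```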
The build-time configuration is processed by the `OllamaProcessor.providerCandidates()` build step:
```java
@BuildStep
public void providerCandidates(
        BuildProducer<ChatModelProviderCandidateBuildItem> chatProducer,
        BuildProducer<EmbeddingModelProviderCandidateBuildItem> embeddingProducer,
        LangChain4jOllamaOpenAiBuildConfig config) {
    if (config.chatModel().enabled().isEmpty() || config.chatModel().enabled().get()) {
        chatProducer.produce(new ChatModelProviderCandidateBuildItem("ollama"));
    }
    if (config.embeddingModel().enabled().isEmpty() || config.embeddingModel().enabled().get()) {
        embeddingProducer.produce(new EmbeddingModelProviderCandidateBuildItem("ollama"));
    }
}
```
Behavior:

- When `enabled()` returns `Optional.empty()` (property not set), the model is enabled (default behavior)
- When `enabled()` returns `Optional.of(true)`, the model is enabled
- When `enabled()` returns `Optional.of(false)`, the model is disabled and no provider candidate is registered
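The guard in `providerCandidates()` is equivalent to treating the property as `enabled().orElse(true)`. A minimal standalone sketch of the same logic (the `EnabledCheck` class is illustrative, not part of the module):

```java
import java.util.Optional;

class EnabledCheck {

    // Mirrors the build step's guard: absent or true means enabled, false means disabled.
    static boolean isEnabled(Optional<Boolean> enabled) {
        return enabled.orElse(true);
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(Optional.empty()));   // true (default)
        System.out.println(isEnabled(Optional.of(true)));  // true
        System.out.println(isEnabled(Optional.of(false))); // false
    }
}
```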
When models are enabled, the deployment module produces:

- `ChatModelProviderCandidateBuildItem` with provider name `"ollama"`
- `EmbeddingModelProviderCandidateBuildItem` with provider name `"ollama"`

These build items participate in the Quarkus LangChain4j provider selection mechanism.
Named configurations are detected and registered:
```java
@BuildStep
public void implicitlyConfiguredProviders(
        LangChain4jOllamaFixedRuntimeConfig fixedRuntimeConfig,
        BuildProducer<ImplicitlyUserConfiguredChatProviderBuildItem> producer) {
    fixedRuntimeConfig.namedConfig().keySet().forEach(configName -> {
        producer.produce(new ImplicitlyUserConfiguredChatProviderBuildItem(configName, "ollama"));
    });
}
```

This build step examines the fixed runtime configuration to discover named Ollama instances and registers them as implicitly configured providers.
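For example, with the `instance1` and `instance2` entries from the earlier properties snippet, `namedConfig()` contains exactly those two keys, so both names are registered as implicitly configured `ollama` chat providers.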
| Property | Type | Default | Description |
|---|---|---|---|
| `quarkus.langchain4j.ollama.chat-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable chat model provider registration |
| `quarkus.langchain4j.ollama.embedding-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable embedding model provider registration |
| `quarkus.langchain4j.ollama.<name>.chat-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable named chat model |
| `quarkus.langchain4j.ollama.<name>.embedding-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable named embedding model |
The configuration interfaces use `@ConfigMapping` for type-safe configuration.

Install with the Tessl CLI:

```
npx tessl i tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-ollama-deployment@1.7.0
```