tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-ollama-deployment

Quarkus extension deployment module for integrating Ollama LLM models with Quarkus applications through the LangChain4j framework

Build-Time Configuration

The Quarkus LangChain4j Ollama Deployment module provides build-time configuration interfaces that control model enablement and DevServices behavior. These configuration properties are processed during the application build phase.

Configuration Root

The root configuration interface defines the top-level configuration structure for Ollama integration.

package io.quarkiverse.langchain4j.ollama.deployment;

import io.quarkiverse.langchain4j.ollama.deployment.devservices.OllamaDevServicesBuildConfig;
import io.quarkus.runtime.annotations.ConfigRoot;
import io.smallrye.config.ConfigMapping;
import static io.quarkus.runtime.annotations.ConfigPhase.BUILD_TIME;

@ConfigRoot(phase = BUILD_TIME)
@ConfigMapping(prefix = "quarkus.langchain4j.ollama")
public interface LangChain4jOllamaOpenAiBuildConfig {
    /**
     * Chat model related settings
     */
    ChatModelBuildConfig chatModel();

    /**
     * Embedding model related settings
     */
    EmbeddingModelBuildConfig embeddingModel();

    /**
     * Dev services related settings
     */
    OllamaDevServicesBuildConfig devservices();
}

Configuration Prefix: quarkus.langchain4j.ollama

Phase: BUILD_TIME - These properties are read and processed during application build

Methods:

  • chatModel() - Returns chat model build configuration
  • embeddingModel() - Returns embedding model build configuration
  • devservices() - Returns DevServices configuration
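Taken together, the three accessor methods map to property groups under the shared prefix. A sketch in application.properties (the `devservices.enabled` key is an assumption inferred from the `devservices()` group; see the DevServices documentation for the authoritative key list):

```properties
# Chat and embedding model enablement (documented in the sections below)
quarkus.langchain4j.ollama.chat-model.enabled=true
quarkus.langchain4j.ollama.embedding-model.enabled=true
# DevServices group; the exact child key is assumed here
quarkus.langchain4j.ollama.devservices.enabled=true
```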

Chat Model Build Configuration

Controls whether the chat model should be enabled at build time.

package io.quarkiverse.langchain4j.ollama.deployment;

import java.util.Optional;
import io.quarkus.runtime.annotations.ConfigDocDefault;
import io.quarkus.runtime.annotations.ConfigGroup;

@ConfigGroup
public interface ChatModelBuildConfig {
    /**
     * Whether the model should be enabled
     */
    @ConfigDocDefault("true")
    Optional<Boolean> enabled();
}

Configuration Property: quarkus.langchain4j.ollama.chat-model.enabled

Type: Optional<Boolean>

Default: true (chat model is enabled by default)

Description: When set to false, the chat model provider candidate will not be registered, and no ChatModel or StreamingChatModel beans will be created.

Usage Example

Disable chat model in application.properties:

quarkus.langchain4j.ollama.chat-model.enabled=false

Embedding Model Build Configuration

Controls whether the embedding model should be enabled at build time.

package io.quarkiverse.langchain4j.ollama.deployment;

import java.util.Optional;
import io.quarkus.runtime.annotations.ConfigDocDefault;
import io.quarkus.runtime.annotations.ConfigGroup;

@ConfigGroup
public interface EmbeddingModelBuildConfig {
    /**
     * Whether the model should be enabled
     */
    @ConfigDocDefault("true")
    Optional<Boolean> enabled();
}

Configuration Property: quarkus.langchain4j.ollama.embedding-model.enabled

Type: Optional<Boolean>

Default: true (embedding model is enabled by default)

Description: When set to false, the embedding model provider candidate will not be registered, and no EmbeddingModel bean will be created.

Usage Example

Disable embedding model in application.properties:

quarkus.langchain4j.ollama.embedding-model.enabled=false

Named Configuration Support

All build-time configuration properties support named instances, allowing multiple Ollama configurations with different settings.

Named Configuration Pattern

Named configurations use the pattern:

quarkus.langchain4j.ollama.<config-name>.<property>

Usage Example

Configure multiple Ollama instances:

# Default configuration
quarkus.langchain4j.ollama.chat-model.enabled=true
quarkus.langchain4j.ollama.embedding-model.enabled=true

# Named configuration "instance1"
quarkus.langchain4j.ollama.instance1.chat-model.enabled=true
quarkus.langchain4j.ollama.instance1.embedding-model.enabled=false

# Named configuration "instance2"
quarkus.langchain4j.ollama.instance2.chat-model.enabled=false
quarkus.langchain4j.ollama.instance2.embedding-model.enabled=true

Named configurations are automatically detected by the implicitlyConfiguredProviders build step and registered as ImplicitlyUserConfiguredChatProviderBuildItem instances.

Configuration Processing

The build-time configuration is processed by the OllamaProcessor.providerCandidates() build step:

@BuildStep
public void providerCandidates(
    BuildProducer<ChatModelProviderCandidateBuildItem> chatProducer,
    BuildProducer<EmbeddingModelProviderCandidateBuildItem> embeddingProducer,
    LangChain4jOllamaOpenAiBuildConfig config
) {
    if (config.chatModel().enabled().isEmpty() || config.chatModel().enabled().get()) {
        chatProducer.produce(new ChatModelProviderCandidateBuildItem("ollama"));
    }
    if (config.embeddingModel().enabled().isEmpty() || config.embeddingModel().enabled().get()) {
        embeddingProducer.produce(new EmbeddingModelProviderCandidateBuildItem("ollama"));
    }
}

Behavior:

  • If enabled() returns Optional.empty() (property not set), the model is enabled (default behavior)
  • If enabled() returns Optional.of(true), the model is enabled
  • If enabled() returns Optional.of(false), the model is disabled and no provider candidate is registered
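The three cases collapse to `Optional.orElse(true)`: unset means enabled. A standalone sketch of this enablement rule, with no Quarkus dependencies (`EnabledCheck` and `isEnabled` are illustrative names, not part of the extension):

```java
import java.util.Optional;

// Standalone sketch of the three-state check used by providerCandidates():
// an unset property means "enabled by default".
public class EnabledCheck {

    // Equivalent to: enabled().isEmpty() || enabled().get()
    static boolean isEnabled(Optional<Boolean> enabled) {
        // Optional.empty()   -> property not set -> enabled (default)
        // Optional.of(true)  -> explicitly enabled
        // Optional.of(false) -> explicitly disabled
        return enabled.orElse(true);
    }

    public static void main(String[] args) {
        System.out.println(isEnabled(Optional.empty()));    // true
        System.out.println(isEnabled(Optional.of(true)));   // true
        System.out.println(isEnabled(Optional.of(false)));  // false
    }
}
```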

Related Build Steps

Provider Candidate Registration

When models are enabled, the deployment module produces:

  • ChatModelProviderCandidateBuildItem with provider name "ollama"
  • EmbeddingModelProviderCandidateBuildItem with provider name "ollama"

These build items participate in the Quarkus LangChain4j provider selection mechanism.
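When several provider extensions are on the classpath, the selection mechanism lets the application pick a candidate explicitly. A hedged sketch (the `quarkus.langchain4j.*.provider` keys follow the upstream Quarkus LangChain4j convention; verify them against that project's configuration reference):

```properties
# Explicitly select the Ollama candidates when multiple providers are present
quarkus.langchain4j.chat-model.provider=ollama
quarkus.langchain4j.embedding-model.provider=ollama
```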

Implicit Configuration Detection

Named configurations are detected and registered:

@BuildStep
public void implicitlyConfiguredProviders(
    LangChain4jOllamaFixedRuntimeConfig fixedRuntimeConfig,
    BuildProducer<ImplicitlyUserConfiguredChatProviderBuildItem> producer
) {
    fixedRuntimeConfig.namedConfig().keySet().forEach(configName -> {
        producer.produce(new ImplicitlyUserConfiguredChatProviderBuildItem(configName, "ollama"));
    });
}

This build step examines the fixed runtime configuration to discover named Ollama instances and registers them as implicitly configured providers.
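The detection itself is just a walk over the named-config map keys. A standalone sketch of that shape (`NamedConfigDetection` and `ProviderItem` are illustrative stand-ins for the processor and build item, not real classes from the module):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative stand-in: each key in the fixed runtime config's named-config
// map yields one (configName, "ollama") entry, mirroring the forEach in
// implicitlyConfiguredProviders().
public class NamedConfigDetection {

    public record ProviderItem(String configName, String provider) {}

    public static List<ProviderItem> detect(Map<String, ?> namedConfig) {
        List<ProviderItem> items = new ArrayList<>();
        // One implicitly configured provider per named Ollama instance
        namedConfig.keySet().forEach(name -> items.add(new ProviderItem(name, "ollama")));
        return items;
    }

    public static void main(String[] args) {
        List<ProviderItem> items = detect(Map.of("instance1", "", "instance2", ""));
        items.forEach(i -> System.out.println(i.configName() + " -> " + i.provider()));
    }
}
```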

Configuration Reference

| Property | Type | Default | Description |
| --- | --- | --- | --- |
| `quarkus.langchain4j.ollama.chat-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable chat model provider registration |
| `quarkus.langchain4j.ollama.embedding-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable embedding model provider registration |
| `quarkus.langchain4j.ollama.<name>.chat-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable named chat model |
| `quarkus.langchain4j.ollama.<name>.embedding-model.enabled` | `Optional<Boolean>` | `true` | Enable/disable named embedding model |

Notes

  • Build-time configuration properties are processed during application build, not at runtime
  • These properties control whether provider candidates are registered and whether synthetic beans are created
  • Disabling a model at build time means it cannot be enabled at runtime
  • Named configurations allow multiple Ollama instances with independent enablement settings
  • The configuration interfaces use Quarkus SmallRye Config's @ConfigMapping for type-safe configuration
