Quarkus extension deployment module for OpenAI integration with LangChain4j, providing build-time processing and CDI bean generation.
Comprehensive configuration properties for the Quarkus LangChain4j OpenAI extension. All properties are prefixed with quarkus.langchain4j.openai and can be set in application.properties or application.yml.
Build-time configuration properties processed during Quarkus application build, controlling which model types are enabled and included in the final application.
# Whether chat model support should be enabled at build time
# Type: Boolean
# Default: true
# Phase: BUILD_TIME
quarkus.langchain4j.openai.chat-model.enabled=true
When disabled, chat model CDI beans will not be created and chat model service providers will not be registered.
# Whether embedding model support should be enabled at build time
# Type: Boolean
# Default: true
# Phase: BUILD_TIME
quarkus.langchain4j.openai.embedding-model.enabled=true
When disabled, embedding model CDI beans will not be created and embedding model service providers will not be registered.
# Whether moderation model support should be enabled at build time
# Type: Boolean
# Default: true
# Phase: BUILD_TIME
quarkus.langchain4j.openai.moderation-model.enabled=true
When disabled, moderation model CDI beans will not be created and moderation model service providers will not be registered.
# Whether image model support should be enabled at build time
# Type: Boolean
# Default: true
# Phase: BUILD_TIME
quarkus.langchain4j.openai.image-model.enabled=true
When disabled, image model CDI beans will not be created and image model service providers will not be registered.
Runtime configuration properties that apply to all model types, providing shared settings for API access, networking, and logging.
# OpenAI API base URL
# Type: String
# Default: https://api.openai.com/v1/
quarkus.langchain4j.openai.base-url=https://api.openai.com/v1/
# OpenAI API key (required when enable-integration=true)
# Type: String
# Required: Yes (when integration enabled)
# Environment variable: QUARKUS_LANGCHAIN4J_OPENAI_API_KEY
quarkus.langchain4j.openai.api-key=sk-your-api-key-here
# OpenAI organization ID (optional)
# Type: String
# Required: No
quarkus.langchain4j.openai.organization-id=org-xxxxx
# Quarkus TLS configuration name for custom SSL/TLS settings
# Type: String
# Required: No
quarkus.langchain4j.openai.tls-configuration-name=my-tls-config
# Request timeout duration
# Type: Duration
# Default: 10s
# Format: ISO-8601 duration or simple format (e.g., "10s", "30s", "1m")
quarkus.langchain4j.openai.timeout=10s
# Maximum number of retry attempts for failed requests
# Type: Integer
# Default: 1
# Range: 0 to unlimited
quarkus.langchain4j.openai.max-retries=1
# Enable logging of requests
# Type: Boolean
# Default: false
quarkus.langchain4j.openai.log-requests=false
# Enable logging of responses
# Type: Boolean
# Default: false
quarkus.langchain4j.openai.log-responses=false
# Enable logging of requests in cURL format
# Type: Boolean
# Default: false
quarkus.langchain4j.openai.log-requests-curl=false
# Enable or disable OpenAI integration
# Type: Boolean
# Default: true
# When false, disabled model instances are created that throw an exception when invoked
quarkus.langchain4j.openai.enable-integration=true
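One common pattern, shown here as an assumption rather than behavior specific to this extension, is to keep the integration enabled in production while disabling it for a particular Quarkus profile (for example, tests) using standard profile-prefixed properties:
# Hypothetical test-profile override: production stays enabled, tests never call OpenAI
%test.quarkus.langchain4j.openai.enable-integration=false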
# HTTP proxy type
# Type: String
# Default: HTTP
# Values: HTTP, HTTPS, SOCKS
quarkus.langchain4j.openai.proxy-type=HTTP
# HTTP proxy host
# Type: String
# Required: No
quarkus.langchain4j.openai.proxy-host=proxy.example.com
# HTTP proxy port
# Type: Integer
# Default: 3128
quarkus.langchain4j.openai.proxy-port=3128
Runtime configuration for OpenAI chat models (GPT models), controlling model selection, sampling parameters, output formatting, and behavior.
# OpenAI chat model name
# Type: String
# Default: gpt-4o-mini
# Examples: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o1-preview
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini
# Sampling temperature for response randomness
# Type: Double
# Default: 1.0
# Range: 0.0 to 2.0
# Higher values (e.g., 1.5) make output more random, lower values (e.g., 0.2) more deterministic
quarkus.langchain4j.openai.chat-model.temperature=1.0
# Nucleus sampling parameter
# Type: Double
# Default: 1.0
# Range: 0.0 to 1.0
# Alternative to temperature for controlling randomness
quarkus.langchain4j.openai.chat-model.top-p=1.0
# Maximum number of tokens to generate (DEPRECATED - use max-completion-tokens)
# Type: Integer
# Required: No
# This property is deprecated; use max-completion-tokens instead
quarkus.langchain4j.openai.chat-model.max-tokens=
# Maximum number of completion tokens
# Type: Integer
# Required: No
# Controls the maximum length of the generated response
quarkus.langchain4j.openai.chat-model.max-completion-tokens=
# Presence penalty to discourage topic repetition
# Type: Double
# Default: 0
# Range: -2.0 to 2.0
# Positive values discourage repeating topics already mentioned
quarkus.langchain4j.openai.chat-model.presence-penalty=0
# Frequency penalty to discourage word repetition
# Type: Double
# Default: 0
# Range: -2.0 to 2.0
# Positive values discourage repeating specific words
quarkus.langchain4j.openai.chat-model.frequency-penalty=0
# Response format specification
# Type: String
# Required: No
# Values: "json_object" for JSON mode, or JSON schema for structured output
# Example: "json_object" or a JSON schema definition
quarkus.langchain4j.openai.chat-model.response-format=
# Enable strict JSON schema validation
# Type: Boolean
# Required: No
# When true, enforces strict adherence to the provided JSON schema
quarkus.langchain4j.openai.chat-model.strict-json-schema=
# Stop sequences to halt generation
# Type: List<String>
# Required: No
# Comma-separated list of sequences that stop generation when encountered
quarkus.langchain4j.openai.chat-model.stop=
# Reasoning effort level for o1 models
# Type: String
# Required: No
# Values: "minimal", "low", "medium", "high"
# Only applicable to o1 model series (o1, o1-mini, o1-preview)
# Controls computational effort spent on reasoning before responding
quarkus.langchain4j.openai.chat-model.reasoning-effort=
# Service tier for request priority and latency
# Type: String
# Default: default
# Values: "auto", "default", "flex", "priority"
# - "auto": Let OpenAI select the appropriate tier
# - "default": Standard service tier with consistent latency
# - "flex": Lower cost with higher variance in latency
# - "priority": Higher cost with lower latency guarantees
quarkus.langchain4j.openai.chat-model.service-tier=default
# Enable request logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.chat-model.log-requests=
# Enable response logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.chat-model.log-responses=
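The chat model configured by the properties above can be consumed through a declarative AI service. A minimal sketch, assuming the standard quarkus-langchain4j @RegisterAiService programming model; the Assistant interface, AskResource class, and their method names are illustrative, not part of this module:
// Assistant.java (illustrative)
import dev.langchain4j.service.SystemMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Backed by the chat model built from the quarkus.langchain4j.openai.chat-model.* properties
@RegisterAiService
public interface Assistant {

    @SystemMessage("You are a concise technical assistant.")
    String answer(String question);
}

// AskResource.java (illustrative)
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/ask")
public class AskResource {

    @Inject
    Assistant assistant; // CDI bean generated by the deployment module

    @GET
    public String ask() {
        // Throws at runtime if quarkus.langchain4j.openai.enable-integration=false
        return assistant.answer("What is Quarkus?");
    }
}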
Runtime configuration for OpenAI embedding models controlling model selection and usage tracking.
# OpenAI embedding model name
# Type: String
# Default: text-embedding-ada-002
# Examples: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large
quarkus.langchain4j.openai.embedding-model.model-name=text-embedding-ada-002
# User identifier for tracking and monitoring
# Type: String
# Required: No
# Used to track usage by end-user for monitoring and abuse prevention
quarkus.langchain4j.openai.embedding-model.user=
# Enable request logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.embedding-model.log-requests=
# Enable response logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.embedding-model.log-responses=
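A minimal sketch of using the embedding model configured above; the EmbeddingService class is illustrative, and the EmbeddingModel API shown is assumed to be the LangChain4j interface the extension exposes as a CDI bean:
import dev.langchain4j.data.embedding.Embedding;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.output.Response;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class EmbeddingService {

    @Inject
    EmbeddingModel embeddingModel; // backed by quarkus.langchain4j.openai.embedding-model.*

    public float[] embed(String text) {
        // Sends the text to the configured embedding model and returns the raw vector
        Response<Embedding> response = embeddingModel.embed(text);
        return response.content().vector();
    }
}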
Runtime configuration for OpenAI moderation models controlling model selection for content safety checks.
# OpenAI moderation model name
# Type: String
# Default: omni-moderation-latest
# Examples: omni-moderation-latest, omni-moderation-2024-09-26, text-moderation-latest, text-moderation-stable
quarkus.langchain4j.openai.moderation-model.model-name=omni-moderation-latest
# Enable request logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.moderation-model.log-requests=
# Enable response logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.moderation-model.log-responses=
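A minimal sketch of a content-safety check with the moderation model configured above; the ContentGuard class is illustrative, and the ModerationModel API is assumed from LangChain4j:
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.output.Response;
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class ContentGuard {

    @Inject
    ModerationModel moderationModel; // backed by quarkus.langchain4j.openai.moderation-model.*

    public boolean isFlagged(String userInput) {
        // Returns true when the configured moderation model flags the input
        Response<Moderation> response = moderationModel.moderate(userInput);
        return response.content().flagged();
    }
}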
Runtime configuration for OpenAI image generation models (DALL-E) controlling model selection, image parameters, and persistence.
# OpenAI image model name
# Type: String
# Default: dall-e-3
# Examples: dall-e-3, dall-e-2
quarkus.langchain4j.openai.image-model.model-name=dall-e-3
# Whether to persist generated images to disk
# Type: Boolean
# Default: false
quarkus.langchain4j.openai.image-model.persist=false
# Directory for persisted images
# Type: Path
# Default: ${java.io.tmpdir}/dall-e-images
# Only used when persist=true
quarkus.langchain4j.openai.image-model.persist-directory=${java.io.tmpdir}/dall-e-images
# Image response format
# Type: String
# Default: url
# Values: "url" (returns image URL) or "b64_json" (returns base64-encoded image)
quarkus.langchain4j.openai.image-model.response-format=url
# Image size
# Type: String
# Default: 1024x1024
# DALL-E 3 values: "1024x1024", "1024x1792", "1792x1024"
# DALL-E 2 values: "256x256", "512x512", "1024x1024"
quarkus.langchain4j.openai.image-model.size=1024x1024
# Image quality
# Type: String
# Default: standard
# Values: "standard", "hd" (DALL-E 3 only)
quarkus.langchain4j.openai.image-model.quality=standard
# Number of images to generate
# Type: Integer
# Default: 1
# Range: 1 to 10 (DALL-E 2), 1 (DALL-E 3)
quarkus.langchain4j.openai.image-model.number=1
# Image style
# Type: String
# Default: vivid
# Values: "vivid", "natural" (DALL-E 3 only)
quarkus.langchain4j.openai.image-model.style=vivid
# User identifier for tracking and monitoring
# Type: String
# Required: No
quarkus.langchain4j.openai.image-model.user=
# Enable request logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.image-model.log-requests=
# Enable response logging (overrides global setting)
# Type: Boolean
# Required: No
quarkus.langchain4j.openai.image-model.log-responses=
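A minimal sketch of generating an image with the image model configured above; the LogoResource class and the prompt are illustrative, and the ImageModel API is assumed from LangChain4j:
import dev.langchain4j.data.image.Image;
import dev.langchain4j.model.image.ImageModel;
import dev.langchain4j.model.output.Response;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

@Path("/logo")
public class LogoResource {

    @Inject
    ImageModel imageModel; // backed by quarkus.langchain4j.openai.image-model.*

    @GET
    public String generate() {
        Response<Image> response = imageModel.generate("A minimalist logo for a coffee shop");
        // With response-format=url the generated image is exposed as a URL
        return response.content().url().toString();
    }
}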
All configuration properties support named instances for multiple OpenAI configurations. Replace the default prefix quarkus.langchain4j.openai with quarkus.langchain4j.openai.<name>.
Example with named configuration "premium":
# Named configuration "premium" - global settings
quarkus.langchain4j.openai.premium.api-key=sk-premium-key
quarkus.langchain4j.openai.premium.timeout=30s
# Named configuration "premium" - chat model settings
quarkus.langchain4j.openai.premium.chat-model.model-name=gpt-4o
quarkus.langchain4j.openai.premium.chat-model.temperature=0.7
# Named configuration "fast" - different settings
quarkus.langchain4j.openai.fast.api-key=sk-fast-key
quarkus.langchain4j.openai.fast.chat-model.model-name=gpt-3.5-turbo
quarkus.langchain4j.openai.fast.chat-model.max-completion-tokens=500
Inject named models using the @ModelName qualifier:
@Inject
@ModelName("premium")
ChatLanguageModel premiumModel;
@Inject
@ModelName("fast")
ChatLanguageModel fastModel;
Basic chat model configuration:
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini
quarkus.langchain4j.openai.chat-model.temperature=0.7
# Enable only chat and embedding models
quarkus.langchain4j.openai.chat-model.enabled=true
quarkus.langchain4j.openai.embedding-model.enabled=true
quarkus.langchain4j.openai.moderation-model.enabled=false
quarkus.langchain4j.openai.image-model.enabled=false
# Configure chat model
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o-mini
# Configure embedding model
quarkus.langchain4j.openai.embedding-model.model-name=text-embedding-3-small
JSON mode and structured output:
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.chat-model.model-name=gpt-4o
quarkus.langchain4j.openai.chat-model.response-format=json_object
quarkus.langchain4j.openai.chat-model.strict-json-schema=true
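When JSON mode or a strict JSON schema is configured, an AI service can map the response directly to a Java type. A sketch under that assumption; the WeatherForecast record and ForecastService interface are illustrative, not part of this module:
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Illustrative target type for the structured response
record WeatherForecast(String city, double temperatureCelsius, String summary) {}

@RegisterAiService
interface ForecastService {

    // The declared return type drives the JSON structure requested from the model
    @UserMessage("Return the weather forecast for {{it}} as JSON.")
    WeatherForecast forecast(String city);
}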
Routing requests through an HTTP proxy:
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.proxy-type=HTTP
quarkus.langchain4j.openai.proxy-host=corporate-proxy.example.com
quarkus.langchain4j.openai.proxy-port=8080
Request and response logging for debugging:
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.log-requests=true
quarkus.langchain4j.openai.log-responses=true
quarkus.langchain4j.openai.log-requests-curl=true
Image generation with DALL-E 3:
quarkus.langchain4j.openai.api-key=sk-your-api-key
quarkus.langchain4j.openai.image-model.model-name=dall-e-3
quarkus.langchain4j.openai.image-model.size=1024x1792
quarkus.langchain4j.openai.image-model.quality=hd
quarkus.langchain4j.openai.image-model.style=vivid
quarkus.langchain4j.openai.image-model.persist=true
quarkus.langchain4j.openai.image-model.persist-directory=/var/images
Pointing the extension at an OpenAI-compatible endpoint:
quarkus.langchain4j.openai.base-url=https://custom-api.example.com/v1/
quarkus.langchain4j.openai.api-key=custom-api-key
quarkus.langchain4j.openai.chat-model.model-name=custom-model
All configuration properties can be set via environment variables using uppercase names with underscores. For example:
QUARKUS_LANGCHAIN4J_OPENAI_API_KEY=sk-your-api-key
QUARKUS_LANGCHAIN4J_OPENAI_CHAT_MODEL_MODEL_NAME=gpt-4o-mini
QUARKUS_LANGCHAIN4J_OPENAI_CHAT_MODEL_TEMPERATURE=0.7
This is particularly useful for containerized deployments and cloud environments where secrets should not be hardcoded in configuration files.
The deployment module validates configuration at build/startup time:
When enable-integration=true, the API key must be provided or startup will fail.
Configuration errors result in clear error messages during application startup, preventing deployment of misconfigured applications.
Install with Tessl CLI
npx tessl i tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-openai-deployment@1.7.0