tessl/maven-org-springframework-ai--spring-ai-starter-model-openai

Spring Boot Starter for OpenAI integration providing auto-configuration for chat completion, embeddings, image generation, audio speech synthesis, audio transcription, and content moderation models. Includes high-level ChatClient API and conversation memory support.

Configuration

Complete reference for all configuration properties available in the Spring AI OpenAI Starter.

Base OpenAI Connection Properties

Prefix: spring.ai.openai

These properties apply to all OpenAI models unless overridden by model-specific properties.

# OpenAI API key (required)
spring.ai.openai.api-key=sk-...

# API base URL (default: https://api.openai.com)
spring.ai.openai.base-url=https://api.openai.com

# OpenAI project ID (optional)
spring.ai.openai.project-id=proj_...

# OpenAI organization ID (optional)
spring.ai.openai.organization-id=org-...

Chat Model Configuration

Prefix: spring.ai.openai.chat

Connection Properties

# Override base API key for chat
spring.ai.openai.chat.api-key=sk-...

# Override base URL for chat
spring.ai.openai.chat.base-url=https://api.openai.com

# Override project ID for chat
spring.ai.openai.chat.project-id=proj_...

# Override organization ID for chat
spring.ai.openai.chat.organization-id=org-...

# Chat completions API endpoint path
# Default: /v1/chat/completions
spring.ai.openai.chat.completions-path=/v1/chat/completions

Model Options

# Model name
# Default: gpt-4o-mini
# Reasoning: o4-mini, o3, o3-mini, o1, o1-mini, o1-pro
# Flagship: gpt-4.1, gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-4
# Search: gpt-4o-search-preview, gpt-4o-mini-search-preview
# Legacy: gpt-3.5-turbo, gpt-3.5-turbo-instruct
spring.ai.openai.chat.options.model=gpt-4o-mini

# Sampling temperature (0.0-2.0)
# Default: 0.7
spring.ai.openai.chat.options.temperature=0.7

# Maximum tokens in response
spring.ai.openai.chat.options.max-tokens=1000

# Maximum completion tokens (for reasoning models like o1)
spring.ai.openai.chat.options.max-completion-tokens=5000

# Nucleus sampling (0.0-1.0)
spring.ai.openai.chat.options.top-p=1.0

# Frequency penalty (-2.0 to 2.0)
spring.ai.openai.chat.options.frequency-penalty=0.0

# Presence penalty (-2.0 to 2.0)
spring.ai.openai.chat.options.presence-penalty=0.0

# Stop sequences (comma-separated)
spring.ai.openai.chat.options.stop=\\n,END

# Number of completions to generate
spring.ai.openai.chat.options.n=1

# Include log probabilities
spring.ai.openai.chat.options.logprobs=false

# Number of top log probabilities
spring.ai.openai.chat.options.top-logprobs=0

# Seed for deterministic sampling
spring.ai.openai.chat.options.seed=12345

# User identifier for abuse monitoring
spring.ai.openai.chat.options.user=user-123

# Reasoning effort for reasoning models (o1, o3 series)
# Options: low, medium, high
spring.ai.openai.chat.options.reasoning-effort=medium

# Output verbosity (for models that support it)
# Options: low, medium, high
spring.ai.openai.chat.options.verbosity=medium

# Enable parallel tool calls
spring.ai.openai.chat.options.parallel-tool-calls=true

# Service tier (default, auto)
spring.ai.openai.chat.options.service-tier=auto

# Prompt cache key for optimization
spring.ai.openai.chat.options.prompt-cache-key=my-cache-key

# Safety identifier for tracking
spring.ai.openai.chat.options.safety-identifier=my-safety-id

# Store conversations for distillation/evals
spring.ai.openai.chat.options.store=false

# Logit bias (map of token ID to bias value, -100 to 100)
spring.ai.openai.chat.options.logit-bias.123=-100
spring.ai.openai.chat.options.logit-bias.456=100

# Output modalities (text, audio)
spring.ai.openai.chat.options.output-modalities=text,audio

# Audio output parameters
spring.ai.openai.chat.options.audio-parameters.voice=alloy
spring.ai.openai.chat.options.audio-parameters.format=mp3

# Response format (text, json_object, json_schema)
spring.ai.openai.chat.options.response-format.type=json_object

# Stream options (include usage in streaming)
spring.ai.openai.chat.options.stream-options.include-usage=true

# Tool choice (none, auto, required, or specific function)
spring.ai.openai.chat.options.tool-choice=auto

# Web search options
spring.ai.openai.chat.options.web-search-options.context-size=medium

# Metadata
spring.ai.openai.chat.options.metadata.user-id=user123
spring.ai.openai.chat.options.metadata.session-id=session456

# Extra body parameters (for custom OpenAI-compatible servers)
spring.ai.openai.chat.options.extra-body.custom-param=value

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.8
spring.ai.openai.chat.options.max-tokens=2000
spring.ai.openai.chat.options.frequency-penalty=0.5
spring.ai.openai.chat.options.presence-penalty=0.5
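
With this configuration in place, the auto-configured chat model can be injected and called directly. The sketch below is illustrative: the service class name is made up, and the second method just demonstrates overriding the configured defaults per request with OpenAiChatOptions.

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.stereotype.Service;

@Service
public class ChatExampleService {

    private final ChatModel chatModel;

    // The starter auto-configures an OpenAiChatModel bean from the properties above
    public ChatExampleService(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    public String simpleCall(String question) {
        // Uses the defaults from spring.ai.openai.chat.options.*
        return chatModel.call(question);
    }

    public String callWithOverrides(String question) {
        // Per-request options take precedence over the configured defaults
        ChatResponse response = chatModel.call(new Prompt(question,
                OpenAiChatOptions.builder()
                        .model("gpt-4o-mini")
                        .temperature(0.2)
                        .build()));
        return response.getResult().getOutput().getText();
    }
}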

Embedding Model Configuration

Prefix: spring.ai.openai.embedding

Connection Properties

# Override base API key for embeddings
spring.ai.openai.embedding.api-key=sk-...

# Override base URL for embeddings
spring.ai.openai.embedding.base-url=https://api.openai.com

# Override project ID for embeddings
spring.ai.openai.embedding.project-id=proj_...

# Override organization ID for embeddings
spring.ai.openai.embedding.organization-id=org-...

# Embeddings API endpoint path
# Default: /v1/embeddings
spring.ai.openai.embedding.embeddings-path=/v1/embeddings

Model Options

# Model name
# Default: text-embedding-ada-002
# Options: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large
spring.ai.openai.embedding.options.model=text-embedding-ada-002

# Encoding format (float or base64)
spring.ai.openai.embedding.options.encoding-format=float

# Embedding dimensions (for -3-small and -3-large models)
spring.ai.openai.embedding.options.dimensions=1536

# User identifier
spring.ai.openai.embedding.options.user=user-123

Document Processing

# How to handle document metadata when embedding
# Options: EMBED (default), NONE, ALL
spring.ai.openai.embedding.metadata-mode=EMBED

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.embedding.options.model=text-embedding-3-large
spring.ai.openai.embedding.options.dimensions=1024
spring.ai.openai.embedding.metadata-mode=EMBED
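
The starter then exposes an EmbeddingModel bean that uses these settings. A minimal usage sketch (the component name is illustrative):

import org.springframework.ai.embedding.EmbeddingModel;
import org.springframework.stereotype.Component;

@Component
public class EmbeddingExample {

    private final EmbeddingModel embeddingModel;

    public EmbeddingExample(EmbeddingModel embeddingModel) {
        this.embeddingModel = embeddingModel;
    }

    public float[] embed(String text) {
        // Returns a vector with the configured number of dimensions (1024 here)
        return embeddingModel.embed(text);
    }
}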

Image Model Configuration

Prefix: spring.ai.openai.image

Connection Properties

# Override base API key for images
spring.ai.openai.image.api-key=sk-...

# Override base URL for images
spring.ai.openai.image.base-url=https://api.openai.com

# Override project ID for images
spring.ai.openai.image.project-id=proj_...

# Override organization ID for images
spring.ai.openai.image.organization-id=org-...

# Images API endpoint path
# Default: v1/images/generations
spring.ai.openai.image.images-path=v1/images/generations

Model Options

# Model name
# Default: dall-e-3
# Options: dall-e-2, dall-e-3
spring.ai.openai.image.options.model=dall-e-3

# Number of images to generate
# Default: 1
# Note: dall-e-3 only supports n=1
spring.ai.openai.image.options.n=1

# Image quality
# Default: standard
# Options: standard, hd (dall-e-3 only)
spring.ai.openai.image.options.quality=standard

# Response format
# Default: url
# Options: url, b64_json
spring.ai.openai.image.options.response-format=url

# Image size
# dall-e-3: 1024x1024, 1024x1792, 1792x1024
# dall-e-2: 256x256, 512x512, 1024x1024
spring.ai.openai.image.options.size=1024x1024

# Image style (dall-e-3 only)
# Default: vivid
# Options: vivid, natural
spring.ai.openai.image.options.style=vivid

# User identifier
spring.ai.openai.image.options.user=user-123

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.image.options.model=dall-e-3
spring.ai.openai.image.options.quality=hd
spring.ai.openai.image.options.size=1792x1024
spring.ai.openai.image.options.style=natural
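
A minimal sketch of calling the auto-configured ImageModel with this configuration (the class name is illustrative):

import org.springframework.ai.image.ImageModel;
import org.springframework.ai.image.ImagePrompt;
import org.springframework.ai.image.ImageResponse;
import org.springframework.stereotype.Component;

@Component
public class ImageExample {

    private final ImageModel imageModel;

    public ImageExample(ImageModel imageModel) {
        this.imageModel = imageModel;
    }

    public String generate(String description) {
        // With response-format=url the generated image is returned as a URL
        ImageResponse response = imageModel.call(new ImagePrompt(description));
        return response.getResult().getOutput().getUrl();
    }
}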

Audio Speech Configuration

Prefix: spring.ai.openai.audio.speech

Connection Properties

# Override base API key for speech
spring.ai.openai.audio.speech.api-key=sk-...

# Override base URL for speech
spring.ai.openai.audio.speech.base-url=https://api.openai.com

# Override project ID for speech
spring.ai.openai.audio.speech.project-id=proj_...

# Override organization ID for speech
spring.ai.openai.audio.speech.organization-id=org-...

Model Options

# Model name
# Default: gpt-4o-mini-tts
# Options: gpt-4o-mini-tts, tts-1, tts-1-hd
spring.ai.openai.audio.speech.options.model=gpt-4o-mini-tts

# Voice type
# Default: alloy
# Options: alloy, echo, fable, onyx, nova, shimmer
spring.ai.openai.audio.speech.options.voice=alloy

# Audio response format
# Default: mp3
# Options: mp3, opus, aac, flac
spring.ai.openai.audio.speech.options.response-format=mp3

# Speech speed (0.25-4.0)
# Default: 1.0
spring.ai.openai.audio.speech.options.speed=1.0

# Input text to convert to speech
spring.ai.openai.audio.speech.options.input=Your text here

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.audio.speech.options.model=tts-1-hd
spring.ai.openai.audio.speech.options.voice=nova
spring.ai.openai.audio.speech.options.response-format=flac
spring.ai.openai.audio.speech.options.speed=1.2
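
The speech model is exposed as an OpenAiAudioSpeechModel bean. The sketch below follows Spring AI's OpenAI text-to-speech API (SpeechPrompt/SpeechResponse); the component name is illustrative.

import org.springframework.ai.openai.OpenAiAudioSpeechModel;
import org.springframework.ai.openai.audio.speech.SpeechPrompt;
import org.springframework.ai.openai.audio.speech.SpeechResponse;
import org.springframework.stereotype.Component;

@Component
public class SpeechExample {

    private final OpenAiAudioSpeechModel speechModel;

    public SpeechExample(OpenAiAudioSpeechModel speechModel) {
        this.speechModel = speechModel;
    }

    public byte[] synthesize(String text) {
        // Returns the generated audio bytes in the configured response format (flac here)
        SpeechResponse response = speechModel.call(new SpeechPrompt(text));
        return response.getResult().getOutput();
    }
}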

Audio Transcription Configuration

Prefix: spring.ai.openai.audio.transcription

Connection Properties

# Override base API key for transcription
spring.ai.openai.audio.transcription.api-key=sk-...

# Override base URL for transcription
spring.ai.openai.audio.transcription.base-url=https://api.openai.com

# Override project ID for transcription
spring.ai.openai.audio.transcription.project-id=proj_...

# Override organization ID for transcription
spring.ai.openai.audio.transcription.organization-id=org-...

Model Options

# Model name
# Default: whisper-1
spring.ai.openai.audio.transcription.options.model=whisper-1

# Input language (ISO-639-1 code)
# Example: en, es, fr, de, etc.
spring.ai.openai.audio.transcription.options.language=en

# Guidance prompt for transcription style
spring.ai.openai.audio.transcription.options.prompt=This is a technical discussion.

# Response format
# Default: text
# Options: text, json, verbose_json, srt, vtt
spring.ai.openai.audio.transcription.options.response-format=text

# Temperature (0.0-1.0)
# Default: 0.7
spring.ai.openai.audio.transcription.options.temperature=0.7

# Timestamp granularities (segment, word)
# Options: segment, word
spring.ai.openai.audio.transcription.options.timestamp-granularities=segment

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.audio.transcription.options.model=whisper-1
spring.ai.openai.audio.transcription.options.language=en
spring.ai.openai.audio.transcription.options.response-format=verbose_json
spring.ai.openai.audio.transcription.options.temperature=0.0
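
Transcription is performed through the auto-configured OpenAiAudioTranscriptionModel. The sketch below follows Spring AI's transcription API (AudioTranscriptionPrompt takes a Spring Resource pointing at the audio file); the component name is illustrative.

import org.springframework.ai.audio.transcription.AudioTranscriptionPrompt;
import org.springframework.ai.audio.transcription.AudioTranscriptionResponse;
import org.springframework.ai.openai.OpenAiAudioTranscriptionModel;
import org.springframework.core.io.Resource;
import org.springframework.stereotype.Component;

@Component
public class TranscriptionExample {

    private final OpenAiAudioTranscriptionModel transcriptionModel;

    public TranscriptionExample(OpenAiAudioTranscriptionModel transcriptionModel) {
        this.transcriptionModel = transcriptionModel;
    }

    public String transcribe(Resource audioFile) {
        // Transcribes the audio resource using the whisper-1 settings configured above
        AudioTranscriptionResponse response =
                transcriptionModel.call(new AudioTranscriptionPrompt(audioFile));
        return response.getResult().getOutput();
    }
}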

Moderation Configuration

Prefix: spring.ai.openai.moderation

Connection Properties

# Override base API key for moderation
spring.ai.openai.moderation.api-key=sk-...

# Override base URL for moderation
spring.ai.openai.moderation.base-url=https://api.openai.com

# Override project ID for moderation
spring.ai.openai.moderation.project-id=proj_...

# Override organization ID for moderation
spring.ai.openai.moderation.organization-id=org-...

Model Options

# Model name
# Default: omni-moderation-latest
# Options: omni-moderation-latest, text-moderation-latest, text-moderation-stable
spring.ai.openai.moderation.options.model=omni-moderation-latest

Example Configuration

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.moderation.options.model=text-moderation-stable
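
The moderation model is exposed as an OpenAiModerationModel bean. The sketch below follows Spring AI's moderation API; the component name and the flag-checking accessor chain are assumptions and may differ slightly by version.

import org.springframework.ai.moderation.Moderation;
import org.springframework.ai.moderation.ModerationPrompt;
import org.springframework.ai.moderation.ModerationResponse;
import org.springframework.ai.openai.OpenAiModerationModel;
import org.springframework.stereotype.Component;

@Component
public class ModerationExample {

    private final OpenAiModerationModel moderationModel;

    public ModerationExample(OpenAiModerationModel moderationModel) {
        this.moderationModel = moderationModel;
    }

    public boolean isFlagged(String text) {
        // Runs the configured moderation model and checks the first result
        ModerationResponse response = moderationModel.call(new ModerationPrompt(text));
        Moderation moderation = response.getResult().getOutput();
        return moderation.getResults().get(0).isFlagged();
    }
}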

ChatClient Configuration

Prefix: spring.ai.chat.client

# Enable ChatClient.Builder bean
# Default: true
spring.ai.chat.client.enabled=true

# Log prompt content in observations
# Default: false
spring.ai.chat.client.observations.log-prompt=false

# Log completion content in observations
# Default: false
spring.ai.chat.client.observations.log-completion=false

Example Configuration

spring.ai.chat.client.enabled=true
spring.ai.chat.client.observations.log-prompt=true
spring.ai.chat.client.observations.log-completion=true
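
When the builder is enabled, a ChatClient can be built from the injected ChatClient.Builder. A minimal sketch (the service class name and system prompt are illustrative):

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class AssistantService {

    private final ChatClient chatClient;

    // ChatClient.Builder is auto-configured when spring.ai.chat.client.enabled=true
    public AssistantService(ChatClient.Builder builder) {
        this.chatClient = builder
                .defaultSystem("You are a concise assistant.")
                .build();
    }

    public String ask(String question) {
        return chatClient.prompt()
                .user(question)
                .call()
                .content();
    }
}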

Model Activation Control

Control which models are auto-configured:

# Disable specific models (default: openai for each)
spring.ai.model.chat=none
spring.ai.model.embedding=none
spring.ai.model.image=none
spring.ai.model.audio.speech=none
spring.ai.model.audio.transcription=none
spring.ai.model.moderation=none

Complete Example Configuration

application.properties

# Base OpenAI Configuration
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.base-url=https://api.openai.com
spring.ai.openai.organization-id=${OPENAI_ORG_ID}

# Chat Model
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.7
spring.ai.openai.chat.options.max-tokens=2000
spring.ai.openai.chat.options.frequency-penalty=0.3
spring.ai.openai.chat.options.presence-penalty=0.3

# Embedding Model
spring.ai.openai.embedding.options.model=text-embedding-3-large
spring.ai.openai.embedding.options.dimensions=1024
spring.ai.openai.embedding.metadata-mode=EMBED

# Image Model
spring.ai.openai.image.options.model=dall-e-3
spring.ai.openai.image.options.quality=hd
spring.ai.openai.image.options.size=1024x1024
spring.ai.openai.image.options.style=natural

# Audio Speech
spring.ai.openai.audio.speech.options.model=tts-1-hd
spring.ai.openai.audio.speech.options.voice=nova
spring.ai.openai.audio.speech.options.response-format=mp3
spring.ai.openai.audio.speech.options.speed=1.0

# Audio Transcription
spring.ai.openai.audio.transcription.options.model=whisper-1
spring.ai.openai.audio.transcription.options.language=en
spring.ai.openai.audio.transcription.options.response-format=text
spring.ai.openai.audio.transcription.options.temperature=0.0

# Moderation
spring.ai.openai.moderation.options.model=omni-moderation-latest

# ChatClient
spring.ai.chat.client.enabled=true
spring.ai.chat.client.observations.log-prompt=false
spring.ai.chat.client.observations.log-completion=false

application.yml

spring:
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}
      base-url: https://api.openai.com
      organization-id: ${OPENAI_ORG_ID}

      chat:
        options:
          model: gpt-4o
          temperature: 0.7
          max-tokens: 2000
          frequency-penalty: 0.3
          presence-penalty: 0.3

      embedding:
        metadata-mode: EMBED
        options:
          model: text-embedding-3-large
          dimensions: 1024

      image:
        options:
          model: dall-e-3
          quality: hd
          size: 1024x1024
          style: natural

      audio:
        speech:
          options:
            model: tts-1-hd
            voice: nova
            response-format: mp3
            speed: 1.0

        transcription:
          options:
            model: whisper-1
            language: en
            response-format: text
            temperature: 0.0

      moderation:
        options:
          model: omni-moderation-latest

    chat:
      client:
        enabled: true
        observations:
          log-prompt: false
          log-completion: false

Environment Variables

Instead of hardcoding API keys in properties files, use environment variables:

spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.organization-id=${OPENAI_ORG_ID:}
spring.ai.openai.project-id=${OPENAI_PROJECT_ID:}

Set environment variables:

export OPENAI_API_KEY="sk-..."
export OPENAI_ORG_ID="org-..."
export OPENAI_PROJECT_ID="proj_..."

Profile-specific Configuration

Use Spring profiles for different environments:

application-dev.properties

spring.ai.openai.api-key=${OPENAI_DEV_API_KEY}
spring.ai.openai.chat.options.model=gpt-3.5-turbo
spring.ai.openai.chat.options.temperature=0.9

application-prod.properties

spring.ai.openai.api-key=${OPENAI_PROD_API_KEY}
spring.ai.openai.chat.options.model=gpt-4o
spring.ai.openai.chat.options.temperature=0.7
spring.ai.chat.client.observations.log-prompt=true
spring.ai.chat.client.observations.log-completion=true

Activate profile:

java -jar app.jar --spring.profiles.active=prod

Programmatic Configuration

Define model beans in code to override or supplement the property-based auto-configuration:

import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.ai.openai.api.OpenAiApi;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenAiConfig {

    @Bean
    public OpenAiChatModel customChatModel() {
        // Low-level OpenAI API client (the builder is the supported construction style in Spring AI 1.x)
        OpenAiApi api = OpenAiApi.builder()
            .baseUrl("https://api.openai.com")
            .apiKey(System.getenv("OPENAI_API_KEY"))
            .build();

        // Default options applied to every call made through this model
        OpenAiChatOptions options = OpenAiChatOptions.builder()
            .model("gpt-4o")
            .temperature(0.8)
            .maxTokens(1500)
            .build();

        return OpenAiChatModel.builder()
            .openAiApi(api)
            .defaultOptions(options)
            .build();
    }
}

Best Practices

  1. Use environment variables: Never commit API keys to source control
  2. Profile-specific configs: Different settings for dev/staging/prod
  3. Reasonable defaults: Set sensible defaults for temperature, tokens, etc.
  4. Monitor costs: Cap max-tokens and prefer smaller models where quality allows
  5. Enable observations in prod: Helps with debugging and monitoring
  6. Override per-model: Use model-specific properties when needed
  7. Test configurations: Validate settings in lower environments first