Spring AI OpenAI Starter

Spring Boot Starter for OpenAI integration providing auto-configuration for chat completion, embeddings, image generation, audio speech synthesis, audio transcription, and content moderation models. Includes high-level ChatClient API and conversation memory support.

Quick Links

Resource | Description
Quick Start Guide | Get started in 5 minutes
Real-World Scenarios | Production-ready examples
Configuration Reference | All configuration options
API Reference | Detailed API documentation

Package Information

  • Package: org.springframework.ai:spring-ai-starter-model-openai
  • Version: 1.1.2
  • Type: Maven / Spring Boot Starter
  • Java: 21+
  • Spring Boot: 3.x+

Installation

Maven:

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>1.1.2</version>
</dependency>

Gradle:

implementation 'org.springframework.ai:spring-ai-starter-model-openai:1.1.2'

Core Features

Auto-Configured Models

Model | Description | Bean Class
Chat | Conversational AI with streaming | OpenAiChatModel
Embeddings | Text vectorization | OpenAiEmbeddingModel
Images | DALL-E image generation | OpenAiImageModel
Speech | Text-to-speech synthesis | OpenAiAudioSpeechModel
Transcription | Speech-to-text (Whisper) | OpenAiAudioTranscriptionModel
Moderation | Content safety | OpenAiModerationModel

High-Level APIs

API | Description | Documentation
ChatClient | Fluent chat interface | Details
ChatMemory | Conversation history | Details

Quick Start

1. Configure

spring.ai.openai.api-key=sk-...
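
Rather than hard-coding the key, you can resolve it from the environment; the placeholder below assumes an OPENAI_API_KEY environment variable is set:

spring.ai.openai.api-key=${OPENAI_API_KEY}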

2. Inject & Use

@Service
public class ChatService {
    private final ChatClient chatClient;

    public ChatService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String chat(String message) {
        return chatClient.prompt()
            .user(message)
            .call()
            .content();
    }
}

Full Quick Start Guide →

Core Concepts

Auto-Configuration

The starter automatically configures OpenAI model beans when it is present on the classpath:

  • Conditional: Only creates beans for models you use
  • Customizable: Override with custom @Bean definitions (see the sketch below)
  • Configurable: Properties in spring.ai.openai.*

Disable specific models:

spring.ai.model.chat=none
spring.ai.model.embedding=none
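
If you need more control than properties allow, declaring your own bean causes the auto-configured one to back off. The sketch below assumes the auto-configured OpenAiApi client is available for injection and uses the 1.x builder method names; check both against your version:

import org.springframework.ai.openai.OpenAiChatModel;
import org.springframework.ai.openai.OpenAiChatOptions;
import org.springframework.ai.openai.api.OpenAiApi;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class OpenAiModelConfig {

    // A user-declared OpenAiChatModel replaces the auto-configured bean
    @Bean
    OpenAiChatModel customChatModel(OpenAiApi openAiApi) {
        return OpenAiChatModel.builder()
                .openAiApi(openAiApi)
                .defaultOptions(OpenAiChatOptions.builder()
                        .model("gpt-4o-mini")
                        .temperature(0.2)
                        .build())
                .build();
    }
}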

API Structure

All models follow consistent patterns:

Synchronous:

ChatResponse response = chatModel.call(prompt);

Streaming:

Flux<ChatResponse> stream = chatModel.stream(prompt);

With Options:

Prompt prompt = new Prompt(message, options);
ChatResponse response = chatModel.call(prompt);
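
Putting the synchronous and streaming patterns together in one component (a minimal sketch; chatModel is the auto-configured OpenAiChatModel injected as the ChatModel interface):

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import reactor.core.publisher.Flux;

public class ChatPatterns {

    private final ChatModel chatModel;

    public ChatPatterns(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    // Synchronous: blocks until the full completion is available
    public ChatResponse ask(String question) {
        return chatModel.call(new Prompt(question));
    }

    // Streaming: partial responses arrive as a reactive Flux
    public Flux<ChatResponse> askStreaming(String question) {
        return chatModel.stream(new Prompt(question));
    }
}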

Message Types

package org.springframework.ai.chat.messages;

public interface Message {
    String getText();
    MessageType getMessageType();
    Map<String, Object> getMetadata();
}

public class UserMessage implements Message { }
public class SystemMessage implements Message { }
public class AssistantMessage implements Message { }
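
For example, a system instruction and a user question can be combined into a single Prompt (a fragment in the style of the snippets above):

// The system message sets behaviour; the user message carries the question
SystemMessage system = new SystemMessage("You are a concise technical assistant.");
UserMessage user = new UserMessage("What is a vector embedding?");

Prompt prompt = new Prompt(List.of(system, user));
ChatResponse response = chatModel.call(prompt);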

Response Structure

package org.springframework.ai.chat.model;

public class ChatResponse {
    public Generation getResult();
    public List<Generation> getResults();
    public ChatResponseMetadata getMetadata();
}

public class Generation {
    public AssistantMessage getOutput();
    public ChatGenerationMetadata getMetadata();
}
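
Extracting the generated text and token usage from a response (a sketch; getText() and the Usage accessors are as understood for Spring AI 1.x):

ChatResponse response = chatModel.call(new Prompt("Hello"));

// Text of the first (usually only) generation
String text = response.getResult().getOutput().getText();

// Token accounting reported by the OpenAI API
Integer totalTokens = response.getMetadata().getUsage().getTotalTokens();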

Configuration Quick Reference

Chat Model

spring.ai.openai.chat.options.model=gpt-4o-mini
spring.ai.openai.chat.options.temperature=0.7
spring.ai.openai.chat.options.max-tokens=1000

Available Models: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o3, gpt-3.5-turbo
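
The same settings can be overridden per request; a sketch assuming the 1.x OpenAiChatOptions builder method names:

Prompt prompt = new Prompt("Draft a short release note.",
        OpenAiChatOptions.builder()
                .model("gpt-4o")      // overrides spring.ai.openai.chat.options.model
                .temperature(0.3)     // overrides ...options.temperature
                .maxTokens(500)       // overrides ...options.max-tokens
                .build());
ChatResponse response = chatModel.call(prompt);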

Full Configuration →

Embedding Model

spring.ai.openai.embedding.options.model=text-embedding-ada-002
spring.ai.openai.embedding.options.dimensions=1536

Models: text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large
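
A minimal embedding call (sketch; in Spring AI 1.x, EmbeddingModel.embed(String) returns a float[]):

// embeddingModel is the auto-configured OpenAiEmbeddingModel bean
float[] vector = embeddingModel.embed("Spring AI makes calling OpenAI models straightforward.");
int dimensions = vector.length;   // should match the configured dimensions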

Image Model

spring.ai.openai.image.options.model=dall-e-3
spring.ai.openai.image.options.quality=hd
spring.ai.openai.image.options.size=1024x1024

Models: dall-e-2, dall-e-3
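
A minimal generation call (sketch; ImagePrompt and ImageResponse are the generic image types from org.springframework.ai.image):

ImageResponse response = imageModel.call(
        new ImagePrompt("A watercolor painting of a lighthouse at dusk"));

// DALL-E returns a hosted URL (or base64 data, depending on the response format)
String url = response.getResult().getOutput().getUrl();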

Common Use Cases

Use Case | Models | Guide
Chatbot | Chat + Memory | Example
Document Q&A | Chat + Embeddings | Example
Content Moderation | Moderation | Example
Image Generation | Images | Example
Translation | Chat | Example
Transcription | Transcription + Chat | Example
Code Review | Chat | Example
Streaming UI | Chat (streaming) | Example
Tool Calling | Chat + Functions | Example
Data Extraction | Chat | Example

Error Handling

All models throw OpenAiApiException:

import org.springframework.ai.openai.api.OpenAiApiException;

try {
    ChatResponse response = chatModel.call(prompt);
} catch (OpenAiApiException e) {
    // Handle: 401 (auth), 429 (rate limit), 400 (invalid), 500+ (server)
}

Retry: Automatic exponential backoff for transient errors (429, 500+)
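
The retry behaviour can be tuned through the shared spring.ai.retry.* properties (values below are illustrative, not defaults; see the configuration reference for your version):

spring.ai.retry.max-attempts=5
spring.ai.retry.backoff.initial-interval=2s
spring.ai.retry.backoff.multiplier=2
spring.ai.retry.backoff.max-interval=30s
spring.ai.retry.on-client-errors=false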

Observability

Enable metrics and tracing:

spring.ai.chat.client.observations.log-prompt=true
spring.ai.chat.client.observations.log-completion=true

Integrates with Micrometer for:

  • Request/response timing
  • Token usage tracking
  • Error tracking
  • Model information
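
Metrics are published through Micrometer, so the standard Spring Boot setup applies; a sketch assuming you expose them through Spring Boot Actuator:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

With the Actuator on the classpath, expose the metrics endpoint to browse the recorded timers and counters:

management.endpoints.web.exposure.include=health,metrics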

API Reference

Chat Models

Document | Description
Chat Model | OpenAiChatModel API, options, streaming
ChatClient | High-level fluent API, advisors
Chat Memory | Conversation history, repositories

Other Models

Document | Description
Embedding Model | Vector embeddings, similarity search
Image Model | DALL-E image generation
Audio Speech | Text-to-speech synthesis
Audio Transcription | Speech-to-text (Whisper)
Moderation | Content safety and filtering

Configuration

Document | Description
Configuration | Complete property reference

Architecture

Bean Lifecycle

  1. Auto-Configuration detects spring-ai-openai on classpath
  2. Conditional Beans created based on properties
  3. API Clients configured with base URL and API key
  4. Model Beans wrapped with retry logic and observability
  5. ChatClient.Builder (prototype scope) available for injection

Key Components

// Low-level API clients
OpenAiApi          // Chat & embeddings
OpenAiImageApi     // Image generation
OpenAiAudioApi     // Speech & transcription
OpenAiModerationApi // Content moderation

// Model implementations
OpenAiChatModel              // Singleton, thread-safe
OpenAiEmbeddingModel         // Singleton, thread-safe
OpenAiImageModel             // Singleton, thread-safe
OpenAiAudioSpeechModel       // Singleton, thread-safe
OpenAiAudioTranscriptionModel // Singleton, thread-safe
OpenAiModerationModel        // Singleton, thread-safe

// High-level APIs
ChatClient.Builder  // Prototype scope
ChatMemory          // Singleton (with proper repository)
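
A sketch of wiring ChatMemory into a ChatClient through the memory advisor; the MessageChatMemoryAdvisor builder and ChatMemory.CONVERSATION_ID names are as understood for Spring AI 1.x and should be verified against your version:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.chat.client.advisor.MessageChatMemoryAdvisor;
import org.springframework.ai.chat.memory.ChatMemory;
import org.springframework.stereotype.Service;

@Service
public class AssistantService {

    private final ChatClient chatClient;

    public AssistantService(ChatClient.Builder builder, ChatMemory chatMemory) {
        // The advisor replays prior conversation turns into every prompt
        this.chatClient = builder
                .defaultAdvisors(MessageChatMemoryAdvisor.builder(chatMemory).build())
                .build();
    }

    public String chat(String conversationId, String message) {
        return chatClient.prompt()
                .user(message)
                .advisors(a -> a.param(ChatMemory.CONVERSATION_ID, conversationId))
                .call()
                .content();
    }
}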

Thread Safety

All model beans are thread-safe and can be used concurrently.

Best Practices

  1. Environment Variables: Store API keys securely
  2. Temperature: Lower (0.0-0.3) for consistency, higher (0.7-1.0) for creativity
  3. Token Limits: Set maxTokens to control costs
  4. Streaming: Use streaming for long responses and a more responsive UX (see the sketch after this list)
  5. Caching: Cache embeddings and repeated queries
  6. Moderation: Check user input before processing
  7. Error Handling: Catch OpenAiApiException and retry transient errors
  8. Monitoring: Enable observability for production
  9. Rate Limits: Monitor usage and implement backoff strategies
  10. Model Selection: Use gpt-4o-mini for simple tasks, gpt-4o for complex ones
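
For item 4, streaming through ChatClient yields the content as a reactive Flux of chunks (a minimal sketch; chatClient is built as in the Quick Start):

Flux<String> chunks = chatClient.prompt()
        .user("Explain the difference between gpt-4o and gpt-4o-mini.")
        .stream()
        .content();

// e.g. forward chunks to an SSE endpoint, or print them as they arrive
chunks.subscribe(System.out::print);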

Next Steps

  • Getting Started
  • Configuration
  • API Documentation
  • Resources
  • Support
