
tessl/maven-org-springframework-ai--spring-ai-spring-boot-autoconfigure

Spring AI Spring Boot Auto Configuration modules providing automatic setup for AI models, vector stores, MCP, and retry capabilities


docs/examples/multi-provider.md

Multi-Provider Setup

Examples of using multiple AI providers for fallback, load balancing, and specialized tasks.

Fallback Between Providers

@Service
public class FallbackChatService {
    private static final Logger log =
            LoggerFactory.getLogger(FallbackChatService.class);

    private final ChatModel primaryModel;
    private final ChatModel fallbackModel;
    private final RetryTemplate retryTemplate;
    
    public FallbackChatService(
            @Qualifier("openAiChatModel") ChatModel primary,
            @Qualifier("anthropicChatModel") ChatModel fallback,
            RetryTemplate retryTemplate) {
        this.primaryModel = primary;
        this.fallbackModel = fallback;
        this.retryTemplate = retryTemplate;
    }
    
    public String chat(String message) {
        try {
            return retryTemplate.execute(context -> 
                primaryModel.call(message)
            );
        } catch (Exception e) {
            log.warn("Primary model failed, using fallback: {}", 
                    e.getMessage());
            return fallbackModel.call(message);
        }
    }
}

Configuration:

# Primary provider
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4

# Fallback provider
spring.ai.anthropic.api-key=${ANTHROPIC_API_KEY}
spring.ai.anthropic.chat.options.model=claude-3-5-sonnet-20241022

# Retry configuration
spring.ai.retry.max-attempts=3
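
Spring AI's retry auto-configuration also supports exponential backoff between attempts, so the primary provider gets breathing room on transient failures before the fallback takes over. A possible configuration (property names per the `spring.ai.retry` auto-configuration; verify the exact keys and defaults against your Spring AI version):

```properties
# Exponential backoff between retry attempts
spring.ai.retry.backoff.initial-interval=2s
spring.ai.retry.backoff.multiplier=5
spring.ai.retry.backoff.max-interval=3m
```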

Load Balancing Across Providers

@Service
public class LoadBalancedChatService {
    private final List<ChatModel> providers;
    private final AtomicInteger counter = new AtomicInteger(0);
    
    public LoadBalancedChatService(
            @Qualifier("openAiChatModel") ChatModel openai,
            @Qualifier("anthropicChatModel") ChatModel anthropic,
            @Qualifier("azureOpenAiChatModel") ChatModel azure) {
        this.providers = List.of(openai, anthropic, azure);
    }
    
    public String chat(String message) {
        // Round-robin load balancing; floorMod keeps the index
        // non-negative after the counter overflows
        int index = Math.floorMod(counter.getAndIncrement(), providers.size());
        ChatModel model = providers.get(index);
        
        try {
            return model.call(message);
        } catch (Exception e) {
            // Try next provider on failure
            int nextIndex = (index + 1) % providers.size();
            return providers.get(nextIndex).call(message);
        }
    }
}
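
A detail worth noting: `AtomicInteger.getAndIncrement()` eventually wraps past `Integer.MAX_VALUE` to negative values, and Java's `%` operator keeps the sign of the dividend, so a plain `counter.getAndIncrement() % size` can yield a negative index and an `IndexOutOfBoundsException`. `Math.floorMod` avoids this. A minimal, framework-free sketch of the index math:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Framework-free round-robin selector; floorMod keeps the index
// non-negative even after the counter wraps past Integer.MAX_VALUE.
class RoundRobin<T> {
    private final List<T> targets;
    private final AtomicInteger counter = new AtomicInteger(0);

    RoundRobin(List<T> targets) {
        this.targets = targets;
    }

    T next() {
        int index = Math.floorMod(counter.getAndIncrement(), targets.size());
        return targets.get(index);
    }
}

public class RoundRobinDemo {
    public static void main(String[] args) {
        RoundRobin<String> rr =
                new RoundRobin<>(List.of("openai", "anthropic", "azure"));
        System.out.println(rr.next()); // openai
        System.out.println(rr.next()); // anthropic
        System.out.println(rr.next()); // azure
        System.out.println(rr.next()); // wraps back to openai

        // Plain % goes negative after overflow; floorMod does not
        System.out.println(Integer.MIN_VALUE % 3);                // -2
        System.out.println(Math.floorMod(Integer.MIN_VALUE, 3));  // 1
    }
}
```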

Specialized Providers for Different Tasks

@Service
public class SpecializedAiService {
    private final ChatModel creativeModel;
    private final ChatModel analyticalModel;
    private final ChatModel codingModel;
    
    public SpecializedAiService(
            @Qualifier("openAiChatModel") ChatModel creative,
            @Qualifier("anthropicChatModel") ChatModel analytical,
            @Qualifier("deepseekChatModel") ChatModel coding) {
        this.creativeModel = creative;
        this.analyticalModel = analytical;
        this.codingModel = coding;
    }
    
    public String generateCreative(String prompt) {
        return creativeModel.call(prompt);
    }
    
    public String analyzeData(String data) {
        return analyticalModel.call(
            "Analyze this data: " + data
        );
    }
    
    public String generateCode(String specification) {
        return codingModel.call(
            "Generate code for: " + specification
        );
    }
}

Configuration:

# Creative tasks - OpenAI with high temperature
spring.ai.openai.chat.options.model=gpt-4
spring.ai.openai.chat.options.temperature=0.9

# Analytical tasks - Claude with low temperature
spring.ai.anthropic.chat.options.model=claude-3-5-sonnet-20241022
spring.ai.anthropic.chat.options.temperature=0.3

# Coding tasks - DeepSeek
spring.ai.deepseek.chat.options.model=deepseek-coder
spring.ai.deepseek.chat.options.temperature=0.2
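
The routing above can also be expressed as a single entry point keyed by task type, so callers don't need to know which provider backs which task. A framework-free sketch of that dispatch pattern (the `TaskType` enum and the `UnaryOperator<String>` stand-in for `ChatModel.call(String)` are illustrative, not part of Spring AI):

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class TaskRouterDemo {
    // Illustrative task categories; not part of Spring AI
    enum TaskType { CREATIVE, ANALYTICAL, CODING }

    // Stand-in for ChatModel.call(String); in a real service each
    // entry would be a provider-specific injected ChatModel
    static final Map<TaskType, UnaryOperator<String>> MODELS = Map.of(
            TaskType.CREATIVE,   prompt -> "creative:" + prompt,
            TaskType.ANALYTICAL, prompt -> "analytical:" + prompt,
            TaskType.CODING,     prompt -> "coding:" + prompt);

    static String route(TaskType task, String prompt) {
        return MODELS.get(task).apply(prompt);
    }

    public static void main(String[] args) {
        System.out.println(route(TaskType.CODING, "sort a list"));
    }
}
```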

Cost Optimization with Provider Selection

@Service
public class CostOptimizedChatService {
    private final ChatModel cheapModel;
    private final ChatModel expensiveModel;
    
    public CostOptimizedChatService(
            @Qualifier("openAiChatModel") ChatModel cheap,
            @Qualifier("anthropicChatModel") ChatModel expensive) {
        this.cheapModel = cheap;
        this.expensiveModel = expensive;
    }
    
    public String chat(String message, boolean isComplex) {
        if (isComplex) {
            // Use powerful model for complex queries
            return expensiveModel.call(message);
        } else {
            // Use cheaper model for simple queries
            return cheapModel.call(message);
        }
    }
}
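
Deciding `isComplex` can start as a simple length-and-keyword heuristic; anything more elaborate (token counting, a small classifier model) slots in behind the same boolean. A rough, illustrative heuristic (the thresholds and keywords here are arbitrary and would need tuning for real traffic):

```java
import java.util.List;

public class ComplexityHeuristic {
    // Arbitrary keywords that tend to signal reasoning-heavy prompts
    private static final List<String> COMPLEX_HINTS =
            List.of("analyze", "compare", "explain why", "step by step");

    // Crude heuristic: long prompts or reasoning-style keywords get
    // routed to the more capable (and more expensive) model.
    public static boolean isComplex(String message) {
        String lower = message.toLowerCase();
        if (lower.length() > 500) {
            return true;
        }
        return COMPLEX_HINTS.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        System.out.println(isComplex("What time is it?"));          // false
        System.out.println(isComplex("Compare these two designs")); // true
    }
}
```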

See Real-World Scenarios →

Install with Tessl CLI

npx tessl i tessl/maven-org-springframework-ai--spring-ai-spring-boot-autoconfigure
