Build AI agents with Spring AI 2.0 - basic agent, memory, tools/MCP, agentic workflows, guardrails, and observability
Use this skill when building AI agent applications with Spring AI 2.0.x on Spring Boot 4.0.x.
Until 2.0.0 GA is released, add the Spring Milestones repository:
<repositories>
<repository>
<id>spring-milestones</id>
<url>https://repo.spring.io/milestone</url>
</repository>
</repositories>

Gradle:

repositories {
    maven { url "https://repo.spring.io/milestone" }
}

Import the Spring AI BOM (Maven):

<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-bom</artifactId>
<version>2.0.0-M4</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>

Gradle:

dependencies {
    implementation platform("org.springframework.ai:spring-ai-bom:2.0.0-M4")
}

Model provider starters:

| Provider | Maven artifactId | Notes |
|---|---|---|
| OpenAI (SDK) | spring-ai-starter-model-openai-sdk | New in 2.0 -- uses official OpenAI Java SDK, supports Azure Foundry & GitHub Models |
| OpenAI (legacy) | spring-ai-starter-model-openai | Previous RestClient-based integration |
| Anthropic | spring-ai-starter-model-anthropic | Now uses official Anthropic Java SDK internally |
| Azure OpenAI | spring-ai-starter-model-azure-openai | |
| Ollama | spring-ai-starter-model-ollama | |
| Vertex AI Gemini | spring-ai-starter-model-vertex-ai | Deprecated in 2.0 -- migrate to other providers |
| Mistral AI | spring-ai-starter-model-mistral-ai | |
| Amazon Bedrock | spring-ai-starter-model-bedrock | |
Example:
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-openai-sdk</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

When migrating from Spring AI 1.1.x to 2.0:

- Jackson 3 is now used (tools.jackson package) instead of Jackson 2 (com.fasterxml.jackson). Custom serializers/deserializers must be updated.
- Temperature has no implicit default -- set it explicitly via spring.ai.openai-sdk.chat.options.temperature=0.7 or in code.
- The default OpenAI model is gpt-5-mini instead of the previous default.
- MCP annotations moved: org.springframework.ai.mcp.spring.annotations -> org.springframework.ai.mcp.annotation (see Section 6).
- MCP artifacts renamed: io.modelcontextprotocol.sdk:mcp-spring-* -> org.springframework.ai:mcp-spring-*.
- McpSyncClientCustomizer / McpAsyncClientCustomizer -> McpClientCustomizer<McpClient.SyncSpec> / McpClientCustomizer<McpClient.AsyncSpec>.
- Conversation history is no longer included in ToolContext -- use ChatMemory advisors instead.
- disableMemory() deprecated -- use disableInternalConversationHistory() instead.

An OpenRewrite recipe automates most migrations:
mvn org.openrewrite.maven:rewrite-maven-plugin:6.32.0:run \
-Drewrite.configLocation=https://raw.githubusercontent.com/spring-projects/spring-ai/refs/heads/main/src/rewrite/migrate-to-2-0-0-M3.yaml \
-Drewrite.activeRecipes=org.springframework.ai.migration.MigrateToSpringAI200M3

org.springframework.ai.chat.client.ChatClient is the primary entry point. Spring Boot auto-configures a ChatClient.Builder bean.
// Inject the auto-configured builder
@RestController
public class AgentController {
private final ChatClient chatClient;
public AgentController(ChatClient.Builder chatClientBuilder) {
this.chatClient = chatClientBuilder.build();
}
}
// Or create from a ChatModel directly
ChatClient chatClient = ChatClient.create(chatModel);
// Or via static builder
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultSystem("You are a helpful assistant.")
.build();

// Builder with common defaults
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultSystem("You are a helpful agent.") // system prompt
.defaultAdvisors(new SimpleLoggerAdvisor()) // advisors
.defaultTools(new MyTools()) // tool objects
.defaultToolNames("weatherTool") // tool bean names
.defaultOptions(ChatOptions.builder().temperature(0.7).build())
.build();

// Simple string response
String answer = chatClient.prompt()
.system("You are a travel agent.")
.user("Plan a trip to Paris")
.call()
.content();
// With template parameters
String answer = chatClient.prompt()
.user(u -> u
.text("Tell me about {topic} in {language}")
.param("topic", "Spring AI")
.param("language", "English"))
.call()
.content();
// Entity mapping (structured output)
record ActorFilms(String actor, List<String> movies) {}
ActorFilms result = chatClient.prompt()
.user("Generate filmography for Tom Hanks")
.call()
.entity(ActorFilms.class);
// Parameterized type
List<ActorFilms> results = chatClient.prompt()
.user("Generate 5 actors with filmographies")
.call()
.entity(new ParameterizedTypeReference<List<ActorFilms>>() {});
// Streaming
Flux<String> stream = chatClient.prompt()
.user("Tell me a story")
.stream()
.content();
// Full response with metadata
ChatClientResponse ccr = chatClient.prompt()
.user("Hello")
.call()
.chatClientResponse();
ChatResponse chatResponse = ccr.chatResponse();
Map<String, Object> advisorContext = ccr.context();

call() response methods:

| Method | Return Type | Description |
|---|---|---|
| content() | String | Plain text response |
| chatResponse() | ChatResponse | Full response with generations and metadata |
| chatClientResponse() | ChatClientResponse | Response plus advisor context |
| entity(Class<T>) | T | Deserialized structured output |
| entity(ParameterizedTypeReference<T>) | T | Generic type structured output |
| responseEntity(Class<T>) | ResponseEntity<T> | Entity plus ChatResponse metadata |
stream() response methods:

| Method | Return Type |
|---|---|
| content() | Flux<String> |
| chatResponse() | Flux<ChatResponse> |
| chatClientResponse() | Flux<ChatClientResponse> |

Multiple model providers can be configured as separate ChatClient beans:
@Configuration
public class ChatClientConfig {
@Bean
public ChatClient openAiChatClient(
@Qualifier("openAiChatModel") ChatModel chatModel) {
return ChatClient.create(chatModel);
}
@Bean
public ChatClient anthropicChatClient(
@Qualifier("anthropicChatModel") ChatModel chatModel) {
return ChatClient.create(chatModel);
}
}

Disable the auto-configured builder with spring.ai.chat.client.enabled=false.
org.springframework.ai.chat.memory.ChatMemory -- manages conversational context.
org.springframework.ai.chat.memory.ChatMemoryRepository -- persists messages.
ChatMemory chatMemory = MessageWindowChatMemory.builder()
.chatMemoryRepository(repository) // optional, defaults to InMemoryChatMemoryRepository
.maxMessages(20) // default is 20
.build();

| Repository | Starter artifactId | Notes |
|---|---|---|
| InMemoryChatMemoryRepository | (included by default) | ConcurrentHashMap storage |
| JdbcChatMemoryRepository | spring-ai-starter-model-chat-memory-repository-jdbc | PostgreSQL, MySQL, MariaDB, SQL Server, HSQLDB, Oracle |
| CassandraChatMemoryRepository | spring-ai-starter-model-chat-memory-repository-cassandra | TTL support |
| Neo4jChatMemoryRepository | spring-ai-starter-model-chat-memory-repository-neo4j | Graph-based storage |
| CosmosDBChatMemoryRepository | spring-ai-starter-model-chat-memory-repository-cosmos-db | Azure Cosmos DB |
| MongoChatMemoryRepository | spring-ai-starter-model-chat-memory-repository-mongodb | TTL support |
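The windowing semantics can be sketched in plain Java. This is an illustrative approximation only, not the actual MessageWindowChatMemory source -- the real class also handles system messages specially and delegates storage to a ChatMemoryRepository:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of message-window memory: keep only the last N
// messages for a conversation, evicting the oldest first.
public class WindowMemorySketch {
    private final Deque<String> window = new ArrayDeque<>();
    private final int maxMessages;

    public WindowMemorySketch(int maxMessages) {
        this.maxMessages = maxMessages;
    }

    public void add(String message) {
        window.addLast(message);
        while (window.size() > maxMessages) {
            window.removeFirst(); // evict oldest first
        }
    }

    public List<String> get() {
        return new ArrayList<>(window);
    }

    public static void main(String[] args) {
        WindowMemorySketch memory = new WindowMemorySketch(3);
        for (String m : new String[] {"a", "b", "c", "d", "e"}) memory.add(m);
        System.out.println(memory.get()); // [c, d, e]
    }
}
```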
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-chat-memory-repository-jdbc</artifactId>
</dependency>

Set spring.ai.chat.memory.repository.jdbc.initialize-schema=always to create the schema on startup.

@Autowired
JdbcChatMemoryRepository chatMemoryRepository;
ChatMemory chatMemory = MessageWindowChatMemory.builder()
.chatMemoryRepository(chatMemoryRepository)
.maxMessages(10)
.build();

There are three advisor types for integrating memory into ChatClient:
MessageChatMemoryAdvisor -- injects conversation history as Message objects in the prompt (preserves message structure):
ChatMemory chatMemory = MessageWindowChatMemory.builder().build();
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultAdvisors(
MessageChatMemoryAdvisor.builder(chatMemory).build()
)
.build();
String conversationId = "session-123";
String response = chatClient.prompt()
.user("What did I just say?")
.advisors(a -> a.param(ChatMemory.CONVERSATION_ID, conversationId))
.call()
.content();

PromptChatMemoryAdvisor -- appends conversation memory as plain text to the system prompt:
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultAdvisors(
PromptChatMemoryAdvisor.builder(chatMemory).build()
)
.build();

VectorStoreChatMemoryAdvisor -- retrieves relevant memory from a VectorStore and appends it to the system message:
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultAdvisors(
VectorStoreChatMemoryAdvisor.builder(vectorStore).build()
)
.build();

Manual memory management without advisors:

ChatMemory chatMemory = MessageWindowChatMemory.builder().build();
String conversationId = "007";
// Turn 1
chatMemory.add(conversationId, new UserMessage("My name is James Bond"));
ChatResponse r1 = chatModel.call(new Prompt(chatMemory.get(conversationId)));
chatMemory.add(conversationId, r1.getResult().getOutput());
// Turn 2
chatMemory.add(conversationId, new UserMessage("What is my name?"));
ChatResponse r2 = chatModel.call(new Prompt(chatMemory.get(conversationId)));
chatMemory.add(conversationId, r2.getResult().getOutput());
// r2 contains "James Bond"

Declare tools with @Tool:

import org.springframework.ai.tool.annotation.Tool;
import org.springframework.ai.tool.annotation.ToolParam;
@Component
public class WeatherTools {
@Tool(description = "Get the current weather for a location")
public WeatherResponse getCurrentWeather(
@ToolParam(description = "City name") String city,
@ToolParam(description = "Temperature unit", required = false) String unit) {
// implementation
return new WeatherResponse(city, 22.0, "Celsius");
}
}
public record WeatherResponse(String city, double temperature, String unit) {}

@Tool attributes:

| Attribute | Type | Default | Description |
|---|---|---|---|
| name | String | method name | Tool identifier |
| description | String | "" | Description sent to the LLM |
| returnDirect | boolean | false | Return tool result directly to user without LLM post-processing |
| resultConverter | Class<? extends ToolCallResultConverter> | DefaultToolCallResultConverter.class | Custom result serialization |
@ToolParam attributes:

| Attribute | Type | Default | Description |
|---|---|---|---|
| description | String | "" | Parameter description for schema |
| required | boolean | true | Whether the parameter is required |

Using tools with ChatClient:
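To see roughly how annotation metadata like this can become a schema the LLM receives, here is a reflection sketch. It is purely illustrative: it uses a hypothetical stand-in annotation named Param, not Spring AI's @ToolParam, and produces a flat description map rather than real JSON Schema:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative: collect per-parameter description and required flag via
// reflection -- the kind of metadata a framework needs to emit a tool schema.
public class ToolSchemaSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface Param {
        String description();
        boolean required() default true;
    }

    public static String getWeather(
            @Param(description = "City name") String city,
            @Param(description = "Temperature unit", required = false) String unit) {
        return city + ":" + unit;
    }

    static Map<String, String> describe(Method method) {
        Map<String, String> schema = new LinkedHashMap<>();
        for (Parameter p : method.getParameters()) {
            Param meta = p.getAnnotation(Param.class);
            schema.put(p.getName(),
                meta.description() + (meta.required() ? " (required)" : " (optional)"));
        }
        return schema;
    }

    static Map<String, String> describeGetWeather() {
        try {
            return describe(ToolSchemaSketch.class.getMethod("getWeather", String.class, String.class));
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(describeGetWeather());
    }
}
```

Note that real parameter names survive reflection only when compiled with -parameters; Spring AI's actual schema generation is more involved (types, nested objects, JSON Schema output).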
// Per-request tools
String response = chatClient.prompt()
.user("What's the weather in Paris?")
.tools(new WeatherTools())
.call()
.content();
// Default tools (all requests)
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultTools(new WeatherTools())
.build();
// From Spring bean names
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultToolNames("currentWeather")
.build();

Function beans as tools:

@Configuration
public class ToolConfig {
@Bean("currentWeather")
@Description("Get the weather in a location")
public Function<WeatherRequest, WeatherResponse> currentWeather() {
return request -> new WeatherResponse(request.city(), 22.0, "C");
}
}
// Use by bean name
chatClient.prompt()
.user("Weather in Paris?")
.toolNames("currentWeather")
.call()
.content();

Programmatic tool callbacks:

import org.springframework.ai.tool.FunctionToolCallback;
ToolCallback toolCallback = FunctionToolCallback
.builder("currentWeather", (WeatherRequest req) -> getWeather(req))
.description("Get the weather in location")
.inputType(WeatherRequest.class)
.build();
chatClient.prompt()
.user("Weather in Paris?")
.toolCallbacks(toolCallback)
.call()
.content();

2.0 breaking change: Conversation history is no longer automatically included in ToolContext.
Use ChatMemory advisors for conversation history management instead.

Pass request-scoped data through ToolContext:
@Tool(description = "Get customer info")
public Customer getCustomer(Long id, ToolContext toolContext) {
String tenantId = (String) toolContext.getContext().get("tenantId");
return customerRepository.findById(id, tenantId);
}
// Pass context when calling
chatClient.prompt()
.user("Tell me about customer 42")
.tools(new CustomerTools())
.toolContext(Map.of("tenantId", "acme"))
.call()
.content();

Manual tool execution loop (framework auto-execution disabled):

ChatOptions chatOptions = ToolCallingChatOptions.builder()
.toolCallbacks(ToolCallbacks.from(new MyTools()))
.internalToolExecutionEnabled(false) // disable auto-execution
.build();
ToolCallingManager toolCallingManager = ToolCallingManager.builder().build();
Prompt prompt = new Prompt("Do the thing", chatOptions);
ChatResponse response = chatModel.call(prompt);
while (response.hasToolCalls()) {
ToolExecutionResult result = toolCallingManager.executeToolCalls(prompt, response);
prompt = new Prompt(result.conversationHistory(), chatOptions);
response = chatModel.call(prompt);
}
String finalAnswer = response.getResult().getOutput().getText();

AugmentedToolCallbackProvider adds synthetic arguments (such as the model's reasoning) to every tool call:

import org.springframework.ai.tool.augmentation.AugmentedToolCallbackProvider;
public record AgentThinking(
@ToolParam(description = "Your reasoning for calling this tool", required = true)
String innerThought,
@ToolParam(description = "Confidence level (low, medium, high)", required = false)
String confidence
) {}
AugmentedToolCallbackProvider<AgentThinking> provider = AugmentedToolCallbackProvider
.<AgentThinking>builder()
.toolObject(new MyTools())
.argumentType(AgentThinking.class)
.argumentConsumer(event -> {
log.info("Tool: {} | Reasoning: {}", event.toolDefinition().name(),
event.arguments().innerThought());
})
.removeExtraArgumentsAfterProcessing(true)
.build();
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultToolCallbacks(provider)
.build();

Tool execution errors: with spring.ai.tools.throw-exception-on-error=false (the default), a tool exception is converted into an error message that is sent back to the model. Auto-configured bean: DefaultToolExecutionExceptionProcessor.
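The exception-to-message behavior can be sketched framework-free. This is illustrative only, not the actual DefaultToolExecutionExceptionProcessor implementation:

```java
// Illustrative sketch: depending on a flag, a tool exception is either
// propagated to application code or converted into an error string that
// the model sees, mirroring spring.ai.tools.throw-exception-on-error.
public class ToolErrorSketch {
    private final boolean throwOnError;

    public ToolErrorSketch(boolean throwOnError) {
        this.throwOnError = throwOnError;
    }

    public String execute(java.util.concurrent.Callable<String> tool) {
        try {
            return tool.call();
        } catch (Exception e) {
            if (throwOnError) {
                throw new RuntimeException(e); // propagate to the caller
            }
            return "Tool execution failed: " + e.getMessage(); // model sees this
        }
    }

    public static void main(String[] args) {
        ToolErrorSketch lenient = new ToolErrorSketch(false);
        String result = lenient.execute(() -> { throw new IllegalStateException("boom"); });
        System.out.println(result); // Tool execution failed: boom
    }
}
```

Sending the error text back lets the model retry with corrected arguments instead of aborting the conversation.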
Not supported as tool return types: Optional, CompletableFuture, Mono, Flux, Function, Supplier, Consumer.

Advisors intercept and transform ChatClient requests and responses in an ordered chain.
Core interfaces (package org.springframework.ai.chat.client.advisor):

public interface Advisor extends Ordered {
String getName();
}
public interface CallAdvisor extends Advisor {
ChatClientResponse adviseCall(
ChatClientRequest chatClientRequest,
CallAdvisorChain callAdvisorChain);
}
public interface StreamAdvisor extends Advisor {
Flux<ChatClientResponse> adviseStream(
ChatClientRequest chatClientRequest,
StreamAdvisorChain streamAdvisorChain);
}

Ordering:

- Lower getOrder() value = higher precedence = processes the request FIRST
- Use Ordered.HIGHEST_PRECEDENCE (Integer.MIN_VALUE) for first execution
- Use Ordered.LOWEST_PRECEDENCE (Integer.MAX_VALUE) for last execution

Built-in advisors:

| Advisor | Purpose |
|---|---|
| MessageChatMemoryAdvisor | Injects memory as Message list |
| PromptChatMemoryAdvisor | Injects memory into system text |
| VectorStoreChatMemoryAdvisor | RAG-based memory from VectorStore |
| QuestionAnswerAdvisor | Naive RAG pattern (query VectorStore, augment prompt) |
| RetrievalAugmentationAdvisor | Modular RAG architecture |
| ReReadingAdvisor | RE2 re-reading strategy for improved reasoning |
| SafeGuardAdvisor | Content safety / guardrails |
| SimpleLoggerAdvisor | Debug logging of requests/responses |
| ToolCallAdvisor | Advisor-controlled tool execution |
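The ordering mechanics can be sketched framework-free -- advisors sorted by ascending order value, each wrapping the next, so the lowest order touches the request first and the response last. The names and shapes here are illustrative, not the Spring AI API:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.UnaryOperator;

// Illustrative sketch of an ordered advisor chain (onion layering).
public class AdvisorChainSketch {
    interface MiniAdvisor {
        int order();
        String advise(String request, UnaryOperator<String> next);
    }

    static MiniAdvisor advisor(String name, int order) {
        return new MiniAdvisor() {
            public int order() { return order; }
            public String advise(String request, UnaryOperator<String> next) {
                // tag the request on the way in; the response returns through next()
                return next.apply(request + ">" + name);
            }
        };
    }

    static String run(List<MiniAdvisor> advisors, String request, UnaryOperator<String> model) {
        List<MiniAdvisor> sorted = new ArrayList<>(advisors);
        sorted.sort(Comparator.comparingInt(MiniAdvisor::order)); // ascending order value
        UnaryOperator<String> chain = model;
        // build from the innermost (highest order value) outward
        for (int i = sorted.size() - 1; i >= 0; i--) {
            MiniAdvisor a = sorted.get(i);
            UnaryOperator<String> next = chain;
            chain = req -> a.advise(req, next);
        }
        return chain.apply(request);
    }

    public static void main(String[] args) {
        String out = run(
            List.of(advisor("memory", 100), advisor("logger", 0)),
            "hi",
            req -> "model(" + req + ")");
        System.out.println(out); // model(hi>logger>memory)
    }
}
```

Here "logger" (order 0) sees the request before "memory" (order 100), matching the rule that a lower order value runs first.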
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultAdvisors(
MessageChatMemoryAdvisor.builder(chatMemory).build(),
QuestionAnswerAdvisor.builder(vectorStore).build(),
new SimpleLoggerAdvisor()
)
.build();
// Per-request advisors
chatClient.prompt()
.advisors(new SafeGuardAdvisor())
.user("Potentially harmful input")
.call()
.content();

Custom advisor implementing both call and stream paths:

public class MyLoggingAdvisor implements CallAdvisor, StreamAdvisor {
private static final Logger log = LoggerFactory.getLogger(MyLoggingAdvisor.class);
@Override
public String getName() {
return this.getClass().getSimpleName();
}
@Override
public int getOrder() {
return 0;
}
@Override
public ChatClientResponse adviseCall(ChatClientRequest request,
CallAdvisorChain chain) {
log.debug("Request: {}", request);
ChatClientResponse response = chain.nextCall(request);
log.debug("Response: {}", response);
return response;
}
@Override
public Flux<ChatClientResponse> adviseStream(ChatClientRequest request,
StreamAdvisorChain chain) {
log.debug("Stream request: {}", request);
return chain.nextStream(request);
}
}

Advisor-controlled tool execution via ToolCallAdvisor:

ToolCallingManager toolCallingManager = ToolCallingManager.builder().build();
ToolCallAdvisor toolCallAdvisor = ToolCallAdvisor.builder()
.toolCallingManager(toolCallingManager)
.advisorOrder(BaseAdvisor.HIGHEST_PRECEDENCE + 300)
.build();
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultAdvisors(toolCallAdvisor)
.build();

MCP client starters:

<!-- Standard (STDIO + Servlet SSE/Streamable-HTTP) -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-client</artifactId>
</dependency>
<!-- WebFlux-based transports -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-client-webflux</artifactId>
</dependency>

MCP server starters:

<!-- STDIO server -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-server</artifactId>
</dependency>
<!-- WebMVC server (SSE/Streamable-HTTP/Stateless) -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-server-webmvc</artifactId>
</dependency>
<!-- WebFlux server -->
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-mcp-server-webflux</artifactId>
</dependency>

MCP client configuration:

spring:
ai:
mcp:
client:
enabled: true
name: my-mcp-client
version: 1.0.0
type: SYNC # SYNC or ASYNC
request-timeout: 20s
toolcallback:
enabled: true # register MCP tools as Spring AI ToolCallbacks
# STDIO transport
stdio:
connections:
my-server:
command: npx
args:
- "-y"
- "@modelcontextprotocol/server-filesystem"
- "/Users/me/data"
env:
API_KEY: secret
# Or reference Claude Desktop JSON format
servers-configuration: classpath:mcp-servers.json
# SSE transport
sse:
connections:
remote-server:
url: http://localhost:8080
sse-endpoint: /sse # default
# Streamable-HTTP transport
streamable-http:
connections:
server1:
url: http://localhost:8083
          endpoint: /mcp # default

MCP tools are exposed as Spring AI ToolCallbacks via an auto-configured provider:

@Autowired
private SyncMcpToolCallbackProvider toolCallbackProvider;
ChatClient chatClient = ChatClient.builder(chatModel)
.defaultToolCallbacks(toolCallbackProvider)
.build();
String response = chatClient.prompt()
.user("List files in my data directory")
.call()
.content();

MCP server configuration:

spring:
ai:
mcp:
server:
type: SYNC # SYNC or ASYNC
stdio: false # true for STDIO protocol
      protocol: SSE # SSE, STREAMABLE, or STATELESS

Server tools with MCP annotations:

import org.springframework.ai.mcp.annotation.McpTool;
import org.springframework.ai.mcp.annotation.McpToolParam;
@Component
public class CalculatorServer {
@McpTool(name = "add", description = "Add two numbers")
public int add(
@McpToolParam(description = "First number", required = true) int a,
@McpToolParam(description = "Second number", required = true) int b) {
return a + b;
}
}

Other server-side annotations:

@McpResource(uri = "config://{key}", name = "Configuration")
public String getConfig(String key) { ... }
@McpPrompt(name = "analysis_prompt", description = "Data analysis prompt")
public String analysisPrompt() { ... }
@McpComplete(name = "region_completion")
public List<String> completeRegions(String partial) { ... }

Client-side handler annotations:

@McpLogging(clients = "my-server")
public void handleLog(LoggingMessageNotification notification) { ... }
@McpSampling(clients = "my-server")
public CreateMessageResult handleSampling(CreateMessageRequest request) { ... }
@McpElicitation(clients = "my-server")
public ElicitResult handleElicitation(ElicitRequest request) { ... }
@McpProgress(clients = "my-server")
public void handleProgress(ProgressNotification notification) { ... }

Filter which MCP tools are registered:

@Component
public class MyToolFilter implements McpToolFilter {
@Override
public boolean test(McpConnectionInfo info, McpSchema.Tool tool) {
return !tool.name().startsWith("experimental_");
}
}

// In 2.0: McpSyncClientCustomizer and McpAsyncClientCustomizer are replaced
// by a single generic McpClientCustomizer<B> interface
@Component
public class MyMcpCustomizer implements McpClientCustomizer<McpClient.SyncSpec> {
@Override
public void customize(String serverName, McpClient.SyncSpec spec) {
spec.requestTimeout(Duration.ofSeconds(30));
spec.sampling(request -> { /* handle */ return result; });
}
}

Spring AI implements five patterns from Anthropic's "Building Effective Agents" research.

Chain workflow -- sequential steps where each output feeds the next.
public class ChainWorkflow {
private final ChatClient chatClient;
private final String[] steps;
public String chain(String userInput) {
String response = userInput;
for (String stepPrompt : steps) {
String input = String.format("{%s}\n{%s}", stepPrompt, response);
response = chatClient.prompt(input).call().content();
}
return response;
}
}

Routing workflow -- classify input, then route to a specialized handler.
Map<String, String> routes = Map.of(
"billing", "You are a billing specialist...",
"technical", "You are a technical support engineer...",
"general", "You are a customer service rep..."
);
RoutingWorkflow workflow = new RoutingWorkflow(chatClient);
String response = workflow.route("My account was charged twice", routes);

Parallelization workflow -- process independent items concurrently.
List<String> results = new ParallelizationWorkflow(chatClient)
.parallel(
"Analyze market impact for this stakeholder group.",
List.of("Customers: ...", "Employees: ...", "Investors: ..."),
4 // parallelism
);

Orchestrator-workers workflow -- dynamic task decomposition with worker execution.
OrchestratorWorkersWorkflow workflow = new OrchestratorWorkersWorkflow(chatClient);
WorkerResponse response = workflow.process(
"Generate technical and user-friendly docs for a REST API endpoint"
);

Evaluator-optimizer workflow -- iterative generate-evaluate-refine loop.
EvaluatorOptimizerWorkflow workflow = new EvaluatorOptimizerWorkflow(chatClient);
RefinedResponse response = workflow.loop(
"Create a Java class implementing a thread-safe counter"
);
// response.solution() -- final refined code
// response.chainOfThought() -- evolution across iterations

Reference implementations: https://github.com/spring-projects/spring-ai-examples/tree/main/agentic-patterns
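The generate-evaluate-refine loop behind the evaluator-optimizer pattern can be sketched framework-free. This is illustrative only -- the real EvaluatorOptimizerWorkflow calls the model for the generator, evaluator, and optimizer roles, which are plain lambdas here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Illustrative evaluator-optimizer loop: generate a candidate, evaluate it,
// refine until it passes or the iteration budget runs out.
public class EvaluatorOptimizerSketch {
    public record Outcome(String solution, List<String> chainOfThought) {}

    public static Outcome loop(String task,
                               Function<String, String> generate, // model as generator
                               Predicate<String> accept,          // model as evaluator
                               Function<String, String> refine,   // model as optimizer
                               int maxIterations) {
        List<String> chainOfThought = new ArrayList<>();
        String candidate = generate.apply(task);
        chainOfThought.add(candidate);
        for (int i = 0; i < maxIterations && !accept.test(candidate); i++) {
            candidate = refine.apply(candidate);
            chainOfThought.add(candidate);
        }
        return new Outcome(candidate, chainOfThought);
    }

    public static void main(String[] args) {
        // toy roles: a candidate passes once it reaches 5 characters
        Outcome out = loop("grow",
            task -> "x",
            c -> c.length() >= 5,
            c -> c + "x",
            10);
        System.out.println(out.solution());              // xxxxx
        System.out.println(out.chainOfThought().size()); // 5
    }
}
```

The chainOfThought list mirrors the RefinedResponse accessor above: every intermediate candidate is kept so the evolution across iterations can be inspected.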
Observability builds on Spring Boot Actuator:

<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Content-logging properties (all default to false):

# ChatClient logging
spring.ai.chat.client.observations.log-prompt=false
spring.ai.chat.client.observations.log-completion=false
# ChatModel logging
spring.ai.chat.observations.log-prompt=false
spring.ai.chat.observations.log-completion=false
spring.ai.chat.observations.include-error-logging=false
# Tool calling
spring.ai.tools.observations.include-content=false
# Image model
spring.ai.image.observations.log-prompt=false
# Vector store
spring.ai.vectorstore.observations.log-query-response=false

| Component | Observation Name | Prometheus Base |
|---|---|---|
| ChatClient | spring.ai.chat.client | gen_ai_chat_client_operation |
| ChatModel | gen_ai.client.operation | gen_ai_client_operation_seconds |
| Tool Calling | spring.ai.tool | (framework) |
| VectorStore | db.vector.client.operation | db_vector_client_operation_seconds |
Key observation tags:

gen_ai.operation.name -- operation type
gen_ai.system -- provider name (openai, anthropic, spring_ai)
gen_ai.request.model -- requested model
gen_ai.response.model -- actual model used
spring.ai.kind -- chat_client, advisor, tool_call, vector_store

Token usage counters:

gen_ai_client_token_usage_total{gen_ai_token_type="input"}
gen_ai_client_token_usage_total{gen_ai_token_type="output"}
gen_ai_client_token_usage_total{gen_ai_token_type="total"}

Enable advisor debug logging with logging.level.org.springframework.ai.chat.client.advisor=DEBUG.

Minimal Spring Boot agent with memory, tools, and observability:
@SpringBootApplication
public class AgentApplication {
public static void main(String[] args) {
SpringApplication.run(AgentApplication.class, args);
}
@Bean
ChatClient chatClient(ChatClient.Builder builder, ChatMemory chatMemory) {
return builder
.defaultSystem("You are a helpful travel assistant.")
.defaultAdvisors(
MessageChatMemoryAdvisor.builder(chatMemory).build(),
new SimpleLoggerAdvisor()
)
.defaultTools(new TravelTools())
.build();
}
@Bean
ChatMemory chatMemory(ChatMemoryRepository repository) {
return MessageWindowChatMemory.builder()
.chatMemoryRepository(repository)
.maxMessages(20)
.build();
}
}
@Component
class TravelTools {
@Tool(description = "Search for flights between two cities on a given date")
public List<Flight> searchFlights(
@ToolParam(description = "Departure city") String from,
@ToolParam(description = "Arrival city") String to,
@ToolParam(description = "Date in YYYY-MM-DD format") String date) {
return flightService.search(from, to, LocalDate.parse(date));
}
@Tool(description = "Book a flight by flight number")
public BookingConfirmation bookFlight(
@ToolParam(description = "Flight number") String flightNumber) {
return bookingService.book(flightNumber);
}
}
@RestController
@RequestMapping("/chat")
class ChatController {
private final ChatClient chatClient;
ChatController(ChatClient chatClient) {
this.chatClient = chatClient;
}
@PostMapping
String chat(@RequestParam String message,
@RequestParam(defaultValue = "default") String sessionId) {
return chatClient.prompt()
.user(message)
.advisors(a -> a.param(ChatMemory.CONVERSATION_ID, sessionId))
.call()
.content();
}
}

application.properties:

# Model provider (OpenAI SDK starter)
spring.ai.openai-sdk.api-key=${OPENAI_API_KEY}
spring.ai.openai-sdk.chat.options.model=gpt-5-mini
spring.ai.openai-sdk.chat.options.temperature=0.7 # REQUIRED in 2.0 -- no implicit default
# Memory (JDBC)
spring.ai.chat.memory.repository.jdbc.initialize-schema=always
spring.datasource.url=jdbc:postgresql://localhost:5432/agentdb
spring.datasource.username=agent
spring.datasource.password=secret
# Observability
spring.ai.chat.client.observations.log-prompt=true
spring.ai.chat.client.observations.log-completion=true
spring.ai.chat.observations.log-prompt=true
spring.ai.tools.observations.include-content=true
# Actuator
management.endpoints.web.exposure.include=health,metrics,prometheus

Dependencies:

<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-openai-sdk</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-starter-model-chat-memory-repository-jdbc</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
</dependencies>

Key packages:

| Package | Contents |
|---|---|
| org.springframework.ai.chat.client | ChatClient, ChatClient.Builder |
| org.springframework.ai.chat.client.advisor | Advisor, CallAdvisor, StreamAdvisor, SimpleLoggerAdvisor |
| org.springframework.ai.chat.model | ChatModel, ChatResponse, Generation |
| org.springframework.ai.chat.memory | ChatMemory, ChatMemoryRepository, MessageWindowChatMemory |
| org.springframework.ai.tool.annotation | @Tool, @ToolParam |
| org.springframework.ai.tool | ToolCallback, ToolCallbackProvider, FunctionToolCallback, ToolCallingManager |
| org.springframework.ai.support | ToolCallbacks (helper: ToolCallbacks.from(toolBean) returns ToolCallback[]) |
| org.springframework.ai.tool.metadata | ToolDefinition, ToolMetadata |
| org.springframework.ai.tool.augmentation | AugmentedToolCallbackProvider |
| org.springframework.ai.mcp | SyncMcpToolCallbackProvider, AsyncMcpToolCallbackProvider, McpToolFilter |
| org.springframework.ai.mcp.annotation | @McpTool, @McpToolParam, @McpResource, @McpPrompt, @McpComplete (moved from o.s.ai.mcp.spring.annotations in 1.1.x) |
| org.springframework.ai.chat.prompt | Prompt, PromptTemplate |