Shared infrastructure for integrating Google Gemini AI models with Quarkus applications through the LangChain4j framework, providing base chat model functionality, schema mapping, and embedding model support.
Utility classes for mapping between LangChain4j and Gemini formats, handling responses, extracting data, and providing REST client authentication.
Maps LangChain4j chat messages, tool specifications, and generation configuration to Gemini's request format. This is the primary utility for converting between LangChain4j and Gemini APIs.
/**
* Utility for mapping LangChain4j types to Gemini API request format.
*/
public final class ContentMapper {
/**
* Maps LangChain4j chat messages and configuration to a Gemini request.
*
* @param messages List of ChatMessage objects from LangChain4j
* @param toolSpecifications List of ToolSpecification objects (function declarations)
* @param generationConfig Generation configuration parameters
* @param modelId The Gemini model identifier
* @param useGoogleSearch Whether to enable Google Search integration
* @return GenerateContentRequest ready for Gemini API
*/
public static GenerateContentRequest map(
List<ChatMessage> messages,
List<ToolSpecification> toolSpecifications,
GenerationConfig generationConfig,
String modelId,
boolean useGoogleSearch
);
}

Usage Example:
// LangChain4j messages
List<ChatMessage> messages = List.of(
SystemMessage.from("You are a helpful assistant."),
UserMessage.from("What is the capital of France?")
);
// LangChain4j tool specifications
ToolSpecification weatherTool = ToolSpecification.builder()
.name("get_weather")
.description("Get current weather")
.parameters(/* JsonSchema */)
.build();
// Generation configuration
GenerationConfig config = GenerationConfig.builder()
.temperature(0.7)
.maxOutputTokens(1024)
.build();
// Map to Gemini request format
GenerateContentRequest geminiRequest = ContentMapper.map(
messages,
List.of(weatherTool),
config,
"gemini-1.5-pro",
false // useGoogleSearch
);
// Now ready to send to the Gemini API

Converts LangChain4j's JsonSchema format to Gemini's Schema format, enabling structured output compatibility across frameworks.
/**
* Utility for mapping LangChain4j JsonSchema to Gemini Schema.
*/
public final class SchemaMapper {
/**
* Converts a LangChain4j JsonSchema to Gemini Schema format.
*
* @param jsonSchema The LangChain4j JsonSchema
* @return Equivalent Gemini Schema
*/
public static Schema fromJsonSchemaToSchema(JsonSchema jsonSchema);
}

Usage Example:
// LangChain4j JsonSchema
JsonSchema langchain4jSchema = JsonSchema.builder()
.schema(Map.of(
"type", "object",
"properties", Map.of(
"name", Map.of("type", "string"),
"age", Map.of("type", "integer")
),
"required", List.of("name")
))
.build();
// Convert to Gemini Schema
Schema geminiSchema = SchemaMapper.fromJsonSchemaToSchema(langchain4jSchema);
// Use in GenerationConfig
GenerationConfig config = GenerationConfig.builder()
.responseSchema(geminiSchema)
.responseMimeType("application/json")
.build();

Extracts and processes data from Gemini's response format, including text, token usage, tool execution requests, thoughts, and finish reasons.
/**
* Utility for extracting data from GenerateContentResponse.
*/
public final class GenerateContentResponseHandler {
/**
* Extracts the text content from the response.
*
* @param response The Gemini API response
* @return The text content, or empty string if none
*/
public static String getText(GenerateContentResponse response);
/**
* Extracts the model's reasoning thoughts from the response.
* Only available when ThinkingConfig.includeThoughts is true.
*
* @param response The Gemini API response
* @return The thoughts/reasoning text, or empty string if none
*/
public static String getThoughts(GenerateContentResponse response);
/**
* Extracts the finish reason from the response.
*
* @param response The Gemini API response
* @return The FinishReason enum value
*/
public static GenerateContentResponse.FinishReason getFinishReason(GenerateContentResponse response);
/**
* Converts usage metadata to LangChain4j TokenUsage.
*
* @param usageMetadata The Gemini usage metadata
* @return TokenUsage object
*/
public static TokenUsage getTokenUsage(GenerateContentResponse.UsageMetadata usageMetadata);
/**
* Extracts tool execution requests from the response.
* Returns function calls that need to be executed.
*
* @param response The Gemini API response
* @return List of ToolExecutionRequest objects
*/
public static List<ToolExecutionRequest> getToolExecutionRequests(GenerateContentResponse response);
}

Usage Examples:
// Extract text from response
GenerateContentResponse response = chatModel.generateContext(request);
String text = GenerateContentResponseHandler.getText(response);
System.out.println("Response: " + text);
// Extract thoughts (reasoning)
String thoughts = GenerateContentResponseHandler.getThoughts(response);
if (!thoughts.isEmpty()) {
System.out.println("Model's reasoning:\n" + thoughts);
}
// Check finish reason
GenerateContentResponse.FinishReason finishReason =
GenerateContentResponseHandler.getFinishReason(response);
switch (finishReason) {
case STOP -> System.out.println("Completed normally");
case MAX_TOKENS -> System.out.println("Hit token limit");
case SAFETY -> System.out.println("Blocked by safety filters");
default -> System.out.println("Other reason: " + finishReason);
}
// Get token usage
TokenUsage usage = GenerateContentResponseHandler.getTokenUsage(
response.usageMetadata()
);
System.out.println("Input tokens: " + usage.inputTokenCount());
System.out.println("Output tokens: " + usage.outputTokenCount());
System.out.println("Total tokens: " + usage.totalTokenCount());
// Extract tool execution requests
List<ToolExecutionRequest> toolRequests =
GenerateContentResponseHandler.getToolExecutionRequests(response);
for (ToolExecutionRequest toolRequest : toolRequests) {
System.out.println("Function to call: " + toolRequest.name());
System.out.println("Arguments: " + toolRequest.arguments());
// Execute the function
String result = executeTool(toolRequest);
// Send result back to model
// ...
}

Maps Gemini's finish reasons to LangChain4j's finish reasons, providing compatibility with LangChain4j's standard response handling.
/**
* Utility for mapping Gemini finish reasons to LangChain4j finish reasons.
*/
public final class FinishReasonMapper {
/**
* Maps a Gemini FinishReason to LangChain4j FinishReason.
*
* @param finishReason The Gemini finish reason
* @return Equivalent LangChain4j FinishReason
*/
public static FinishReason map(GenerateContentResponse.FinishReason finishReason);
}

Usage Example:
GenerateContentResponse.FinishReason geminiReason =
GenerateContentResponseHandler.getFinishReason(response);
// Convert to LangChain4j format
FinishReason langchain4jReason = FinishReasonMapper.map(geminiReason);
// Use in LangChain4j ChatResponse
ChatResponse chatResponse = ChatResponse.builder()
.aiMessage(aiMessage)
.finishReason(langchain4jReason)
.tokenUsage(tokenUsage)
.build();

Mapping Table:
| Gemini FinishReason | LangChain4j FinishReason |
|---|---|
| STOP | STOP |
| MAX_TOKENS | LENGTH |
| SAFETY | CONTENT_FILTER |
| RECITATION | CONTENT_FILTER |
| OTHER | OTHER |
| All others | OTHER |
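The table above translates naturally into a switch expression. The sketch below uses hypothetical stand-in enums for illustration; the real types are `GenerateContentResponse.FinishReason` on the Gemini side and LangChain4j's `FinishReason`:

```java
// Stand-in enums so the sketch is self-contained; the real mapper operates on
// GenerateContentResponse.FinishReason and dev.langchain4j.model.output.FinishReason.
enum GeminiFinishReason { STOP, MAX_TOKENS, SAFETY, RECITATION, OTHER, UNSPECIFIED }

enum Lc4jFinishReason { STOP, LENGTH, CONTENT_FILTER, OTHER }

public class FinishReasonMappingSketch {

    // The mapping table, expressed as a switch expression.
    static Lc4jFinishReason map(GeminiFinishReason reason) {
        return switch (reason) {
            case STOP -> Lc4jFinishReason.STOP;
            case MAX_TOKENS -> Lc4jFinishReason.LENGTH;
            case SAFETY, RECITATION -> Lc4jFinishReason.CONTENT_FILTER;
            default -> Lc4jFinishReason.OTHER; // OTHER and all remaining values
        };
    }

    public static void main(String[] args) {
        System.out.println(map(GeminiFinishReason.MAX_TOKENS));  // LENGTH
        System.out.println(map(GeminiFinishReason.RECITATION)); // CONTENT_FILTER
    }
}
```

Collapsing `SAFETY` and `RECITATION` into `CONTENT_FILTER` loses the distinction between the two, so callers that need it should inspect the original Gemini finish reason before mapping.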
REST client filter that adds Google Cloud authentication to Gemini API requests.
/**
* REST client filter for Google Cloud authentication.
* Implements ResteasyReactiveClientRequestFilter for Quarkus REST client.
*/
public class ModelAuthProviderFilter implements ResteasyReactiveClientRequestFilter {
/**
* Creates an authentication filter for a specific model.
*
* @param modelId The Gemini model identifier
*/
public ModelAuthProviderFilter(String modelId);
/**
* Filters the request to add authentication headers.
*
* @param requestContext The REST client request context
*/
@Override
public void filter(ResteasyReactiveClientRequestContext requestContext);
}

Usage Example:
// Register filter with REST client
@RegisterRestClient
@RegisterProvider(ModelAuthProviderFilter.class)
public interface GeminiRestApi {
@POST
@Path("/v1/models/{model}:generateContent")
GenerateContentResponse generateContent(
@PathParam("model") String model,
GenerateContentRequest request
);
}
// The filter automatically adds authentication headers
// based on Google Cloud credentials (environment or service account)

Here's how all utilities work together in a complete implementation:
public class MyGeminiChatModel extends GeminiChatLanguageModel {
private final GeminiRestApi restApi;
public MyGeminiChatModel(String apiKey) {
super(
"gemini-1.5-pro",
0.7, 1024, 40, 0.9,
null, Collections.emptyList(),
null, false, false
);
this.restApi = createRestApi(apiKey);
}
@Override
protected GenerateContentResponse generateContext(GenerateContentRequest request) {
return restApi.generateContent("gemini-1.5-pro", request);
}
@Override
public ChatResponse doChat(ChatRequest chatRequest) {
// Step 1: Map LangChain4j to Gemini format using ContentMapper
GenerateContentRequest geminiRequest = ContentMapper.map(
chatRequest.messages(),
chatRequest.toolSpecifications(),
buildGenerationConfig(),
modelId,
useGoogleSearch
);
// Step 2: Call Gemini API
GenerateContentResponse geminiResponse = generateContext(geminiRequest);
// Step 3: Extract data using GenerateContentResponseHandler
String text = GenerateContentResponseHandler.getText(geminiResponse);
TokenUsage tokenUsage = GenerateContentResponseHandler.getTokenUsage(
geminiResponse.usageMetadata()
);
GenerateContentResponse.FinishReason geminiFinishReason =
GenerateContentResponseHandler.getFinishReason(geminiResponse);
// Step 4: Map finish reason using FinishReasonMapper
FinishReason finishReason = FinishReasonMapper.map(geminiFinishReason);
// Step 5: Check for tool calls
List<ToolExecutionRequest> toolRequests =
GenerateContentResponseHandler.getToolExecutionRequests(geminiResponse);
// Step 6: Build LangChain4j response
AiMessage aiMessage;
if (!toolRequests.isEmpty()) {
aiMessage = AiMessage.from(toolRequests);
} else {
aiMessage = AiMessage.from(text);
}
return ChatResponse.builder()
.aiMessage(aiMessage)
.tokenUsage(tokenUsage)
.finishReason(finishReason)
.build();
}
private GenerationConfig buildGenerationConfig() {
return GenerationConfig.builder()
.temperature(temperature)
.maxOutputTokens(maxOutputTokens)
.topK(topK)
.topP(topP)
.build();
}
private GeminiRestApi createRestApi(String apiKey) {
// Create REST client with auth filter
// ...
}
}

Call getToolExecutionRequests() before getText() when tools are enabled, since a tool-calling response may carry no text. Handle response errors explicitly:

try {
String text = GenerateContentResponseHandler.getText(response);
// Check finish reason for issues
GenerateContentResponse.FinishReason finishReason =
GenerateContentResponseHandler.getFinishReason(response);
if (finishReason == GenerateContentResponse.FinishReason.SAFETY) {
throw new RuntimeException("Response blocked by safety filters");
} else if (finishReason == GenerateContentResponse.FinishReason.MAX_TOKENS) {
logger.warn("Response truncated due to token limit");
}
} catch (Exception e) {
logger.error("Error processing response", e);
throw e;
}

These utilities are shared by the Quarkus LangChain4j Gemini extensions, which build their chat and embedding models on this common module.
Install with Tessl CLI
npx tessl i tessl/maven-io-quarkiverse-langchain4j--quarkus-langchain4j-gemini-common