# OpenAI-Compatible Model Factory for the Embabel Agent Framework
## Options Converters

Options converters transform portable `LlmOptions` (framework-agnostic) into OpenAI-specific `OpenAiChatOptions`. Different converters support different model capabilities.
### Choosing a Converter

```
Are you using GPT-5 models?
├─ YES → Use Gpt5ChatOptionsConverter
└─ NO  → Do you need explicit control over all parameters?
   ├─ YES → Use StandardOpenAiOptionsConverter
   └─ NO  → Use OpenAiChatOptionsConverter (default)
```
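The decision tree above can be sketched as a small selection function. This is an illustrative sketch only: `pickConverter`, the `Converter` enum, and the model-name prefix heuristic are hypothetical, not part of the framework API.

```kotlin
// Illustrative only: pickConverter and Converter are hypothetical names,
// not part of the Embabel API. Prefix matching on the model name is an assumption.
enum class Converter { GPT5, STANDARD, DEFAULT }

fun pickConverter(modelName: String, needExplicitControl: Boolean): Converter =
    when {
        modelName.startsWith("gpt-5") -> Converter.GPT5  // GPT-5 models
        needExplicitControl -> Converter.STANDARD        // explicit parameter control
        else -> Converter.DEFAULT                        // safe default
    }

fun main() {
    println(pickConverter("gpt-5-turbo", false)) // GPT5
    println(pickConverter("gpt-4", true))        // STANDARD
    println(pickConverter("gpt-3.5-turbo", false)) // DEFAULT
}
```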
### OpenAiChatOptionsConverter (Default)

A safe default that works with most OpenAI models.

```kotlin
/**
 * Default options converter for OpenAI models.
 * Safe default that works with most OpenAI models.
 * Some models may not support all options.
 */
object OpenAiChatOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}
```

**Use when:**

- You want a safe default that works with most OpenAI models
- You are unsure which parameters the target model supports
**Behavior:** maps `LlmOptions` fields to `OpenAiChatOptions`.

**Example:**
```kotlin
// Uses OpenAiChatOptionsConverter by default
val service = factory.openAiCompatibleLlm(
    model = "gpt-3.5-turbo",
    pricingModel = PricingModel.usdPer1MTokens(0.5, 1.5),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2021, 9, 1)
    // optionsConverter not specified = uses OpenAiChatOptionsConverter
)
```

### StandardOpenAiOptionsConverter

Explicit support for all standard OpenAI parameters.
```kotlin
/**
 * Options converter for OpenAI models that support all parameters.
 * Explicitly supports: temperature, topP, maxTokens, presencePenalty, frequencyPenalty.
 */
object StandardOpenAiOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}
```

**Use when:** you need explicit, predictable handling of every standard parameter (e.g. GPT-4, GPT-4 Turbo).

**Supported parameters:** temperature, topP, maxTokens, presencePenalty, frequencyPenalty.

**Example:**
```kotlin
val service = factory.openAiCompatibleLlm(
    model = "gpt-4",
    pricingModel = PricingModel.usdPer1MTokens(30.0, 60.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2023, 4, 1),
    optionsConverter = StandardOpenAiOptionsConverter
)

// When calling the service, all parameters will be honored
val options = LlmOptions(
    temperature = 0.7,
    topP = 0.9,
    maxTokens = 1000,
    presencePenalty = 0.5,
    frequencyPenalty = 0.5
)
```

### Gpt5ChatOptionsConverter

A special converter for GPT-5 models, which do not support temperature adjustment.
```kotlin
/**
 * Options converter for GPT-5 models that don't support temperature adjustment.
 * Logs a warning if temperature is set to a non-default value (anything other than 1.0).
 * Supports: topP, maxTokens, presencePenalty, frequencyPenalty.
 * Does NOT support: temperature
 */
object Gpt5ChatOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}
```

**Use when:** you are targeting GPT-5 models.

**Supported parameters:** topP, maxTokens, presencePenalty, frequencyPenalty.

**NOT supported:** temperature (a warning is logged if it is set to anything other than 1.0).

**Example:**
```kotlin
val gpt5Service = factory.openAiCompatibleLlm(
    model = "gpt-5-turbo",
    pricingModel = PricingModel.usdPer1MTokens(10.0, 30.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2024, 10, 1),
    optionsConverter = Gpt5ChatOptionsConverter // Required for GPT-5
)

// If you try to set temperature != 1.0, a warning is logged
val options = LlmOptions(
    temperature = 0.7, // WARNING: This will be ignored and logged
    topP = 0.9,        // This works
    maxTokens = 1000   // This works
)
```

### Comparison

| Feature | OpenAiChatOptionsConverter | StandardOpenAiOptionsConverter | Gpt5ChatOptionsConverter |
|---|---|---|---|
| Use case | Safe default | Explicit control | GPT-5 models |
| Temperature | ✓ | ✓ | ✗ (warns if != 1.0) |
| TopP | ✓ | ✓ | ✓ |
| MaxTokens | ✓ | ✓ | ✓ |
| PresencePenalty | ✓ | ✓ | ✓ |
| FrequencyPenalty | ✓ | ✓ | ✓ |
| Warnings | None | None | Yes (for temperature) |
| Recommended for | Most models | GPT-4, GPT-4 Turbo | GPT-5 |
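To make the temperature row concrete, here is a self-contained sketch of the documented GPT-5 behavior: temperature is dropped with a warning rather than passed through. `LlmOpts`, `ChatOpts`, and `gpt5Convert` are stand-in names for illustration, not the framework's `LlmOptions`/`OpenAiChatOptions` types.

```kotlin
// Stand-in types for illustration; the real classes are LlmOptions and
// OpenAiChatOptions from the framework.
data class LlmOpts(val temperature: Double? = null, val topP: Double? = null)
data class ChatOpts(val temperature: Double? = null, val topP: Double? = null)

// Mirrors the documented GPT-5 behavior: warn on non-default temperature, then drop it.
fun gpt5Convert(opts: LlmOpts): ChatOpts {
    if (opts.temperature != null && opts.temperature != 1.0) {
        System.err.println("WARN: GPT-5 ignores temperature=${opts.temperature}")
    }
    return ChatOpts(temperature = null, topP = opts.topP) // temperature intentionally dropped
}

fun main() {
    println(gpt5Convert(LlmOpts(temperature = 0.7, topP = 0.9)))
    // temperature is dropped; topP is passed through
}
```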
### Custom Converters

You can create your own converter by implementing the `OptionsConverter` interface:

```kotlin
fun interface OptionsConverter<O : ChatOptions> {
    fun convertOptions(options: LlmOptions): O
}
```

**Example - Custom converter with a default maxTokens:**
```kotlin
object CustomConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions {
        return OpenAiChatOptions.builder()
            .withTemperature(options.temperature ?: 0.7)
            .withTopP(options.topP)
            .withMaxTokens(options.maxTokens ?: 2000) // Default to 2000 if not specified
            .withPresencePenalty(options.presencePenalty)
            .withFrequencyPenalty(options.frequencyPenalty)
            .build()
    }
}
```
```kotlin
// Use it
val service = factory.openAiCompatibleLlm(
    model = "gpt-4",
    pricingModel = PricingModel.usdPer1MTokens(30.0, 60.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2023, 4, 1),
    optionsConverter = CustomConverter
)
```

**Example - Converter that caps temperature:**
```kotlin
object CappedTemperatureConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions {
        val temperature = options.temperature?.coerceIn(0.0, 1.0) // Clamp to [0.0, 1.0]
        return OpenAiChatOptions.builder()
            .withTemperature(temperature)
            .withTopP(options.topP)
            .withMaxTokens(options.maxTokens)
            .withPresencePenalty(options.presencePenalty)
            .withFrequencyPenalty(options.frequencyPenalty)
            .build()
    }
}
```

### Java Interoperability

In Java, access converters using `.INSTANCE`:
```java
import com.embabel.agent.openai.OpenAiChatOptionsConverter;
import com.embabel.agent.openai.Gpt5ChatOptionsConverter;
import com.embabel.agent.openai.StandardOpenAiOptionsConverter;

// Default converter
LlmService<?> service1 = factory.openAiCompatibleLlm(
    "gpt-3.5-turbo",
    PricingModel.usdPer1MTokens(0.5, 1.5),
    "OpenAI",
    LocalDate.of(2021, 9, 1),
    OpenAiChatOptionsConverter.INSTANCE, // Note: .INSTANCE for Java
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);

// Standard converter
LlmService<?> service2 = factory.openAiCompatibleLlm(
    "gpt-4",
    PricingModel.usdPer1MTokens(30.0, 60.0),
    "OpenAI",
    LocalDate.of(2023, 4, 1),
    StandardOpenAiOptionsConverter.INSTANCE,
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);

// GPT-5 converter
LlmService<?> service3 = factory.openAiCompatibleLlm(
    "gpt-5-turbo",
    PricingModel.usdPer1MTokens(10.0, 30.0),
    "OpenAI",
    LocalDate.of(2024, 10, 1),
    Gpt5ChatOptionsConverter.INSTANCE,
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);
```

### Troubleshooting

**Issue: "Parameter not supported" error**
Solution: switch to `OpenAiChatOptionsConverter` (more forgiving) or check the model documentation.

**Issue: GPT-5 temperature warnings**

Solution: use `Gpt5ChatOptionsConverter`.

**Issue: Parameters being ignored silently**

Likely cause: `OpenAiChatOptionsConverter` used with a model that doesn't support all parameters. Solution: use `StandardOpenAiOptionsConverter` for explicit behavior, or check the model's capabilities.

### Installation

Install with the Tessl CLI:
```shell
npx tessl i tessl/maven-com-embabel-agent--embabel-agent-openai
```