tessl/maven-com-embabel-agent--embabel-agent-openai

OpenAI compatible model factory for the Embabel Agent Framework

Options Converters

Options converters transform portable, framework-agnostic LlmOptions into OpenAI-specific OpenAiChatOptions. Different converters support different model capabilities.

Quick Decision Guide

Are you using GPT-5 models?
├─ YES → Use Gpt5ChatOptionsConverter
└─ NO → Do you need explicit control over all parameters?
    ├─ YES → Use StandardOpenAiOptionsConverter
    └─ NO → Use OpenAiChatOptionsConverter (default)
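
The decision tree above can be encoded as a small helper. This is a hypothetical illustration only: it returns converter names as strings so it stands alone; in real code you would return the converter objects themselves.

```kotlin
// Hypothetical helper encoding the decision tree above.
// Returns the converter name as a string purely for illustration.
fun chooseConverter(model: String, needExplicitControl: Boolean): String = when {
    model.startsWith("gpt-5") -> "Gpt5ChatOptionsConverter"
    needExplicitControl -> "StandardOpenAiOptionsConverter"
    else -> "OpenAiChatOptionsConverter"
}

fun main() {
    println(chooseConverter("gpt-5-turbo", needExplicitControl = false))  // Gpt5ChatOptionsConverter
    println(chooseConverter("gpt-4", needExplicitControl = true))         // StandardOpenAiOptionsConverter
    println(chooseConverter("gpt-3.5-turbo", needExplicitControl = false)) // OpenAiChatOptionsConverter
}
```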

OpenAiChatOptionsConverter (Default)

Safe default that works with most OpenAI models.

/**
 * Default options converter for OpenAI models.
 * Safe default that works with most OpenAI models.
 * Some models may not support all options.
 */
object OpenAiChatOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}

Use when:

  • You're using standard OpenAI models (GPT-3.5, GPT-4, GPT-4 Turbo)
  • You want a safe default without worrying about parameter support
  • You're prototyping or getting started

Behavior:

  • Converts all LlmOptions fields to OpenAiChatOptions
  • Some models may silently ignore unsupported parameters
  • No warnings or errors for unsupported parameters

Example:

// Uses OpenAiChatOptionsConverter by default
val service = factory.openAiCompatibleLlm(
    model = "gpt-3.5-turbo",
    pricingModel = PricingModel.usdPer1MTokens(0.5, 1.5),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2021, 9, 1)
    // optionsConverter not specified = uses OpenAiChatOptionsConverter
)

StandardOpenAiOptionsConverter

Explicit support for all standard OpenAI parameters.

/**
 * Options converter for OpenAI models that support all parameters.
 * Explicitly supports: temperature, topP, maxTokens, presencePenalty, frequencyPenalty.
 */
object StandardOpenAiOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}

Use when:

  • You're certain your model supports all OpenAI parameters
  • You want explicit, predictable parameter handling
  • You're using GPT-4 or GPT-4 Turbo models
  • You need fine control over generation parameters

Supported parameters:

  • temperature (0.0-2.0): Controls randomness. Lower = more deterministic, higher = more creative
  • topP (0.0-1.0): Nucleus sampling. Alternative to temperature
  • maxTokens: Maximum tokens to generate in the response
  • presencePenalty (-2.0 to 2.0): Positive values penalize tokens that have already appeared, encouraging the model to introduce new topics
  • frequencyPenalty (-2.0 to 2.0): Positive values penalize tokens in proportion to how often they have appeared, reducing repetition

Example:

val service = factory.openAiCompatibleLlm(
    model = "gpt-4",
    pricingModel = PricingModel.usdPer1MTokens(30.0, 60.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2023, 4, 1),
    optionsConverter = StandardOpenAiOptionsConverter
)

// When calling the service, all parameters will be honored
val options = LlmOptions(
    temperature = 0.7,
    topP = 0.9,
    maxTokens = 1000,
    presencePenalty = 0.5,
    frequencyPenalty = 0.5
)

Gpt5ChatOptionsConverter

Special converter for GPT-5 models that don't support temperature adjustment.

/**
 * Options converter for GPT-5 models that don't support temperature adjustment.
 * Logs a warning if temperature is set to a non-default value (anything other than 1.0).
 * Supports: topP, maxTokens, presencePenalty, frequencyPenalty.
 * Does NOT support: temperature
 */
object Gpt5ChatOptionsConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions
}

Use when:

  • You're using GPT-5 models
  • You need to avoid temperature-related errors or warnings

Supported parameters:

  • topP (0.0-1.0): Nucleus sampling
  • maxTokens: Maximum tokens to generate
  • presencePenalty (-2.0 to 2.0): Positive values encourage new topics
  • frequencyPenalty (-2.0 to 2.0): Positive values reduce repetition

NOT supported:

  • temperature: GPT-5 models do not support temperature adjustment. If you set temperature to anything other than 1.0, a warning is logged and the parameter is ignored.

Example:

val gpt5Service = factory.openAiCompatibleLlm(
    model = "gpt-5-turbo",
    pricingModel = PricingModel.usdPer1MTokens(10.0, 30.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2024, 10, 1),
    optionsConverter = Gpt5ChatOptionsConverter  // Required for GPT-5
)

// If you try to set temperature != 1.0, a warning is logged
val options = LlmOptions(
    temperature = 0.7,  // WARNING: This will be ignored and logged
    topP = 0.9,         // This works
    maxTokens = 1000    // This works
)
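
The documented temperature contract can be sketched in isolation. This is an illustration of the behavior described above, not the actual Gpt5ChatOptionsConverter implementation; the function and warning callback are hypothetical.

```kotlin
// Sketch of the documented GPT-5 contract: a temperature other than 1.0
// is reported via a warning callback and dropped; 1.0 or null pass through.
fun sanitizeGpt5Temperature(requested: Double?, warn: (String) -> Unit): Double? =
    if (requested != null && requested != 1.0) {
        warn("GPT-5 models do not support temperature; ignoring $requested")
        null
    } else {
        requested
    }

fun main() {
    val warnings = mutableListOf<String>()
    println(sanitizeGpt5Temperature(0.7) { warnings.add(it) })  // null, one warning logged
    println(sanitizeGpt5Temperature(1.0) { warnings.add(it) })  // 1.0, no warning
    println(warnings.size)  // 1
}
```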

Comparison Table

| Feature | OpenAiChatOptionsConverter | StandardOpenAiOptionsConverter | Gpt5ChatOptionsConverter |
|---|---|---|---|
| Use case | Safe default | Explicit control | GPT-5 models |
| Temperature | ✓ | ✓ | ✗ (warns if != 1.0) |
| TopP | ✓ | ✓ | ✓ |
| MaxTokens | ✓ | ✓ | ✓ |
| PresencePenalty | ✓ | ✓ | ✓ |
| FrequencyPenalty | ✓ | ✓ | ✓ |
| Warnings | None | None | Yes (for temperature) |
| Recommended for | Most models | GPT-4, GPT-4 Turbo | GPT-5 |

Creating Custom Converters

You can create your own converter by implementing the OptionsConverter interface:

fun interface OptionsConverter<O : ChatOptions> {
    fun convertOptions(options: LlmOptions): O
}

Example - Custom converter with default maxTokens:

object CustomConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions {
        return OpenAiChatOptions.builder()
            .withTemperature(options.temperature ?: 0.7)
            .withTopP(options.topP)
            .withMaxTokens(options.maxTokens ?: 2000)  // Default to 2000 if not specified
            .withPresencePenalty(options.presencePenalty)
            .withFrequencyPenalty(options.frequencyPenalty)
            .build()
    }
}

// Use it
val service = factory.openAiCompatibleLlm(
    model = "gpt-4",
    pricingModel = PricingModel.usdPer1MTokens(30.0, 60.0),
    provider = "OpenAI",
    knowledgeCutoffDate = LocalDate.of(2023, 4, 1),
    optionsConverter = CustomConverter
)

Example - Converter that caps temperature:

object CappedTemperatureConverter : OptionsConverter<OpenAiChatOptions> {
    override fun convertOptions(options: LlmOptions): OpenAiChatOptions {
        val temperature = options.temperature?.coerceIn(0.0, 1.0)  // Clamp to the [0.0, 1.0] range

        return OpenAiChatOptions.builder()
            .withTemperature(temperature)
            .withTopP(options.topP)
            .withMaxTokens(options.maxTokens)
            .withPresencePenalty(options.presencePenalty)
            .withFrequencyPenalty(options.frequencyPenalty)
            .build()
    }
}
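
The clamping step above relies on Kotlin's standard coerceIn, which can be exercised in plain Kotlin without any framework types (capTemperature is a hypothetical name for just that step):

```kotlin
// coerceIn clamps a value into the given range; the safe call (?.) keeps null as null.
fun capTemperature(t: Double?): Double? = t?.coerceIn(0.0, 1.0)

fun main() {
    println(capTemperature(1.7))   // 1.0 (clamped down)
    println(capTemperature(0.3))   // 0.3 (unchanged)
    println(capTemperature(null))  // null (passed through)
}
```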

Java Usage

In Java, access converters using .INSTANCE:

import com.embabel.agent.openai.OpenAiChatOptionsConverter;
import com.embabel.agent.openai.Gpt5ChatOptionsConverter;
import com.embabel.agent.openai.StandardOpenAiOptionsConverter;

// Default converter
LlmService<?> service1 = factory.openAiCompatibleLlm(
    "gpt-3.5-turbo",
    PricingModel.usdPer1MTokens(0.5, 1.5),
    "OpenAI",
    LocalDate.of(2021, 9, 1),
    OpenAiChatOptionsConverter.INSTANCE,  // Note: .INSTANCE for Java
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);

// Standard converter
LlmService<?> service2 = factory.openAiCompatibleLlm(
    "gpt-4",
    PricingModel.usdPer1MTokens(30.0, 60.0),
    "OpenAI",
    LocalDate.of(2023, 4, 1),
    StandardOpenAiOptionsConverter.INSTANCE,
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);

// GPT-5 converter
LlmService<?> service3 = factory.openAiCompatibleLlm(
    "gpt-5-turbo",
    PricingModel.usdPer1MTokens(10.0, 30.0),
    "OpenAI",
    LocalDate.of(2024, 10, 1),
    Gpt5ChatOptionsConverter.INSTANCE,
    RetryUtils.DEFAULT_RETRY_TEMPLATE
);

Common Issues

Issue: "Parameter not supported" error

  • Cause: Model doesn't support a parameter you're passing
  • Solution: Switch to OpenAiChatOptionsConverter (more forgiving) or check model documentation

Issue: GPT-5 temperature warnings

  • Cause: Using wrong converter for GPT-5 models
  • Solution: Use Gpt5ChatOptionsConverter

Issue: Parameters being ignored silently

  • Cause: Using OpenAiChatOptionsConverter with a model that doesn't support all parameters
  • Solution: Switch to StandardOpenAiOptionsConverter for explicit behavior, or check model capabilities

Install with Tessl CLI

npx tessl i tessl/maven-com-embabel-agent--embabel-agent-openai
