tessl/maven-dev-langchain4j--langchain4j-azure-open-ai

LangChain4j integration for Azure OpenAI providing chat, streaming, embeddings, image generation, audio transcription, and token counting capabilities

Language Models (Deprecated)

Language models provide text completion capabilities using models like GPT-3.5-turbo-instruct. These models are deprecated in favor of Chat Models, which provide better performance and more features.

Migration Notice

⚠️ DEPRECATED: Use AzureOpenAiChatModel instead for all new development.

Why deprecated:

  • Chat models have better performance
  • Chat models support function calling
  • Chat models support structured output
  • Chat models support multi-turn conversations
  • Language models rely on the legacy OpenAI Completions API

Migration path:

// Old: Language Model
AzureOpenAiLanguageModel old = AzureOpenAiLanguageModel.builder()
    .endpoint(endpoint)
    .apiKey(apiKey)
    .deploymentName("gpt-35-turbo-instruct")
    .build();
String result = old.generate(prompt);

// New: Chat Model (recommended)
AzureOpenAiChatModel chatModel = AzureOpenAiChatModel.builder()
    .endpoint(endpoint)
    .apiKey(apiKey)
    .deploymentName("gpt-35-turbo")  // Use a chat deployment
    .build();
String result = chatModel.generate(prompt);  // Same call shape

Imports

import dev.langchain4j.model.azure.AzureOpenAiLanguageModel;
import dev.langchain4j.model.azure.AzureOpenAiStreamingLanguageModel;
import dev.langchain4j.model.azure.AzureOpenAiLanguageModelName;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.language.StreamingResponseHandler;

AzureOpenAiLanguageModel

package dev.langchain4j.model.azure;

/**
 * @deprecated Use AzureOpenAiChatModel instead.
 * Synchronous language completion model.
 * Thread-safe: Yes
 */
@Deprecated
class AzureOpenAiLanguageModel implements dev.langchain4j.model.language.LanguageModel {
    static Builder builder();

    /**
     * Generates text completion for prompt.
     * @param prompt Input text
     * @return Response with completion and token usage
     */
    dev.langchain4j.model.output.Response<String> generate(String prompt);

    class Builder {
        // Mandatory
        Builder endpoint(String endpoint);
        Builder serviceVersion(String serviceVersion);
        Builder deploymentName(String deploymentName);

        // Authentication
        Builder apiKey(String apiKey);
        Builder nonAzureApiKey(String apiKey);
        Builder tokenCredential(com.azure.core.credential.TokenCredential credential);

        // Generation parameters
        /**
         * @param maxTokens 1 to model max
         * @default null
         */
        Builder maxTokens(Integer maxTokens);

        /**
         * @param temperature 0.0 to 2.0
         * @default 1.0
         */
        Builder temperature(Double temperature);

        /**
         * @param topP 0.0 to 1.0
         * @default 1.0
         */
        Builder topP(Double topP);

        /**
         * @param logitBias Token ID to bias map
         * @default null
         */
        Builder logitBias(java.util.Map<String, Integer> logitBias);

        Builder user(String user);

        /**
         * Number of log probabilities to return.
         * @param logprobs 0 to 5
         * @default null
         */
        Builder logprobs(Integer logprobs);

        /**
         * Echo prompt in response.
         * @param echo true to include prompt
         * @default false
         */
        Builder echo(Boolean echo);

        Builder stop(java.util.List<String> stop);

        /**
         * @param presencePenalty -2.0 to 2.0
         * @default 0.0
         */
        Builder presencePenalty(Double presencePenalty);

        /**
         * @param frequencyPenalty -2.0 to 2.0
         * @default 0.0
         */
        Builder frequencyPenalty(Double frequencyPenalty);

        /**
         * Generate N completions and return best.
         * NOT available for streaming.
         * @param bestOf 1 to 20
         * @default 1
         */
        Builder bestOf(Integer bestOf);

        // HTTP configuration
        Builder timeout(java.time.Duration timeout);
        Builder maxRetries(Integer maxRetries);
        Builder retryOptions(com.azure.core.http.policy.RetryOptions retryOptions);
        Builder proxyOptions(com.azure.core.http.ProxyOptions proxyOptions);
        Builder httpClientProvider(com.azure.core.http.HttpClientProvider httpClientProvider);
        Builder openAIClient(com.azure.ai.openai.OpenAIClient client);
        Builder customHeaders(java.util.Map<String, String> customHeaders);
        Builder userAgentSuffix(String userAgentSuffix);
        Builder logRequestsAndResponses(boolean logRequestsAndResponses);

        AzureOpenAiLanguageModel build();
    }
}
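The builder above exposes many generation parameters that the later examples do not exercise. The sketch below builds a fully parameterized (deprecated) model, assuming the endpoint and key are provided via the environment variables `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY`; the parameter values are illustrative, not recommendations.

```java
import dev.langchain4j.model.azure.AzureOpenAiLanguageModel;
import java.time.Duration;
import java.util.List;

public class BuilderSketch {
    public static void main(String[] args) {
        AzureOpenAiLanguageModel model = AzureOpenAiLanguageModel.builder()
            .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"))
            .apiKey(System.getenv("AZURE_OPENAI_KEY"))
            .deploymentName("gpt-35-turbo-instruct")
            .temperature(0.7)           // 0.0 to 2.0
            .topP(0.95)                 // nucleus sampling, 0.0 to 1.0
            .maxTokens(256)
            .stop(List.of("\n\n"))      // stop generation at a blank line
            .frequencyPenalty(0.5)      // discourage verbatim repetition
            .bestOf(3)                  // server-side: generate 3, return best
            .timeout(Duration.ofSeconds(30))
            .maxRetries(2)
            .build();
    }
}
```

Note that `bestOf(3)` triples token consumption on the server side, and is unavailable on the streaming variant below.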

AzureOpenAiStreamingLanguageModel

/**
 * @deprecated Use AzureOpenAiStreamingChatModel instead.
 * Streaming language completion model.
 * Thread-safe: Yes
 */
@Deprecated
class AzureOpenAiStreamingLanguageModel implements dev.langchain4j.model.language.StreamingLanguageModel {
    static Builder builder();

    /**
     * Generates streaming completion.
     * @param prompt Input text
     * @param handler Handler for tokens and completion
     */
    void generate(String prompt,
                 dev.langchain4j.model.language.StreamingResponseHandler<String> handler);

    /**
     * Builder identical to AzureOpenAiLanguageModel.Builder except:
     * - No bestOf() method (not supported for streaming)
     * - openAIAsyncClient() instead of openAIClient()
     */
    class Builder {
        // Same as AzureOpenAiLanguageModel.Builder
        // Except: no bestOf(), uses openAIAsyncClient()

        Builder openAIAsyncClient(com.azure.ai.openai.OpenAIAsyncClient client);

        AzureOpenAiStreamingLanguageModel build();
    }
}

Model Names

/**
 * @deprecated Use AzureOpenAiChatModelName instead
 */
@Deprecated
enum AzureOpenAiLanguageModelName {
    /** gpt-35-turbo-instruct: Instruction-following variant */
    GPT_3_5_TURBO_INSTRUCT,
    /** gpt-35-turbo-instruct-0914: September 2023 snapshot */
    GPT_3_5_TURBO_INSTRUCT_0914,
    /** davinci-002: Legacy Davinci model */
    TEXT_DAVINCI_002,
    /** davinci-002-1: Legacy Davinci, version 1 */
    TEXT_DAVINCI_002_1;

    String modelName();
    String modelType();
    String modelVersion();
    String toString();
}
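The enum constants can stand in for hard-coded model-name strings. A sketch, assuming the Azure deployment was given the same name as the underlying model (deployment names are chosen by you in the Azure portal, so this mapping is an assumption, not a guarantee):

```java
import dev.langchain4j.model.azure.AzureOpenAiLanguageModel;
import dev.langchain4j.model.azure.AzureOpenAiLanguageModelName;

public class EnumNameSketch {
    public static void main(String[] args) {
        AzureOpenAiLanguageModelName name =
            AzureOpenAiLanguageModelName.GPT_3_5_TURBO_INSTRUCT;
        System.out.println(name.modelType());    // model family/type string
        System.out.println(name.modelVersion()); // snapshot version, if any

        // toString() yields the wire-format model name
        AzureOpenAiLanguageModel model = AzureOpenAiLanguageModel.builder()
            .endpoint(System.getenv("AZURE_OPENAI_ENDPOINT"))
            .apiKey(System.getenv("AZURE_OPENAI_KEY"))
            .deploymentName(name.toString())
            .build();
    }
}
```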

Basic Usage (Deprecated)

// DO NOT USE - Deprecated
AzureOpenAiLanguageModel model = AzureOpenAiLanguageModel.builder()
    .endpoint("https://your-resource.openai.azure.com/")
    .apiKey("your-api-key")
    .deploymentName("gpt-35-turbo-instruct")
    .serviceVersion("2024-02-15-preview")
    .temperature(0.7)
    .build();

Response<String> response = model.generate("Once upon a time");
String completion = response.content();
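Beyond content(), the Response wrapper exposes token accounting and the finish reason. A sketch, assuming `model` is the instance built above and that the standard langchain4j output types are on the classpath:

```java
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.output.TokenUsage;
import dev.langchain4j.model.output.FinishReason;

Response<String> response = model.generate("Once upon a time");
System.out.println(response.content());

TokenUsage usage = response.tokenUsage();
System.out.println("prompt tokens:     " + usage.inputTokenCount());
System.out.println("completion tokens: " + usage.outputTokenCount());
System.out.println("total tokens:      " + usage.totalTokenCount());

// Why generation stopped, e.g. a stop sequence vs. the maxTokens limit
FinishReason reason = response.finishReason();
```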

Migration Examples

Simple Generation

// Old (deprecated)
AzureOpenAiLanguageModel oldModel = AzureOpenAiLanguageModel.builder()
    .endpoint(endpoint)
    .apiKey(apiKey)
    .deploymentName("gpt-35-turbo-instruct")
    .temperature(0.7)
    .maxTokens(100)
    .build();
String result = oldModel.generate("Complete this: Hello").content();

// New (recommended)
AzureOpenAiChatModel newModel = AzureOpenAiChatModel.builder()
    .endpoint(endpoint)
    .apiKey(apiKey)
    .deploymentName("gpt-35-turbo")
    .temperature(0.7)
    .maxCompletionTokens(100)
    .build();
String result = newModel.generate("Complete this: Hello");

Streaming

// Old (deprecated)
AzureOpenAiStreamingLanguageModel oldStreaming =
    AzureOpenAiStreamingLanguageModel.builder()
        .endpoint(endpoint)
        .apiKey(apiKey)
        .deploymentName("gpt-35-turbo-instruct")
        .build();

oldStreaming.generate("Tell me a story",
    new StreamingResponseHandler<String>() {
        public void onNext(String token) { System.out.print(token); }
        public void onComplete(Response<String> response) {}
        public void onError(Throwable error) {}
    });

// New (recommended)
AzureOpenAiStreamingChatModel newStreaming =
    AzureOpenAiStreamingChatModel.builder()
        .endpoint(endpoint)
        .apiKey(apiKey)
        .deploymentName("gpt-35-turbo")
        .build();

newStreaming.generate("Tell me a story",
    new StreamingResponseHandler<AiMessage>() {
        public void onNext(String token) { System.out.print(token); }
        public void onComplete(Response<AiMessage> response) {}
        public void onError(Throwable error) {}
    });

Why Migrate to Chat Models

Benefits:

  1. Better performance: Chat models are optimized and faster
  2. Function calling: Support for tool use and structured workflows
  3. Multi-turn conversations: Native conversation support
  4. Structured output: JSON schema validation
  5. Vision support: GPT-4 Vision for multimodal inputs
  6. Future-proof: Active development and new features
  7. Better prompting: system messages give finer control over model behavior

Low migration cost:

  • Same API shape (a generate() call)
  • Largely the same builder parameters (temperature, topP, penalties, stop sequences)
  • Comparable latency and pricing for equivalent model tiers

Error Handling

try {
    Response<String> response = model.generate(prompt);
} catch (dev.langchain4j.exception.ContentFilteredException e) {
    System.err.println("Content filtered");
} catch (RuntimeException e) {
    // Timeouts and HTTP failures from the Azure SDK surface wrapped in
    // runtime exceptions; the checked TimeoutException cannot be caught
    // directly here, so inspect the cause instead
    if (e.getCause() instanceof java.util.concurrent.TimeoutException) {
        System.err.println("Request timed out");
    } else {
        System.err.println("Error: " + e.getMessage());
    }
}

Recommendation

Do not use language models for new development. Migrate existing code to chat models for better performance and features.

See Chat Models Documentation for complete chat model details.

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j-azure-open-ai
