
tessl/maven-dev-langchain4j--langchain4j-github-models

This package provides a deprecated integration module that lets Java applications interact with GitHub Models through the LangChain4j framework. It offers chat models (synchronous and streaming), embedding models, and support for AI services with tool integration, JSON schema responses, and responsible AI features. The module wraps the Azure AI Inference SDK to provide a unified API for language models hosted on GitHub Models, including chat completion, embedding generation, and content filtering management. As of version 1.10.0, the module is deprecated and scheduled for removal; users are advised to migrate to the langchain4j-openai-official module for enhanced functionality and better integration. The library is designed as a foundational component for LLM-powered Java applications that need GitHub-hosted AI models, offering builder patterns for configuration, proxy options, custom timeouts, and model service versioning.


docs/configuration/network-configuration.md

Network Configuration

Configure timeouts, retries, proxies, and HTTP settings.

Timeout Configuration

Request Timeout

.timeout(Duration timeout)

Maximum time to wait for API response.

Recommended values:

// Fast, simple operations
.timeout(Duration.ofSeconds(30))

// Standard operations
.timeout(Duration.ofSeconds(60))

// Long/complex operations
.timeout(Duration.ofSeconds(120))

// Streaming (needs longer timeout)
.timeout(Duration.ofSeconds(90))

Example:

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .timeout(Duration.ofSeconds(60))
    .build();
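The recommended values above can be centralized in a small helper so each call site doesn't hardcode a duration. The `Timeouts` class and its `Operation` categories below are illustrative, not part of the library:

```java
import java.time.Duration;

public class Timeouts {
    enum Operation { FAST, STANDARD, LONG, STREAMING }

    // Maps operation categories to the recommended timeouts above.
    static Duration forOperation(Operation op) {
        switch (op) {
            case FAST:      return Duration.ofSeconds(30);
            case LONG:      return Duration.ofSeconds(120);
            case STREAMING: return Duration.ofSeconds(90);
            default:        return Duration.ofSeconds(60);
        }
    }

    public static void main(String[] args) {
        // e.g. .timeout(Timeouts.forOperation(Timeouts.Operation.STREAMING))
        System.out.println(forOperation(Operation.STREAMING).getSeconds());
    }
}
```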

Timeout Strategies by Use Case

// Interactive chat (fast response needed)
GitHubModelsChatModel interactiveModel = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o-mini")  // Faster model
    .timeout(Duration.ofSeconds(30))
    .maxTokens(500)  // Limit response length
    .build();

// Background processing (can wait longer)
GitHubModelsChatModel backgroundModel = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .timeout(Duration.ofMinutes(5))
    .build();

// Batch embeddings
GitHubModelsEmbeddingModel embeddingModel = GitHubModelsEmbeddingModel.builder()
    .gitHubToken(token)
    .modelName(GitHubModelsEmbeddingModelName.TEXT_EMBEDDING_3_SMALL)
    .timeout(Duration.ofSeconds(45))
    .build();

Retry Configuration

Max Retries

.maxRetries(Integer maxRetries)

Number of retry attempts on transient failures.

Recommended values:

// Development (fast failure)
.maxRetries(1)

// Production (resilient)
.maxRetries(3)

// Critical operations
.maxRetries(5)

Example:

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .maxRetries(3)
    .timeout(Duration.ofSeconds(60))
    .build();

Retry Behavior

The library automatically retries on:

  • Transient network errors
  • 5xx server errors
  • Timeout errors (within max retry count)

No retry on:

  • 4xx client errors (except 429 rate limit)
  • Authentication failures
  • Validation errors
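The retry rules above can be sketched as a simple status-code predicate. This is an illustration of the policy described, not the library's actual implementation:

```java
public class RetryPolicy {
    // Retry on 5xx server errors and 429 rate limits;
    // do not retry other 4xx client errors.
    static boolean isRetryable(int statusCode) {
        return statusCode == 429 || (statusCode >= 500 && statusCode < 600);
    }

    public static void main(String[] args) {
        System.out.println(isRetryable(503));  // server error: retry
        System.out.println(isRetryable(429));  // rate limit: retry
        System.out.println(isRetryable(401));  // auth failure: no retry
    }
}
```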

Proxy Configuration

HTTP Proxy

.proxyOptions(ProxyOptions proxyOptions)

Configure HTTP proxy for requests.

Example:

import com.azure.core.http.ProxyOptions;
import java.net.InetSocketAddress;

ProxyOptions proxy = new ProxyOptions(
    ProxyOptions.Type.HTTP,
    new InetSocketAddress("proxy.example.com", 8080)
);

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .proxyOptions(proxy)
    .build();

HTTPS Proxy

ProxyOptions proxy = new ProxyOptions(
    ProxyOptions.Type.HTTP,  // Use HTTP type for HTTPS proxy
    new InetSocketAddress("proxy.example.com", 443)
);

Proxy with Authentication

import com.azure.core.http.ProxyOptions;
import java.net.InetSocketAddress;

String username = System.getenv("PROXY_USERNAME");
String password = System.getenv("PROXY_PASSWORD");

ProxyOptions proxy = new ProxyOptions(
    ProxyOptions.Type.HTTP,
    new InetSocketAddress("proxy.example.com", 8080)
);
proxy.setCredentials(username, password);

Proxy from System Properties

public static ProxyOptions getSystemProxy() {
    String proxyHost = System.getProperty("http.proxyHost");
    String proxyPort = System.getProperty("http.proxyPort", "8080");

    if (proxyHost != null) {
        return new ProxyOptions(
            ProxyOptions.Type.HTTP,
            new InetSocketAddress(proxyHost, Integer.parseInt(proxyPort))
        );
    }

    return null;
}

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .proxyOptions(getSystemProxy())
    .build();

Environment-Based Proxy

public static ProxyOptions getProxyForEnvironment() {
    // Corporate network uses proxy
    if (isCorporateNetwork()) {
        return new ProxyOptions(
            ProxyOptions.Type.HTTP,
            new InetSocketAddress("corporate-proxy.internal", 8080)
        );
    }

    // No proxy for other environments
    return null;
}

private static boolean isCorporateNetwork() {
    String network = System.getenv("NETWORK_TYPE");
    return "corporate".equals(network);
}

Custom Headers

Set Custom Headers

.customHeaders(Map<String, String> customHeaders)

Add custom HTTP headers to all requests.

Example:

Map<String, String> headers = new HashMap<>();
headers.put("X-Request-ID", UUID.randomUUID().toString());
headers.put("X-Application", "my-app");
headers.put("X-Environment", "production");

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .customHeaders(headers)
    .build();

Tracking and Observability Headers

Map<String, String> headers = new HashMap<>();
headers.put("X-Trace-ID", getCurrentTraceId());
headers.put("X-User-ID", getCurrentUserId());
headers.put("X-Session-ID", getSessionId());

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .customHeaders(headers)
    .build();

Correlation IDs

public GitHubModelsChatModel createModelWithCorrelation(String correlationId) {
    Map<String, String> headers = Collections.singletonMap(
        "X-Correlation-ID", correlationId
    );

    return GitHubModelsChatModel.builder()
        .gitHubToken(token)
        .modelName("gpt-4o")
        .customHeaders(headers)
        .build();
}

User Agent

User Agent Suffix

.userAgentSuffix(String userAgentSuffix)

Add custom suffix to User-Agent header for identification.

Example:

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .userAgentSuffix("my-app/1.2.3")
    .build();

Application Identification

String appInfo = String.format("%s/%s (%s)",
    "MyApp",
    getAppVersion(),
    System.getProperty("java.version")
);

GitHubModelsChatModel model = GitHubModelsChatModel.builder()
    .gitHubToken(token)
    .modelName("gpt-4o")
    .userAgentSuffix(appInfo)
    .build();

Complete Network Configuration

Production Configuration

import com.azure.core.http.ProxyOptions;
import java.time.Duration;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

public class ProductionModelConfig {
    public static GitHubModelsChatModel createChatModel() {
        // Proxy configuration
        ProxyOptions proxy = null;
        String proxyHost = System.getenv("HTTP_PROXY_HOST");
        if (proxyHost != null) {
            String portValue = System.getenv("HTTP_PROXY_PORT");
            int proxyPort = Integer.parseInt(portValue != null ? portValue : "8080");
            proxy = new ProxyOptions(
                ProxyOptions.Type.HTTP,
                new InetSocketAddress(proxyHost, proxyPort)
            );
        }

        // Custom headers
        Map<String, String> headers = new HashMap<>();
        headers.put("X-Application", "my-app");
        headers.put("X-Environment", "production");

        return GitHubModelsChatModel.builder()
            .gitHubToken(System.getenv("GITHUB_TOKEN"))
            .modelName(GitHubModelsChatModelName.GPT_4_O)
            .timeout(Duration.ofSeconds(60))
            .maxRetries(3)
            .proxyOptions(proxy)
            .customHeaders(headers)
            .userAgentSuffix("my-app/1.0.0")
            .build();
    }
}

Development Configuration

public class DevelopmentModelConfig {
    public static GitHubModelsChatModel createChatModel() {
        return GitHubModelsChatModel.builder()
            .gitHubToken(System.getenv("GITHUB_TOKEN"))
            .modelName("gpt-4o")
            .timeout(Duration.ofMinutes(5))  // Longer for debugging
            .maxRetries(1)  // Fast failure
            .logRequestsAndResponses(true)  // Enable logging
            .build();
    }
}

Handling Network Errors

Timeout Errors

import java.net.SocketTimeoutException;

try {
    ChatResponse response = model.chat(request);
} catch (Exception e) {
    if (e.getCause() instanceof SocketTimeoutException) {
        System.err.println("Request timed out. Consider increasing timeout or using smaller model.");
    }
}

Retry Exhaustion

import com.azure.core.exception.HttpResponseException;

try {
    ChatResponse response = model.chat(request);
} catch (HttpResponseException e) {
    // Thrown once all configured retries are exhausted
    System.err.println("Request failed after retries");
    System.err.println("Status: " + e.getResponse().getStatusCode());
}

Proxy Errors

try {
    ChatResponse response = model.chat(request);
} catch (Exception e) {
    if (e.getMessage() != null && e.getMessage().contains("proxy")) {
        System.err.println("Proxy connection failed. Check proxy configuration.");
    }
}

Best Practices

Set Appropriate Timeouts

// ✅ Good - reasonable timeout
.timeout(Duration.ofSeconds(60))

// ❌ Bad - too short (likely to fail)
.timeout(Duration.ofSeconds(5))

// ❌ Bad - too long (poor user experience)
.timeout(Duration.ofMinutes(30))

Use Retries in Production

// ✅ Good - resilient to transient failures
.maxRetries(3)

// ⚠️ Acceptable for development
.maxRetries(1)

// ❌ Bad - not resilient
.maxRetries(0)

Configure Proxy When Needed

// ✅ Good - conditional proxy setup
ProxyOptions proxy = shouldUseProxy() ? configureProxy() : null;
.proxyOptions(proxy)

// ❌ Bad - hardcoded proxy (breaks outside corporate network)
.proxyOptions(new ProxyOptions(Type.HTTP,
    new InetSocketAddress("proxy.corp", 8080)))

Add Meaningful Headers

// ✅ Good - helpful for debugging and tracking
headers.put("X-Request-ID", generateRequestId());
headers.put("X-Application", "order-service");

// ❌ Bad - generic/unhelpful headers
headers.put("X-Custom", "value");
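A small factory can keep header names consistent across services. The `standardHeaders` helper below is hypothetical, shown only to illustrate the practice:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class RequestHeaders {
    // Builds the tracking headers recommended above:
    // a unique request ID plus application and environment identifiers.
    static Map<String, String> standardHeaders(String appName, String environment) {
        Map<String, String> headers = new HashMap<>();
        headers.put("X-Request-ID", UUID.randomUUID().toString());
        headers.put("X-Application", appName);
        headers.put("X-Environment", environment);
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> h = standardHeaders("order-service", "production");
        System.out.println(h.get("X-Application"));
    }
}
```

Pass the result to `.customHeaders(...)` when building the model.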

Environment-Specific Configurations

Use Different Settings per Environment

public class EnvironmentConfig {
    public static GitHubModelsChatModel createModel() {
        String env = System.getenv("APP_ENV");

        Duration timeout;
        int maxRetries;

        switch (env != null ? env : "dev") {
            case "production":
                timeout = Duration.ofSeconds(60);
                maxRetries = 5;
                break;
            case "staging":
                timeout = Duration.ofSeconds(90);
                maxRetries = 3;
                break;
            default: // development
                timeout = Duration.ofMinutes(5);
                maxRetries = 1;
        }

        return GitHubModelsChatModel.builder()
            .gitHubToken(System.getenv("GITHUB_TOKEN"))
            .modelName("gpt-4o")
            .timeout(timeout)
            .maxRetries(maxRetries)
            .build();
    }
}

See Also

  • Builder Configuration
  • Authentication
  • Advanced Configuration
  • Error Handling

Install with Tessl CLI

npx tessl i tessl/maven-dev-langchain4j--langchain4j-github-models
