This package provides a deprecated integration module that enables Java applications to interact with GitHub Models through the LangChain4j framework. It offers synchronous and streaming chat models, embedding models, and support for AI services with tool integration, JSON schema responses, and responsible AI features. The module wraps the Azure AI Inference SDK to provide a unified API for the language models hosted on GitHub Models, covering chat completion, embedding generation, and content filtering.

As of version 1.10.0, this module is deprecated and scheduled for removal; users should migrate to the langchain4j-openai-official module, which offers enhanced functionality and better integration. The library is designed as a foundational component for LLM-powered Java applications that use GitHub-hosted models, offering builder-based configuration, proxy options, custom timeouts, and model service versioning.
Complete reference of available chat and embedding models with selection guidance.
## Chat Models

### OpenAI GPT-4o

`GitHubModelsChatModelName.GPT_4_O` // "gpt-4o"

- Capabilities: text and vision input, 128K context
- Best for: complex, high-quality general-purpose tasks
- Performance: high quality, moderate speed

### OpenAI GPT-4o Mini

`GitHubModelsChatModelName.GPT_4_O_MINI` // "gpt-4o-mini"

- Capabilities: text input, 128K context
- Best for: cost-sensitive, high-throughput tasks
- Performance: good quality, fast
### Microsoft Phi

`GitHubModelsChatModelName.PHI_3_5_MINI_INSTRUCT` // "Phi-3.5-mini-instruct"

- Best for: resource-constrained scenarios, edge deployment, simple tasks

`GitHubModelsChatModelName.PHI_3_5_VISION_INSTRUCT` // "Phi-3.5-vision-instruct"

- Capabilities: vision understanding
- Best for: image analysis, vision tasks, multimodal applications

Phi-3 variants, chosen by size and context length:

```java
GitHubModelsChatModelName.PHI_3_MEDIUM_INSTRUCT_128K // 128K context
GitHubModelsChatModelName.PHI_3_MEDIUM_INSTRUCT_4K   // 4K context
GitHubModelsChatModelName.PHI_3_SMALL_INSTRUCT_128K  // 128K context
GitHubModelsChatModelName.PHI_3_SMALL_INSTRUCT_8K    // 8K context
GitHubModelsChatModelName.PHI_3_MINI_INSTRUCT_128K   // 128K context
GitHubModelsChatModelName.PHI_3_MINI_INSTRUCT_4K     // 4K context
```
### AI21 Jamba

`GitHubModelsChatModelName.AI21_JAMBA_1_5_LARGE` // "ai21-jamba-1.5-large"
`GitHubModelsChatModelName.AI21_JAMBA_1_5_MINI` // "ai21-jamba-1.5-mini"
`GitHubModelsChatModelName.AI21_JAMBA_INSTRUCT` // "ai21-jamba-instruct"

- Best for: instruction following, structured tasks
### Cohere Command

`GitHubModelsChatModelName.COHERE_COMMAND_R` // "cohere-command-r"
`GitHubModelsChatModelName.COHERE_COMMAND_R_PLUS` // "cohere-command-r-plus"

- Best for: strong instruction following, command-style interactions, RAG applications
### Meta Llama

Llama 3.1 (latest Llama generation, with improved capabilities):

`GitHubModelsChatModelName.META_LLAMA_3_1_405B_INSTRUCT` // "meta-llama-3.1-405b-instruct"
`GitHubModelsChatModelName.META_LLAMA_3_1_70B_INSTRUCT` // "meta-llama-3.1-70b-instruct"
`GitHubModelsChatModelName.META_LLAMA_3_1_8B_INSTRUCT` // "meta-llama-3.1-8b-instruct"

Llama 3 (choose by size):

`GitHubModelsChatModelName.META_LLAMA_3_70B_INSTRUCT` // "meta-llama-3-70b-instruct"
`GitHubModelsChatModelName.META_LLAMA_3_8B_INSTRUCT` // "meta-llama-3-8b-instruct"

- Best for: open-source deployment, customization, various task complexities
### Mistral

`GitHubModelsChatModelName.MISTRAL_NEMO` // "Mistral-nemo"
`GitHubModelsChatModelName.MISTRAL_LARGE` // "Mistral-large"
`GitHubModelsChatModelName.MISTRAL_LARGE_2407` // "Mistral-large-2407"
`GitHubModelsChatModelName.MISTRAL_SMALL` // "Mistral-small"

- Best for: European languages, multilingual tasks
## Embedding Models

### OpenAI text-embedding-3-small

`GitHubModelsEmbeddingModelName.TEXT_EMBEDDING_3_SMALL` // "text-embedding-3-small"

- Dimensions: 1536 (default), customizable down to any size
- Best for: general-purpose embeddings at low cost
- Performance: high quality, fast

### OpenAI text-embedding-3-large

`GitHubModelsEmbeddingModelName.TEXT_EMBEDDING_3_LARGE` // "text-embedding-3-large"

- Dimensions: 3072 (default), customizable down to any size
- Best for: retrieval tasks where embedding quality matters most
- Performance: highest quality, moderate speed
Dimension tradeoff:

```java
// Default: maximum quality
.modelName(TEXT_EMBEDDING_3_LARGE)                  // 3072 dimensions

// Reduced: better performance, slight quality reduction
.modelName(TEXT_EMBEDDING_3_LARGE).dimensions(512)  // 512 dimensions
```

### Cohere Embed v3 English

`GitHubModelsEmbeddingModelName.COHERE_EMBED_V3_ENGLISH` // "cohere-embed-v3-english"

- Dimensions: 1024 (fixed), no custom dimensions
- Best for: English-only corpora

### Cohere Embed v3 Multilingual

`GitHubModelsEmbeddingModelName.COHERE_EMBED_V3_MULTILINGUAL` // "cohere-embed-v3-multilingual"

- Dimensions: 1024 (fixed), no custom dimensions
- Languages: 100+
- Best for: multilingual corpora
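The dimension tradeoff has a concrete storage cost: at standard float32 precision, each vector occupies roughly dimensions × 4 bytes. The following sketch (plain Java, no library dependencies; the one-million-vector corpus size is a made-up illustration) quantifies what reducing dimensions saves:

```java
public class EmbeddingFootprint {
    // Approximate storage for one float32 embedding vector, in bytes.
    static long bytesPerVector(int dimensions) {
        return (long) dimensions * Float.BYTES; // 4 bytes per dimension
    }

    public static void main(String[] args) {
        int vectors = 1_000_000; // illustrative corpus size
        for (int dims : new int[] {3072, 1536, 512}) {
            long totalMb = vectors * bytesPerVector(dims) / (1024 * 1024);
            System.out.println(dims + " dims: ~" + totalMb + " MB for " + vectors + " vectors");
        }
    }
}
```

Dropping from 3072 to 512 dimensions cuts vector storage (and similarity-search cost, which scales with dimension count) by a factor of six, at the slight quality reduction noted above.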
Quick selection:

```java
TEXT_EMBEDDING_3_SMALL                  // best balance
TEXT_EMBEDDING_3_LARGE                  // highest quality
COHERE_EMBED_V3_MULTILINGUAL            // 100+ languages
TEXT_EMBEDDING_3_SMALL.dimensions(512)  // reduced dimensions
COHERE_EMBED_V3_ENGLISH                 // fixed 1024 dimensions
```
## Usage Examples

High-quality general-purpose chat:

```java
GitHubModelsChatModel model = GitHubModelsChatModel.builder()
        .gitHubToken(token)
        .modelName(GitHubModelsChatModelName.GPT_4_O)
        .temperature(0.8)
        .maxTokens(2000)
        .build();
```

Fast, focused responses with a request timeout:

```java
GitHubModelsChatModel model = GitHubModelsChatModel.builder()
        .gitHubToken(token)
        .modelName(GitHubModelsChatModelName.GPT_4_O_MINI)
        .temperature(0.3)
        .maxTokens(500)
        .timeout(Duration.ofSeconds(30))
        .build();
```

Complex reasoning with a large open model:

```java
GitHubModelsChatModel model = GitHubModelsChatModel.builder()
        .gitHubToken(token)
        .modelName(GitHubModelsChatModelName.META_LLAMA_3_1_405B_INSTRUCT)
        .temperature(0.2)
        .maxTokens(3000)
        .build();
```

Multilingual embeddings:

```java
GitHubModelsEmbeddingModel model = GitHubModelsEmbeddingModel.builder()
        .gitHubToken(token)
        .modelName(GitHubModelsEmbeddingModelName.COHERE_EMBED_V3_MULTILINGUAL)
        .build();
```

Vision tasks:

```java
GitHubModelsChatModel model = GitHubModelsChatModel.builder()
        .gitHubToken(token)
        .modelName(GitHubModelsChatModelName.PHI_3_5_VISION_INSTRUCT)
        .build();
```

## Model Comparison

Chat models:

| Model | Size | Speed | Quality | Context | Vision | Cost |
|---|---|---|---|---|---|---|
| GPT-4o | Large | Medium | Excellent | 128K | Yes | High |
| GPT-4o Mini | Small | Fast | Good | 128K | No | Low |
| Llama 3.1 405B | XLarge | Slow | Excellent | 128K | No | High |
| Llama 3.1 8B | Small | Fast | Good | 128K | No | Low |
| Phi-3.5 Mini | XSmall | Fast | Fair | Varies | No | Low |
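The tradeoffs in the chat table above can be encoded as a simple selection rule. The helper below is illustrative only: the decision logic is this document's reading of the table, not an API of the library, and it returns enum constant names as plain strings so it runs without any dependency.

```java
public class ChatModelPicker {
    /**
     * Illustrative heuristic over the comparison table: vision forces
     * GPT-4o (the only vision-capable entry); otherwise trade quality
     * against cost.
     */
    static String pick(boolean needsVision, boolean needsTopQuality, boolean lowCost) {
        if (needsVision) return "GPT_4_O";                      // 128K context, vision: yes
        if (needsTopQuality) return "META_LLAMA_3_1_405B_INSTRUCT"; // excellent quality, slow, high cost
        if (lowCost) return "GPT_4_O_MINI";                     // good quality, fast, low cost
        return "GPT_4_O";                                       // default: excellent quality
    }

    public static void main(String[] args) {
        System.out.println(pick(true, false, false));  // GPT_4_O
        System.out.println(pick(false, false, true));  // GPT_4_O_MINI
    }
}
```

In real code you would return `GitHubModelsChatModelName` constants instead of strings and feed the result into the builder shown earlier.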
Embedding models:

| Model | Dimensions | Custom Dims | Languages | Quality | Cost |
|---|---|---|---|---|---|
| TEXT_EMBEDDING_3_SMALL | 1536 | Yes | English+ | Good | Low |
| TEXT_EMBEDDING_3_LARGE | 3072 | Yes | English+ | Excellent | Medium |
| COHERE_EMBED_V3_ENGLISH | 1024 | No | English | Good | Low |
| COHERE_EMBED_V3_MULTILINGUAL | 1024 | No | 100+ | Good | Low |
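Whichever embedding model you pick, downstream comparison is typically cosine similarity over the returned float vectors. A dependency-free sketch (the three-element vectors are made-up examples, not real model output, which would have 1024 to 3072 dimensions per the table above):

```java
public class CosineSimilarity {
    // Cosine similarity between two equal-length embedding vectors:
    // dot(a, b) / (|a| * |b|), ranging from -1 to 1.
    static double cosine(float[] a, float[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        float[] v1 = {1f, 0f, 1f};
        float[] v2 = {1f, 0f, 1f};
        float[] v3 = {0f, 1f, 0f};
        System.out.println(cosine(v1, v2)); // identical direction -> 1.0
        System.out.println(cosine(v1, v3)); // orthogonal -> 0.0
    }
}
```

Note that cosine similarity is dimension-agnostic, which is why the reduced-dimension variants shown earlier remain drop-in compatible with the same comparison code.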
## Installation

Install with the Tessl CLI:

```shell
npx tessl i tessl/maven-dev-langchain4j--langchain4j-github-models@1.11.0
```