Building applications with LLMs through composability
Complete list of all supported model providers for chat models and embeddings.
All models use the format "provider:model-name":
- provider: The provider identifier (e.g., openai, anthropic, google_vertexai)
- model-name: The specific model identifier for that provider

LangChain supports 20+ chat model providers through the init_chat_model() function.
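The identifier splits on the first colon, which matters because some model names themselves contain colons (e.g. Bedrock's anthropic.claude-3-sonnet-20240229-v1:0). A minimal illustration in plain Python; split_model_id is a hypothetical helper for exposition, not part of LangChain's API:

```python
# Hypothetical helper showing how a "provider:model-name" string decomposes.
def split_model_id(model_id: str) -> tuple[str, str]:
    # Split on the FIRST colon only: the model name may contain colons too.
    provider, _, model = model_id.partition(":")
    return provider, model

print(split_model_id("openai:gpt-4o"))
# ('openai', 'gpt-4o')
print(split_model_id("bedrock:anthropic.claude-3-sonnet-20240229-v1:0"))
# ('bedrock', 'anthropic.claude-3-sonnet-20240229-v1:0')
```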
Provider ID: openai
Models:
- gpt-4o - GPT-4 Omni (latest flagship model)
- gpt-4o-mini - Smaller, faster GPT-4 Omni variant
- gpt-4-turbo - GPT-4 Turbo
- gpt-4 - GPT-4 base model
- gpt-3.5-turbo - GPT-3.5 Turbo
- o1-preview - O1 Preview model
- o1-mini - O1 Mini model
- o3-mini - O3 Mini model

Authentication:
- OPENAI_API_KEY (environment variable)
- openai_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
# Using environment variable OPENAI_API_KEY
model = init_chat_model("openai:gpt-4o")
model = init_chat_model("openai:gpt-4-turbo")
model = init_chat_model("openai:gpt-3.5-turbo")
model = init_chat_model("openai:o1-preview")
# With explicit API key
model = init_chat_model("openai:gpt-4o", openai_api_key="sk-...")
# With configuration
model = init_chat_model(
"openai:gpt-4o",
temperature=0.7,
max_tokens=1000
)

Provider ID: anthropic
Models:
- claude-3-5-sonnet-20241022 - Claude 3.5 Sonnet (latest)
- claude-3-5-sonnet-20240620 - Claude 3.5 Sonnet (June 2024)
- claude-3-opus-20240229 - Claude 3 Opus
- claude-3-sonnet-20240229 - Claude 3 Sonnet
- claude-3-haiku-20240307 - Claude 3 Haiku

Authentication:
- ANTHROPIC_API_KEY (environment variable)
- anthropic_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
# Using environment variable ANTHROPIC_API_KEY
model = init_chat_model("anthropic:claude-3-5-sonnet-20241022")
model = init_chat_model("anthropic:claude-3-opus-20240229")
model = init_chat_model("anthropic:claude-3-haiku-20240307")
# With explicit API key
model = init_chat_model(
"anthropic:claude-3-5-sonnet-20241022",
anthropic_api_key="sk-ant-..."
)
# With configuration
model = init_chat_model(
"anthropic:claude-3-5-sonnet-20241022",
temperature=0.7,
max_tokens=4096
)

Provider ID: google_vertexai
Models:
- gemini-1.5-pro - Gemini 1.5 Pro
- gemini-1.5-flash - Gemini 1.5 Flash
- gemini-1.0-pro - Gemini 1.0 Pro

Authentication:
- GOOGLE_APPLICATION_CREDENTIALS (path to credentials JSON)

Examples:
from langchain.chat_models import init_chat_model
# Using Google Cloud credentials
model = init_chat_model("google_vertexai:gemini-1.5-pro")
model = init_chat_model("google_vertexai:gemini-1.5-flash")
# With configuration
model = init_chat_model(
"google_vertexai:gemini-1.5-pro",
temperature=0.8,
max_tokens=2048
)

Provider ID: google_genai
Models:
- gemini-1.5-pro - Gemini 1.5 Pro
- gemini-1.5-flash - Gemini 1.5 Flash
- gemini-1.0-pro - Gemini 1.0 Pro

Authentication:
- GOOGLE_API_KEY (environment variable)
- google_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
# Using environment variable GOOGLE_API_KEY
model = init_chat_model("google_genai:gemini-1.5-pro")
model = init_chat_model("google_genai:gemini-1.5-flash")
# With explicit API key
model = init_chat_model(
"google_genai:gemini-1.5-pro",
google_api_key="..."
)

Provider IDs: bedrock, bedrock_converse
Note: Bedrock requires full model IDs including version.
Models:
- anthropic.claude-3-sonnet-20240229-v1:0, anthropic.claude-3-haiku-20240307-v1:0, anthropic.claude-3-opus-20240229-v1:0
- meta.llama3-70b-instruct-v1:0, meta.llama3-8b-instruct-v1:0
- amazon.titan-text-premier-v1:0, amazon.titan-text-express-v1
- cohere.command-r-v1:0, cohere.command-r-plus-v1:0
- mistral.mistral-7b-instruct-v0:2, mistral.mixtral-8x7b-instruct-v0:1

Authentication:
- AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION (environment variables)

Examples:
from langchain.chat_models import init_chat_model
# Using AWS credentials from environment
model = init_chat_model("bedrock:anthropic.claude-3-sonnet-20240229-v1:0")
model = init_chat_model("bedrock:meta.llama3-70b-instruct-v1:0")
model = init_chat_model("bedrock:amazon.titan-text-premier-v1:0")
# With explicit region
model = init_chat_model(
"bedrock:anthropic.claude-3-sonnet-20240229-v1:0",
region_name="us-east-1"
)

Provider ID: azure_openai
Models: Same as OpenAI models (gpt-4o, gpt-4-turbo, gpt-3.5-turbo, etc.)
Authentication:
- azure_deployment, azure_endpoint (parameters)
- api_key (parameter) or AZURE_OPENAI_API_KEY (environment variable)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model(
"azure_openai:gpt-4o",
azure_deployment="my-gpt4-deployment",
azure_endpoint="https://my-resource.openai.azure.com/",
api_key="..."
)
# With environment variable AZURE_OPENAI_API_KEY
model = init_chat_model(
"azure_openai:gpt-4o",
azure_deployment="my-gpt4-deployment",
azure_endpoint="https://my-resource.openai.azure.com/"
)

Provider ID: cohere
Models:
- command-r-plus - Command R+ (latest, most capable)
- command-r - Command R
- command - Command
- command-light - Command Light

Authentication:
- COHERE_API_KEY (environment variable)
- cohere_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("cohere:command-r-plus")
model = init_chat_model("cohere:command-r")
# With explicit API key
model = init_chat_model("cohere:command-r-plus", cohere_api_key="...")

Provider ID: mistralai
Models:
- mistral-large-latest - Mistral Large (latest)
- mistral-medium-latest - Mistral Medium (latest)
- mistral-small-latest - Mistral Small (latest)
- open-mistral-7b - Open Mistral 7B
- open-mixtral-8x7b - Open Mixtral 8x7B
- open-mixtral-8x22b - Open Mixtral 8x22B

Authentication:
- MISTRAL_API_KEY (environment variable)
- mistral_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("mistralai:mistral-large-latest")
model = init_chat_model("mistralai:open-mixtral-8x7b")
# With explicit API key
model = init_chat_model("mistralai:mistral-large-latest", mistral_api_key="...")

Provider ID: groq
Models:
- llama-3.3-70b-versatile - Llama 3.3 70B
- llama-3.1-70b-versatile - Llama 3.1 70B
- llama-3.1-8b-instant - Llama 3.1 8B
- mixtral-8x7b-32768 - Mixtral 8x7B
- gemma-7b-it - Gemma 7B

Authentication:
- GROQ_API_KEY (environment variable)
- groq_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("groq:llama-3.3-70b-versatile")
model = init_chat_model("groq:mixtral-8x7b-32768")
# With explicit API key
model = init_chat_model("groq:llama-3.3-70b-versatile", groq_api_key="...")

Provider ID: ollama
Models: Any model available in your local Ollama installation
- llama2 - Llama 2
- llama3 - Llama 3
- mistral - Mistral
- mixtral - Mixtral
- codellama - Code Llama
- phi - Phi models

Authentication: None (local service)
Requirements: Ollama must be running locally (default: http://localhost:11434)
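Before initializing an Ollama-backed model, it can help to confirm the local server is actually reachable. A hedged sketch using only the Python standard library (the /api/tags endpoint is Ollama's model-listing route; the helper name is ours):

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        # /api/tags lists locally pulled models; a 200 means the server is up.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if ollama_is_running():
    from langchain.chat_models import init_chat_model
    model = init_chat_model("ollama:llama2")
else:
    print("Ollama is not reachable; start it with `ollama serve`.")
```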
Examples:
from langchain.chat_models import init_chat_model
# Using local Ollama
model = init_chat_model("ollama:llama2")
model = init_chat_model("ollama:mistral")
model = init_chat_model("ollama:codellama")
# With custom base URL
model = init_chat_model("ollama:llama2", base_url="http://localhost:11434")

Provider ID: huggingface
Models: Any model from HuggingFace Hub that supports text generation
Authentication:
- HUGGINGFACEHUB_API_TOKEN (environment variable)
- huggingfacehub_api_token (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("huggingface:HuggingFaceH4/zephyr-7b-beta")
model = init_chat_model("huggingface:meta-llama/Llama-2-7b-chat-hf")
# With explicit API token
model = init_chat_model(
"huggingface:HuggingFaceH4/zephyr-7b-beta",
huggingfacehub_api_token="..."
)

Provider ID: together
Models:
- meta-llama/Llama-3-70b-chat-hf
- mistralai/Mixtral-8x7B-Instruct-v0.1

Authentication:
- TOGETHER_API_KEY (environment variable)
- together_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("together:meta-llama/Llama-3-70b-chat-hf")
model = init_chat_model("together:mistralai/Mixtral-8x7B-Instruct-v0.1")
# With explicit API key
model = init_chat_model(
"together:meta-llama/Llama-3-70b-chat-hf",
together_api_key="..."
)

Provider ID: fireworks
Models:
- accounts/fireworks/models/llama-v3-70b-instruct

Authentication:
- FIREWORKS_API_KEY (environment variable)
- fireworks_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("fireworks:accounts/fireworks/models/llama-v3-70b-instruct")
# With explicit API key
model = init_chat_model(
"fireworks:accounts/fireworks/models/llama-v3-70b-instruct",
fireworks_api_key="..."
)

Provider ID: deepseek
Models:
- deepseek-chat - DeepSeek Chat
- deepseek-coder - DeepSeek Coder

Authentication:
- DEEPSEEK_API_KEY (environment variable)
- deepseek_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("deepseek:deepseek-chat")
model = init_chat_model("deepseek:deepseek-coder")
# With explicit API key
model = init_chat_model("deepseek:deepseek-chat", deepseek_api_key="...")

Provider ID: xai
Models:
- grok-beta - Grok Beta
- grok-1 - Grok 1

Authentication:
- XAI_API_KEY (environment variable)
- xai_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("xai:grok-beta")
# With explicit API key
model = init_chat_model("xai:grok-beta", xai_api_key="...")

Provider ID: perplexity
Models:
- llama-3.1-sonar-small-128k-online - Sonar Small (online)
- llama-3.1-sonar-large-128k-online - Sonar Large (online)
- llama-3.1-sonar-huge-128k-online - Sonar Huge (online)

Authentication:
- PERPLEXITYAI_API_KEY (environment variable)
- perplexity_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("perplexity:llama-3.1-sonar-large-128k-online")
# With explicit API key
model = init_chat_model(
"perplexity:llama-3.1-sonar-large-128k-online",
perplexity_api_key="..."
)

Provider ID: upstage
Models:
- solar-1-mini-chat - Solar 1 Mini Chat
- solar-pro - Solar Pro

Authentication:
- UPSTAGE_API_KEY (environment variable)
- upstage_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("upstage:solar-1-mini-chat")
# With explicit API key
model = init_chat_model("upstage:solar-1-mini-chat", upstage_api_key="...")

Provider ID: ibm
Models:
Authentication:
Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("ibm:ibm-model-name")

Provider ID: nvidia
Models:
Authentication:
- NVIDIA_API_KEY (environment variable)
- nvidia_api_key (parameter)

Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("nvidia:model-name")
# With explicit API key
model = init_chat_model("nvidia:model-name", nvidia_api_key="...")

Provider ID: azure_ai
Models:
Authentication:
Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("azure_ai:model-name")

Provider ID: google_anthropic_vertex
Models:
- claude-3-5-sonnet@20241022 - Claude 3.5 Sonnet
- claude-3-opus@20240229 - Claude 3 Opus
- claude-3-sonnet@20240229 - Claude 3 Sonnet

Authentication:
Examples:
from langchain.chat_models import init_chat_model
model = init_chat_model("google_anthropic_vertex:claude-3-5-sonnet@20241022")

LangChain supports multiple embeddings providers through the init_embeddings() function.
Provider ID: openai
Models:
- text-embedding-3-small - Text Embedding 3 Small (1536 dimensions, configurable)
- text-embedding-3-large - Text Embedding 3 Large (3072 dimensions, configurable)
- text-embedding-ada-002 - Ada 002 (legacy, 1536 dimensions)

Authentication:
- OPENAI_API_KEY (environment variable)
- openai_api_key (parameter)

Examples:
from langchain.embeddings import init_embeddings
# Using environment variable
embeddings = init_embeddings("openai:text-embedding-3-small")
embeddings = init_embeddings("openai:text-embedding-3-large")
# With explicit API key
embeddings = init_embeddings(
"openai:text-embedding-3-small",
openai_api_key="sk-..."
)
# With configurable dimensions
embeddings = init_embeddings(
"openai:text-embedding-3-small",
dimensions=512 # Reduce from default 1536
)
embeddings = init_embeddings(
"openai:text-embedding-3-large",
dimensions=1024 # Reduce from default 3072
)

Provider ID: azure_openai
Models: Same as OpenAI embeddings models
Authentication:
- azure_deployment, azure_endpoint (parameters)
- api_key (parameter) or AZURE_OPENAI_API_KEY (environment variable)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings(
"azure_openai:text-embedding-3-small",
azure_deployment="my-embedding-deployment",
azure_endpoint="https://my-resource.openai.azure.com/",
api_key="..."
)

Provider ID: google_vertexai
Models:
- text-embedding-004 - Text Embedding 004
- textembedding-gecko@003 - Gecko 003
- textembedding-gecko-multilingual@001 - Gecko Multilingual

Authentication:
- GOOGLE_APPLICATION_CREDENTIALS (path to credentials JSON)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("google_vertexai:text-embedding-004")
embeddings = init_embeddings("google_vertexai:textembedding-gecko@003")

Provider ID: google_genai
Models:
- embedding-001 - Embedding 001
- text-embedding-004 - Text Embedding 004

Authentication:
- GOOGLE_API_KEY (environment variable)
- google_api_key (parameter)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("google_genai:embedding-001")
embeddings = init_embeddings("google_genai:text-embedding-004")
# With explicit API key
embeddings = init_embeddings(
"google_genai:embedding-001",
google_api_key="..."
)

Provider ID: bedrock
Models:
- amazon.titan-embed-text-v1 - Titan Text Embeddings V1 (1536 dimensions)
- amazon.titan-embed-text-v2:0 - Titan Text Embeddings V2 (multiple dimension options)
- cohere.embed-english-v3 - Cohere English V3 (1024 dimensions)
- cohere.embed-multilingual-v3 - Cohere Multilingual V3 (1024 dimensions)

Authentication:
- AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION (environment variables)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("bedrock:amazon.titan-embed-text-v1")
embeddings = init_embeddings("bedrock:amazon.titan-embed-text-v2:0")
embeddings = init_embeddings("bedrock:cohere.embed-english-v3")
# With explicit region
embeddings = init_embeddings(
"bedrock:amazon.titan-embed-text-v1",
region_name="us-east-1"
)

Provider ID: cohere
Models:
- embed-english-v3.0 - English V3 (1024 dimensions)
- embed-multilingual-v3.0 - Multilingual V3 (1024 dimensions)
- embed-english-light-v3.0 - English Light V3 (384 dimensions)
- embed-multilingual-light-v3.0 - Multilingual Light V3 (384 dimensions)
- embed-english-v2.0 - English V2 (legacy, 4096 dimensions)
- embed-english-light-v2.0 - English Light V2 (legacy, 1024 dimensions)

Authentication:
- COHERE_API_KEY (environment variable)
- cohere_api_key (parameter)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("cohere:embed-english-v3.0")
embeddings = init_embeddings("cohere:embed-multilingual-v3.0")
# With explicit API key
embeddings = init_embeddings(
"cohere:embed-english-v3.0",
cohere_api_key="..."
)

Provider ID: mistralai
Models:
- mistral-embed - Mistral Embed (1024 dimensions)

Authentication:
- MISTRAL_API_KEY (environment variable)
- mistral_api_key (parameter)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("mistralai:mistral-embed")
# With explicit API key
embeddings = init_embeddings("mistralai:mistral-embed", mistral_api_key="...")

Provider ID: huggingface
Models: Any sentence transformer model from HuggingFace Hub
- sentence-transformers/all-MiniLM-L6-v2 - All MiniLM L6 V2 (384 dimensions)
- sentence-transformers/all-mpnet-base-v2 - All MPNet Base V2 (768 dimensions)
- BAAI/bge-small-en-v1.5 - BGE Small English (384 dimensions)
- BAAI/bge-base-en-v1.5 - BGE Base English (768 dimensions)
- BAAI/bge-large-en-v1.5 - BGE Large English (1024 dimensions)

Authentication:
- HUGGINGFACEHUB_API_TOKEN (environment variable; optional for public models)
- huggingfacehub_api_token (parameter)

Examples:
from langchain.embeddings import init_embeddings
# Public models (no authentication required)
embeddings = init_embeddings("huggingface:sentence-transformers/all-MiniLM-L6-v2")
embeddings = init_embeddings("huggingface:BAAI/bge-base-en-v1.5")
# With explicit API token
embeddings = init_embeddings(
"huggingface:sentence-transformers/all-mpnet-base-v2",
huggingfacehub_api_token="..."
)

Provider ID: ollama
Models: Any embeddings model available in your local Ollama installation
- nomic-embed-text - Nomic Embed Text (768 dimensions)
- mxbai-embed-large - MxBAI Embed Large (1024 dimensions)
- all-minilm - All MiniLM (384 dimensions)

Authentication: None (local service)
Requirements: Ollama must be running locally (default: http://localhost:11434)
Examples:
from langchain.embeddings import init_embeddings
# Using local Ollama
embeddings = init_embeddings("ollama:nomic-embed-text")
embeddings = init_embeddings("ollama:mxbai-embed-large")
embeddings = init_embeddings("ollama:all-minilm")
# With custom base URL
embeddings = init_embeddings(
"ollama:nomic-embed-text",
base_url="http://localhost:11434"
)

Provider ID: voyage
Models:
- voyage-2 - Voyage 2 (1024 dimensions)
- voyage-large-2 - Voyage Large 2 (1536 dimensions)
- voyage-code-2 - Voyage Code 2 (1536 dimensions)

Authentication:
- VOYAGE_API_KEY (environment variable)
- voyage_api_key (parameter)

Examples:
from langchain.embeddings import init_embeddings
embeddings = init_embeddings("voyage:voyage-2")
embeddings = init_embeddings("voyage:voyage-large-2")
# With explicit API key
embeddings = init_embeddings("voyage:voyage-2", voyage_api_key="...")

Chat model provider comparison:

| Provider | Authentication | Notable Models | Special Features |
|---|---|---|---|
| OpenAI | API Key | GPT-4o, O1 | Industry standard, function calling |
| Anthropic | API Key | Claude 3.5 Sonnet | Long context, strong reasoning |
| Google Vertex AI | GCP Credentials | Gemini 1.5 Pro | Multimodal, enterprise features |
| AWS Bedrock | AWS Credentials | Claude, Llama, Titan | Enterprise deployment, multi-provider |
| Azure OpenAI | Azure Credentials | GPT-4o | Enterprise Azure integration |
| Ollama | None (local) | Llama, Mistral | Local, private, no API costs |

Embeddings provider comparison:

| Provider | Authentication | Notable Models | Dimensions | Special Features |
|---|---|---|---|---|
| OpenAI | API Key | text-embedding-3-small | 1536 (configurable) | High quality, configurable dimensions |
| OpenAI | API Key | text-embedding-3-large | 3072 (configurable) | Highest quality OpenAI embeddings |
| Cohere | API Key | embed-english-v3.0 | 1024 | Strong retrieval optimization |
| Cohere | API Key | embed-multilingual-v3.0 | 1024 | 100+ languages supported |
| HuggingFace | Optional | all-MiniLM-L6-v2 | 384 | Fast, open-source, local |
| Ollama | None (local) | nomic-embed-text | 768 | Local, private, no API costs |
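Whichever provider backs them, the objects returned by init_embeddings expose the same interface (embed_query for a single string, embed_documents for a batch), so downstream similarity code stays provider-agnostic. A sketch of the usual cosine-similarity step, with hypothetical vectors and only the standard library:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In real use these vectors would come from e.g.
#   embeddings.embed_query("query text")
#   embeddings.embed_documents(["doc one", "doc two"])
query_vec = [0.1, 0.3, 0.5]   # hypothetical values
doc_vec = [0.2, 0.1, 0.9]     # hypothetical values

print(cosine_similarity(query_vec, doc_vec))
```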
The unified string-based initialization makes it easy to switch providers:
import os
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings
# Get provider from environment or config
chat_provider = os.getenv("CHAT_PROVIDER", "openai")
chat_model = os.getenv("CHAT_MODEL", "gpt-4o")
embed_provider = os.getenv("EMBED_PROVIDER", "openai")
embed_model = os.getenv("EMBED_MODEL", "text-embedding-3-small")
# Initialize with dynamic provider
model = init_chat_model(f"{chat_provider}:{chat_model}")
embeddings = init_embeddings(f"{embed_provider}:{embed_model}")
# Now you can switch providers by changing environment variables
# No code changes needed!

Common chat model configuration options:

# Temperature (all providers)
model = init_chat_model("provider:model", temperature=0.7)
# Max tokens (all providers)
model = init_chat_model("provider:model", max_tokens=1000)
# Timeout (all providers)
model = init_chat_model("provider:model", timeout=30.0)
# Rate limiting (all providers)
from langchain_core.rate_limiters import InMemoryRateLimiter
rate_limiter = InMemoryRateLimiter(requests_per_second=10/60)  # ~0.17 req/s, i.e. 10 requests per minute
model = init_chat_model("provider:model", rate_limiter=rate_limiter)
# Custom endpoint (OpenAI-compatible providers)
model = init_chat_model("provider:model", base_url="https://custom-endpoint.com/v1")

Common embeddings configuration options:

# Batch size (most providers)
embeddings = init_embeddings("provider:model", batch_size=100)
# Dimensions (OpenAI text-embedding-3-*)
embeddings = init_embeddings("openai:text-embedding-3-small", dimensions=512)
# Timeout (all providers)
embeddings = init_embeddings("provider:model", timeout=30.0)

Notes:
- Bedrock requires full model IDs including the version suffix (e.g., anthropic.claude-3-sonnet-20240229-v1:0)
- Ollama models must first be pulled locally with ollama pull <model-name>

Install with Tessl CLI
npx tessl i tessl/pypi-langchain@1.2.1