tessl/pypi-langchain

Building applications with LLMs through composability

docs/reference/providers.md

Providers Reference

A complete list of supported providers for chat models and embeddings.

Model String Format

All models use the format: "provider:model-name"

  • provider: The provider identifier (e.g., openai, anthropic, google_vertexai)
  • model-name: The specific model identifier for that provider
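
As an illustration of how the string is interpreted (this is a sketch of the parsing rule, not the library's internal code): the provider is everything before the first colon, which matters for providers like AWS Bedrock whose model IDs themselves contain colons.

```python
# Split on the FIRST colon only: the provider is the prefix, and the
# remainder (which may itself contain colons) is the model name.
model_string = "bedrock:anthropic.claude-3-sonnet-20240229-v1:0"
provider, model_name = model_string.split(":", 1)
print(provider)    # bedrock
print(model_name)  # anthropic.claude-3-sonnet-20240229-v1:0
```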

Chat Model Providers

LangChain supports 20+ chat model providers through the init_chat_model() function.

OpenAI

Provider ID: openai

Models:

  • gpt-4o - GPT-4 Omni (latest flagship model)
  • gpt-4o-mini - Smaller, faster GPT-4 Omni variant
  • gpt-4-turbo - GPT-4 Turbo
  • gpt-4 - GPT-4 base model
  • gpt-3.5-turbo - GPT-3.5 Turbo
  • o1-preview - o1 preview (reasoning model)
  • o1-mini - o1 mini (smaller reasoning model)
  • o3-mini - o3 mini (reasoning model)

Authentication:

  • Environment variable: OPENAI_API_KEY
  • Parameter: openai_api_key

Examples:

from langchain.chat_models import init_chat_model

# Using environment variable OPENAI_API_KEY
model = init_chat_model("openai:gpt-4o")
model = init_chat_model("openai:gpt-4-turbo")
model = init_chat_model("openai:gpt-3.5-turbo")
model = init_chat_model("openai:o1-preview")

# With explicit API key
model = init_chat_model("openai:gpt-4o", openai_api_key="sk-...")

# With configuration
model = init_chat_model(
    "openai:gpt-4o",
    temperature=0.7,
    max_tokens=1000
)

Anthropic

Provider ID: anthropic

Models:

  • claude-3-5-sonnet-20241022 - Claude 3.5 Sonnet (latest)
  • claude-3-5-sonnet-20240620 - Claude 3.5 Sonnet (June 2024)
  • claude-3-opus-20240229 - Claude 3 Opus
  • claude-3-sonnet-20240229 - Claude 3 Sonnet
  • claude-3-haiku-20240307 - Claude 3 Haiku

Authentication:

  • Environment variable: ANTHROPIC_API_KEY
  • Parameter: anthropic_api_key

Examples:

from langchain.chat_models import init_chat_model

# Using environment variable ANTHROPIC_API_KEY
model = init_chat_model("anthropic:claude-3-5-sonnet-20241022")
model = init_chat_model("anthropic:claude-3-opus-20240229")
model = init_chat_model("anthropic:claude-3-haiku-20240307")

# With explicit API key
model = init_chat_model(
    "anthropic:claude-3-5-sonnet-20241022",
    anthropic_api_key="sk-ant-..."
)

# With configuration
model = init_chat_model(
    "anthropic:claude-3-5-sonnet-20241022",
    temperature=0.7,
    max_tokens=4096
)

Google Vertex AI

Provider ID: google_vertexai

Models:

  • gemini-1.5-pro - Gemini 1.5 Pro
  • gemini-1.5-flash - Gemini 1.5 Flash
  • gemini-1.0-pro - Gemini 1.0 Pro

Authentication:

  • Environment variable: GOOGLE_APPLICATION_CREDENTIALS (path to credentials JSON)
  • Google Cloud SDK authentication

Examples:

from langchain.chat_models import init_chat_model

# Using Google Cloud credentials
model = init_chat_model("google_vertexai:gemini-1.5-pro")
model = init_chat_model("google_vertexai:gemini-1.5-flash")

# With configuration
model = init_chat_model(
    "google_vertexai:gemini-1.5-pro",
    temperature=0.8,
    max_tokens=2048
)

Google Generative AI

Provider ID: google_genai

Models:

  • gemini-1.5-pro - Gemini 1.5 Pro
  • gemini-1.5-flash - Gemini 1.5 Flash
  • gemini-1.0-pro - Gemini 1.0 Pro

Authentication:

  • Environment variable: GOOGLE_API_KEY
  • Parameter: google_api_key

Examples:

from langchain.chat_models import init_chat_model

# Using environment variable GOOGLE_API_KEY
model = init_chat_model("google_genai:gemini-1.5-pro")
model = init_chat_model("google_genai:gemini-1.5-flash")

# With explicit API key
model = init_chat_model(
    "google_genai:gemini-1.5-pro",
    google_api_key="..."
)

AWS Bedrock

Provider IDs: bedrock, bedrock_converse

Note: Bedrock requires full model IDs including version.

Models:

  • Anthropic Claude: anthropic.claude-3-sonnet-20240229-v1:0, anthropic.claude-3-haiku-20240307-v1:0, anthropic.claude-3-opus-20240229-v1:0
  • Meta Llama: meta.llama3-70b-instruct-v1:0, meta.llama3-8b-instruct-v1:0
  • Amazon Titan: amazon.titan-text-premier-v1:0, amazon.titan-text-express-v1
  • Cohere Command: cohere.command-r-v1:0, cohere.command-r-plus-v1:0
  • Mistral: mistral.mistral-7b-instruct-v0:2, mistral.mixtral-8x7b-instruct-v0:1

Authentication:

  • AWS credentials (environment variables, IAM role, or ~/.aws/credentials)
  • Environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION

Examples:

from langchain.chat_models import init_chat_model

# Using AWS credentials from environment
model = init_chat_model("bedrock:anthropic.claude-3-sonnet-20240229-v1:0")
model = init_chat_model("bedrock:meta.llama3-70b-instruct-v1:0")
model = init_chat_model("bedrock:amazon.titan-text-premier-v1:0")

# With explicit region
model = init_chat_model(
    "bedrock:anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1"
)

Azure OpenAI

Provider ID: azure_openai

Models: Same as OpenAI models (gpt-4o, gpt-4-turbo, gpt-3.5-turbo, etc.)

Authentication:

  • Requires: azure_deployment, azure_endpoint, api_key or AZURE_OPENAI_API_KEY

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model(
    "azure_openai:gpt-4o",
    azure_deployment="my-gpt4-deployment",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key="..."
)

# With environment variable AZURE_OPENAI_API_KEY
model = init_chat_model(
    "azure_openai:gpt-4o",
    azure_deployment="my-gpt4-deployment",
    azure_endpoint="https://my-resource.openai.azure.com/"
)

Cohere

Provider ID: cohere

Models:

  • command-r-plus - Command R+ (latest, most capable)
  • command-r - Command R
  • command - Command
  • command-light - Command Light

Authentication:

  • Environment variable: COHERE_API_KEY
  • Parameter: cohere_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("cohere:command-r-plus")
model = init_chat_model("cohere:command-r")

# With explicit API key
model = init_chat_model("cohere:command-r-plus", cohere_api_key="...")

Mistral AI

Provider ID: mistralai

Models:

  • mistral-large-latest - Mistral Large (latest)
  • mistral-medium-latest - Mistral Medium (latest)
  • mistral-small-latest - Mistral Small (latest)
  • open-mistral-7b - Open Mistral 7B
  • open-mixtral-8x7b - Open Mixtral 8x7B
  • open-mixtral-8x22b - Open Mixtral 8x22B

Authentication:

  • Environment variable: MISTRAL_API_KEY
  • Parameter: mistral_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("mistralai:mistral-large-latest")
model = init_chat_model("mistralai:open-mixtral-8x7b")

# With explicit API key
model = init_chat_model("mistralai:mistral-large-latest", mistral_api_key="...")

Groq

Provider ID: groq

Models:

  • llama-3.3-70b-versatile - Llama 3.3 70B
  • llama-3.1-70b-versatile - Llama 3.1 70B
  • llama-3.1-8b-instant - Llama 3.1 8B
  • mixtral-8x7b-32768 - Mixtral 8x7B
  • gemma-7b-it - Gemma 7B

Authentication:

  • Environment variable: GROQ_API_KEY
  • Parameter: groq_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("groq:llama-3.3-70b-versatile")
model = init_chat_model("groq:mixtral-8x7b-32768")

# With explicit API key
model = init_chat_model("groq:llama-3.3-70b-versatile", groq_api_key="...")

Ollama

Provider ID: ollama

Models: Any model available in your local Ollama installation

  • llama2 - Llama 2
  • llama3 - Llama 3
  • mistral - Mistral
  • mixtral - Mixtral
  • codellama - Code Llama
  • phi - Phi models
  • And many more...

Authentication: None (local service)

Requirements: Ollama must be running locally (default: http://localhost:11434)

Examples:

from langchain.chat_models import init_chat_model

# Using local Ollama
model = init_chat_model("ollama:llama2")
model = init_chat_model("ollama:mistral")
model = init_chat_model("ollama:codellama")

# With custom base URL
model = init_chat_model("ollama:llama2", base_url="http://localhost:11434")

HuggingFace

Provider ID: huggingface

Models: Any model from HuggingFace Hub that supports text generation

Authentication:

  • Environment variable: HUGGINGFACEHUB_API_TOKEN
  • Parameter: huggingfacehub_api_token

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("huggingface:HuggingFaceH4/zephyr-7b-beta")
model = init_chat_model("huggingface:meta-llama/Llama-2-7b-chat-hf")

# With explicit API token
model = init_chat_model(
    "huggingface:HuggingFaceH4/zephyr-7b-beta",
    huggingfacehub_api_token="..."
)

Together AI

Provider ID: together

Models:

  • Various open-source models hosted on Together API
  • Examples: meta-llama/Llama-3-70b-chat-hf, mistralai/Mixtral-8x7B-Instruct-v0.1

Authentication:

  • Environment variable: TOGETHER_API_KEY
  • Parameter: together_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("together:meta-llama/Llama-3-70b-chat-hf")
model = init_chat_model("together:mistralai/Mixtral-8x7B-Instruct-v0.1")

# With explicit API key
model = init_chat_model(
    "together:meta-llama/Llama-3-70b-chat-hf",
    together_api_key="..."
)

Fireworks

Provider ID: fireworks

Models:

  • Various open-source models hosted on Fireworks API
  • Examples: accounts/fireworks/models/llama-v3-70b-instruct

Authentication:

  • Environment variable: FIREWORKS_API_KEY
  • Parameter: fireworks_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("fireworks:accounts/fireworks/models/llama-v3-70b-instruct")

# With explicit API key
model = init_chat_model(
    "fireworks:accounts/fireworks/models/llama-v3-70b-instruct",
    fireworks_api_key="..."
)

DeepSeek

Provider ID: deepseek

Models:

  • deepseek-chat - DeepSeek Chat
  • deepseek-coder - DeepSeek Coder

Authentication:

  • Environment variable: DEEPSEEK_API_KEY
  • Parameter: deepseek_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("deepseek:deepseek-chat")
model = init_chat_model("deepseek:deepseek-coder")

# With explicit API key
model = init_chat_model("deepseek:deepseek-chat", deepseek_api_key="...")

xAI (Grok)

Provider ID: xai

Models:

  • grok-beta - Grok Beta
  • grok-1 - Grok 1

Authentication:

  • Environment variable: XAI_API_KEY
  • Parameter: xai_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("xai:grok-beta")

# With explicit API key
model = init_chat_model("xai:grok-beta", xai_api_key="...")

Perplexity

Provider ID: perplexity

Models:

  • llama-3.1-sonar-small-128k-online - Sonar Small (online)
  • llama-3.1-sonar-large-128k-online - Sonar Large (online)
  • llama-3.1-sonar-huge-128k-online - Sonar Huge (online)

Authentication:

  • Environment variable: PERPLEXITYAI_API_KEY
  • Parameter: perplexity_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("perplexity:llama-3.1-sonar-large-128k-online")

# With explicit API key
model = init_chat_model(
    "perplexity:llama-3.1-sonar-large-128k-online",
    perplexity_api_key="..."
)

Upstage

Provider ID: upstage

Models:

  • solar-1-mini-chat - Solar 1 Mini Chat
  • solar-pro - Solar Pro

Authentication:

  • Environment variable: UPSTAGE_API_KEY
  • Parameter: upstage_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("upstage:solar-1-mini-chat")

# With explicit API key
model = init_chat_model("upstage:solar-1-mini-chat", upstage_api_key="...")

IBM watsonx

Provider ID: ibm

Models:

  • IBM watsonx.ai foundation models (various)

Authentication:

  • Requires IBM Cloud credentials

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("ibm:ibm-model-name")

NVIDIA

Provider ID: nvidia

Models:

  • Various NVIDIA AI Foundation models

Authentication:

  • Environment variable: NVIDIA_API_KEY
  • Parameter: nvidia_api_key

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("nvidia:model-name")

# With explicit API key
model = init_chat_model("nvidia:model-name", nvidia_api_key="...")

Azure AI

Provider ID: azure_ai

Models:

  • Various Azure AI models

Authentication:

  • Azure AI credentials

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("azure_ai:model-name")

Google Anthropic Vertex

Provider ID: google_anthropic_vertex

Models:

  • Anthropic Claude models hosted on Google Vertex AI
  • claude-3-5-sonnet@20241022 - Claude 3.5 Sonnet
  • claude-3-opus@20240229 - Claude 3 Opus
  • claude-3-sonnet@20240229 - Claude 3 Sonnet

Authentication:

  • Google Cloud credentials (same as Vertex AI)

Examples:

from langchain.chat_models import init_chat_model

model = init_chat_model("google_anthropic_vertex:claude-3-5-sonnet@20241022")

Embeddings Providers

LangChain supports multiple embeddings providers through the init_embeddings() function.

OpenAI Embeddings

Provider ID: openai

Models:

  • text-embedding-3-small - Text Embedding 3 Small (1536 dimensions, configurable)
  • text-embedding-3-large - Text Embedding 3 Large (3072 dimensions, configurable)
  • text-embedding-ada-002 - Ada 002 (legacy, 1536 dimensions)

Authentication:

  • Environment variable: OPENAI_API_KEY
  • Parameter: openai_api_key

Examples:

from langchain.embeddings import init_embeddings

# Using environment variable
embeddings = init_embeddings("openai:text-embedding-3-small")
embeddings = init_embeddings("openai:text-embedding-3-large")

# With explicit API key
embeddings = init_embeddings(
    "openai:text-embedding-3-small",
    openai_api_key="sk-..."
)

# With configurable dimensions
embeddings = init_embeddings(
    "openai:text-embedding-3-small",
    dimensions=512  # Reduce from default 1536
)

embeddings = init_embeddings(
    "openai:text-embedding-3-large",
    dimensions=1024  # Reduce from default 3072
)

Azure OpenAI Embeddings

Provider ID: azure_openai

Models: Same as OpenAI embeddings models

Authentication:

  • Requires: azure_deployment, azure_endpoint, api_key or AZURE_OPENAI_API_KEY

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings(
    "azure_openai:text-embedding-3-small",
    azure_deployment="my-embedding-deployment",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key="..."
)

Google Vertex AI Embeddings

Provider ID: google_vertexai

Models:

  • text-embedding-004 - Text Embedding 004
  • textembedding-gecko@003 - Gecko 003
  • textembedding-gecko-multilingual@001 - Gecko Multilingual

Authentication:

  • Environment variable: GOOGLE_APPLICATION_CREDENTIALS
  • Google Cloud SDK authentication

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("google_vertexai:text-embedding-004")
embeddings = init_embeddings("google_vertexai:textembedding-gecko@003")

Google Generative AI Embeddings

Provider ID: google_genai

Models:

  • embedding-001 - Embedding 001
  • text-embedding-004 - Text Embedding 004

Authentication:

  • Environment variable: GOOGLE_API_KEY
  • Parameter: google_api_key

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("google_genai:embedding-001")
embeddings = init_embeddings("google_genai:text-embedding-004")

# With explicit API key
embeddings = init_embeddings(
    "google_genai:embedding-001",
    google_api_key="..."
)

AWS Bedrock Embeddings

Provider ID: bedrock

Models:

  • amazon.titan-embed-text-v1 - Titan Text Embeddings V1 (1536 dimensions)
  • amazon.titan-embed-text-v2:0 - Titan Text Embeddings V2 (multiple dimension options)
  • cohere.embed-english-v3 - Cohere English V3 (1024 dimensions)
  • cohere.embed-multilingual-v3 - Cohere Multilingual V3 (1024 dimensions)

Authentication:

  • AWS credentials (environment variables, IAM role, or ~/.aws/credentials)

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("bedrock:amazon.titan-embed-text-v1")
embeddings = init_embeddings("bedrock:amazon.titan-embed-text-v2:0")
embeddings = init_embeddings("bedrock:cohere.embed-english-v3")

# With explicit region
embeddings = init_embeddings(
    "bedrock:amazon.titan-embed-text-v1",
    region_name="us-east-1"
)

Cohere Embeddings

Provider ID: cohere

Models:

  • embed-english-v3.0 - English V3 (1024 dimensions)
  • embed-multilingual-v3.0 - Multilingual V3 (1024 dimensions)
  • embed-english-light-v3.0 - English Light V3 (384 dimensions)
  • embed-multilingual-light-v3.0 - Multilingual Light V3 (384 dimensions)
  • embed-english-v2.0 - English V2 (legacy, 4096 dimensions)
  • embed-english-light-v2.0 - English Light V2 (legacy, 1024 dimensions)

Authentication:

  • Environment variable: COHERE_API_KEY
  • Parameter: cohere_api_key

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("cohere:embed-english-v3.0")
embeddings = init_embeddings("cohere:embed-multilingual-v3.0")

# With explicit API key
embeddings = init_embeddings(
    "cohere:embed-english-v3.0",
    cohere_api_key="..."
)

Mistral AI Embeddings

Provider ID: mistralai

Models:

  • mistral-embed - Mistral Embed (1024 dimensions)

Authentication:

  • Environment variable: MISTRAL_API_KEY
  • Parameter: mistral_api_key

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("mistralai:mistral-embed")

# With explicit API key
embeddings = init_embeddings("mistralai:mistral-embed", mistral_api_key="...")

HuggingFace Embeddings

Provider ID: huggingface

Models: Any sentence transformer model from HuggingFace Hub

  • sentence-transformers/all-MiniLM-L6-v2 - All MiniLM L6 V2 (384 dimensions)
  • sentence-transformers/all-mpnet-base-v2 - All MPNet Base V2 (768 dimensions)
  • BAAI/bge-small-en-v1.5 - BGE Small English (384 dimensions)
  • BAAI/bge-base-en-v1.5 - BGE Base English (768 dimensions)
  • BAAI/bge-large-en-v1.5 - BGE Large English (1024 dimensions)

Authentication:

  • Environment variable: HUGGINGFACEHUB_API_TOKEN (optional for public models)
  • Parameter: huggingfacehub_api_token

Examples:

from langchain.embeddings import init_embeddings

# Public models (no authentication required)
embeddings = init_embeddings("huggingface:sentence-transformers/all-MiniLM-L6-v2")
embeddings = init_embeddings("huggingface:BAAI/bge-base-en-v1.5")

# With explicit API token
embeddings = init_embeddings(
    "huggingface:sentence-transformers/all-mpnet-base-v2",
    huggingfacehub_api_token="..."
)

Ollama Embeddings

Provider ID: ollama

Models: Any embeddings model available in your local Ollama installation

  • nomic-embed-text - Nomic Embed Text (768 dimensions)
  • mxbai-embed-large - MxBAI Embed Large (1024 dimensions)
  • all-minilm - All MiniLM (384 dimensions)

Authentication: None (local service)

Requirements: Ollama must be running locally (default: http://localhost:11434)

Examples:

from langchain.embeddings import init_embeddings

# Using local Ollama
embeddings = init_embeddings("ollama:nomic-embed-text")
embeddings = init_embeddings("ollama:mxbai-embed-large")
embeddings = init_embeddings("ollama:all-minilm")

# With custom base URL
embeddings = init_embeddings(
    "ollama:nomic-embed-text",
    base_url="http://localhost:11434"
)

Voyage AI Embeddings

Provider ID: voyage

Models:

  • voyage-2 - Voyage 2 (1024 dimensions)
  • voyage-large-2 - Voyage Large 2 (1536 dimensions)
  • voyage-code-2 - Voyage Code 2 (1536 dimensions)

Authentication:

  • Environment variable: VOYAGE_API_KEY
  • Parameter: voyage_api_key

Examples:

from langchain.embeddings import init_embeddings

embeddings = init_embeddings("voyage:voyage-2")
embeddings = init_embeddings("voyage:voyage-large-2")

# With explicit API key
embeddings = init_embeddings("voyage:voyage-2", voyage_api_key="...")

Provider Comparison

Chat Models

| Provider | Authentication | Notable Models | Special Features |
|---|---|---|---|
| OpenAI | API Key | GPT-4o, o1 | Industry standard, function calling |
| Anthropic | API Key | Claude 3.5 Sonnet | Long context, strong reasoning |
| Google Vertex AI | GCP Credentials | Gemini 1.5 Pro | Multimodal, enterprise features |
| AWS Bedrock | AWS Credentials | Claude, Llama, Titan | Enterprise deployment, multi-provider |
| Azure OpenAI | Azure Credentials | GPT-4o | Enterprise Azure integration |
| Ollama | None (local) | Llama, Mistral | Local, private, no API costs |

Embeddings Models

| Provider | Authentication | Notable Models | Dimensions | Special Features |
|---|---|---|---|---|
| OpenAI | API Key | text-embedding-3-small | 1536 (configurable) | High quality, configurable dimensions |
| OpenAI | API Key | text-embedding-3-large | 3072 (configurable) | Highest-quality OpenAI embeddings |
| Cohere | API Key | embed-english-v3.0 | 1024 | Strong retrieval optimization |
| Cohere | API Key | embed-multilingual-v3.0 | 1024 | 100+ languages supported |
| HuggingFace | Optional | all-MiniLM-L6-v2 | 384 | Fast, open-source, local |
| Ollama | None (local) | nomic-embed-text | 768 | Local, private, no API costs |
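
Whichever provider you choose, init_embeddings() returns an object with the same interface (embed_query() and embed_documents()), so downstream code such as similarity scoring stays provider-independent. A minimal sketch; the embedding calls are commented out because they require a configured provider and API key:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# With any provider (requires the corresponding credentials):
# from langchain.embeddings import init_embeddings
# embeddings = init_embeddings("openai:text-embedding-3-small")
# v1 = embeddings.embed_query("cats")
# v2 = embeddings.embed_query("kittens")
# print(cosine_similarity(v1, v2))
```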

Switching Between Providers

The unified string-based initialization makes it easy to switch providers:

import os
from langchain.chat_models import init_chat_model
from langchain.embeddings import init_embeddings

# Get provider from environment or config
chat_provider = os.getenv("CHAT_PROVIDER", "openai")
chat_model = os.getenv("CHAT_MODEL", "gpt-4o")
embed_provider = os.getenv("EMBED_PROVIDER", "openai")
embed_model = os.getenv("EMBED_MODEL", "text-embedding-3-small")

# Initialize with dynamic provider
model = init_chat_model(f"{chat_provider}:{chat_model}")
embeddings = init_embeddings(f"{embed_provider}:{embed_model}")

# Now you can switch providers by changing environment variables
# No code changes needed!
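
Building on that pattern, it can help to fail fast on a provider typo before any network call is made. The KNOWN_PROVIDERS set and build_model_string() below are hypothetical helpers for illustration, not part of LangChain:

```python
# Hypothetical validation helper (not part of LangChain): reject unknown
# provider IDs up front so a typo in CHAT_PROVIDER gives a clear error.
KNOWN_PROVIDERS = {
    "openai", "anthropic", "google_vertexai", "google_genai", "bedrock",
    "bedrock_converse", "azure_openai", "cohere", "mistralai", "groq",
    "ollama", "huggingface", "together", "fireworks", "deepseek", "xai",
}

def build_model_string(provider: str, model: str) -> str:
    if provider not in KNOWN_PROVIDERS:
        raise ValueError(f"Unknown provider {provider!r}")
    return f"{provider}:{model}"

# Usage with the environment-driven setup above:
# model = init_chat_model(build_model_string(chat_provider, chat_model))
```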

Common Configuration Parameters

Chat Models

# Temperature (all providers)
model = init_chat_model("provider:model", temperature=0.7)

# Max tokens (all providers)
model = init_chat_model("provider:model", max_tokens=1000)

# Timeout (all providers)
model = init_chat_model("provider:model", timeout=30.0)

# Rate limiting (all providers)
from langchain_core.rate_limiters import InMemoryRateLimiter
rate_limiter = InMemoryRateLimiter(requests_per_second=10 / 60)  # 10 requests per minute
model = init_chat_model("provider:model", rate_limiter=rate_limiter)

# Custom endpoint (OpenAI-compatible providers)
model = init_chat_model("provider:model", base_url="https://custom-endpoint.com/v1")

Embeddings

# Batch size (most providers)
embeddings = init_embeddings("provider:model", batch_size=100)

# Dimensions (OpenAI text-embedding-3-*)
embeddings = init_embeddings("openai:text-embedding-3-small", dimensions=512)

# Timeout (all providers)
embeddings = init_embeddings("provider:model", timeout=30.0)

Provider-Specific Notes

AWS Bedrock

  • Requires full model IDs with versions (e.g., anthropic.claude-3-sonnet-20240229-v1:0)
  • Uses standard AWS authentication (credentials, IAM roles, environment variables)
  • Regional endpoints available

Azure OpenAI

  • Requires deployment name and endpoint in addition to model name
  • Uses Azure authentication mechanisms
  • Models are deployed, not directly accessed

Ollama

  • Requires Ollama server running locally
  • No authentication required
  • Models must be pulled first: ollama pull <model-name>
  • Supports custom models

HuggingFace

  • Public models don't require authentication
  • Private models require API token
  • Models downloaded and cached locally on first use
  • Can run completely offline after first download

Install with Tessl CLI

npx tessl i tessl/pypi-langchain
