Chat Models

Universal chat model initialization supporting 18+ providers, with automatic provider inference and runtime configuration.

initChatModel

async function initChatModel<
  RunInput extends BaseLanguageModelInput = BaseLanguageModelInput,
  CallOptions extends ConfigurableChatModelCallOptions = ConfigurableChatModelCallOptions
>(
  model?: string,
  fields?: InitChatModelFields
): Promise<ConfigurableModel<RunInput, CallOptions>>;

interface InitChatModelFields {
  modelProvider?: ChatModelProvider;
  configurableFields?: string[] | "any";
  configPrefix?: string;
  profile?: ModelProfile;
  [key: string]: any;
}

interface ModelProfile {
  packageName: string;
  className: string;
}

type ChatModelProvider =
  | "openai" | "anthropic" | "azure_openai" | "cohere"
  | "google-vertexai" | "google-vertexai-web" | "google-genai" | "ollama"
  | "mistralai" | "mistral" | "groq" | "cerebras" | "bedrock" | "deepseek"
  | "xai" | "fireworks" | "together" | "perplexity";

Examples:

import { initChatModel } from "langchain";

// Automatic provider inference
const model1 = await initChatModel("gpt-4o");
const model2 = await initChatModel("claude-3-5-sonnet");

// Explicit provider
const model3 = await initChatModel("openai:gpt-4o");

// With configuration
const model4 = await initChatModel("gpt-4o", {
  temperature: 0.7,
  maxTokens: 1000,
});

// Azure OpenAI
const model5 = await initChatModel("my-deployment", {
  modelProvider: "azure_openai",
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
  azureOpenAIApiInstanceName: "my-instance",
  azureOpenAIApiDeploymentName: "my-deployment",
  azureOpenAIApiVersion: "2024-02-15-preview",
});

// Ollama (local)
const model6 = await initChatModel("llama2", {
  modelProvider: "ollama",
  baseUrl: "http://localhost:11434",
});
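
The configurableFields and configPrefix options declared in InitChatModelFields are not exercised above. A minimal sketch of restricting which fields may be overridden per call, following the call-options pattern used in Runtime Configuration below (exact runtime key handling may differ by version):

import { initChatModel } from "langchain";
import { HumanMessage } from "@langchain/core/messages";

// Only temperature may be overridden at call time; other fields stay
// fixed at their initialization values.
const restricted = await initChatModel("gpt-4o", {
  temperature: 0.7,
  configurableFields: ["temperature"],
});

const messages = [new HumanMessage("Hello!")];
const cooler = await restricted.invoke(messages, { temperature: 0.1 });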

ConfigurableModel

class ConfigurableModel extends BaseChatModel {
  model: BaseChatModel;

  invoke(
    input: BaseMessage[],
    options?: ConfigurableChatModelCallOptions
  ): Promise<AIMessage>;

  stream(
    input: BaseMessage[],
    options?: ConfigurableChatModelCallOptions
  ): Promise<IterableReadableStream<AIMessageChunk>>;
}

interface ConfigurableChatModelCallOptions extends RunnableConfig {
  temperature?: number;
  maxTokens?: number;
  [key: string]: any;
}
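
Since stream resolves to an IterableReadableStream, chunks can be consumed with for await. A short usage sketch (chunk contents are printed as strings here; structured content may need extra handling):

import { initChatModel } from "langchain";
import { HumanMessage } from "@langchain/core/messages";

const model = await initChatModel("openai:gpt-4o");

// Each chunk is an AIMessageChunk; print tokens as they arrive.
const stream = await model.stream([new HumanMessage("Tell me a joke")]);
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}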

Supported Providers

| Provider | Package | Class | Inference Pattern |
| --- | --- | --- | --- |
| openai | @langchain/openai | ChatOpenAI | gpt-3/4/5, o1, o3, o4 |
| anthropic | @langchain/anthropic | ChatAnthropic | claude |
| azure_openai | @langchain/openai | AzureChatOpenAI | - |
| cohere | @langchain/cohere | ChatCohere | command |
| google-vertexai | @langchain/google-vertexai | ChatVertexAI | gemini |
| google-vertexai-web | @langchain/google-vertexai-web | ChatVertexAI | - |
| google-genai | @langchain/google-genai | ChatGoogleGenerativeAI | - |
| ollama | @langchain/ollama | ChatOllama | - |
| mistralai | @langchain/mistralai | ChatMistralAI | mistral |
| mistral | @langchain/mistralai | ChatMistralAI | (alias) |
| groq | @langchain/groq | ChatGroq | - |
| cerebras | @langchain/cerebras | ChatCerebras | - |
| bedrock | @langchain/aws | ChatBedrockConverse | amazon. |
| deepseek | @langchain/deepseek | ChatDeepSeek | - |
| xai | @langchain/xai | ChatXAI | - |
| fireworks | @langchain/community/chat_models/fireworks | ChatFireworks | accounts/fireworks |
| together | @langchain/community/chat_models/togetherai | ChatTogetherAI | - |
| perplexity | @langchain/community/chat_models/perplexity | ChatPerplexity | sonar, pplx |
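
initChatModel loads the provider integration at runtime, so the matching package from the table must be installed first. For example (package name taken from the table above):

// Requires `npm install @langchain/anthropic` beforehand; otherwise
// initialization fails when the package cannot be imported.
const claude = await initChatModel("anthropic:claude-3-5-sonnet");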

Provider Inference

function _inferModelProvider(model: string): ChatModelProvider | undefined;

async function getChatModelByClassName(className: string): Promise<typeof BaseChatModel | undefined>;
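
These internal helpers resolve a model string to a provider and load the corresponding chat model class. A simplified, illustrative sketch of the prefix matching implied by the inference patterns in the table above (not the library's actual implementation):

// Illustrative only; _inferModelProvider covers more patterns.
function inferProviderSketch(model: string): string | undefined {
  if (/^(gpt-[345]|o[134])/.test(model)) return "openai";
  if (model.startsWith("claude")) return "anthropic";
  if (model.startsWith("command")) return "cohere";
  if (model.startsWith("gemini")) return "google-vertexai";
  if (model.startsWith("mistral")) return "mistralai";
  if (model.startsWith("amazon.")) return "bedrock";
  if (model.startsWith("accounts/fireworks")) return "fireworks";
  if (model.startsWith("sonar") || model.startsWith("pplx")) return "perplexity";
  return undefined;
}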

Environment Variables

  • OpenAI: OPENAI_API_KEY
  • Anthropic: ANTHROPIC_API_KEY
  • Google Vertex AI: GOOGLE_APPLICATION_CREDENTIALS
  • Google GenAI: GOOGLE_API_KEY
  • Cohere: COHERE_API_KEY
  • Groq: GROQ_API_KEY
  • Mistral: MISTRAL_API_KEY
  • DeepSeek: DEEPSEEK_API_KEY
  • xAI: XAI_API_KEY

Examples:

// Uses OPENAI_API_KEY from environment
const model = await initChatModel("gpt-4o");

// Explicit API key
const model2 = await initChatModel("gpt-4o", {
  openAIApiKey: "sk-...",
});

Usage with Agent

import { createAgent, initChatModel } from "langchain";

const model = await initChatModel("anthropic:claude-3-5-sonnet", {
  temperature: 0.5,
  maxTokens: 2000,
});

const agent = createAgent({
  model: model,
  tools: [searchTool],
  systemPrompt: "You are a research assistant.",
});
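
Invoking the resulting agent (a hedged sketch; the exact input shape for agents created with createAgent may vary by version):

const result = await agent.invoke({
  messages: [{ role: "user", content: "Find recent work on model routing." }],
});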

Runtime Configuration

import { initChatModel } from "langchain";

const model = await initChatModel("gpt-4o");

// Override temperature at runtime
const creative = await model.invoke(messages, { temperature: 0.9 });
const precise = await model.invoke(messages, { temperature: 0.1 });
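
When the model itself is left configurable at initialization, the same per-call mechanism can select a different model on each invocation. A sketch, assuming call-time values are accepted under the configurable key of RunnableConfig (which ConfigurableChatModelCallOptions extends); confirm the key convention against your langchain version:

const flexible = await initChatModel(undefined, {
  configurableFields: "any",
});

// Assumed key convention for per-call model selection.
const fromOpenAI = await flexible.invoke(messages, {
  configurable: { model: "openai:gpt-4o" },
});
const fromAnthropic = await flexible.invoke(messages, {
  configurable: { model: "anthropic:claude-3-5-sonnet" },
});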