LangChain is a comprehensive TypeScript/JavaScript framework for building LLM-powered applications with modular, reusable components. It provides a standardized interface for creating production-ready agents that can reason about tasks, use tools, maintain state, and produce structured outputs.
Fast Lookups:
Getting Started:
Complete Documentation:
npm install langchain
Requirements:
import { createAgent, tool } from "langchain";
import { z } from "zod";
// Create a tool
const calculator = tool(
  async ({ expression }) => String(eval(expression)),
  {
    name: "calculator",
    description: "Evaluate mathematical expressions",
    schema: z.object({
      expression: z.string().describe("Mathematical expression to evaluate"),
    }),
  }
);
// Create an agent
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [calculator],
  systemPrompt: "You are a helpful math assistant.",
});
// Use the agent
const result = await agent.invoke({
  messages: [{ role: "user", content: "What is 15 * 7?" }],
});
console.log(result.messages[result.messages.length - 1].content);
Agents follow the ReAct (Reasoning + Acting) pattern, combining language models with tools to iteratively work towards solutions.
function createAgent<TConfig>(params: CreateAgentParams): ReactAgent<TConfig>;
interface CreateAgentParams {
  model?: string | ChatModel;
  tools?: Tool[] | ToolNode;
  systemPrompt?: string | SystemMessage;
  responseFormat?: ResponseFormat;
  stateSchema?: ZodObject | AnnotationRoot;
  contextSchema?: ZodObject | AnnotationRoot;
  middleware?: AgentMiddleware[];
  checkpointer?: BaseCheckpointSaver;
  store?: BaseStore;
  name?: string;
  description?: string;
  includeAgentName?: boolean | "tool_messages";
  signal?: AbortSignal;
  version?: "v1" | "v2";
}
class ReactAgent<TConfig> {
  invoke(input: UserInput, config?: InvokeConfiguration): Promise<State>;
  stream(input: UserInput, config?: StreamConfiguration): AsyncGenerator<State>;
  streamEvents(input: UserInput, config?: StreamConfiguration): AsyncGenerator<Event>;
  batch(inputs: UserInput[], config?: BatchConfiguration): Promise<State[]>;
}
Complete Agent Guide | Agent API Reference
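As a quick illustration of the ReactAgent surface beyond invoke, the sketch below runs several inputs in one batch call; it relies only on the signatures listed above, and the final state shape mirrors the quickstart.
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  systemPrompt: "Answer in one short sentence.",
});

// Run independent conversations concurrently; batch resolves to one
// final state per input, matching the batch signature above
const states = await agent.batch([
  { messages: [{ role: "user", content: "What is 12 * 12?" }] },
  { messages: [{ role: "user", content: "Name one prime number." }] },
]);

for (const state of states) {
  console.log(state.messages[state.messages.length - 1].content);
}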
Universal model initialization supporting 18+ LLM providers with a consistent interface.
function initChatModel<RunInput = any, CallOptions extends BaseChatModelCallOptions = BaseChatModelCallOptions>(
  model?: string | ChatModel,
  fields?: InitChatModelFields
): ChatModel<RunInput, CallOptions>;
Supported Providers: OpenAI, Azure OpenAI, Anthropic, Google (Vertex AI, Generative AI), Cohere, Mistral AI, AWS Bedrock, Ollama, Groq, Cerebras, DeepSeek, X.AI, Fireworks, Together AI, Perplexity
Model Guide | Model API Reference
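A minimal sketch of passing the optional fields argument; the exact option names accepted by InitChatModelFields vary by provider (temperature is shown here as an assumption), so check the Model Guide for what your provider supports.
import { createAgent, initChatModel } from "langchain";

// Provider is encoded in the "provider:model" string; temperature is an
// illustrative option and may differ per provider
const model = initChatModel("openai:gpt-4o", { temperature: 0 });

const agent = createAgent({ model, tools: [] });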
Tools give agents the ability to take actions by calling functions.
function tool<T = any>(
  func: (input: T) => any | Promise<any>,
  fields: {
    name: string;
    description: string;
    schema: ZodType<T>;
  }
): StructuredTool<T>;
Tool Guide | Tool API Reference
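Another tool() sketch, this time with a multi-field schema; the weather lookup body is a hypothetical stub, and only the helper's shape comes from the signature above.
import { tool } from "langchain";
import { z } from "zod";

// A tool whose input schema has several validated fields; the body is a
// placeholder rather than a real weather API call
const getWeather = tool(
  async ({ city, unit }) => {
    return JSON.stringify({ city, unit, temperatureC: 21 });
  },
  {
    name: "get_weather",
    description: "Get the current weather for a city",
    schema: z.object({
      city: z.string().describe("City name"),
      unit: z.enum(["celsius", "fahrenheit"]).describe("Temperature unit"),
    }),
  }
);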
Generate type-safe, validated responses using Zod schemas or JSON schemas.
function toolStrategy<T>(schema: ZodType<T>, options?: ToolStrategyOptions): ToolStrategy<T>;
function providerStrategy<T>(schema: ZodType<T>): ProviderStrategy<T>;
type ResponseFormat =
  | ZodType<any>
  | ZodType<any>[]
  | JsonSchemaFormat
  | JsonSchemaFormat[]
  | ToolStrategy<any>
  | ProviderStrategy<any>;
Structured Output Guide | Structured Output API Reference
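A sketch of choosing a strategy explicitly instead of passing a bare schema as responseFormat; it assumes toolStrategy is exported from langchain alongside createAgent, and reads the parsed value from structuredResponse as in the example further below.
import { createAgent, toolStrategy } from "langchain";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  confidence: z.number().min(0).max(1),
});

// Force tool-calling-based structured output (the export location of
// toolStrategy is an assumption)
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  responseFormat: toolStrategy(Sentiment),
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "I really enjoyed this library." }],
});
console.log(result.structuredResponse);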
Standardized message types for communication between users, AI, tools, and system instructions.
class HumanMessage extends BaseMessage { }
class AIMessage extends BaseMessage { }
class SystemMessage extends BaseMessage { }
class ToolMessage extends BaseMessage {
  tool_call_id: string;
}
function filterMessages(messages: BaseMessage[], options: FilterMessagesOptions): BaseMessage[];
function trimMessages(messages: BaseMessage[], options: TrimMessagesOptions): BaseMessage[];
Message Guide | Message API Reference
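A short sketch of the message helpers; it assumes the message classes and filterMessages are re-exported from langchain, and the includeTypes option name is an assumption, so verify the exact options against the Message Guide.
import { AIMessage, HumanMessage, SystemMessage, filterMessages } from "langchain";

const history = [
  new SystemMessage("You are terse."),
  new HumanMessage("Hi there!"),
  new AIMessage("Hello."),
  new HumanMessage("How are you?"),
];

// Keep only the human turns (option name is an assumption)
const humanTurns = filterMessages(history, { includeTypes: ["human"] });
console.log(humanTurns.length); // 2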
Composable middleware for extending agent behavior with cross-cutting concerns.
function createMiddleware<TSchema, TContextSchema, TTools>(
  config: MiddlewareConfig<TSchema, TContextSchema, TTools>
): AgentMiddleware<TSchema, TContextSchema, TTools>;
Pre-built Middleware: Human-in-the-Loop, Summarization, PII Detection, Tool/Model Retry, Rate Limiting, Prompt Caching, and more
Middleware Guide | Middleware System | Built-in Middleware Catalog
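A minimal createMiddleware sketch that logs around each model call; the hook names (beforeModel, afterModel) and their state argument are assumptions, so check the Middleware System docs for the actual hook set.
import { createAgent, createMiddleware } from "langchain";

// Sketch only: hook names and signatures are assumptions
const loggingMiddleware = createMiddleware({
  name: "LoggingMiddleware",
  beforeModel: async (state) => {
    console.log(`Calling model with ${state.messages.length} messages`);
  },
  afterModel: async (state) => {
    console.log(`History now has ${state.messages.length} messages`);
  },
});

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  middleware: [loggingMiddleware],
});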
Storage implementations for persisting key-value data.
class InMemoryStore<V = any> {
  mget(keys: string[]): Promise<(V | undefined)[]>;
  mset(keyValuePairs: [string, V][]): Promise<void>;
  mdelete(keys: string[]): Promise<void>;
  yieldKeys(prefix?: string): AsyncGenerator<string>;
}
Storage Guide | Storage API Reference
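A quick usage sketch of the key-value interface above; it assumes InMemoryStore is importable from langchain.
import { InMemoryStore } from "langchain";

const store = new InMemoryStore<number>();

// Batch write, batch read, key iteration, and delete, matching the
// methods listed above
await store.mset([["counts/a", 1], ["counts/b", 2]]);
const [a, b] = await store.mget(["counts/a", "counts/b"]);
console.log(a, b); // 1 2

for await (const key of store.yieldKeys("counts/")) {
  console.log(key);
}

await store.mdelete(["counts/a"]);
Structured output: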
import { createAgent } from "langchain";
import { z } from "zod";
const ContactInfo = z.object({
  name: z.string(),
  email: z.string().email(),
  phone: z.string(),
});
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  responseFormat: ContactInfo,
});
const result = await agent.invoke({
  messages: [{ role: "user", content: "Extract: John, john@example.com, 555-1234" }],
});
console.log(result.structuredResponse); // { name: "John", email: "john@example.com", phone: "555-1234" }
Custom state schema:
import { createAgent } from "langchain";
import { z } from "zod";
const StateSchema = z.object({
  sessionId: z.string(),
  userPreferences: z.object({
    theme: z.string(),
    language: z.string(),
  }).optional(),
});
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  stateSchema: StateSchema,
});
const result = await agent.invoke({
  messages: [{ role: "user", content: "Hello" }],
  sessionId: "session-123",
  userPreferences: { theme: "dark", language: "en" },
});
Human-in-the-loop tool approval:
import { createAgent, humanInTheLoopMiddleware } from "langchain";
const agent = createAgent({
model: "openai:gpt-4o",
tools: [dangerousTool],
middleware: [
humanInTheLoopMiddleware({ interruptOn: "tools" }),
],
checkpointer: myCheckpointer,
});import { createAgent } from "langchain";
const agent = createAgent({
model: "openai:gpt-4o",
tools: [],
});
for await (const state of agent.stream(
{ messages: [{ role: "user", content: "Tell me a story" }] },
{ streamMode: "values" }
)) {
const lastMessage = state.messages[state.messages.length - 1];
console.log(lastMessage.content);
}import { createAgent, initChatModel } from "langchain";
// Switch models based on requirements
const fastModel = initChatModel("openai:gpt-4o-mini");
const powerfulModel = initChatModel("openai:gpt-4o");
const localModel = initChatModel("ollama:llama3.1");
// Use any model with the same interface
const agent = createAgent({
  model: powerfulModel,
  tools: [],
});
Task-oriented guides for common use cases:
Complete API documentation:
Middleware system documentation:
Most providers require API keys via environment variables:
# OpenAI
export OPENAI_API_KEY="sk-..."
# Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
# Google
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
# Cohere
export COHERE_API_KEY="..."
# Groq
export GROQ_API_KEY="gsk_..."
Error classes:
class MultipleToolsBoundError extends Error {
message: "The model already has tools bound to it";
}
class MultipleStructuredOutputsError extends Error {
message: "Multiple structured outputs returned when one was expected";
outputs: any[];
}
class StructuredOutputParsingError extends Error {
message: string;
cause?: Error;
rawOutput?: string;
}
class ToolInvocationError extends Error {
message: string;
toolName: string;
toolInput: any;
cause?: Error;
}
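A hedged sketch of handling these errors around an agent call; it assumes the error classes are exported from langchain, and the agent setup is a placeholder.
import { createAgent, StructuredOutputParsingError, ToolInvocationError } from "langchain";

const agent = createAgent({ model: "openai:gpt-4o", tools: [] });

try {
  await agent.invoke({
    messages: [{ role: "user", content: "Extract the contact details." }],
  });
} catch (err) {
  if (err instanceof ToolInvocationError) {
    // Tool name and input are attached to the error, per the fields above
    console.error(`Tool ${err.toolName} failed:`, err.cause);
  } else if (err instanceof StructuredOutputParsingError) {
    console.error("Could not parse structured output:", err.rawOutput);
  } else {
    throw err;
  }
}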