Building applications with LLMs through composability
—
Agents are the core abstraction in LangChain for building autonomous LLM-powered applications. An agent combines a language model, optional tools, and optional middleware into an executable graph that can handle complex interactions with automatic tool calling loops, streaming, persistence, and human-in-the-loop workflows.
LangChain agents are built on top of LangGraph (a low-level agent orchestration framework) and provide a simplified interface for quickly building agents without needing to understand the underlying graph structure.
Here's a minimal example to get started:
from langchain.agents import create_agent
from langchain.messages import HumanMessage

# Create a basic agent
agent = create_agent(model="openai:gpt-4o")

# Execute the agent
result = agent.invoke({
    "messages": [HumanMessage(content="Hello! What can you help me with?")]
})

print(result["messages"][-1].content)

create_agent() Function

The create_agent() function is the primary entry point for building agents. It creates a compiled graph that can be invoked, streamed, or run in batch mode.
def create_agent(
    model: str | BaseChatModel,
    *,
    tools: Sequence[BaseTool | Callable | dict] | None = None,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware] | None = None,
    response_format: ResponseFormat | type | None = None,
    state_schema: type[AgentState] | None = None,
    context_schema: type | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache | None = None,
) -> CompiledStateGraph

model (required)

The language model to use. Can be a string identifier or a BaseChatModel instance.
String format: "provider:model-name"
Examples:
"openai:gpt-4o""anthropic:claude-3-5-sonnet-20241022""google_vertexai:gemini-1.5-pro""bedrock:anthropic.claude-3-sonnet-20240229-v1:0"Usage:
# Using string identifier (recommended)
agent = create_agent(model="openai:gpt-4o")
# Using model instance
from langchain.chat_models import init_chat_model
model = init_chat_model("openai:gpt-4o", temperature=0.7)
agent = create_agent(model=model)

See the Chat Models documentation for the full list of supported providers.
tools

Tools available to the agent. Can be BaseTool instances, regular functions decorated with @tool, or dictionaries with tool specifications.
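The dictionary form is not shown in the examples below. As a sketch, a dict tool spec is assumed to follow the provider's function-calling schema; the OpenAI-style shape and the get_stock_price tool here are illustrative, not part of the LangChain API:

```python
# A tool passed as a plain dict carries only a schema, no Python
# implementation. The shape below is the common OpenAI-style
# function-calling format; verify against your provider's docs.
get_stock_price_spec = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Look up the latest price for a ticker symbol.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {"type": "string", "description": "e.g. AAPL"}
            },
            "required": ["ticker"],
        },
    },
}

# The dict can then be passed alongside regular tools:
# agent = create_agent(model="openai:gpt-4o", tools=[get_stock_price_spec])
```

Since a dict spec carries no Python implementation, executing such a tool call is presumably left to the caller.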
Usage:
from langchain.tools import tool

# Define a simple tool
@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Warning: eval() runs arbitrary code; safe only for trusted input
    return eval(expression)

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Create agent with tools
agent = create_agent(
    model="openai:gpt-4o",
    tools=[calculator, get_weather]
)

# Execute agent
result = agent.invoke({
    "messages": [HumanMessage(content="What is 123 * 456? Also, what's the weather in San Francisco?")]
})

system_prompt

System instructions for the agent. Can be a string or SystemMessage instance.
Usage:
# Simple string
agent = create_agent(
    model="openai:gpt-4o",
    system_prompt="You are a helpful math assistant. Always show your work."
)

# SystemMessage for more control
from langchain.messages import SystemMessage

agent = create_agent(
    model="openai:gpt-4o",
    system_prompt=SystemMessage(
        content="You are a helpful math assistant.",
        additional_kwargs={"role_name": "Math Tutor"}
    )
)

response_format

Configuration for structured output. Can be a Pydantic model, dataclass, TypedDict, or JSON schema.
Usage:
from pydantic import BaseModel

class WeatherReport(BaseModel):
    location: str
    temperature: float
    conditions: str
    humidity: int | None = None

agent = create_agent(
    model="openai:gpt-4o",
    response_format=WeatherReport,
    system_prompt="Extract weather information from user queries."
)

result = agent.invoke({
    "messages": [HumanMessage(content="It's 72 degrees and sunny in San Francisco")]
})

# Access structured output
weather = result["structured_response"]
print(f"{weather.location}: {weather.temperature}°F, {weather.conditions}")

Supported Schema Types:
- Pydantic BaseModel classes
- dataclass-decorated classes
- TypedDict type hints

Response Format Types:
# Strategy types
ToolStrategy # Use tool calls for structured output
ProviderStrategy # Use provider's native structured output (JSON mode)
AutoStrategy # Auto-detect best strategy (default)
# Union type
ResponseFormat = ToolStrategy | ProviderStrategy | AutoStrategy

Error Classes:
class StructuredOutputError(Exception):
    """Base error for structured output failures."""

class MultipleStructuredOutputsError(StructuredOutputError):
    """Raised when multiple output tools are called but only one expected."""

class StructuredOutputValidationError(StructuredOutputError):
    """Raised when structured output fails schema validation."""

middleware

Middleware plugins for customizing agent behavior. See the Middleware documentation for details.
from langchain.agents.middleware import LoggingMiddleware

agent = create_agent(
    model="openai:gpt-4o",
    middleware=[LoggingMiddleware()]
)

state_schema

Custom state schema that extends AgentState. Allows adding custom fields to agent state.
from langchain.agents import AgentState

class CustomState(AgentState):
    user_name: str
    conversation_count: int

agent = create_agent(
    model="openai:gpt-4o",
    state_schema=CustomState
)

result = agent.invoke({
    "messages": [HumanMessage(content="Hello")],
    "user_name": "Alice",
    "conversation_count": 1
})

context_schema

Schema for runtime context passed during execution.
from typing import TypedDict

class Context(TypedDict):
    user_id: str
    session_id: str

agent = create_agent(
    model="openai:gpt-4o",
    context_schema=Context
)

checkpointer

State persistence mechanism for resuming agent execution.
from langgraph.checkpoint.memory import MemorySaver

# Create checkpointer for persistence
checkpointer = MemorySaver()

agent = create_agent(
    model="openai:gpt-4o",
    checkpointer=checkpointer
)

# First conversation
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({
    "messages": [HumanMessage(content="My name is Alice")]
}, config=config)

# Continue conversation (agent remembers context)
result = agent.invoke({
    "messages": [HumanMessage(content="What's my name?")]
}, config=config)

store

Cross-thread data storage for sharing data between agent runs.
from langgraph.store.memory import InMemoryStore

store = InMemoryStore()

agent = create_agent(
    model="openai:gpt-4o",
    store=store
)

interrupt_before / interrupt_after

Node names where execution should pause, either before or after the named node runs. Useful for human-in-the-loop workflows.
from langgraph.checkpoint.memory import MemorySaver

# Interrupts require a checkpointer so the paused run can be resumed
agent = create_agent(
    model="openai:gpt-4o",
    tools=[calculator],
    checkpointer=MemorySaver(),
    interrupt_before=["tools"]  # Pause before executing tools
)

# Execution will pause before calling tools
config = {"configurable": {"thread_id": "conversation-1"}}
result = agent.invoke({
    "messages": [HumanMessage(content="What is 5 * 10?")]
}, config=config)

# Resume from the interrupt by invoking with None on the same thread
result = agent.invoke(None, config=config)

debug

Enable verbose logging for debugging.
agent = create_agent(
    model="openai:gpt-4o",
    debug=True  # Enable debug logging
)

name

Optional name for the CompiledStateGraph. This name is used automatically when the agent graph is added to another graph as a subgraph node, which is particularly useful for building multi-agent systems.
agent = create_agent(
    model="openai:gpt-4o",
    name="research_agent"
)

cache

Cache for agent execution results.
from langchain.cache import InMemoryCache

cache = InMemoryCache()

agent = create_agent(
    model="openai:gpt-4o",
    cache=cache
)

The create_agent() function returns a CompiledStateGraph, a runnable agent graph with the following methods:
# Synchronous methods
agent.invoke(input: dict) -> dict # Execute agent synchronously
agent.stream(input: dict) -> Iterator[dict] # Stream agent execution
agent.batch(inputs: list[dict]) -> list[dict] # Batch execution
# Asynchronous methods
agent.ainvoke(input: dict) -> dict # Async single execution
agent.astream(input: dict) -> AsyncIterator[dict] # Async streaming
agent.abatch(inputs: list[dict]) -> list[dict]  # Async batch execution

AgentState Schema

AgentState is the base state schema for agent execution. It's a TypedDict that contains the conversation history and optional structured output.
class AgentState(TypedDict):
    """
    Base state schema for agent execution.

    Attributes:
        messages: List of conversation messages
        structured_response: Present when using response_format, contains the structured output
        jump_to: Ephemeral field for control flow, used by middleware to redirect execution
    """
    messages: list[AnyMessage]
    structured_response: Any  # Optional
    jump_to: str  # Optional, ephemeral

messages (required)

The conversation history. This is the primary state field and is always present.
result = agent.invoke({
    "messages": [HumanMessage(content="Hello")]
})

# Access conversation history
for message in result["messages"]:
    print(f"{message.type}: {message.content}")

structured_response (optional)

Only present when response_format is specified in create_agent(). Contains the parsed structured output from the agent.
from pydantic import BaseModel

class WeatherReport(BaseModel):
    location: str
    temperature: float

agent = create_agent(
    model="openai:gpt-4o",
    response_format=WeatherReport
)

result = agent.invoke({
    "messages": [HumanMessage(content="It's 72 degrees in San Francisco")]
})

# Access structured output
weather = result["structured_response"]
print(f"{weather.location}: {weather.temperature}°F")

jump_to (optional)

Ephemeral field used by middleware to control execution flow. Can be set to "tools", "model", or "end" to redirect execution. See the Middleware documentation for details.
You can extend AgentState with custom fields by creating a new TypedDict that inherits from it:
from langchain.agents import AgentState, create_agent

class CustomState(AgentState):
    """Extended state with custom fields."""
    user_name: str
    conversation_count: int
    preferences: dict

agent = create_agent(
    model="openai:gpt-4o",
    state_schema=CustomState
)

# Custom fields available throughout execution
result = agent.invoke({
    "messages": [HumanMessage(content="Hello")],
    "user_name": "Alice",
    "conversation_count": 1,
    "preferences": {"theme": "dark"}
})

# Access custom fields in result
print(result["user_name"])  # "Alice"
print(result["conversation_count"])  # 1

LangChain provides special annotations for controlling state field visibility:
from typing import Annotated
from langchain.agents.middleware.types import OmitFromInput, OmitFromOutput, PrivateStateAttr

class CustomState(AgentState):
    # Field excluded from input schema
    computed_field: Annotated[int, OmitFromInput]
    # Field excluded from output schema
    internal_field: Annotated[str, OmitFromOutput]
    # Field completely private (not in input or output)
    private_field: Annotated[dict, PrivateStateAttr]

Use cases:
- OmitFromInput: for fields computed by the agent (e.g., token counts, timestamps)
- OmitFromOutput: for fields needed during execution but not returned (e.g., API keys)
- PrivateStateAttr: for completely internal fields (e.g., caches, temporary data)

The CompiledStateGraph returned by create_agent() provides several execution methods for different use cases.
invoke()

Execute the agent synchronously and return the final result.

result = agent.invoke({
    "messages": [HumanMessage(content="What is 2 + 2?")]
})
print(result["messages"][-1].content)  # "The answer is 4"

With config:
config = {
    "configurable": {"thread_id": "conversation-1"},
    "metadata": {"user_id": "123"}
}

result = agent.invoke({
    "messages": [HumanMessage(content="Hello")]
}, config=config)

batch()

Execute multiple inputs in parallel.
results = agent.batch([
    {"messages": [HumanMessage(content="What is 2 + 2?")]},
    {"messages": [HumanMessage(content="What is 3 * 3?")]},
    {"messages": [HumanMessage(content="What is 5 - 1?")]}
])

for result in results:
    print(result["messages"][-1].content)

stream()

Stream agent execution to receive intermediate steps and final output.
for chunk in agent.stream({
    "messages": [HumanMessage(content="What is 2 + 2?")]
}):
    print(chunk)

Stream modes:
# Stream values (default) - emits full state after each node
for chunk in agent.stream(
    {"messages": [HumanMessage(content="Hello")]},
    stream_mode="values"
):
    print(chunk)

# Stream updates - emits only state updates
for chunk in agent.stream(
    {"messages": [HumanMessage(content="Hello")]},
    stream_mode="updates"
):
    print(chunk)

# Stream messages - emits model messages token-by-token
for chunk in agent.stream(
    {"messages": [HumanMessage(content="Hello")]},
    stream_mode="messages"
):
    print(chunk)

ainvoke()

Execute the agent asynchronously.
result = await agent.ainvoke({
    "messages": [HumanMessage(content="What is 2 + 2?")]
})
print(result["messages"][-1].content)

astream()

Stream agent execution asynchronously.
async for chunk in agent.astream({
    "messages": [HumanMessage(content="What is 2 + 2?")]
}):
    print(chunk)

abatch()

Execute multiple inputs in parallel asynchronously.
results = await agent.abatch([
    {"messages": [HumanMessage(content="What is 2 + 2?")]},
    {"messages": [HumanMessage(content="What is 3 * 3?")]},
    {"messages": [HumanMessage(content="What is 5 - 1?")]}
])

for result in results:
    print(result["messages"][-1].content)

Complete example: agent with a calculator tool

from langchain.agents import create_agent
from langchain.chat_models import init_chat_model
from langchain.messages import HumanMessage
from langchain.tools import tool
# Define a simple tool
@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Warning: eval() runs arbitrary code; safe only for trusted input
    return eval(expression)

# Create model
model = init_chat_model("openai:gpt-4o", temperature=0)

# Create agent
agent = create_agent(
    model=model,
    tools=[calculator],
    system_prompt="You are a helpful math assistant. Use the calculator tool for computations.",
    debug=True
)

# Execute agent
result = agent.invoke({
    "messages": [HumanMessage(content="What is 123 * 456?")]
})
print(result["messages"][-1].content)

Complete example: structured weather extraction

from pydantic import BaseModel, Field
from langchain.agents import create_agent
from langchain.messages import HumanMessage
class WeatherReport(BaseModel):
    location: str = Field(description="The location name")
    temperature: float = Field(description="Temperature in Fahrenheit")
    conditions: str = Field(description="Weather conditions")
    humidity: int | None = Field(default=None, description="Humidity percentage")

agent = create_agent(
    model="openai:gpt-4o",
    response_format=WeatherReport,
    system_prompt="Extract weather information from user queries."
)

result = agent.invoke({
    "messages": [HumanMessage(content="It's 72 degrees and sunny in San Francisco with 60% humidity")]
})

# Access structured output
weather = result["structured_response"]
print(f"{weather.location}: {weather.temperature}°F, {weather.conditions}")
if weather.humidity:
    print(f"Humidity: {weather.humidity}%")

Complete example: multi-turn conversation with persistence

from langgraph.checkpoint.memory import MemorySaver
from langchain.agents import create_agent
from langchain.messages import HumanMessage
# Create checkpointer for persistence
checkpointer = MemorySaver()
agent = create_agent(
    model="openai:gpt-4o",
    checkpointer=checkpointer
)

# First conversation
config = {"configurable": {"thread_id": "conversation-1"}}
agent.invoke({
    "messages": [HumanMessage(content="My name is Alice and I live in San Francisco")]
}, config=config)

# Continue conversation (agent remembers context)
result = agent.invoke({
    "messages": [HumanMessage(content="What's my name and where do I live?")]
}, config=config)
print(result["messages"][-1].content)  # "Your name is Alice and you live in San Francisco"

Complete example: custom agent state
from langchain.agents import AgentState, create_agent
from langchain.messages import HumanMessage
class CustomState(AgentState):
    user_name: str
    conversation_count: int
    session_data: dict

agent = create_agent(
    model="openai:gpt-4o",
    state_schema=CustomState,
    system_prompt="Use the user's name when responding."
)

result = agent.invoke({
    "messages": [HumanMessage(content="Hello! How can you help me?")],
    "user_name": "Alice",
    "conversation_count": 1,
    "session_data": {"theme": "dark", "language": "en"}
})

# Access custom state in result
print(f"Conversation #{result['conversation_count']} with {result['user_name']}")

While you don't need to understand LangGraph to use LangChain agents, the create_agent() function returns a CompiledStateGraph from LangGraph. This means you can leverage all LangGraph features for advanced use cases:
Example: Using agent as a subgraph
from langgraph.graph import StateGraph
from langchain.agents import AgentState, create_agent

# Create specialized agents
research_agent = create_agent(
    model="openai:gpt-4o",
    name="researcher"
)

writing_agent = create_agent(
    model="openai:gpt-4o",
    name="writer"
)

# Compose into larger graph
graph = StateGraph(AgentState)
graph.add_node("research", research_agent)
graph.add_node("write", writing_agent)
graph.add_edge("research", "write")
graph.set_entry_point("research")
graph.set_finish_point("write")

multi_agent = graph.compile()

See the LangGraph documentation for advanced usage patterns and multi-agent architectures.
from langchain_core.language_models import BaseChatModel
from langchain_core.tools import BaseTool
from langchain_core.messages import AnyMessage, SystemMessage
from langgraph.types import Checkpointer
from langgraph.store.base import BaseStore
from langgraph.graph.state import CompiledStateGraph
# Agent middleware type
class AgentMiddleware:
    """Base class for middleware plugins."""
    pass

# Response format types
ResponseFormat = ToolStrategy | ProviderStrategy | AutoStrategy

Install with Tessl CLI:
npx tessl i tessl/pypi-langchain@1.2.1