Building applications with LLMs through composability
Get started with LangChain in minutes. This guide walks you through creating your first agent, adding tools, and using common features.
```bash
pip install langchain
```

For specific providers, you may need additional packages:

```bash
# OpenAI
pip install langchain-openai

# Anthropic
pip install langchain-anthropic

# Google
pip install langchain-google-vertexai
```

Create a simple conversational agent:
```python
from langchain.agents import create_agent
from langchain.messages import HumanMessage

# Create agent
agent = create_agent(
    model="openai:gpt-4o",
    system_prompt="You are a helpful assistant."
)

# Use the agent - IMPORTANT: pass messages in a dict with a "messages" key
result = agent.invoke({
    "messages": [HumanMessage(content="Hello! Introduce yourself.")]
})

# Access the AI response - it's the last message in the list
print(result["messages"][-1].content)
```

Output:

```
Hello! I'm an AI assistant created by OpenAI. I'm here to help answer your questions...
```

Note that input goes in under the `"messages"` key and the response comes back in `result["messages"]`.

Tools allow agents to perform actions:
```python
from langchain.agents import create_agent
from langchain.tools import tool
from langchain.messages import HumanMessage

# Define a tool - the docstring is REQUIRED
@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression.

    Args:
        expression: Mathematical expression to evaluate (e.g., "2 + 2")

    Returns:
        Result of the calculation
    """
    # Note: eval() is unsafe on untrusted input; use a proper
    # expression parser in production
    return eval(expression)

# Create agent with tools
agent = create_agent(
    model="openai:gpt-4o",
    tools=[calculator],
    system_prompt="You are a helpful math assistant. Use the calculator for math operations."
)

# The agent will automatically call the calculator tool when needed
result = agent.invoke({
    "messages": [HumanMessage(content="What is 42 * 137?")]
})
print(result["messages"][-1].content)
```

Output:
```
42 multiplied by 137 equals 5,754.
```

Agents can use multiple tools:
```python
from langchain.agents import create_agent
from langchain.tools import tool
from langchain.messages import HumanMessage

@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"Weather in {location}: Sunny, 72°F"

@tool
def get_time(timezone: str = "UTC") -> str:
    """Get current time in a timezone."""
    from datetime import datetime
    # Stub: returns local time regardless of the timezone argument
    return f"Current time: {datetime.now().strftime('%H:%M:%S')}"

# Create agent with multiple tools
agent = create_agent(
    model="openai:gpt-4o",
    tools=[get_weather, get_time],
    system_prompt="You are a helpful assistant with access to weather and time information."
)

result = agent.invoke({
    "messages": [HumanMessage(content="What's the weather in Paris and what time is it?")]
})
```

The agent will automatically call both tools as needed.
Stream responses in real-time:
```python
from langchain.agents import create_agent
from langchain.messages import HumanMessage

agent = create_agent(model="openai:gpt-4o")

# Stream the response
for chunk in agent.stream({
    "messages": [HumanMessage(content="Write a short poem about coding.")]
}):
    # Each chunk contains updates - check for messages
    if "messages" in chunk:
        for message in chunk["messages"]:
            if hasattr(message, 'content'):
                print(message.content, end="", flush=True)
```

Save conversation state across invocations:
```python
from langchain.agents import create_agent
from langchain.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Create checkpointer
checkpointer = MemorySaver()

# Create agent with persistence
agent = create_agent(
    model="openai:gpt-4o",
    checkpointer=checkpointer
)

# Configure with a thread ID for persistence
config = {"configurable": {"thread_id": "user-123"}}

# First message
agent.invoke({
    "messages": [HumanMessage(content="My name is Alice")]
}, config=config)

# Second message - the agent remembers context
result = agent.invoke({
    "messages": [HumanMessage(content="What's my name?")]
}, config=config)
print(result["messages"][-1].content)  # "Your name is Alice"
```

Key point: use the same `thread_id` to maintain conversation context.
Learn more about persistence →
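The mechanism is easy to picture: the checkpointer keys saved conversation history by `thread_id`, so every invocation with the same ID resumes the same transcript, and a new ID starts fresh. A toy in-memory sketch of that idea (plain Python, not the actual `MemorySaver` implementation):

```python
# Toy store: thread_id -> accumulated message history
store: dict[str, list[str]] = {}

def invoke_with_memory(thread_id: str, user_message: str) -> list[str]:
    """Append the new message to the thread's history and return it."""
    history = store.setdefault(thread_id, [])
    history.append(user_message)
    return history

invoke_with_memory("user-123", "My name is Alice")
print(invoke_with_memory("user-123", "What's my name?"))
# ['My name is Alice', "What's my name?"]

# A different thread_id starts a fresh conversation
print(invoke_with_memory("user-456", "Hello"))  # ['Hello']
```

A real checkpointer also stores model responses and intermediate state, but the thread-keyed lookup is the core idea.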
Handle tool errors gracefully:
```python
from langchain.agents import create_agent
from langchain.tools import tool, ToolException
from langchain.messages import HumanMessage

@tool
def divide(a: float, b: float) -> float:
    """Divide two numbers.

    Args:
        a: Numerator
        b: Denominator
    """
    if b == 0:
        raise ToolException(
            "Cannot divide by zero. Please provide a non-zero denominator."
        )
    return a / b

agent = create_agent(
    model="openai:gpt-4o",
    tools=[divide]
)

# If the agent tries to divide by zero, the error message
# is sent back to the LLM, which can retry with different parameters
result = agent.invoke({
    "messages": [HumanMessage(content="What is 10 divided by 0?")]
})
```

The agent sees the error and can respond appropriately.
Learn more about error handling →
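The control flow behind this is simple: when a tool raises, the error text is returned as the tool's result instead of crashing the run, so the model can see it and react on its next turn. A stripped-down sketch of that loop in plain Python (with a local exception class standing in for `ToolException`):

```python
class ToolError(Exception):
    """Stand-in for ToolException."""

def divide(a: float, b: float) -> float:
    if b == 0:
        raise ToolError("Cannot divide by zero.")
    return a / b

def run_tool(a: float, b: float) -> str:
    """Execute the tool; on failure, return the error text
    instead of raising, so the model can see it and react."""
    try:
        return str(divide(a, b))
    except ToolError as exc:
        return f"Tool error: {exc}"

print(run_tool(10, 0))  # Tool error: Cannot divide by zero.
print(run_tool(10, 2))  # 5.0
```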
Use async for concurrent execution:
```python
import asyncio
from langchain.agents import create_agent
from langchain.messages import HumanMessage

agent = create_agent(model="openai:gpt-4o")

async def main():
    # Async invocation
    result = await agent.ainvoke({
        "messages": [HumanMessage(content="Hello!")]
    })
    print(result["messages"][-1].content)

# Run async function
asyncio.run(main())
```

Learn more about async operations →
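The example above awaits a single call; the payoff of async is issuing several calls at once with `asyncio.gather`. A sketch of that pattern, with a stub coroutine standing in for `agent.ainvoke` so it runs without an API key:

```python
import asyncio

async def fake_ainvoke(prompt: str) -> str:
    """Stub standing in for agent.ainvoke - simulates I/O latency."""
    await asyncio.sleep(0.1)
    return f"answer to: {prompt}"

async def main():
    # Three "agent calls" run concurrently, not one after another
    results = await asyncio.gather(
        fake_ainvoke("Hello!"),
        fake_ainvoke("What's 2 + 2?"),
        fake_ainvoke("Tell me a joke."),
    )
    for r in results:
        print(r)

asyncio.run(main())
```

In a real program you would replace `fake_ainvoke(...)` with `agent.ainvoke({"messages": [...]})`; the `gather` pattern is the same.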
Switch between model providers easily:
```python
# OpenAI
agent = create_agent(model="openai:gpt-4o")

# Anthropic
agent = create_agent(model="anthropic:claude-3-5-sonnet-20241022")

# Google
agent = create_agent(model="google_vertexai:gemini-1.5-pro")

# With configuration
agent = create_agent(
    model="openai:gpt-4o",
    temperature=0.7,
    max_tokens=1000
)
```

Here's a complete example combining everything:
```python
from langchain.agents import create_agent
from langchain.tools import tool, ToolException
from langchain.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Define tools
@tool
def search_database(query: str) -> list[dict]:
    """Search the company database.

    Args:
        query: Search query
    """
    # Simulate database search
    return [
        {"id": 1, "name": "Product A", "price": 29.99},
        {"id": 2, "name": "Product B", "price": 49.99}
    ]

@tool
def check_inventory(product_id: int) -> int:
    """Check inventory for a product.

    Args:
        product_id: Product ID to check
    """
    if product_id not in [1, 2]:
        raise ToolException(f"Product {product_id} not found")
    # Simulate inventory check
    return 42  # In stock

# Create checkpointer for persistence
checkpointer = MemorySaver()

# Create agent
agent = create_agent(
    model="openai:gpt-4o",
    tools=[search_database, check_inventory],
    system_prompt="""You are a helpful shopping assistant.
    Help users find products and check availability.
    Always be friendly and informative.""",
    checkpointer=checkpointer
)

# Use the agent with persistence
config = {"configurable": {"thread_id": "customer-session-1"}}

# First query
result1 = agent.invoke({
    "messages": [HumanMessage(content="Search for products")]
}, config=config)
print("Response 1:", result1["messages"][-1].content)

# Follow-up query - the agent remembers context
result2 = agent.invoke({
    "messages": [HumanMessage(content="Check inventory for the first product")]
}, config=config)
print("Response 2:", result2["messages"][-1].content)
```

Common mistakes to avoid:

```python
# WRONG - passing messages directly
agent.invoke([HumanMessage(content="Hello")])

# CORRECT - pass a dict with a "messages" key
agent.invoke({"messages": [HumanMessage(content="Hello")]})
```

```python
# WRONG - no docstring
@tool
def my_tool(x: int) -> int:
    return x * 2

# CORRECT - has a docstring
@tool
def my_tool(x: int) -> int:
    """Multiply a number by 2."""
    return x * 2
```

```python
result = agent.invoke({"messages": [...]})

# WRONG - treating the result as a message
print(result.content)

# CORRECT - access the messages list and take the last message
print(result["messages"][-1].content)
```

```python
# WRONG - missing provider
agent = create_agent(model="gpt-4o")

# CORRECT - include the provider
agent = create_agent(model="openai:gpt-4o")
```

Now that you have the basics, explore more features:
```python
# Create agent
agent = create_agent(
    model="provider:model-name",
    tools=[tool1, tool2],
    system_prompt="...",
    checkpointer=MemorySaver()  # For persistence
)

# Invoke agent
result = agent.invoke(
    {"messages": [HumanMessage(content="...")]},
    config={"configurable": {"thread_id": "..."}}  # For persistence
)

# Access response
response = result["messages"][-1].content

# Stream responses
for chunk in agent.stream({"messages": [...]}):
    print(chunk)

# Async
result = await agent.ainvoke({"messages": [...]})
```

Install with Tessl CLI:
```bash
npx tessl i tessl/pypi-langchain
```