Building applications with LLMs through composability
—
LangChain is a comprehensive framework for building agents and applications powered by Large Language Models (LLMs). It provides pre-built agent architecture and seamless integrations with major LLM providers including OpenAI, Anthropic, Google, and many others, enabling developers to create sophisticated LLM-powered applications with minimal code.
Get started with LangChain in 3 simple steps:

```shell
pip install langchain
```

```python
from langchain.agents import create_agent
from langchain.messages import HumanMessage

# Create an agent with a model
agent = create_agent(
    model="openai:gpt-4o",
    system_prompt="You are a helpful assistant."
)

# Use the agent
result = agent.invoke({
    "messages": [HumanMessage(content="Hello! What can you help me with?")]
})
print(result["messages"][-1].content)
```

```python
from langchain.tools import tool

@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression."""
    return eval(expression)  # Note: eval executes arbitrary code; unsafe for untrusted input

# Create agent with tools
agent = create_agent(
    model="openai:gpt-4o",
    tools=[calculator],
    system_prompt="You are a helpful math assistant."
)

result = agent.invoke({
    "messages": [HumanMessage(content="What is 42 * 137?")]
})
```

For a complete walkthrough, see the Quickstart Guide.
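The calculator tool above uses `eval`, which will execute any Python the model sends it. A safer arithmetic-only evaluator can be sketched with the standard-library `ast` module (the `safe_eval` helper below is illustrative, not part of LangChain):

```python
import ast
import operator

# Whitelisted arithmetic operators; anything outside this table is rejected
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expression: str) -> float:
    """Evaluate arithmetic expressions without eval's code-execution risk."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return walk(ast.parse(expression, mode="eval").body)

print(safe_eval("42 * 137"))  # 5754
```

Dropping `safe_eval` into the `calculator` tool body keeps the tool's signature and docstring unchanged while rejecting anything that is not plain arithmetic.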
Agents combine language models, tools, and optional middleware into executable graphs. They handle complex interactions with automatic tool calling, streaming, persistence, and human-in-the-loop workflows.
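The "automatic tool calling" loop can be pictured in a few lines of plain Python. The sketch below is a conceptual model only (fake model, dict-based messages, no LangChain imports), not LangChain's actual graph execution:

```python
# Simplified sketch of an agent's model/tool loop: call the model; if it
# requests a tool, run the tool and feed the result back; otherwise the
# model's text is the final answer.
def run_agent(model, tools, messages):
    while True:
        reply = model(messages)  # {"tool": name, "args": {...}} or {"text": ...}
        if "tool" in reply:
            result = tools[reply["tool"]](**reply["args"])
            messages = messages + [{"role": "tool", "content": str(result)}]
        else:
            return reply["text"]

# Fake model: asks for the calculator once, then answers with its result
def fake_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expression": "42 * 137"}}
    return {"text": f"The answer is {messages[-1]['content']}"}

tools = {"calculator": lambda expression: eval(expression)}
print(run_agent(fake_model, tools,
                [{"role": "user", "content": "What is 42 * 137?"}]))
# The answer is 5754
```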
```python
def create_agent(
    model: str | BaseChatModel,
    *,
    tools: Sequence[BaseTool | Callable | dict] | None = None,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware] | None = None,
    checkpointer: Checkpointer | None = None,
    **kwargs
) -> CompiledStateGraph: ...
```

Initialize chat models from 20+ providers using simple string identifiers for conversational AI.
```python
def init_chat_model(
    model: str | None = None,
    **kwargs: Any
) -> BaseChatModel: ...
```

Common providers: OpenAI (openai:gpt-4o), Anthropic (anthropic:claude-3-5-sonnet-20241022), Google (google_vertexai:gemini-1.5-pro)
Learn more about Chat Models →
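The string identifiers follow a `provider:model-name` pattern. A minimal sketch of how such an identifier splits into its two parts (the `parse_model_id` helper is hypothetical, not LangChain API):

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split 'provider:model-name' into (provider, model name) - illustrative only."""
    provider, sep, name = model_id.partition(":")
    if not sep or not provider or not name:
        raise ValueError(f"Expected 'provider:model-name', got {model_id!r}")
    return provider, name

print(parse_model_id("openai:gpt-4o"))  # ('openai', 'gpt-4o')
```

Note that only the first colon separates provider from model, so model names containing slashes (e.g. HuggingFace repos) pass through intact.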
Generate vector representations of text for semantic search, similarity matching, and RAG applications.
```python
def init_embeddings(
    model: str,
    **kwargs: Any
) -> Embeddings: ...
```

Common providers: OpenAI (openai:text-embedding-3-small), Cohere (cohere:embed-english-v3.0), HuggingFace (huggingface:sentence-transformers/all-MiniLM-L6-v2)
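Semantic search over embeddings typically ranks documents by cosine similarity between vectors. A dependency-free sketch (the toy 3-dimensional vectors below are made up; real embeddings come from the provider and have hundreds of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for a query and two documents
query = [0.9, 0.1, 0.0]
doc_about_same_topic = [0.8, 0.2, 0.1]
doc_about_other_topic = [0.0, 0.1, 0.9]

print(cosine_similarity(query, doc_about_same_topic) >
      cosine_similarity(query, doc_about_other_topic))  # True
```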
Structured message types for conversations with support for multimodal content and tool calling.
```python
class HumanMessage(BaseMessage): ...   # User input
class AIMessage(BaseMessage): ...      # AI response
class SystemMessage(BaseMessage): ...  # System instructions
class ToolMessage(BaseMessage): ...    # Tool execution results
```

Define executable functions that agents can call to perform actions.
```python
@tool
def function_name(param: type) -> return_type:
    """Tool description for LLM."""
    return result

class BaseTool:
    """Base class for complex tools."""
    def _run(self, *args, **kwargs) -> Any: ...
```

Quick links to common usage patterns:
Power features for sophisticated applications:
Complete API documentation and important notes:
```python
# Agents
from langchain.agents import create_agent, AgentState

# Chat Models
from langchain.chat_models import init_chat_model

# Embeddings
from langchain.embeddings import init_embeddings

# Messages
from langchain.messages import (
    HumanMessage, AIMessage, SystemMessage, ToolMessage,
    trim_messages
)

# Tools
from langchain.tools import tool, BaseTool, ToolException

# Middleware
from langchain.agents.middleware import (
    before_agent, after_agent, before_model, after_model,
    wrap_model_call, wrap_tool_call, dynamic_prompt
)

# Rate Limiting
from langchain.rate_limiters import InMemoryRateLimiter
```

Agent Invocation
```python
# ✅ CORRECT: Pass messages in dict with "messages" key
agent.invoke({"messages": [HumanMessage(content="Hello")]})

# ❌ WRONG: Don't pass messages directly
agent.invoke([HumanMessage(content="Hello")])  # Will fail!
```

Tool Docstrings
```python
@tool
def my_tool(param: str) -> str:
    """This docstring is REQUIRED and sent to the LLM.

    Without it, the LLM won't understand when to use this tool.
    """
    return result
```

Model String Format
```python
# Format: "provider:model-name"
model = init_chat_model("openai:gpt-4o")                         # ✅ Correct
model = init_chat_model("anthropic:claude-3-5-sonnet-20241022")  # ✅ Correct
model = init_chat_model("gpt-4o")                                # ❌ Missing provider
```

Accessing Agent Results
```python
result = agent.invoke({"messages": [...]})

# ✅ Access response content
response = result["messages"][-1].content  # Last message is AI response

# ✅ Access full messages list
all_messages = result["messages"]
```

For complete critical notes, see Critical Notes Reference.
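The shape of that result can be mocked without any API calls. In the stand-in below, `SimpleNamespace` substitutes for the real message classes, purely to show where the reply lives:

```python
from types import SimpleNamespace

# Stand-in for an agent result: the "messages" list holds the whole turn,
# with the AI's reply last (real entries are HumanMessage/AIMessage objects)
result = {
    "messages": [
        SimpleNamespace(type="human", content="What is 42 * 137?"),
        SimpleNamespace(type="ai", content="42 * 137 = 5754"),
    ]
}

response = result["messages"][-1].content
print(response)  # 42 * 137 = 5754
```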
Here's a complete example combining agents, tools, and persistence:
```python
from langchain.agents import create_agent
from langchain.tools import tool
from langchain.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Define a tool
@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Create checkpointer for persistence
checkpointer = MemorySaver()

# Create agent
agent = create_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful weather assistant.",
    checkpointer=checkpointer
)

# Use with persistence
config = {"configurable": {"thread_id": "conversation-1"}}

# First message
agent.invoke({
    "messages": [HumanMessage(content="What's the weather in Paris?")]
}, config=config)

# Second message (agent remembers context)
result = agent.invoke({
    "messages": [HumanMessage(content="How about London?")]
}, config=config)
print(result["messages"][-1].content)
```

Getting Started
Core Concepts (Essential knowledge)
2. Agents - Agent creation and configuration
3. Chat Models - Conversational AI initialization
4. Embeddings - Vector representations for semantic search
5. Messages - Message types and usage
6. Tools - Creating and using tools

Common Patterns (Practical usage)
7. Streaming - Real-time response streaming
8. Persistence - State management
9. Error Handling - Graceful error recovery
10. Async Operations - Concurrent execution

Advanced Features (Power user topics)
11. Middleware - Behavior customization
12. Dependency Injection - Context access
13. Rate Limiting - API throttling
14. Structured Output - Typed responses

Reference (Complete details)
15. Critical Notes - Important gotchas
16. API Reference - Complete API docs
17. Providers - All model providers
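The thread-scoped memory used in the complete example above can be pictured as a store keyed by `thread_id`. The in-memory sketch below illustrates the idea only; it is not how `MemorySaver` is actually implemented:

```python
# Conceptual sketch of thread-scoped persistence: each thread_id maps to
# its own message history, so separate conversations never mix.
class ThreadStore:
    def __init__(self):
        self._threads: dict[str, list[str]] = {}

    def append(self, thread_id: str, message: str) -> None:
        self._threads.setdefault(thread_id, []).append(message)

    def history(self, thread_id: str) -> list[str]:
        return self._threads.get(thread_id, [])

store = ThreadStore()
store.append("conversation-1", "What's the weather in Paris?")
store.append("conversation-1", "How about London?")
store.append("conversation-2", "Unrelated chat")

print(store.history("conversation-1"))      # both Paris and London turns
print(len(store.history("conversation-2")))  # 1
```

This is why the second invocation with `thread_id="conversation-1"` can resolve "How about London?" against the earlier Paris question: both turns live under the same key.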
LangChain's architecture is built around composable components:
The framework emphasizes composability and reusability, allowing you to build complex agentic workflows by combining simple components.
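As an illustration of that composition principle, three tiny stages (format a prompt, call a model, parse the output) chain into one pipeline. Everything here is a made-up stand-in (fake model, hypothetical `compose` helper), not LangChain's actual composition API:

```python
# Illustrative composition: each stage is a plain function, and a pipeline
# is just their left-to-right composition.
def compose(*stages):
    def pipeline(value):
        for stage in stages:
            value = stage(value)
        return value
    return pipeline

format_prompt = lambda topic: f"Write one fact about {topic}."
fake_model = lambda prompt: f"FACT({prompt})"        # pretend LLM call
parse = lambda text: text.removeprefix("FACT(").removesuffix(")")

chain = compose(format_prompt, fake_model, parse)
print(chain("Paris"))  # Write one fact about Paris.
```

Because each stage has the same shape (one value in, one value out), swapping the fake model for a real one, or inserting a retry stage, changes nothing else in the pipeline.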
This documentation covers langchain version 1.2.3. API signatures and behavior may differ in other versions.
Install with Tessl CLI
npx tessl i tessl/pypi-langchain