
tessl/pypi-langchain

Building applications with LLMs through composability


LangChain

LangChain is a comprehensive framework for building agents and applications powered by Large Language Models (LLMs). It provides pre-built agent architecture and seamless integrations with major LLM providers including OpenAI, Anthropic, Google, and many others, enabling developers to create sophisticated LLM-powered applications with minimal code.

Package Information

  • Package Name: langchain
  • Package Type: pypi
  • Language: Python
  • Installation: pip install langchain
  • Version: 1.2.3

Quick Start

Get started with LangChain in 3 simple steps:

1. Install

pip install langchain

2. Create Your First Agent

from langchain.agents import create_agent
from langchain.messages import HumanMessage

# Create an agent with a model
agent = create_agent(
    model="openai:gpt-4o",
    system_prompt="You are a helpful assistant."
)

# Use the agent
result = agent.invoke({
    "messages": [HumanMessage(content="Hello! What can you help me with?")]
})

print(result["messages"][-1].content)

3. Add Tools

from langchain.tools import tool

@tool
def calculator(expression: str) -> float:
    """Evaluate a mathematical expression."""
    # Caution: eval() executes arbitrary code; only use it on trusted input.
    return eval(expression)

# Create agent with tools
agent = create_agent(
    model="openai:gpt-4o",
    tools=[calculator],
    system_prompt="You are a helpful math assistant."
)

result = agent.invoke({
    "messages": [HumanMessage(content="What is 42 * 137?")]
})

For a complete walkthrough, see Quickstart Guide.

Core Concepts

Agents

Agents combine language models, tools, and optional middleware into executable graphs. They handle complex interactions with automatic tool calling, streaming, persistence, and human-in-the-loop workflows.

def create_agent(
    model: str | BaseChatModel,
    *,
    tools: Sequence[BaseTool | Callable | dict] | None = None,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware] | None = None,
    checkpointer: Checkpointer | None = None,
    **kwargs
) -> CompiledStateGraph: ...

Learn more about Agents →

Chat Models

Initialize chat models for conversational AI from 20+ providers using simple string identifiers.

def init_chat_model(
    model: str | None = None,
    **kwargs: Any
) -> BaseChatModel: ...

Common providers: OpenAI (openai:gpt-4o), Anthropic (anthropic:claude-3-5-sonnet-20241022), Google (google_vertexai:gemini-1.5-pro)

Learn more about Chat Models →

Embeddings

Generate vector representations of text for semantic search, similarity matching, and RAG applications.

def init_embeddings(
    model: str,
    **kwargs: Any
) -> Embeddings: ...

Common providers: OpenAI (openai:text-embedding-3-small), Cohere (cohere:embed-english-v3.0), HuggingFace (huggingface:sentence-transformers/all-MiniLM-L6-v2)
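With embedding vectors in hand, semantic search typically reduces to comparing vectors with cosine similarity. A plain-Python sketch of that comparison (the vectors below are stand-ins for what an Embeddings instance would return from embed_query):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; in practice these come from embeddings.embed_query(...)
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 0.0, 0.0]
print(round(cosine_similarity(v1, v2), 4))  # 0.7071
```

In a real RAG pipeline you would embed documents once, embed each query, and rank documents by this score.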

Learn more about Embeddings →

Messages

Structured message types for conversations with support for multimodal content and tool calling.

class HumanMessage(BaseMessage): ...  # User input
class AIMessage(BaseMessage): ...      # AI response
class SystemMessage(BaseMessage): ...  # System instructions
class ToolMessage(BaseMessage): ...    # Tool execution results

Learn more about Messages →

Tools

Define executable functions that agents can call to perform actions.

@tool
def function_name(param: type) -> return_type:
    """Tool description for LLM."""
    return result

class BaseTool:
    """Base class for complex tools."""
    def _run(self, *args, **kwargs) -> Any: ...
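The decorator pattern works because Python functions carry their docstring and type hints at runtime. A simplified, langchain-free sketch of how a tool's description and argument schema can be derived (tool_sketch is a hypothetical helper for illustration, not the real @tool implementation):

```python
import inspect

def tool_sketch(fn):
    """Attach a description and argument schema derived from the function itself."""
    fn.description = inspect.getdoc(fn)  # the docstring that would be sent to the LLM
    fn.args = {
        name: param.annotation.__name__
        for name, param in inspect.signature(fn).parameters.items()
    }
    return fn

@tool_sketch
def get_length(text: str) -> int:
    """Return the number of characters in the text."""
    return len(text)

print(get_length.description)  # Return the number of characters in the text.
print(get_length.args)         # {'text': 'str'}
```

This is why the docstring and type annotations matter: they are the only information the model sees about when and how to call the tool.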

Learn more about Tools →

Common Patterns

Practical usage patterns: streaming, persistence, error handling, and async operations. See Documentation Navigation below for the full list.

Advanced Features

Power features for sophisticated applications: middleware, dependency injection, rate limiting, and structured output. See Documentation Navigation below for the full list.
Reference

Complete API documentation and important notes:

Essential Imports

# Agents
from langchain.agents import create_agent, AgentState

# Chat Models
from langchain.chat_models import init_chat_model

# Embeddings
from langchain.embeddings import init_embeddings

# Messages
from langchain.messages import (
    HumanMessage, AIMessage, SystemMessage, ToolMessage,
    trim_messages
)

# Tools
from langchain.tools import tool, BaseTool, ToolException

# Middleware
from langchain.agents.middleware import (
    before_agent, after_agent, before_model, after_model,
    wrap_model_call, wrap_tool_call, dynamic_prompt
)

# Rate Limiting
from langchain.rate_limiters import InMemoryRateLimiter

Critical Quick Tips

Agent Invocation

# ✅ CORRECT: Pass messages in dict with "messages" key
agent.invoke({"messages": [HumanMessage(content="Hello")]})

# ❌ WRONG: Don't pass messages directly
agent.invoke([HumanMessage(content="Hello")])  # Will fail!

Tool Docstrings

@tool
def my_tool(param: str) -> str:
    """This docstring is REQUIRED and sent to the LLM.

    Without it, the LLM won't understand when to use this tool.
    """
    return result

Model String Format

# Format: "provider:model-name"
model = init_chat_model("openai:gpt-4o")           # ✅ Correct
model = init_chat_model("anthropic:claude-3-5-sonnet-20241022")  # ✅ Correct
model = init_chat_model("gpt-4o")                  # ❌ Missing provider

Accessing Agent Results

result = agent.invoke({"messages": [...]})

# ✅ Access response content
response = result["messages"][-1].content  # Last message is AI response

# ✅ Access full messages list
all_messages = result["messages"]

For complete critical notes, see Critical Notes Reference.

Example: Complete Agent

Here's a complete example combining agents, tools, and persistence:

from langchain.agents import create_agent
from langchain.tools import tool
from langchain.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver

# Define a tool
@tool
def get_weather(location: str) -> str:
    """Get current weather for a location."""
    return f"The weather in {location} is sunny and 72°F"

# Create checkpointer for persistence
checkpointer = MemorySaver()

# Create agent
agent = create_agent(
    model="openai:gpt-4o",
    tools=[get_weather],
    system_prompt="You are a helpful weather assistant.",
    checkpointer=checkpointer
)

# Use with persistence
config = {"configurable": {"thread_id": "conversation-1"}}

# First message
agent.invoke({
    "messages": [HumanMessage(content="What's the weather in Paris?")]
}, config=config)

# Second message (agent remembers context)
result = agent.invoke({
    "messages": [HumanMessage(content="How about London?")]
}, config=config)

print(result["messages"][-1].content)

Documentation Navigation

Getting Started

  1. Quickstart Guide - Fast path to your first agent

Core Concepts (Essential knowledge)

  2. Agents - Agent creation and configuration
  3. Chat Models - Conversational AI initialization
  4. Embeddings - Vector representations for semantic search
  5. Messages - Message types and usage
  6. Tools - Creating and using tools

Common Patterns (Practical usage)

  7. Streaming - Real-time response streaming
  8. Persistence - State management
  9. Error Handling - Graceful error recovery
  10. Async Operations - Concurrent execution

Advanced Features (Power user topics)

  11. Middleware - Behavior customization
  12. Dependency Injection - Context access
  13. Rate Limiting - API throttling
  14. Structured Output - Typed responses

Reference (Complete details)

  15. Critical Notes - Important gotchas
  16. API Reference - Complete API docs
  17. Providers - All model providers

Architecture Overview

LangChain's architecture is built around composable components:

  • Agents: High-level abstraction combining models, tools, and middleware
  • Chat Models: Factory-based initialization from 20+ providers
  • Embeddings: Vector representations for semantic search and RAG
  • Messages: Structured conversation representation with multimodal support
  • Tools: Functions or classes that agents can invoke
  • Middleware: Extensible plugin system for customizing behavior
  • State: TypedDict-based state management with custom field support

The framework emphasizes composability and reusability, allowing you to build complex agentic workflows by combining simple components.
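The TypedDict-based state mentioned above can be extended with custom fields. A minimal sketch of the shape involved (the user_name field is illustrative; real custom state is wired in through create_agent's state options):

```python
from typing import TypedDict

class CustomAgentState(TypedDict):
    """An agent state with one custom field alongside the messages list."""
    messages: list   # conversation history (BaseMessage instances in practice)
    user_name: str   # illustrative custom field

state: CustomAgentState = {"messages": [], "user_name": "Ada"}
print(state["user_name"])  # Ada
```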

Version Compatibility

This documentation covers langchain version 1.2.3. API signatures and behavior may differ in other versions.

Next Steps

Install with Tessl CLI

npx tessl i tessl/pypi-langchain