tessl/pypi-langchain

Building applications with LLMs through composability


docs/core/messages.md

Messages

Core message types for structured conversations between users, AI assistants, and tools. Messages are the fundamental building blocks for LangChain applications, enabling rich conversational interfaces with multimodal support and tool integration.

Package Information

  • Module: langchain.messages
  • Source: Re-exported from langchain-core
  • Language: Python
  • Installation: pip install langchain

Core Imports

from langchain.messages import (
    HumanMessage, AIMessage, SystemMessage, ToolMessage, RemoveMessage,
    ToolCall, UsageMetadata, trim_messages
)

Basic Usage

from langchain.messages import HumanMessage, AIMessage, SystemMessage

# Create a basic conversation
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is the capital of France?"),
    AIMessage(content="The capital of France is Paris.")
]

# Messages support metadata and IDs
msg = HumanMessage(
    content="Translate this to French",
    id="msg_123",
    name="user_john",
    metadata={"language": "en", "target": "fr"}
)

# AI messages can include usage tracking
ai_msg = AIMessage(
    content="The capital of France is Paris.",
    usage_metadata={
        "input_tokens": 20,
        "output_tokens": 8,
        "total_tokens": 28
    }
)

Architecture

The message system follows a hierarchical structure designed for flexibility and extensibility:

  • BaseMessage: Abstract base class for all message types with common fields (content, id, metadata)
  • Concrete Message Types: Specialized message classes for different conversation participants
  • Content Blocks: Structured representations for multimodal content (text, images, audio, video)
  • Tool Calling Types: Primitives for function/tool invocation and results
  • Metadata Types: Token usage tracking and message annotations
  • Message Chunks: Streaming variants for real-time message delivery

This design enables rich conversational interfaces with full multimodal support, tool integration, and detailed usage tracking across all LLM providers supported by LangChain.

Capabilities

Core Message Types

The fundamental message classes representing different participants in a conversation: human users, AI assistants, system instructions, tool execution results, and message removal directives.

class HumanMessage(BaseMessage):
    """User/human input message."""

class AIMessage(BaseMessage):
    """LLM/assistant response message."""

class SystemMessage(BaseMessage):
    """System instruction message."""

class ToolMessage(BaseMessage):
    """Tool execution result message."""

class RemoveMessage(BaseMessage):
    """Directive to remove a message from context."""


Message Streaming

Streaming variants of messages for real-time delivery of content as it's generated, enabling responsive user interfaces and progressive content rendering.

class AIMessageChunk(BaseMessage):
    """Streaming chunk of AI message."""


Content Block Types

Structured content representations supporting multimodal inputs including text, images, audio, video, files, and reasoning traces. Content blocks enable rich, mixed-media conversations.

class TextContentBlock: ...
class PlainTextContentBlock: ...
class ImageContentBlock: ...
class AudioContentBlock: ...
class VideoContentBlock: ...
class DataContentBlock: ...
class FileContentBlock: ...
class ReasoningContentBlock: ...
class NonStandardContentBlock: ...


Tool Calling Types

Primitives for structured function and tool invocation, including tool call specifications, streaming chunks, invalid call handling, and server-side tool execution support.

class ToolCall: ...
class ToolCallChunk: ...
class InvalidToolCall: ...
class ServerToolCall: ...
class ServerToolCallChunk: ...
class ServerToolResult: ...


Metadata and Annotations

Token usage tracking and message annotation types for monitoring costs, performance, and adding structured metadata to messages throughout the conversation lifecycle.

class UsageMetadata: ...
class InputTokenDetails: ...
class OutputTokenDetails: ...
class Citation: ...
class NonStandardAnnotation: ...
Annotation = Citation | NonStandardAnnotation


Message Utilities

Helper functions and type unions for managing message lists, trimming conversations by token count, and working with message-like representations.

def trim_messages(...): ...
AnyMessage = Union[HumanMessage, AIMessage, SystemMessage, ToolMessage, RemoveMessage]
MessageLikeRepresentation = Union[...]



Core Message Types

HumanMessage

Represents input from a human user. This is the primary message type for capturing user queries, commands, and conversational input.

class HumanMessage(BaseMessage):
    """
    User/human input message.

    Attributes:
    - content: str | list[ContentBlock] - Message content (text or multimodal)
    - id: str | None - Unique message identifier
    - name: str | None - Optional sender name
    - metadata: dict - Additional metadata
    - response_metadata: dict - Response-specific metadata

    Example:
    >>> msg = HumanMessage(content="Hello, how are you?")
    >>> msg = HumanMessage(
    ...     content="What is the weather today?",
    ...     name="user_123",
    ...     metadata={"location": "Paris"}
    ... )
    """

Usage Example:

from langchain.messages import HumanMessage

# Simple text message
msg = HumanMessage(content="What is the weather today?")

# Message with metadata
msg = HumanMessage(
    content="Translate this to French",
    name="user_123",
    metadata={"language": "en", "target": "fr"}
)

# Message with ID for tracking
msg = HumanMessage(
    content="Important query",
    id="msg_abc_123"
)

For multimodal content (images, audio, video), see the API Reference for complete content block types.

AIMessage

Represents a response from an AI assistant or language model. This message type includes support for tool calls and usage tracking.

class AIMessage(BaseMessage):
    """
    LLM/assistant response message.

    Attributes:
    - content: str | list[ContentBlock] - Message content
    - id: str | None - Unique message identifier
    - name: str | None - Optional assistant name
    - tool_calls: list[ToolCall] - Tool/function calls made by the AI
    - invalid_tool_calls: list[InvalidToolCall] - Malformed tool calls
    - usage_metadata: UsageMetadata | None - Token usage information
    - metadata: dict - Additional metadata
    - response_metadata: dict - Provider-specific response data

    Example:
    >>> msg = AIMessage(content="The weather is sunny today.")
    >>> msg = AIMessage(
    ...     content="Here is your answer.",
    ...     usage_metadata={
    ...         "input_tokens": 50,
    ...         "output_tokens": 20,
    ...         "total_tokens": 70
    ...     }
    ... )
    """

Usage Example:

from langchain.messages import AIMessage

# Simple response
msg = AIMessage(content="The capital of France is Paris.")

# Response with usage metadata
msg = AIMessage(
    content="Here is your answer.",
    usage_metadata={
        "input_tokens": 50,
        "output_tokens": 20,
        "total_tokens": 70
    }
)

# Response with metadata
msg = AIMessage(
    content="Analysis complete.",
    metadata={"confidence": 0.95, "model": "gpt-4"}
)

For tool calling patterns, see tools.md. For streaming responses, see ../patterns/streaming.md.

SystemMessage

Represents system-level instructions or context that guides the AI's behavior. System messages are typically used to set personality, role, or behavioral constraints.

class SystemMessage(BaseMessage):
    """
    System instruction message.

    Attributes:
    - content: str | list[ContentBlock] - System instruction content
    - id: str | None - Unique message identifier
    - name: str | None - Optional system message name
    - metadata: dict - Additional metadata

    Example:
    >>> msg = SystemMessage(content="You are a helpful assistant.")
    >>> msg = SystemMessage(
    ...     content="You are a Python expert. Answer concisely.",
    ...     name="system_instruction"
    ... )
    """

Usage Example:

from langchain.messages import SystemMessage

# Basic system instruction
msg = SystemMessage(content="You are a helpful customer service agent.")

# Detailed system instruction
msg = SystemMessage(
    content="""You are an expert Python developer.
    - Provide clear, concise answers
    - Include code examples when helpful
    - Explain your reasoning
    - Always consider edge cases"""
)

# Named system message
msg = SystemMessage(
    content="Respond in JSON format only.",
    name="format_instruction"
)

ToolMessage

Represents the result of a tool or function execution. Tool messages are used to provide execution results back to the AI assistant after it has requested a tool call.

class ToolMessage(BaseMessage):
    """
    Tool execution result message.

    Attributes:
    - content: str | list[ContentBlock] - Tool execution result
    - tool_call_id: str - ID linking to the corresponding ToolCall
    - name: str | None - Tool name
    - status: str | None - Execution status ("success", "error")
    - metadata: dict - Additional metadata

    Example:
    >>> msg = ToolMessage(
    ...     content='{"temperature": 72, "condition": "sunny"}',
    ...     tool_call_id="call_123",
    ...     name="get_weather"
    ... )
    """

Usage Example:

from langchain.messages import ToolMessage

# Successful tool execution
msg = ToolMessage(
    content='{"result": "Paris", "country": "France"}',
    tool_call_id="call_abc123",
    name="get_capital"
)

# Tool execution with error
msg = ToolMessage(
    content="Error: API rate limit exceeded",
    tool_call_id="call_xyz789",
    name="web_search",
    status="error"
)

# Tool execution with metadata
msg = ToolMessage(
    content="42",
    tool_call_id="call_123",
    name="calculate",
    metadata={"execution_time_ms": 150}
)

For complete tool calling workflows, see tools.md.

RemoveMessage

A special directive message used to remove a specific message from the conversation context. This is useful for managing context length and removing outdated or irrelevant information.

class RemoveMessage(BaseMessage):
    """
    Directive to remove a message from context.

    Attributes:
    - id: str - ID of the message to remove

    Example:
    >>> msg = RemoveMessage(id="msg_to_remove_123")
    """

Usage Example:

from langchain.messages import RemoveMessage, HumanMessage

# Create a conversation
messages = [
    HumanMessage(content="Hello", id="msg1"),
    HumanMessage(content="What's 2+2?", id="msg2"),
    HumanMessage(content="Never mind", id="msg3")
]

# Remove a specific message
remove_directive = RemoveMessage(id="msg2")

# This signals the system to remove msg2 from context
messages.append(remove_directive)
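The directive itself carries only the target id; how a runtime applies it can be sketched with plain dicts (illustrative only — in practice the framework consuming the messages, such as a LangGraph state reducer, performs the removal):

```python
def apply_removals(messages: list[dict]) -> list[dict]:
    """Drop messages targeted by remove directives, plus the directives themselves."""
    remove_ids = {m["id"] for m in messages if m.get("type") == "remove"}
    return [
        m for m in messages
        if m.get("type") != "remove" and m["id"] not in remove_ids
    ]

history = [
    {"type": "human", "id": "msg1", "content": "Hello"},
    {"type": "human", "id": "msg2", "content": "What's 2+2?"},
    {"type": "remove", "id": "msg2"},  # directive targeting msg2
]
print(apply_removals(history))  # only msg1 survives
```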

Message Attributes

All messages share common attributes for tracking and metadata:

Core Attributes

  • content: The message content (string or list of content blocks)
  • id: Unique message identifier for tracking and removal
  • name: Optional sender/source name
  • metadata: Custom metadata dictionary
  • response_metadata: Provider-specific response data

AIMessage-Specific Attributes

  • tool_calls: List of tool/function calls made by the AI
  • invalid_tool_calls: List of malformed tool calls
  • usage_metadata: Token usage information (see below)

ToolMessage-Specific Attributes

  • tool_call_id: Links the result to the original tool call
  • status: Execution status ("success", "error", etc.)

Usage Metadata

Token usage information for tracking model consumption and costs. Available on AIMessage and AIMessageChunk.

class UsageMetadata:
    """
    Token usage information.

    Attributes:
    - input_tokens: int - Number of input tokens
    - output_tokens: int - Number of output tokens
    - total_tokens: int - Total tokens (input + output)
    - input_token_details: InputTokenDetails | None - Detailed input breakdown
    - output_token_details: OutputTokenDetails | None - Detailed output breakdown

    Example:
    >>> usage = {
    ...     "input_tokens": 50,
    ...     "output_tokens": 120,
    ...     "total_tokens": 170
    ... }
    """

Usage Example:

from langchain.messages import AIMessage

# Message with usage metadata
msg = AIMessage(
    content="Here is a detailed explanation...",
    usage_metadata={
        "input_tokens": 50,
        "output_tokens": 120,
        "total_tokens": 170
    }
)

# Access usage information
if msg.usage_metadata:
    print(f"Input tokens: {msg.usage_metadata['input_tokens']}")
    print(f"Output tokens: {msg.usage_metadata['output_tokens']}")
    print(f"Total cost: ${(msg.usage_metadata['total_tokens'] * 0.00001):.6f}")

# Accumulate usage across conversation
total_tokens = 0
for message in messages:
    if hasattr(message, 'usage_metadata') and message.usage_metadata:
        total_tokens += message.usage_metadata['total_tokens']
print(f"Total conversation tokens: {total_tokens}")

trim_messages Utility

Trim a message list by token count or message count to manage context length and stay within model limits.

def trim_messages(
    messages: list[BaseMessage],
    *,
    max_tokens: int | None = None,
    token_counter: Callable | None = None,
    strategy: Literal["first", "last"] = "last",
    allow_partial: bool = False,
    start_on: str | list[str] | None = None,
    end_on: str | list[str] | None = None,
    include_system: bool = True,
) -> list[BaseMessage]:
    """
    Trim message list by token count or message count.

    Parameters:
    - messages: List of messages to trim
    - max_tokens: Maximum token count to keep
    - token_counter: Function to count tokens (defaults to approximate counter)
    - strategy: Keep "first" or "last" messages
    - allow_partial: Allow partial message content
    - start_on: Message type(s) to start on
    - end_on: Message type(s) to end on
    - include_system: Whether to always include system messages

    Returns:
    Trimmed list of messages

    Example:
    >>> from langchain.messages import trim_messages, HumanMessage, AIMessage
    >>> messages = [
    ...     HumanMessage(content="Hi"),
    ...     AIMessage(content="Hello!"),
    ...     HumanMessage(content="How are you?"),
    ...     AIMessage(content="I'm doing well, thanks!")
    ... ]
    >>> trimmed = trim_messages(messages, max_tokens=50, strategy="last")
    """

Usage Example:

from langchain.messages import trim_messages, SystemMessage, HumanMessage, AIMessage

# Create a long conversation
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is Python?"),
    AIMessage(content="Python is a programming language..."),
    HumanMessage(content="What about Java?"),
    AIMessage(content="Java is another programming language..."),
    HumanMessage(content="Which is better?"),
    AIMessage(content="Both have their strengths...")
]

# Trim to last 100 tokens, keeping system message
trimmed = trim_messages(
    messages,
    max_tokens=100,
    strategy="last",
    include_system=True
)

# Trim so the kept window starts on a human message
trimmed = trim_messages(
    messages,
    max_tokens=200,
    start_on="human",
    include_system=True
)

# Use with custom token counter
from tiktoken import encoding_for_model

def token_counter(messages):
    # Assumes string content; multimodal (list) content needs extra handling
    encoding = encoding_for_model("gpt-4")
    return sum(len(encoding.encode(msg.content)) for msg in messages)

trimmed = trim_messages(
    messages,
    max_tokens=500,
    token_counter=token_counter
)
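To build intuition for the strategy="last" and include_system semantics, here is a simplified stdlib-only sketch using plain dicts and a crude word count in place of a real tokenizer (not the actual implementation):

```python
def trim_last(messages: list[dict], max_tokens: int, include_system: bool = True) -> list[dict]:
    """Keep the newest messages whose combined 'token' count fits the budget."""
    def count(m: dict) -> int:
        return len(m["content"].split())  # crude stand-in for a tokenizer

    system = [m for m in messages if m["type"] == "system"] if include_system else []
    rest = [m for m in messages if m["type"] != "system"]

    budget = max_tokens - sum(count(m) for m in system)
    kept: list[dict] = []
    for m in reversed(rest):  # walk from the newest message backwards
        if count(m) > budget:
            break
        kept.insert(0, m)     # preserve chronological order
        budget -= count(m)
    return system + kept

msgs = [
    {"type": "system", "content": "You are helpful."},
    {"type": "human", "content": "one two three four five"},
    {"type": "ai", "content": "six seven"},
]
print(trim_last(msgs, max_tokens=6))  # system message + newest AI reply fit
```

With a budget of 6 "tokens", the 3-token system message is reserved first, the 2-token AI reply fits, and the 5-token human message is dropped.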

Complete Conversation Example

from langchain.messages import (
    SystemMessage, HumanMessage, AIMessage, ToolMessage, trim_messages
)

# Build a conversation with all message types
messages = [
    SystemMessage(content="You are a helpful weather assistant."),
    HumanMessage(content="What's the weather in Paris?"),
    AIMessage(
        content="Let me check that for you.",
        tool_calls=[
            {
                "name": "get_weather",
                "args": {"city": "Paris"},
                "id": "call_123"
            }
        ],
        usage_metadata={
            "input_tokens": 25,
            "output_tokens": 15,
            "total_tokens": 40
        }
    ),
    ToolMessage(
        content='{"temperature": 22, "condition": "sunny"}',
        tool_call_id="call_123",
        name="get_weather"
    ),
    AIMessage(
        content="The weather in Paris is 22°C and sunny.",
        usage_metadata={
            "input_tokens": 45,
            "output_tokens": 12,
            "total_tokens": 57
        }
    )
]

# Trim if needed
if len(messages) > 4:
    messages = trim_messages(messages, max_tokens=500, include_system=True)

# Calculate total usage
total_tokens = sum(
    msg.usage_metadata.get('total_tokens', 0)
    for msg in messages
    if hasattr(msg, 'usage_metadata') and msg.usage_metadata
)
print(f"Total tokens used: {total_tokens}")

Message Streaming

Streaming variants of messages enable real-time delivery of content as it's generated by the language model. This provides responsive user interfaces and allows progressive rendering of responses.

AIMessageChunk

A streamable chunk of an AI message, allowing incremental delivery of content, tool calls, and metadata as they are generated.

class AIMessageChunk(BaseMessage):
    """
    Streaming chunk of AI message.

    Attributes:
    - content: str | list[ContentBlock] - Partial message content
    - tool_call_chunks: list[ToolCallChunk] - Partial tool calls
    - usage_metadata: UsageMetadata | None - Cumulative token usage
    - metadata: dict - Additional metadata
    - response_metadata: dict - Provider-specific response data

    Note:
    - Chunks can be concatenated to form complete messages
    - Tool call chunks accumulate to form complete ToolCall objects
    - Usage metadata may be updated with each chunk

    Example:
    >>> chunk1 = AIMessageChunk(content="The capital")
    >>> chunk2 = AIMessageChunk(content=" of France")
    >>> chunk3 = AIMessageChunk(content=" is Paris.")
    >>> full_message = chunk1 + chunk2 + chunk3
    """

Usage Example:

from langchain.messages import AIMessageChunk

# Streaming content chunks (`model` is an initialized chat model instance)
async for chunk in model.astream(messages):
    if isinstance(chunk, AIMessageChunk):
        print(chunk.content, end="", flush=True)

# Accumulating chunks
chunks = []
async for chunk in model.astream(messages):
    chunks.append(chunk)

# Combine all chunks into final message
final_message = chunks[0]
for chunk in chunks[1:]:
    final_message += chunk

# Streaming with tool calls
async for chunk in model.astream(messages):
    if chunk.tool_call_chunks:
        for tool_chunk in chunk.tool_call_chunks:
            print(f"Tool: {tool_chunk.name}, Args: {tool_chunk.args}")
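The `+` accumulation semantics can be illustrated with a minimal standalone class (a sketch only; the real AIMessageChunk also merges tool call chunks and response metadata):

```python
class Chunk:
    """Minimal stand-in for AIMessageChunk, showing `+` accumulation."""
    def __init__(self, content: str, usage: int = 0):
        self.content = content
        self.usage = usage  # stand-in for cumulative usage_metadata

    def __add__(self, other: "Chunk") -> "Chunk":
        # Content concatenates; usage counters are summed
        return Chunk(self.content + other.content, self.usage + other.usage)

chunks = [Chunk("The capital", 3), Chunk(" of France", 2), Chunk(" is Paris.", 3)]
full = chunks[0]
for c in chunks[1:]:
    full = full + c
print(full.content)  # The capital of France is Paris.
```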

Content Block Types

Structured content representations supporting multimodal inputs including text, images, audio, video, files, and reasoning traces. Content blocks enable rich, mixed-media conversations across different LLM providers.

TextContentBlock

Standard text content block for structured text representation within messages.

class TextContentBlock:
    """
    Text content block.

    Attributes:
    - type: Literal["text"]
    - text: str - The text content

    Example:
    >>> block = {"type": "text", "text": "Hello world"}
    """

PlainTextContentBlock

Plain text content block for simple, unformatted text.

class PlainTextContentBlock:
    """
    Plain text content block.

    Attributes:
    - type: Literal["plain_text"]
    - text: str - The plain text content

    Example:
    >>> block = {"type": "plain_text", "text": "Simple text"}
    """

ImageContentBlock

Image content block supporting various image sources including URLs, base64 data, and file paths.

class ImageContentBlock:
    """
    Image content block.

    Attributes:
    - type: Literal["image"]
    - source: dict - Image source specification
      - type: "url" | "base64" | "file"
      - url: str (for url type)
      - data: str (for base64 type)
      - path: str (for file type)
      - media_type: str - MIME type (e.g., "image/png")
    - detail: str | None - Detail level ("low", "high", "auto")

    Example:
    >>> block = {
    ...     "type": "image",
    ...     "source": {
    ...         "type": "url",
    ...         "url": "https://example.com/image.jpg"
    ...     }
    ... }
    >>> block = {
    ...     "type": "image",
    ...     "source": {
    ...         "type": "base64",
    ...         "media_type": "image/png",
    ...         "data": "iVBORw0KGgoAAAANS..."
    ...     }
    ... }
    """

Usage Example:

from langchain.messages import HumanMessage

# Image from URL
msg = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        {
            "type": "image",
            "source": {
                "type": "url",
                "url": "https://example.com/photo.jpg"
            }
        }
    ]
)

# Image from base64
msg = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this"},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": "iVBORw0KGgo..."
            },
            "detail": "high"
        }
    ]
)
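The base64 `data` field is produced from raw image bytes; a stdlib sketch (the byte string below is the 8-byte PNG signature, standing in for a real file read):

```python
import base64

# In practice: image_bytes = open("photo.png", "rb").read()
image_bytes = b"\x89PNG\r\n\x1a\n"  # stand-in for real image bytes

block = {
    "type": "image",
    "source": {
        "type": "base64",
        "media_type": "image/png",
        "data": base64.b64encode(image_bytes).decode("ascii"),
    },
}
print(block["source"]["data"])  # iVBORw0KGgo=
```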

AudioContentBlock

Audio content block for voice and audio data.

class AudioContentBlock:
    """
    Audio content block.

    Attributes:
    - type: Literal["audio"]
    - source: dict - Audio source specification
      - type: "url" | "base64" | "file"
      - url: str (for url type)
      - data: str (for base64 type)
      - path: str (for file type)
      - media_type: str - MIME type (e.g., "audio/mp3", "audio/wav")

    Example:
    >>> block = {
    ...     "type": "audio",
    ...     "source": {
    ...         "type": "base64",
    ...         "media_type": "audio/mp3",
    ...         "data": "SUQzBAAAAAAAI1RTU0UAAAA..."
    ...     }
    ... }
    """

VideoContentBlock

Video content block for video data and multimedia.

class VideoContentBlock:
    """
    Video content block.

    Attributes:
    - type: Literal["video"]
    - source: dict - Video source specification
      - type: "url" | "base64" | "file"
      - url: str (for url type)
      - data: str (for base64 type)
      - path: str (for file type)
      - media_type: str - MIME type (e.g., "video/mp4", "video/webm")

    Example:
    >>> block = {
    ...     "type": "video",
    ...     "source": {
    ...         "type": "url",
    ...         "url": "https://example.com/video.mp4"
    ...     }
    ... }
    """

DataContentBlock

Generic data content block for structured data payloads.

class DataContentBlock:
    """
    Data content block.

    Attributes:
    - type: Literal["data"]
    - data: Any - The data payload
    - format: str | None - Data format specification

    Example:
    >>> block = {
    ...     "type": "data",
    ...     "data": {"key": "value"},
    ...     "format": "json"
    ... }
    """

FileContentBlock

File content block for file attachments and document references.

class FileContentBlock:
    """
    File content block.

    Attributes:
    - type: Literal["file"]
    - source: dict - File source specification
      - type: "url" | "base64" | "file"
      - url: str (for url type)
      - data: str (for base64 type)
      - path: str (for file type)
      - media_type: str - MIME type
    - name: str | None - File name

    Example:
    >>> block = {
    ...     "type": "file",
    ...     "source": {
    ...         "type": "file",
    ...         "path": "/path/to/document.pdf",
    ...         "media_type": "application/pdf"
    ...     },
    ...     "name": "document.pdf"
    ... }
    """

ReasoningContentBlock

Reasoning content block for extended thinking and chain-of-thought responses.

class ReasoningContentBlock:
    """
    Reasoning/thinking content block.

    Used by models that support extended reasoning (e.g., o1, o3 models)
    to expose their internal reasoning process.

    Attributes:
    - type: Literal["reasoning"]
    - reasoning: str - The reasoning/thinking content

    Example:
    >>> block = {
    ...     "type": "reasoning",
    ...     "reasoning": "Let me think through this step by step..."
    ... }
    """

NonStandardContentBlock

Non-standard content block for provider-specific content types that don't fit standard categories.

class NonStandardContentBlock:
    """
    Non-standard content block for provider-specific content.

    Attributes:
    - type: str - Custom content type identifier
    - [additional_fields]: Any - Provider-specific fields

    Example:
    >>> block = {
    ...     "type": "custom_provider_type",
    ...     "custom_field": "value"
    ... }
    """

Tool Calling Types

Primitives for structured function and tool invocation, including tool call specifications, streaming chunks, invalid call handling, and server-side tool execution support.

ToolCall

Specification for a tool or function call made by the AI assistant.

class ToolCall:
    """
    Specification for a tool function call.

    Attributes:
    - name: str - The name of the tool/function to call
    - args: dict - Arguments to pass to the tool
    - id: str - Unique identifier for this tool call
    - type: Literal["tool_call"] - Type discriminator

    Example:
    >>> tool_call = {
    ...     "name": "get_weather",
    ...     "args": {"city": "Paris", "units": "celsius"},
    ...     "id": "call_abc123"
    ... }
    """

Usage Example:

from langchain.messages import AIMessage, ToolMessage

# AI makes a tool call
ai_msg = AIMessage(
    content="Let me check the weather for you.",
    tool_calls=[
        {
            "name": "get_weather",
            "args": {"city": "Paris", "units": "celsius"},
            "id": "call_123"
        }
    ]
)

# Tool execution result
tool_msg = ToolMessage(
    content='{"temperature": 22, "condition": "sunny"}',
    tool_call_id="call_123",
    name="get_weather"
)

# AI responds with result
final_msg = AIMessage(
    content="The weather in Paris is 22°C and sunny."
)

ToolCallChunk

Streaming chunk of a tool call, allowing incremental delivery of tool call information.

class ToolCallChunk:
    """
    Streaming chunk of a tool call.

    Attributes:
    - name: str | None - Partial tool name
    - args: str | None - Partial JSON args string
    - id: str | None - Tool call ID
    - index: int | None - Position in tool calls array
    - type: Literal["tool_call_chunk"] - Type discriminator

    Note:
    - Chunks accumulate to form complete ToolCall objects
    - Args are streamed as JSON string and need parsing when complete

    Example:
    >>> chunk1 = {"name": "get_weather", "id": "call_123", "index": 0}
    >>> chunk2 = {"args": '{"city":', "index": 0}
    >>> chunk3 = {"args": ' "Paris"}', "index": 0}
    """
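Accumulation can be sketched with plain dicts and the stdlib (illustrative; chat model integrations normally assemble complete tool calls for you):

```python
import json
from collections import defaultdict

chunks = [
    {"name": "get_weather", "id": "call_123", "args": "", "index": 0},
    {"name": None, "id": None, "args": '{"city":', "index": 0},
    {"name": None, "id": None, "args": ' "Paris"}', "index": 0},
]

calls: dict[int, dict] = defaultdict(lambda: {"name": None, "id": None, "args": ""})
for c in chunks:
    slot = calls[c["index"]]                  # group chunks by array position
    slot["name"] = slot["name"] or c["name"]  # name/id arrive in one chunk
    slot["id"] = slot["id"] or c["id"]
    slot["args"] += c["args"] or ""           # args stream as JSON fragments

tool_call = calls[0]
tool_call["args"] = json.loads(tool_call["args"])  # parse once complete
print(tool_call)  # {'name': 'get_weather', 'id': 'call_123', 'args': {'city': 'Paris'}}
```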

InvalidToolCall

Represents a malformed or invalid tool call that couldn't be properly parsed.

class InvalidToolCall:
    """
    Malformed or invalid tool call.

    Attributes:
    - name: str | None - Attempted tool name (if parseable)
    - args: str | None - Raw arguments string
    - id: str | None - Tool call ID (if present)
    - error: str | None - Error description
    - type: Literal["invalid_tool_call"] - Type discriminator

    Example:
    >>> invalid_call = {
    ...     "name": "unknown_tool",
    ...     "args": '{"malformed_json": ',
    ...     "id": "call_123",
    ...     "error": "Failed to parse tool arguments as JSON"
    ... }
    """

Usage Example:

from langchain.messages import AIMessage

# AI message with invalid tool call
ai_msg = AIMessage(
    content="I'll try to help with that.",
    invalid_tool_calls=[
        {
            "name": "search_web",
            "args": '{"query": "incomplete json...',
            "id": "call_456",
            "error": "JSON parse error: Unexpected end of input"
        }
    ]
)

# Check for invalid calls
if ai_msg.invalid_tool_calls:
    for invalid_call in ai_msg.invalid_tool_calls:
        print(f"Invalid tool call: {invalid_call['error']}")

ServerToolCall

Server-side tool call executed by the model provider rather than the client.

class ServerToolCall:
    """
    Server-side tool call (model-executed).

    Used for tools that are executed by the model provider's infrastructure
    rather than by the client application.

    Attributes:
    - name: str - The tool name
    - args: dict - Tool arguments
    - id: str - Unique identifier
    - type: Literal["server_tool_call"] - Type discriminator

    Example:
    >>> server_call = {
    ...     "name": "code_interpreter",
    ...     "args": {"code": "print(2 + 2)"},
    ...     "id": "server_call_123"
    ... }
    """

ServerToolCallChunk

Streaming chunk of a server-side tool call.

class ServerToolCallChunk:
    """
    Streaming chunk of server tool call.

    Attributes:
    - name: str | None - Partial tool name
    - args: str | None - Partial JSON args string
    - id: str | None - Tool call ID
    - index: int | None - Position in tool calls array
    - type: Literal["server_tool_call_chunk"] - Type discriminator

    Example:
    >>> chunk = {
    ...     "name": "code_interpreter",
    ...     "id": "server_call_123",
    ...     "index": 0
    ... }
    """

ServerToolResult

Result of a server-side tool execution.

class ServerToolResult:
    """
    Result of server-side tool execution.

    Attributes:
    - tool_call_id: str - ID linking to the ServerToolCall
    - content: str | list[ContentBlock] - Execution result
    - status: str | None - Execution status
    - metadata: dict | None - Additional metadata

    Example:
    >>> result = {
    ...     "tool_call_id": "server_call_123",
    ...     "content": "4",
    ...     "status": "success",
    ...     "metadata": {"execution_time_ms": 45}
    ... }
    """

Metadata and Annotations

Token usage tracking and message annotation types for monitoring costs, performance, and adding structured metadata to messages throughout the conversation lifecycle.

InputTokenDetails

Detailed breakdown of input token usage.

class InputTokenDetails:
    """
    Detailed breakdown of input tokens.

    Attributes:
    - cached_tokens: int | None - Tokens served from cache
    - text_tokens: int | None - Tokens from text content
    - audio_tokens: int | None - Tokens from audio content
    - image_tokens: int | None - Tokens from image content
    - video_tokens: int | None - Tokens from video content

    Example:
    >>> details = {
    ...     "cached_tokens": 20,
    ...     "text_tokens": 30,
    ...     "image_tokens": 500
    ... }
    """

OutputTokenDetails

Detailed breakdown of output token usage.

class OutputTokenDetails:
    """
    Detailed breakdown of output tokens.

    Attributes:
    - text_tokens: int | None - Tokens in text content
    - audio_tokens: int | None - Tokens in audio content
    - reasoning_tokens: int | None - Tokens in reasoning/thinking

    Example:
    >>> details = {
    ...     "text_tokens": 100,
    ...     "reasoning_tokens": 50
    ... }
    """
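These breakdowns enable finer-grained cost accounting. The sketch below computes effective billable input tokens under an assumed 50% discount for cached tokens (the discount rate is hypothetical; provider pricing varies):

```python
usage = {
    "input_tokens": 50,
    "output_tokens": 120,
    "total_tokens": 170,
    "input_token_details": {"cached_tokens": 20, "text_tokens": 30},
}

CACHED_DISCOUNT = 0.5  # assumption: cached input tokens billed at half price
details = usage["input_token_details"]
fresh = usage["input_tokens"] - details["cached_tokens"]
billable = fresh + details["cached_tokens"] * CACHED_DISCOUNT
print(billable)  # 40.0
```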

Citation

Citation annotation for referencing data from documents, particularly useful for RAG (Retrieval-Augmented Generation) applications.

class Citation:
    """
    Citation annotation for citing data from documents.

    Important for RAG applications to track which parts of the response
    are grounded in source documents.

    Attributes:
    - type: Literal["citation"]
    - id: str | None - Unique citation identifier
    - url: str | None - URL to source document
    - title: str | None - Title of source document
    - start_index: int | None - Start position in response text
    - end_index: int | None - End position in response text
    - cited_text: str | None - The text being cited
    - extras: dict | None - Additional citation metadata

    Important Note:
    The start_index and end_index refer to positions in the response text,
    not the source text.

    Example:
    >>> citation = {
    ...     "type": "citation",
    ...     "id": "cite_1",
    ...     "url": "https://example.com/doc.pdf",
    ...     "title": "Research Paper on AI",
    ...     "start_index": 100,
    ...     "end_index": 250,
    ...     "cited_text": "AI has transformed various industries...",
    ...     "extras": {"page": 5, "author": "Smith et al."}
    ... }
    """

Usage Example:

from langchain.messages import AIMessage

# AI message with citation annotations
msg = AIMessage(
    content="AI has transformed various industries by enabling automation and decision-making.",
    annotations=[
        {
            "type": "citation",
            "id": "cite_1",
            "url": "https://example.com/research.pdf",
            "title": "The Impact of AI",
            "start_index": 0,
            "end_index": 84,
            "cited_text": "AI has transformed various industries by enabling automation",
            "extras": {"page": 12}
        }
    ]
)
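Because `start_index` and `end_index` point into the response text (see the note above), plain string slicing recovers each cited span. A sketch with dicts standing in for the Citation shape; `extract_citations` is a hypothetical helper, not part of the langchain API:

```python
def extract_citations(text: str, annotations: list[dict]) -> list[str]:
    """Return the response-text span each citation annotation covers."""
    return [
        text[a["start_index"]:a["end_index"]]
        for a in annotations
        if a.get("type") == "citation"
        and a.get("start_index") is not None
        and a.get("end_index") is not None
    ]

response = "AI has transformed various industries."
spans = extract_citations(
    response,
    [{"type": "citation", "start_index": 0, "end_index": 6}],
)
# spans == ["AI has"]
```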

NonStandardAnnotation

Provider-specific annotation format for custom annotation types that don't fit standard categories.

class NonStandardAnnotation:
    """
    Provider-specific annotation format.

    Used for custom or provider-specific annotation types that are not
    covered by the standard annotation types.

    Attributes:
    - type: str - Custom annotation type identifier
    - id: str | None - Unique annotation identifier
    - value: Any - Provider-specific annotation value

    Example:
    >>> annotation = {
    ...     "type": "custom_provider_annotation",
    ...     "id": "ann_123",
    ...     "value": {"custom_field": "custom_value"}
    ... }
    """

Annotation

Union type representing all possible annotation types.

Annotation = Citation | NonStandardAnnotation
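Code that consumes annotations can branch on the `type` discriminator to handle both members of the union. A sketch with dicts standing in for Citation and NonStandardAnnotation; `describe_annotation` is a hypothetical helper:

```python
def describe_annotation(ann: dict) -> str:
    if ann.get("type") == "citation":
        return f"citation -> {ann.get('url', '<no url>')}"
    # Any other type is provider-specific (NonStandardAnnotation)
    return f"non-standard ({ann['type']}): {ann.get('value')}"

print(describe_annotation({"type": "citation", "url": "https://example.com/doc.pdf"}))
print(describe_annotation({"type": "x_custom", "value": {"custom_field": "custom_value"}}))
```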

Message Utilities

Helper functions and type unions for managing message lists, trimming conversations by token count, and working with message-like representations.
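The core idea behind `trim_messages` with the "last" strategy can be sketched in a few lines: keep the most recent messages whose combined token count fits a budget, optionally preserving a leading system message. This standalone sketch uses `(role, content)` tuples and a crude word-count tokenizer; the real `trim_messages` accepts a `token_counter` and several more options:

```python
def trim_last(messages, max_tokens, keep_system=True):
    """Keep the newest messages that fit within max_tokens."""
    count = lambda m: len(m[1].split())  # crude word-count "tokenizer"
    head = []
    if keep_system and messages and messages[0][0] == "system":
        head, messages = [messages[0]], messages[1:]
    budget = max_tokens - sum(count(m) for m in head)
    kept = []
    for m in reversed(messages):  # walk newest -> oldest
        if count(m) > budget:
            break
        budget -= count(m)
        kept.append(m)
    return head + kept[::-1]

msgs = [
    ("system", "You are helpful"),
    ("human", "first question here"),
    ("ai", "a fairly long answer with many words in it"),
    ("human", "second question"),
]
trimmed = trim_last(msgs, max_tokens=10)
# trimmed == [("system", "You are helpful"), ("human", "second question")]
```

Walking newest-to-oldest and stopping at the first message that no longer fits keeps the conversation suffix contiguous, which is what chat models expect.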

AnyMessage

Type union representing any message type.

AnyMessage = Union[
    HumanMessage,
    AIMessage,
    SystemMessage,
    ToolMessage,
    RemoveMessage,
    AIMessageChunk
]

Usage Example:

from langchain.messages import AnyMessage, HumanMessage, AIMessage

def process_message(message: AnyMessage) -> str:
    """Process any message type."""
    if isinstance(message, HumanMessage):
        return f"User says: {message.content}"
    elif isinstance(message, AIMessage):
        return f"AI says: {message.content}"
    else:
        return f"Other message: {message.content}"

# Works with any message type
msg1 = HumanMessage(content="Hello")
msg2 = AIMessage(content="Hi there")

print(process_message(msg1))
print(process_message(msg2))

MessageLikeRepresentation

Type union for message-like representations including tuples, dicts, and strings.

MessageLikeRepresentation = Union[
    BaseMessage,
    tuple[str, str],  # (role, content)
    str,  # content only
    dict  # message dict
]

Usage Example:

from langchain.messages import MessageLikeRepresentation

# Various message representations
messages: list[MessageLikeRepresentation] = [
    ("system", "You are helpful"),
    ("human", "What is AI?"),
    {"role": "assistant", "content": "AI stands for..."},
    "Continue explaining..."
]

# Convert to proper message objects
from langchain.messages import (
    HumanMessage, AIMessage, SystemMessage
)

def normalize_message(msg: MessageLikeRepresentation):
    if isinstance(msg, tuple):
        role, content = msg
        if role == "system":
            return SystemMessage(content=content)
        elif role in ("human", "user"):
            return HumanMessage(content=content)
        elif role in ("assistant", "ai"):
            return AIMessage(content=content)
        raise ValueError(f"Unknown role: {role}")
    elif isinstance(msg, dict):
        # Dict representations carry the same role/content pair
        return normalize_message((msg["role"], msg["content"]))
    elif isinstance(msg, str):
        # Bare strings are treated as human input
        return HumanMessage(content=msg)
    return msg  # already a BaseMessage

Complete Type Definitions

Message Base Types

from typing import Any, Literal, Union
from pydantic import BaseModel

class BaseMessage(BaseModel):
    """Base class for all message types."""
    content: str | list[dict]
    id: str | None = None
    name: str | None = None
    metadata: dict = {}
    response_metadata: dict = {}

class HumanMessage(BaseMessage):
    """User/human input message."""
    type: Literal["human"] = "human"

class AIMessage(BaseMessage):
    """LLM/assistant response message."""
    type: Literal["ai"] = "ai"
    tool_calls: list[dict] = []
    invalid_tool_calls: list[dict] = []
    usage_metadata: dict | None = None

class SystemMessage(BaseMessage):
    """System instruction message."""
    type: Literal["system"] = "system"

class ToolMessage(BaseMessage):
    """Tool execution result message."""
    type: Literal["tool"] = "tool"
    tool_call_id: str
    status: str | None = None

class RemoveMessage(BaseMessage):
    """Directive to remove a message from context."""
    type: Literal["remove"] = "remove"

class AIMessageChunk(BaseMessage):
    """Streaming chunk of AI message."""
    type: Literal["AIMessageChunk"] = "AIMessageChunk"
    tool_call_chunks: list[dict] = []
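During streaming, each ToolCallChunk carries a fragment of the JSON `args` string plus an `index` identifying which call it belongs to; concatenating the fragments per index and parsing the result yields complete tool calls. A sketch of that reassembly with plain dicts; the `merge_tool_call_chunks` helper is illustrative (langchain performs equivalent merging when you combine AIMessageChunk objects with `+`):

```python
import json
from collections import defaultdict

def merge_tool_call_chunks(chunks: list[dict]) -> list[dict]:
    """Reassemble streamed tool-call fragments into complete calls."""
    calls = defaultdict(lambda: {"name": None, "args": "", "id": None})
    for c in chunks:
        slot = calls[c["index"]]
        slot["name"] = slot["name"] or c.get("name")
        slot["id"] = slot["id"] or c.get("id")
        slot["args"] += c.get("args") or ""  # concatenate JSON fragments
    return [
        {"name": s["name"], "args": json.loads(s["args"]), "id": s["id"]}
        for s in calls.values()
    ]

chunks = [
    {"index": 0, "name": "add", "id": "call_1", "args": '{"a": '},
    {"index": 0, "name": None, "id": None, "args": '2, "b": 3}'},
]
merged = merge_tool_call_chunks(chunks)
# merged == [{"name": "add", "args": {"a": 2, "b": 3}, "id": "call_1"}]
```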

All Content Block Types

class TextContentBlock(BaseModel):
    type: Literal["text"] = "text"
    text: str

class PlainTextContentBlock(BaseModel):
    type: Literal["plain_text"] = "plain_text"
    text: str

class ImageContentBlock(BaseModel):
    type: Literal["image"] = "image"
    source: dict
    detail: str | None = None

class AudioContentBlock(BaseModel):
    type: Literal["audio"] = "audio"
    source: dict

class VideoContentBlock(BaseModel):
    type: Literal["video"] = "video"
    source: dict

class DataContentBlock(BaseModel):
    type: Literal["data"] = "data"
    data: Any
    format: str | None = None

class FileContentBlock(BaseModel):
    type: Literal["file"] = "file"
    source: dict
    name: str | None = None

class ReasoningContentBlock(BaseModel):
    type: Literal["reasoning"] = "reasoning"
    reasoning: str

class NonStandardContentBlock(BaseModel):
    type: str
    # Additional provider-specific fields

All Tool Calling Types

class ToolCall(BaseModel):
    name: str
    args: dict
    id: str
    type: Literal["tool_call"] = "tool_call"

class ToolCallChunk(BaseModel):
    name: str | None = None
    args: str | None = None
    id: str | None = None
    index: int | None = None
    type: Literal["tool_call_chunk"] = "tool_call_chunk"

class InvalidToolCall(BaseModel):
    name: str | None = None
    args: str | None = None
    id: str | None = None
    error: str | None = None
    type: Literal["invalid_tool_call"] = "invalid_tool_call"

class ServerToolCall(BaseModel):
    name: str
    args: dict
    id: str
    type: Literal["server_tool_call"] = "server_tool_call"

class ServerToolCallChunk(BaseModel):
    name: str | None = None
    args: str | None = None
    id: str | None = None
    index: int | None = None
    type: Literal["server_tool_call_chunk"] = "server_tool_call_chunk"

class ServerToolResult(BaseModel):
    tool_call_id: str
    content: str | list[dict]
    status: str | None = None
    metadata: dict | None = None

All Metadata Types

class UsageMetadata(BaseModel):
    input_tokens: int
    output_tokens: int
    total_tokens: int
    input_token_details: dict | None = None
    output_token_details: dict | None = None

class InputTokenDetails(BaseModel):
    cached_tokens: int | None = None
    text_tokens: int | None = None
    audio_tokens: int | None = None
    image_tokens: int | None = None
    video_tokens: int | None = None

class OutputTokenDetails(BaseModel):
    text_tokens: int | None = None
    audio_tokens: int | None = None
    reasoning_tokens: int | None = None

class Citation(BaseModel):
    type: Literal["citation"] = "citation"
    id: str | None = None
    url: str | None = None
    title: str | None = None
    start_index: int | None = None
    end_index: int | None = None
    cited_text: str | None = None
    extras: dict | None = None

class NonStandardAnnotation(BaseModel):
    type: str
    id: str | None = None
    value: Any
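A common use of UsageMetadata is accumulating token counts across a conversation for cost tracking. A sketch with plain dicts mirroring the UsageMetadata shape; `add_usage` is a hypothetical helper (langchain-core ships similar merge logic internally):

```python
def add_usage(a: dict, b: dict) -> dict:
    """Sum the top-level token counters of two usage dicts."""
    return {
        k: a.get(k, 0) + b.get(k, 0)
        for k in ("input_tokens", "output_tokens", "total_tokens")
    }

turns = [
    {"input_tokens": 20, "output_tokens": 8, "total_tokens": 28},
    {"input_tokens": 35, "output_tokens": 12, "total_tokens": 47},
]
total = {"input_tokens": 0, "output_tokens": 0, "total_tokens": 0}
for usage in turns:
    total = add_usage(total, usage)
# total == {"input_tokens": 55, "output_tokens": 20, "total_tokens": 75}
```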

All Type Unions

AnyMessage = Union[
    HumanMessage,
    AIMessage,
    SystemMessage,
    ToolMessage,
    RemoveMessage,
    AIMessageChunk
]

MessageLikeRepresentation = Union[
    BaseMessage,
    tuple[str, str],
    str,
    dict
]

ContentBlock = Union[
    TextContentBlock,
    PlainTextContentBlock,
    ImageContentBlock,
    AudioContentBlock,
    VideoContentBlock,
    DataContentBlock,
    FileContentBlock,
    ReasoningContentBlock,
    NonStandardContentBlock,
    InvalidToolCall,
    ToolCall,
    ToolCallChunk,
    ServerToolCall,
    ServerToolCallChunk,
    ServerToolResult
]
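Consumers of multimodal content typically iterate a mixed block list and dispatch on the `type` discriminator, e.g. to flatten a message down to its text. A sketch with dicts standing in for the ContentBlock union members; `collect_text` is a hypothetical helper:

```python
def collect_text(blocks: list[dict]) -> str:
    """Join the textual parts of a mixed content-block list."""
    parts = []
    for block in blocks:
        if block.get("type") in ("text", "plain_text"):
            parts.append(block["text"])
        elif block.get("type") == "reasoning":
            parts.append(f"[reasoning: {block['reasoning']}]")
        # image/audio/video/file blocks carry no text to collect
    return " ".join(parts)

content = [
    {"type": "text", "text": "Hello"},
    {"type": "image", "source": {"url": "https://example.com/a.png"}},
    {"type": "text", "text": "world"},
]
flat = collect_text(content)
# flat == "Hello world"
```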

Annotation = Union[
    Citation,
    NonStandardAnnotation
]

Install with Tessl CLI

npx tessl i tessl/pypi-langchain@1.2.1
