# Message Guide

This guide covers LangChain's standardized message types, which represent the turns of a conversation between users, models, and tools.

## Message Types

### Human/User Messages

```typescript
import { HumanMessage } from "langchain";

// Simple text message
const msg = new HumanMessage("Hello!");

// With metadata
const msgWithMeta = new HumanMessage("Hello!", {
  source: "web-app",
  userId: "user-123",
});
```

### AI/Assistant Messages

```typescript
import { AIMessage } from "langchain";

const msg = new AIMessage("Hi there! How can I help you?");

const msgWithMeta = new AIMessage("Response", {
  model: "gpt-4o",
  tokens: 150,
});
```

### System Messages

```typescript
import { SystemMessage } from "langchain";

const msg = new SystemMessage("You are a helpful assistant.");
```

### Tool Messages

```typescript
import { ToolMessage } from "langchain";

const msg = new ToolMessage(
  "Search results: ...",
  "call_abc123" // tool_call_id
);
```

## Using Messages with Agents

### Simple Format

```typescript
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
});

// Simple strings
await agent.invoke({
  messages: ["Hello!"],
});

// Role-based objects
await agent.invoke({
  messages: [
    { role: "system", content: "You are helpful." },
    { role: "user", content: "Hello" },
  ],
});
```

### Message Objects

```typescript
import { HumanMessage, AIMessage, SystemMessage } from "langchain";

await agent.invoke({
  messages: [
    new SystemMessage("You are helpful."),
    new HumanMessage("Hello"),
    new AIMessage("Hi!"),
    new HumanMessage("How are you?"),
  ],
});
```

## Multimodal Messages

### Text and Images

```typescript
import { HumanMessage } from "langchain";

const msg = new HumanMessage([
  { type: "text", text: "What's in this image?" },
  { type: "image_url", image_url: "https://example.com/image.jpg" },
]);
```

### With Image Details

```typescript
import { HumanMessage } from "langchain";

const msg = new HumanMessage([
  { type: "text", text: "Describe this image in detail" },
  {
    type: "image_url",
    image_url: {
      url: "https://example.com/image.jpg",
      detail: "high", // "low", "high", or "auto"
    },
  },
]);
```

### Base64 Encoded Images

```typescript
import { HumanMessage } from "langchain";

const msg = new HumanMessage([
  { type: "text", text: "Analyze this image" },
  {
    type: "image",
    source: {
      type: "base64",
      media_type: "image/jpeg",
      data: "base64EncodedData...",
    },
  },
]);
```

## Message Utilities

### Filtering Messages

```typescript
import { filterMessages, HumanMessage, AIMessage, SystemMessage } from "langchain";

const messages = [
  new SystemMessage("You are helpful."),
  new HumanMessage("Hello"),
  new AIMessage("Hi!"),
  new HumanMessage("How are you?"),
  new AIMessage("I'm doing well!"),
];

// Keep only last 2 messages
const recent = filterMessages(messages, { last: 2 });

// Keep only human and AI messages
const conversation = filterMessages(messages, {
  includeTypes: ["human", "ai"],
});

// Exclude system messages
const noSystem = filterMessages(messages, {
  excludeTypes: ["system"],
});

// Keep first 3 messages
const first = filterMessages(messages, { first: 3 });
```

### Trimming Messages

```typescript
import { trimMessages, HumanMessage, AIMessage } from "langchain";

const messages = [
  new HumanMessage("Message 1"),
  new AIMessage("Response 1"),
  new HumanMessage("Message 2"),
  new AIMessage("Response 2"),
  new HumanMessage("Message 3"),
  new AIMessage("Response 3"),
];

// Trim to fit token limit (keeps last messages)
const trimmed = trimMessages(messages, {
  maxTokens: 100,
  strategy: "last",
  minMessages: 2,
});

// With custom token counter
const customTrimmed = trimMessages(messages, {
  maxTokens: 500,
  tokenCounter: (msgs) => {
    return msgs.reduce((sum, msg) => sum + msg.content.length, 0);
  },
});

// Different trimming strategies
const first = trimMessages(messages, {
  maxTokens: 100,
  strategy: "first", // Keep first messages
});

const middle = trimMessages(messages, {
  maxTokens: 100,
  strategy: "middle", // Keep first and last, remove middle
});
```

## Message History Management

### Multi-turn Conversations

```typescript
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
});

// Track history
let messages = [];

// Turn 1
let result = await agent.invoke({
  messages: [...messages, { role: "user", content: "My name is Alice" }],
});
messages = result.messages;

// Turn 2
result = await agent.invoke({
  messages: [...messages, { role: "user", content: "What's my name?" }],
});
messages = result.messages;
// Agent remembers: "Your name is Alice"
```

### With Checkpointer

```typescript
import { createAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  checkpointer: checkpointer,
});

// First conversation
await agent.invoke(
  { messages: [{ role: "user", content: "My name is Bob" }] },
  { configurable: { thread_id: "thread-1" } }
);

// Later conversation (state restored automatically)
const result = await agent.invoke(
  { messages: [{ role: "user", content: "What's my name?" }] },
  { configurable: { thread_id: "thread-1" } }
);
```

### Managing Context Window

```typescript
import { filterMessages, trimMessages, SystemMessage } from "langchain";

async function manageConversation(messages, newMessage) {
  // Add new message
  messages.push(newMessage);

  // Filter out old system messages
  messages = filterMessages(messages, {
    excludeTypes: ["system"],
  });

  // Trim to fit context window
  messages = trimMessages(messages, {
    maxTokens: 4000,
    strategy: "last",
    minMessages: 4, // Always keep at least 2 turns
  });

  // Add current system message
  messages.unshift(
    new SystemMessage("You are a helpful assistant.")
  );

  return messages;
}
```

## Message Properties

### Content

```typescript
import { AIMessage } from "langchain";

const msg = new AIMessage("Hello!");
console.log(msg.content); // "Hello!"

// Array content for multimodal
const multimodal = new AIMessage([
  { type: "text", text: "Here's an image" },
  { type: "image_url", image_url: "https://..." },
]);
console.log(multimodal.content); // Array
```

### Metadata

```typescript
import { AIMessage } from "langchain";

const msg = new AIMessage("Response", {
  model: "gpt-4o",
  temperature: 0.7,
  tokens: 150,
});

console.log(msg.additional_kwargs);
// { model: "gpt-4o", temperature: 0.7, tokens: 150 }

console.log(msg.response_metadata); // {}
```

### Type Checking

```typescript
import { AIMessage, HumanMessage } from "langchain";

const msg = new AIMessage("Hello");

if (msg instanceof AIMessage) {
  console.log("This is an AI message");
}

if (msg instanceof HumanMessage) {
  console.log("This is a human message");
}
```

## Message Chunks (Streaming)

### Message Chunk Types

```typescript
import {
  AIMessageChunk,
  HumanMessageChunk,
  SystemMessageChunk,
  ToolMessageChunk,
} from "langchain";

// Used internally during streaming
const chunk = new AIMessageChunk("Hello");
```

### Streaming Example

```typescript
import { createAgent } from "langchain";

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
});

const stream = await agent.stream(
  { messages: [{ role: "user", content: "Tell me a story" }] },
  { streamMode: "values" }
);

for await (const state of stream) {
  const lastMessage = state.messages[state.messages.length - 1];
  if (lastMessage.content) {
    process.stdout.write(String(lastMessage.content));
  }
}
```

## Best Practices

### Message Types

- Use `HumanMessage` for user input
- Use `AIMessage` for assistant responses
- Use `SystemMessage` for instructions
- Use `ToolMessage` for tool results

### Content Format

- Use simple strings for text-only content
- Use content block arrays for multimodal input
- Always include text alongside images
- Optimize image sizes to conserve context window space
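
The "always include text with images" rule can be enforced with a small helper. This is a hypothetical sketch (`withImageContext` is not part of LangChain); the content-block shapes follow the multimodal examples earlier in this guide.

```typescript
// Content-block shapes as used in this guide's multimodal examples
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: string };

// Hypothetical helper: builds a multimodal content array that always
// pairs images with descriptive text
function withImageContext(text: string, imageUrls: string[]): ContentBlock[] {
  if (text.trim().length === 0) {
    throw new Error("Always include text alongside images");
  }
  const blocks: ContentBlock[] = [{ type: "text", text }];
  for (const url of imageUrls) {
    blocks.push({ type: "image_url", image_url: url });
  }
  return blocks;
}
```

The returned array can be passed directly to a `HumanMessage` constructor, as shown in the Multimodal Messages section.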

### Message History

- Include relevant history for context
- Use `filterMessages` and `trimMessages` to manage size
- Consider summarization for long conversations
- Store important context in custom state
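
The summarization idea above can be sketched in plain TypeScript: keep the most recent turns verbatim and replace older ones with a summary. `summarize` here is a stand-in for a real model call, and `compactHistory` is a hypothetical helper, not a LangChain API; the message shape follows the role-based objects used throughout this guide.

```typescript
// Role-based message shape used elsewhere in this guide
type Msg = { role: string; content: string };

// Hypothetical helper: collapse everything except the last `keepRecent`
// messages into a single summary message. `summarize` stands in for an
// LLM call that condenses the older turns.
function compactHistory(
  messages: Msg[],
  keepRecent: number,
  summarize: (older: Msg[]) => string
): Msg[] {
  if (messages.length <= keepRecent) return messages;
  const older = messages.slice(0, messages.length - keepRecent);
  const recent = messages.slice(messages.length - keepRecent);
  return [
    { role: "system", content: `Summary of earlier conversation: ${summarize(older)}` },
    ...recent,
  ];
}
```

In practice you would call this before each `agent.invoke` once the history grows past a threshold, so the model keeps long-range context without paying for every old token.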

### Metadata

- Use `additional_kwargs` for message-specific metadata
- Use `response_metadata` for model response information
- Keep metadata minimal to save tokens
- Don't rely on metadata for critical information

### Performance

- Limit message history to what's necessary
- Trim messages before they exceed the context window
- Consider using summarization middleware
- Cache embeddings for repeated messages

### Memory Management

- Use a checkpointer for conversation persistence
- Implement message cleanup strategies
- Monitor context window usage
- Consider external storage for long histories
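
Monitoring context window usage can be as simple as a rough token estimate checked against a budget. This sketch uses the common chars/4 heuristic, which is only an approximation; real counts depend on the model's tokenizer, and both helpers here are hypothetical, not LangChain APIs.

```typescript
// Role-based message shape used elsewhere in this guide
type Msg = { role: string; content: string };

// Rough token estimate: ~4 characters per token (heuristic, not exact)
function estimateTokens(messages: Msg[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

// True once estimated usage crosses `ratio` of the budget, signaling
// that it is time to trim, summarize, or offload history
function nearContextLimit(messages: Msg[], maxTokens: number, ratio = 0.8): boolean {
  return estimateTokens(messages) >= maxTokens * ratio;
}
```

A check like `nearContextLimit(messages, 4000)` before each turn is a cheap trigger for the `trimMessages` or summarization strategies described above.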

See Message API Reference for complete API documentation.
