tessl/npm-langchain

TypeScript framework for building LLM-powered applications with agents, tools, middleware, and model interoperability

docs/integrations/serialization.md

Serialization

Load and deserialize LangChain modules from serialized representations.

Capabilities

Loading Serialized Modules

/**
 * Load serialized LangChain modules
 * @param text - Serialized module text (JSON format)
 * @param secretsMap - Map of secrets for deserialization
 * @param optionalImportsMap - Optional imports map
 * @param additionalImportsMap - Additional imports map
 * @param secretsFromEnv - Environment variable names to load as secrets
 * @returns Promise resolving to deserialized module
 */
function load<T = Runnable>(
  text: string,
  secretsMap?: Record<string, any>,
  optionalImportsMap?: Record<string, any>,
  additionalImportsMap?: Record<string, any>,
  secretsFromEnv?: string[]
): Promise<T>;

Optional Import Entry Points

/**
 * Array of optional import entry points
 */
const optionalImportEntrypoints: string[];
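This list can help decide whether a module path must appear in the optional imports map before calling load. A minimal sketch, assuming the array holds module path strings; the sample entries and the `needsOptionalImport` helper below are illustrative, not part of langchain:

```typescript
// Hypothetical sample entries; the real array ships with langchain.
const optionalImportEntrypoints: string[] = [
  "langchain/agents/load",
  "langchain/tools/calculator",
];

// Check whether a module path requires an optional import
function needsOptionalImport(modulePath: string): boolean {
  return optionalImportEntrypoints.some((entry) =>
    modulePath.startsWith(entry)
  );
}
```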

Usage Examples

Basic Loading

import { load } from "langchain/load";

// Serialized LangChain module (JSON string)
const serialized = `{
  "lc": 1,
  "type": "constructor",
  "id": ["langchain", "prompts", "ChatPromptTemplate"],
  "kwargs": {
    "messages": [...]
  }
}`;

// Load the module
const prompt = await load(serialized);

// Use with agent or chain
console.log(await prompt.invoke({ input: "Hello" }));

Loading with Secrets

import { load } from "langchain/load";

const serialized = `{
  "lc": 1,
  "type": "constructor",
  "id": ["langchain", "chat_models", "ChatOpenAI"],
  "kwargs": {
    "openai_api_key": {
      "lc": 1,
      "type": "secret",
      "id": ["OPENAI_API_KEY"]
    }
  }
}`;

// Provide secrets map
const module = await load(serialized, {
  OPENAI_API_KEY: process.env.OPENAI_API_KEY,
});

Loading with Environment Secrets

import { load } from "langchain/load";

// Load secrets from environment variables
const module = await load(
  serializedText,
  {}, // Empty secrets map
  {}, // Empty optional imports
  {}, // Empty additional imports
  ["OPENAI_API_KEY", "ANTHROPIC_API_KEY"] // Env var names
);

Loading with Custom Imports

import { load } from "langchain/load";
import { MyCustomTool } from "./tools";

const serialized = `{
  "lc": 1,
  "type": "constructor",
  "id": ["custom", "tools", "MyCustomTool"],
  "kwargs": {}
}`;

// Provide additional imports
const tool = await load(
  serialized,
  {}, // Secrets
  {}, // Optional imports
  {
    "custom/tools": { MyCustomTool },
  }
);

Serializable Re-exports

From langchain/load/serializable:

All serialization utilities from @langchain/core/load/serializable are re-exported, including:

  • Serializable: Base class for serializable objects
  • Serialization helpers: Utilities for converting objects to/from JSON
  • Load/dump utilities: Functions for saving and loading serialized objects

/**
 * Base class for serializable LangChain objects
 */
abstract class Serializable {
  /**
   * Whether instances of this class can be serialized
   * (defaults to false; subclasses opt in)
   */
  lc_serializable: boolean;

  /**
   * Namespace path for the serializable class
   */
  abstract lc_namespace: string[];

  /**
   * Return a JSON-serializable representation
   */
  toJSON(): SerializedClass;

  /**
   * Return the JSON representation as a string
   */
  toJSONString(): string;
}

/**
 * Serialized class representation
 */
interface SerializedClass {
  lc: number;
  type: "constructor" | "not_implemented";
  id: string[];
  kwargs?: Record<string, any>;
}

Usage Example:

import { Serializable } from "langchain/load/serializable";

// Create a custom serializable class
class MyCustomClass extends Serializable {
  // Opt in to serialization and declare the namespace path
  lc_serializable = true;
  lc_namespace = ["custom"];

  constructor(public value: string) {
    // Pass constructor arguments so they are captured as kwargs
    super({ value });
  }
}

// Serialize
const instance = new MyCustomClass("test");
const json = instance.toJSON();
const jsonString = instance.toJSONString();

Import Maps

The load system provides comprehensive import maps for various LangChain components including:

  • Chat models (OpenAI, Anthropic, Google, etc.)
  • Storage implementations
  • Prompts
  • Messages
  • Runnables
  • Output parsers

These are used internally to resolve module paths during deserialization.
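As an illustration of the idea (not langchain's actual resolver), the `id` array from a serialized representation splits into a module path and an export name, which is how an imports map lookup is keyed:

```typescript
// Hedged sketch: map a serialized `id` array to a module path and
// an export name; `resolveId` is a hypothetical helper.
function resolveId(id: string[]): { modulePath: string; exportName: string } {
  const exportName = id[id.length - 1];
  const modulePath = id.slice(0, -1).join("/");
  return { modulePath, exportName };
}
```

For example, `resolveId(["langchain", "prompts", "ChatPromptTemplate"])` yields the module path `langchain/prompts` and the export name `ChatPromptTemplate`.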

Best Practices

Serialization Format

  • Always use valid JSON format
  • Include lc version field
  • Specify correct module paths
  • Handle secrets appropriately

Security

  • Never serialize secrets directly
  • Use secrets map or environment variables
  • Validate serialized data before loading
  • Sanitize user-provided serialized data
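Validation before loading can be sketched as a type guard over the envelope fields; `isSerializedClass` is a hypothetical helper, not part of langchain:

```typescript
interface SerializedClass {
  lc: number;
  type: "constructor" | "not_implemented";
  id: string[];
  kwargs?: Record<string, unknown>;
}

// Check the envelope fields before handing text to `load`
function isSerializedClass(value: unknown): value is SerializedClass {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.lc === "number" &&
    (v.type === "constructor" || v.type === "not_implemented") &&
    Array.isArray(v.id) &&
    v.id.every((part) => typeof part === "string")
  );
}
```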

Error Handling

  • Catch and handle load errors
  • Validate loaded modules before use
  • Check for required imports
  • Handle missing secrets gracefully
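These points can be sketched as a small wrapper; `loadSafely` is a hypothetical helper that takes the loader as a parameter so it stays decoupled from any particular langchain version:

```typescript
type Loader<T> = (text: string) => Promise<T>;

// Fail fast on malformed JSON, then catch any deserialization error
async function loadSafely<T>(
  loader: Loader<T>,
  text: string
): Promise<T | undefined> {
  try {
    JSON.parse(text); // reject non-JSON input before loading
    return await loader(text);
  } catch (err) {
    console.error("Failed to load serialized module:", err);
    return undefined;
  }
}
```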

Performance

  • Cache loaded modules when possible
  • Avoid repeated deserialization
  • Use optional imports for large dependencies
  • Minimize serialized data size
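Caching can be as simple as memoizing on the serialized text itself. A hedged sketch; the `Map`-based cache and `loadCached` helper below are illustrative, not part of langchain:

```typescript
// Memoize loaded modules keyed by their serialized text, so repeated
// calls with the same payload deserialize only once.
const moduleCache = new Map<string, Promise<unknown>>();

function loadCached<T>(
  loader: (text: string) => Promise<T>,
  text: string
): Promise<T> {
  const cached = moduleCache.get(text);
  if (cached) return cached as Promise<T>;
  const pending = loader(text);
  moduleCache.set(text, pending);
  return pending;
}
```

Storing the promise rather than the resolved value also deduplicates concurrent loads of the same payload.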
