LangChain Hub

Push prompts to and pull prompts from the LangChain Hub to share and reuse prompt templates across applications.

Entry Points

LangChain provides two Hub integrations:

  • langchain/hub - Web-compatible version
  • langchain/hub/node - Node.js version with dynamic imports
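
Both entry points export the same push and pull functions; the aliased imports below are only a side-by-side sketch of the two paths (the aliases themselves are not required).

// Web-compatible entry point (browsers, edge runtimes)
import { pull as pullWeb, push as pushWeb } from "langchain/hub";

// Node.js entry point (dynamically imports model classes on pull)
import { pull as pullNode, push as pushNode } from "langchain/hub/node";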

Capabilities

Web-Compatible Hub (langchain/hub)

/**
 * Push prompt to LangChain Hub
 * @param repoFullName - Repository name in format "owner/repo-name"
 * @param runnable - Prompt or runnable to push
 * @param options - Optional configuration
 * @returns Promise resolving to hub URL
 */
function push(
  repoFullName: string,
  runnable: Runnable,
  options?: PushOptions
): Promise<string>;

/**
 * Pull prompt from LangChain Hub
 * @param ownerRepoCommit - Repository identifier with optional commit (e.g., "owner/repo:commit")
 * @param options - Optional configuration including modelClass for non-OpenAI models
 * @returns Promise resolving to loaded prompt
 */
function pull<T = Runnable>(
  ownerRepoCommit: string,
  options?: PullOptions
): Promise<T>;

interface PushOptions {
  apiKey?: string;
  apiUrl?: string;
  newRepoIsPublic?: boolean;
  parentCommitHash?: string;
}

interface PullOptions {
  apiKey?: string;
  apiUrl?: string;
  includeModel?: boolean;
  modelClass?: typeof ChatModel; // Required for non-OpenAI models in web version
}

Node.js Hub (langchain/hub/node)

/**
 * Push prompt to LangChain Hub (Node.js)
 * @param repoFullName - Repository name in format "owner/repo-name"
 * @param runnable - Prompt or runnable to push
 * @param options - Optional configuration
 * @returns Promise resolving to hub URL
 */
function push(
  repoFullName: string,
  runnable: Runnable,
  options?: PushOptions
): Promise<string>;

/**
 * Pull prompt from LangChain Hub (Node.js)
 * Automatically imports model classes via dynamic imports
 * @param ownerRepoCommit - Repository identifier with optional commit
 * @param options - Optional configuration
 * @returns Promise resolving to loaded prompt
 */
function pull<T = Runnable>(
  ownerRepoCommit: string,
  options?: PullOptions
): Promise<T>;

Usage Examples

Pulling Prompts

// Web-compatible version
import { pull } from "langchain/hub";

// Pull a prompt (OpenAI models work without modelClass)
const prompt = await pull("owner/my-prompt");

// Pull with specific commit
const specificPrompt = await pull("owner/my-prompt:abc123");

// Node.js version (auto-imports model classes)
import { pull } from "langchain/hub/node";

// Pull any prompt (models auto-imported)
const prompt = await pull("owner/my-anthropic-prompt");

// Use with agent
import { createAgent } from "langchain";

const loadedPrompt = await pull("owner/agent-prompt");
const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  systemPrompt: loadedPrompt,
});

Pushing Prompts

import { push } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Create a prompt
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant specialized in {domain}."],
  ["human", "{input}"],
]);

// Push to hub
const url = await push("myusername/domain-expert", prompt, {
  apiKey: process.env.LANGCHAIN_API_KEY,
  newRepoIsPublic: true,
});

console.log(`Prompt pushed to: ${url}`);

Using Pulled Prompts

import { pull } from "langchain/hub/node";
import { createAgent } from "langchain";

// Pull and use with agent
const systemPrompt = await pull("owner/coding-assistant");

const agent = createAgent({
  model: "openai:gpt-4o",
  tools: [],
  systemPrompt: systemPrompt,
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "Write a function to reverse a string" }],
});

Pulling with Model Class (Web Version)

import { pull } from "langchain/hub";
import { ChatAnthropic } from "@langchain/anthropic";

// For non-OpenAI models, pass modelClass so the bound model can be instantiated
const prompt = await pull("owner/anthropic-prompt", {
  includeModel: true,
  modelClass: ChatAnthropic,
});

Versioning with Commits

import { pull, push } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Pull specific versions
const v1 = await pull("owner/my-prompt:v1");
const v2 = await pull("owner/my-prompt:v2");

// Push a new version with a parent commit (example prompt shown for completeness)
const updatedPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant specialized in {domain}."],
  ["human", "{input}"],
]);

await push("owner/my-prompt", updatedPrompt, {
  parentCommitHash: "abc123",
});

Environment Setup

Set your LangChain API key:

export LANGCHAIN_API_KEY="lsv2_pt_..."

Or pass it explicitly:

import { pull } from "langchain/hub";

const prompt = await pull("owner/prompt", {
  apiKey: "lsv2_pt_...",
});

Best Practices

Pulling Prompts

  • Use specific commit hashes for production
  • Cache pulled prompts to avoid repeated fetches
  • Handle pull errors gracefully (both points are sketched after this list)
  • Use Node.js version when available for auto-imports
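
A minimal sketch of the caching and error-handling points above; the in-memory Map and the local fallback prompt are illustrative choices, not part of the Hub API.

import { pull } from "langchain/hub/node";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import type { Runnable } from "@langchain/core/runnables";

// Illustrative in-memory cache keyed by "owner/repo:commit"
const promptCache = new Map<string, Runnable>();

async function pullCached(ref: string): Promise<Runnable> {
  const cached = promptCache.get(ref);
  if (cached) return cached;
  try {
    const prompt = await pull(ref);
    promptCache.set(ref, prompt);
    return prompt;
  } catch (err) {
    // Fall back to a local default instead of failing outright (hypothetical policy)
    console.error(`Failed to pull ${ref}:`, err);
    return ChatPromptTemplate.fromMessages([
      ["system", "You are a helpful assistant."],
      ["human", "{input}"],
    ]);
  }
}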

Pushing Prompts

  • Use descriptive repository names
  • Include version tags in commit messages
  • Make public prompts discoverable
  • Test prompts before pushing (a quick local check is sketched after this list)
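
One lightweight way to test before pushing: format the prompt locally with representative values and inspect the output (the sample values below are illustrative).

import { push } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant specialized in {domain}."],
  ["human", "{input}"],
]);

// Sanity-check that the template renders with representative values
const rendered = await prompt.formatMessages({
  domain: "TypeScript",
  input: "Explain generics",
});
console.log(rendered);

// Push only after the rendered output looks right
await push("myusername/domain-expert", prompt);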

Versioning

  • Use semantic versioning in commit tags
  • Keep breaking changes in separate prompts
  • Document prompt changes
  • Reference specific versions in production

Security

  • Keep API keys secure
  • Don't commit API keys to version control
  • Use environment variables (see the sketch after this list)
  • Review public prompts before publishing
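
A small sketch of the environment-variable point above: read the key from the environment and fail fast if it is missing, rather than hard-coding it (the error message is illustrative).

import { pull } from "langchain/hub";

const apiKey = process.env.LANGCHAIN_API_KEY;
if (!apiKey) {
  throw new Error("LANGCHAIN_API_KEY is not set");
}

const prompt = await pull("owner/prompt", { apiKey });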