tessl/npm-langfuse--client

Langfuse API client for universal JavaScript environments providing observability, prompt management, datasets, experiments, and scoring capabilities


docs/prompts.md

Prompt Management

The Prompt Management system provides comprehensive capabilities for creating, fetching, updating, and compiling prompts with built-in caching, variable substitution, and placeholder support. It supports both text-based and chat-based prompts with seamless LangChain integration.

Capabilities

Get Prompt

Retrieve a prompt by name with intelligent caching and fallback support.

/**
 * Retrieves a prompt by name with intelligent caching
 *
 * Caching behavior:
 * - Fresh prompts are returned immediately from cache
 * - Expired prompts are returned from cache while being refreshed in background
 * - Cache misses trigger immediate fetch with optional fallback support
 *
 * @param name - Name of the prompt to retrieve
 * @param options - Optional retrieval configuration
 * @returns Promise that resolves to TextPromptClient or ChatPromptClient
 */
get(
  name: string,
  options?: {
    /** Specific version to retrieve (defaults to latest) */
    version?: number;
    /** Label to filter by (defaults to "production") */
    label?: string;
    /** Cache TTL in seconds (default: 60, set to 0 to disable caching) */
    cacheTtlSeconds?: number;
    /** Fallback content if prompt fetch fails */
    fallback?: string | ChatMessage[];
    /** Maximum retry attempts for failed requests */
    maxRetries?: number;
    /** Prompt type (auto-detected if not specified) */
    type?: "chat" | "text";
    /** Request timeout in milliseconds */
    fetchTimeoutMs?: number;
  }
): Promise<TextPromptClient | ChatPromptClient>;

Usage Examples:

import { LangfuseClient } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Get latest version with default caching (60 seconds)
const prompt = await langfuse.prompt.get("my-prompt");

// Get specific version
const v2Prompt = await langfuse.prompt.get("my-prompt", {
  version: 2
});

// Get with label filter
const prodPrompt = await langfuse.prompt.get("my-prompt", {
  label: "production"
});

// Get with staging label
const stagingPrompt = await langfuse.prompt.get("my-prompt", {
  label: "staging"
});

// Disable caching (always fetch fresh)
const freshPrompt = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 0
});

// Custom cache TTL (5 minutes)
const cachedPrompt = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 300
});

// With text fallback
const textPromptWithFallback = await langfuse.prompt.get("my-prompt", {
  type: "text",
  fallback: "Hello {{name}}! This is a fallback prompt."
});

// With chat fallback
const chatPromptWithFallback = await langfuse.prompt.get("conversation", {
  type: "chat",
  fallback: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello {{name}}" }
  ]
});

// With retry configuration and timeout
const robustPrompt = await langfuse.prompt.get("my-prompt", {
  maxRetries: 3,
  fetchTimeoutMs: 5000
});

// Type-specific retrieval (enforces type at compile time)
const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });
// textPrompt is TextPromptClient

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });
// chatPrompt is ChatPromptClient

Caching Behavior:

The prompt cache implements a stale-while-revalidate pattern:

  1. Cache Hit (Fresh): Returns cached prompt immediately
  2. Cache Hit (Expired): Returns stale cached prompt immediately and refreshes in background
  3. Cache Miss: Fetches from API immediately (with fallback support if fetch fails)
  4. Concurrent Requests: Multiple concurrent requests for the same expired prompt trigger only one refresh
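This logic can be sketched as a minimal stale-while-revalidate cache (a hypothetical `SwrCache` class for illustration; the SDK's internal cache may differ):

```typescript
type CacheEntry<T> = { value: T; expiresAt: number };

// Minimal stale-while-revalidate cache sketch (illustrative only).
class SwrCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  private inflight = new Map<string, Promise<T>>(); // dedupes concurrent refreshes (rule 4)

  constructor(private ttlMs: number) {}

  async get(key: string, fetcher: () => Promise<T>): Promise<T> {
    const entry = this.entries.get(key);
    if (entry) {
      if (Date.now() < entry.expiresAt) return entry.value; // 1. fresh hit
      void this.refresh(key, fetcher);                      // 2. expired: refresh in background...
      return entry.value;                                   //    ...while returning the stale value
    }
    return this.refresh(key, fetcher);                      // 3. miss: fetch and wait
  }

  private refresh(key: string, fetcher: () => Promise<T>): Promise<T> {
    const pending = this.inflight.get(key);
    if (pending) return pending; // 4. at most one refresh per key at a time
    const p = fetcher()
      .then((value) => {
        this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
        return value;
      })
      .finally(() => this.inflight.delete(key));
    this.inflight.set(key, p);
    return p;
  }
}
```

Returning the stale value while a single shared refresh runs keeps reads fast and avoids thundering-herd fetches for popular prompts.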

Cache Keys:

Cache keys are generated based on:

  • Prompt name
  • Version (if specified) or label (defaults to "production")

Examples:

  • my-prompt-label:production (default)
  • my-prompt-version:2 (specific version)
  • my-prompt-label:staging (specific label)
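A helper matching this scheme might look like the following (hypothetical `buildPromptCacheKey`, not an exported SDK function):

```typescript
// Hypothetical cache-key builder mirroring the documented scheme.
function buildPromptCacheKey(
  name: string,
  options?: { version?: number; label?: string }
): string {
  if (options?.version !== undefined) {
    return `${name}-version:${options.version}`; // explicit version takes precedence
  }
  return `${name}-label:${options?.label ?? "production"}`; // label defaults to "production"
}
```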

Create Prompt

Create a new prompt or a new version of an existing prompt.

/**
 * Creates a new prompt in Langfuse
 *
 * Supports both text and chat prompts. Chat prompts can include placeholders
 * for dynamic content insertion.
 *
 * @param body - The prompt data to create
 * @returns Promise that resolves to TextPromptClient or ChatPromptClient
 */
create(body: CreatePromptRequest.Text): Promise<TextPromptClient>;
create(body: CreatePromptRequest.Chat): Promise<ChatPromptClient>;
create(body: CreateChatPromptBodyWithPlaceholders): Promise<ChatPromptClient>;

// CreatePromptRequest.Text (shown unqualified; dotted names aren't valid in an interface declaration)
interface Text {
  /** Unique name for the prompt */
  name: string;
  /** Text content with optional {{variable}} placeholders */
  prompt: string;
  /** Optional type specification (defaults to "text") */
  type?: "text";
  /** Configuration object (e.g., model settings) */
  config?: unknown;
  /** List of deployment labels for this prompt version */
  labels?: string[];
  /** List of tags to apply to all versions of this prompt */
  tags?: string[];
  /** Commit message for this prompt version */
  commitMessage?: string;
}

// CreatePromptRequest.Chat
interface Chat {
  /** Unique name for the prompt */
  name: string;
  /** Chat prompt type */
  type: "chat";
  /** Array of chat messages and/or placeholders */
  prompt: ChatMessageWithPlaceholders[];
  /** Configuration object (e.g., model settings) */
  config?: unknown;
  /** List of deployment labels for this prompt version */
  labels?: string[];
  /** List of tags to apply to all versions of this prompt */
  tags?: string[];
  /** Commit message for this prompt version */
  commitMessage?: string;
}

interface CreateChatPromptBodyWithPlaceholders {
  type: "chat";
  /** Array mixing regular chat messages and placeholder messages */
  prompt: (ChatMessage | ChatMessageWithPlaceholders)[];
  // ... other properties same as CreatePromptRequest.Chat
}

Usage Examples:

import { LangfuseClient, ChatMessageType } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Create a simple text prompt
const textPrompt = await langfuse.prompt.create({
  name: "greeting",
  prompt: "Hello {{name}}! Welcome to {{location}}.",
  labels: ["production"],
  config: {
    temperature: 0.7,
    model: "gpt-4"
  }
});

// Create text prompt with tags
const taggedPrompt = await langfuse.prompt.create({
  name: "sql-generator",
  prompt: "Generate SQL for: {{task}}",
  tags: ["database", "sql"],
  labels: ["production"],
  commitMessage: "Initial version of SQL generator"
});

// Create a chat prompt
const chatPrompt = await langfuse.prompt.create({
  name: "assistant",
  type: "chat",
  prompt: [
    { role: "system", content: "You are a {{role}} assistant." },
    { role: "user", content: "{{user_message}}" }
  ],
  labels: ["production"],
  config: {
    temperature: 0.8,
    max_tokens: 1000
  }
});

// Create chat prompt with placeholders
const chatWithPlaceholders = await langfuse.prompt.create({
  name: "conversation-with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "You are a helpful assistant." },
    { type: ChatMessageType.Placeholder, name: "conversation_history" },
    { role: "user", content: "{{current_question}}" }
  ],
  labels: ["production"],
  tags: ["conversational", "memory"],
  commitMessage: "Added conversation history placeholder"
});

// Create multi-placeholder chat prompt
const complexChat = await langfuse.prompt.create({
  name: "advanced-assistant",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{assistant_type}}. Context: {{context}}" },
    { type: ChatMessageType.Placeholder, name: "few_shot_examples" },
    { type: ChatMessageType.Placeholder, name: "conversation_history" },
    { role: "user", content: "{{user_query}}" }
  ],
  labels: ["staging"],
  config: {
    model: "gpt-4-turbo",
    temperature: 0.9
  }
});

// Create versioned prompt
const v2Prompt = await langfuse.prompt.create({
  name: "greeting", // Same name creates new version
  prompt: "Hi {{name}}! Great to see you in {{location}}.",
  labels: ["staging"],
  commitMessage: "v2: Updated greeting style"
});

// Access created prompt properties
console.log(textPrompt.name);        // "greeting"
console.log(textPrompt.version);     // 1
console.log(textPrompt.type);        // "text"
console.log(textPrompt.prompt);      // "Hello {{name}}! ..."
console.log(textPrompt.config);      // { temperature: 0.7, model: "gpt-4" }
console.log(textPrompt.labels);      // ["production"]
console.log(textPrompt.tags);        // []
console.log(textPrompt.isFallback);  // false

Update Prompt

Update the labels of an existing prompt version.

/**
 * Updates the labels of an existing prompt version
 *
 * After updating, the prompt cache is automatically invalidated
 * to ensure fresh data on next fetch.
 *
 * @param params - Update parameters
 * @returns Promise that resolves to the updated Prompt
 */
update(params: {
  /** Name of the prompt to update */
  name: string;
  /** Version number of the prompt to update */
  version: number;
  /** New labels to apply to the prompt version */
  newLabels: string[];
}): Promise<Prompt>;

Usage Examples:

import { LangfuseClient } from '@langfuse/client';

const langfuse = new LangfuseClient();

// Create a prompt first
const prompt = await langfuse.prompt.create({
  name: "my-prompt",
  prompt: "Hello {{name}}",
  labels: ["staging"]
});

// Promote to production
const updatedPrompt = await langfuse.prompt.update({
  name: "my-prompt",
  version: prompt.version,
  newLabels: ["production"]
});

// Add multiple labels
const multiLabelPrompt = await langfuse.prompt.update({
  name: "my-prompt",
  version: prompt.version,
  newLabels: ["production", "stable", "v1.0"]
});

// Move from production to staging
const downgraded = await langfuse.prompt.update({
  name: "my-prompt",
  version: 2,
  newLabels: ["staging"]
});

// Cache invalidation happens automatically
// Next get() call will fetch fresh data
const freshPrompt = await langfuse.prompt.get("my-prompt");

Prompt Clients

TextPromptClient

Client for working with text-based prompts, providing compilation and LangChain conversion.

class TextPromptClient {
  /** The name of the prompt */
  readonly name: string;

  /** The version number of the prompt */
  readonly version: number;

  /** The text content of the prompt with {{variable}} placeholders */
  readonly prompt: string;

  /** Configuration object associated with the prompt */
  readonly config: unknown;

  /** Labels associated with the prompt */
  readonly labels: string[];

  /** Tags associated with the prompt */
  readonly tags: string[];

  /** Whether this prompt client is using fallback content */
  readonly isFallback: boolean;

  /** The type of prompt (always "text") */
  readonly type: "text";

  /** Optional commit message for the prompt version */
  readonly commitMessage: string | null | undefined;

  /** The original prompt response from the API */
  readonly promptResponse: Prompt.Text;

  /** The dependency resolution graph for the current prompt (undefined if the prompt has no dependencies) */
  readonly resolutionGraph?: Record<string, unknown>;

  /**
   * Compiles the text prompt by substituting variables
   *
   * Uses Mustache templating to replace {{variable}} placeholders with provided values.
   *
   * @param variables - Key-value pairs for variable substitution
   * @returns The compiled text with variables substituted
   */
  compile(variables?: Record<string, string>): string;

  /**
   * Converts the prompt to LangChain PromptTemplate format
   *
   * Transforms Mustache-style {{variable}} syntax to LangChain's {variable} format.
   * JSON braces are automatically escaped to avoid conflicts with variables.
   *
   * @returns The prompt string compatible with LangChain PromptTemplate
   */
  getLangchainPrompt(): string;

  /**
   * Serializes the prompt client to JSON
   *
   * @returns JSON string representation of the prompt
   */
  toJSON(): string;
}

Usage Examples:

import { LangfuseClient } from '@langfuse/client';
import { PromptTemplate } from '@langchain/core/prompts';

const langfuse = new LangfuseClient();

// Get a text prompt
const prompt = await langfuse.prompt.get("greeting", { type: "text" });

// Access prompt properties
console.log(prompt.name);           // "greeting"
console.log(prompt.version);        // 1
console.log(prompt.type);           // "text"
console.log(prompt.prompt);         // "Hello {{name}}! ..."
console.log(prompt.config);         // { temperature: 0.7 }
console.log(prompt.labels);         // ["production"]
console.log(prompt.tags);           // ["greeting", "onboarding"]
console.log(prompt.isFallback);     // false
console.log(prompt.commitMessage);  // "Initial version"

// Compile with variable substitution
const compiled = prompt.compile({
  name: "Alice",
  location: "New York"
});
console.log(compiled);
// "Hello Alice! Welcome to New York."

// Compile with partial variables (unmatched remain as {{variable}})
const partial = prompt.compile({ name: "Bob" });
// "Hello Bob! Welcome to {{location}}."

// Convert to LangChain format
const langchainFormat = prompt.getLangchainPrompt();
console.log(langchainFormat);
// "Hello {name}! Welcome to {location}." ({{}} -> {})

// Use with LangChain
const langchainPrompt = PromptTemplate.fromTemplate(
  prompt.getLangchainPrompt()
);
const result = await langchainPrompt.format({
  name: "Alice",
  location: "Paris"
});

// Serialize to JSON
const json = prompt.toJSON();
const parsed = JSON.parse(json);
console.log(parsed);
// {
//   name: "greeting",
//   prompt: "Hello {{name}}! ...",
//   version: 1,
//   type: "text",
//   config: { temperature: 0.7 },
//   labels: ["production"],
//   tags: ["greeting"],
//   isFallback: false
// }

// Handle JSON in prompt content
const jsonPrompt = await langfuse.prompt.create({
  name: "json-template",
  prompt: `Generate JSON for {{task}}:
{
  "user": "{{username}}",
  "task": "{{task}}",
  "metadata": {
    "timestamp": "{{timestamp}}"
  }
}`
});

// LangChain conversion automatically escapes JSON braces
const langchainJson = jsonPrompt.getLangchainPrompt();
// Literal JSON braces are doubled ({ -> {{, } -> }}) so LangChain treats them as text;
// variable braces become single: {{variable}} -> {variable}

const langchainTemplate = PromptTemplate.fromTemplate(langchainJson);
const formatted = await langchainTemplate.format({
  task: "analysis",
  username: "alice",
  timestamp: "2024-01-01"
});

ChatPromptClient

Client for working with chat-based prompts, providing compilation, placeholder resolution, and LangChain conversion.

class ChatPromptClient {
  /** The name of the prompt */
  readonly name: string;

  /** The version number of the prompt */
  readonly version: number;

  /** The chat messages that make up the prompt */
  readonly prompt: ChatMessageWithPlaceholders[];

  /** Configuration object associated with the prompt */
  readonly config: unknown;

  /** Labels associated with the prompt */
  readonly labels: string[];

  /** Tags associated with the prompt */
  readonly tags: string[];

  /** Whether this prompt client is using fallback content */
  readonly isFallback: boolean;

  /** The type of prompt (always "chat") */
  readonly type: "chat";

  /** Optional commit message for the prompt version */
  readonly commitMessage: string | null | undefined;

  /** The original prompt response from the API */
  readonly promptResponse: Prompt.Chat;

  /** The dependency resolution graph for the current prompt (undefined if the prompt has no dependencies) */
  readonly resolutionGraph?: Record<string, unknown>;

  /**
   * Compiles the chat prompt by replacing placeholders and variables
   *
   * First resolves placeholders with provided values, then applies variable substitution
   * to message content using Mustache templating. Unresolved placeholders remain
   * as placeholder objects in the output.
   *
   * @param variables - Key-value pairs for Mustache variable substitution in message content
   * @param placeholders - Key-value pairs where keys are placeholder names and values are ChatMessage arrays
   * @returns Array of ChatMessage objects and unresolved placeholder objects
   */
  compile(
    variables?: Record<string, string>,
    placeholders?: Record<string, any>
  ): (ChatMessageOrPlaceholder | any)[];

  /**
   * Converts the prompt to LangChain ChatPromptTemplate format
   *
   * Resolves placeholders with provided values and converts unresolved ones
   * to LangChain MessagesPlaceholder objects. Transforms variables from
   * {{var}} to {var} format without rendering them.
   *
   * @param options - Configuration object
   * @param options.placeholders - Key-value pairs for placeholder resolution
   * @returns Array of ChatMessage objects and LangChain MessagesPlaceholder objects
   */
  getLangchainPrompt(options?: {
    placeholders?: Record<string, any>;
  }): (ChatMessage | LangchainMessagesPlaceholder | any)[];

  /**
   * Serializes the prompt client to JSON
   *
   * @returns JSON string representation of the prompt
   */
  toJSON(): string;
}

Usage Examples:

import { LangfuseClient, ChatMessageType } from '@langfuse/client';
import { ChatPromptTemplate } from '@langchain/core/prompts';

const langfuse = new LangfuseClient();

// Get a chat prompt
const prompt = await langfuse.prompt.get("conversation", { type: "chat" });

// Access prompt properties
console.log(prompt.name);           // "conversation"
console.log(prompt.version);        // 1
console.log(prompt.type);           // "chat"
console.log(prompt.prompt);         // Array of ChatMessageWithPlaceholders
console.log(prompt.config);         // { temperature: 0.8, model: "gpt-4" }
console.log(prompt.labels);         // ["production"]
console.log(prompt.tags);           // ["conversational"]
console.log(prompt.isFallback);     // false

// Compile with variable substitution only
const compiledMessages = prompt.compile({
  user_name: "Alice",
  assistant_type: "helpful"
});
console.log(compiledMessages);
// [
//   { role: "system", content: "You are a helpful assistant." },
//   { type: "placeholder", name: "history" }, // Unresolved placeholder
//   { role: "user", content: "Hello Alice!" }
// ]

// Compile with variables and placeholders
const fullyCompiled = prompt.compile(
  { user_name: "Alice", assistant_type: "helpful" },
  {
    history: [
      { role: "user", content: "Previous question" },
      { role: "assistant", content: "Previous answer" }
    ]
  }
);
console.log(fullyCompiled);
// [
//   { role: "system", content: "You are a helpful assistant." },
//   { role: "user", content: "Previous question" },
//   { role: "assistant", content: "Previous answer" },
//   { role: "user", content: "Hello Alice!" }
// ]

// Empty placeholder array removes placeholder
const noHistory = prompt.compile(
  { user_name: "Bob" },
  { history: [] }  // Empty array - placeholder omitted
);
// Placeholder is removed from output

// Convert to LangChain format (unresolved placeholders)
const langchainMessages = prompt.getLangchainPrompt();
console.log(langchainMessages);
// [
//   { role: "system", content: "You are a {assistant_type} assistant." },
//   ["placeholder", "{history}"],  // LangChain MessagesPlaceholder format
//   { role: "user", content: "Hello {user_name}!" }
// ]

// Convert to LangChain format (with placeholder resolution)
const resolvedLangchain = prompt.getLangchainPrompt({
  placeholders: {
    history: [
      { role: "user", content: "Hi" },
      { role: "assistant", content: "Hello!" }
    ]
  }
});
console.log(resolvedLangchain);
// [
//   { role: "system", content: "You are a {assistant_type} assistant." },
//   { role: "user", content: "Hi" },
//   { role: "assistant", content: "Hello!" },
//   { role: "user", content: "Hello {user_name}!" }
// ]

// Use with LangChain
const langchainPrompt = ChatPromptTemplate.fromMessages(
  prompt.getLangchainPrompt()
);
const formatted = await langchainPrompt.formatMessages({
  assistant_type: "knowledgeable",
  user_name: "Alice",
  history: [
    { role: "user", content: "What is AI?" },
    { role: "assistant", content: "AI stands for Artificial Intelligence." }
  ]
});

// Multi-placeholder example
const complexPrompt = await langfuse.prompt.create({
  name: "multi-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{role}}." },
    { type: ChatMessageType.Placeholder, name: "examples" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

const compiled = complexPrompt.compile(
  { role: "expert", query: "Help me" },
  {
    examples: [
      { role: "user", content: "Example Q" },
      { role: "assistant", content: "Example A" }
    ],
    history: [
      { role: "user", content: "Previous Q" },
      { role: "assistant", content: "Previous A" }
    ]
  }
);
// All placeholders resolved, variables substituted

// Serialize to JSON
const json = prompt.toJSON();
const parsed = JSON.parse(json);
console.log(parsed);
// {
//   name: "conversation",
//   prompt: [
//     { role: "system", content: "You are {{assistant_type}} assistant." },
//     { type: "placeholder", name: "history" },
//     { role: "user", content: "Hello {{user_name}}!" }
//   ],
//   version: 1,
//   type: "chat",
//   config: { temperature: 0.8 },
//   labels: ["production"],
//   tags: ["conversational"],
//   isFallback: false
// }

Type Definitions

ChatMessageType

Enumeration of chat message types in prompts.

enum ChatMessageType {
  /** Regular chat message with role and content */
  ChatMessage = "chatmessage",

  /** Placeholder for dynamic content insertion */
  Placeholder = "placeholder"
}

Usage Examples:

import { ChatMessageType } from '@langfuse/client';

// Use in prompt creation
const prompt = await langfuse.prompt.create({
  name: "with-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "System message" },
    { type: ChatMessageType.Placeholder, name: "dynamic_content" },
    { role: "user", content: "User message" }
  ]
});

// Check message type
for (const message of prompt.prompt) {
  if ('type' in message && message.type === ChatMessageType.Placeholder) {
    console.log(`Found placeholder: ${message.name}`);
  } else if ('type' in message && message.type === ChatMessageType.ChatMessage) {
    console.log(`Found message: ${message.role} - ${message.content}`);
  }
}

ChatMessage

Represents a standard chat message with role and content.

interface ChatMessage {
  /** The role of the message sender (e.g., "system", "user", "assistant") */
  role: string;

  /** The content of the message */
  content: string;
}

Usage Examples:

import type { ChatMessage } from '@langfuse/client';

// Create chat messages
const messages: ChatMessage[] = [
  { role: "system", content: "You are helpful" },
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi there!" }
];

// Use as placeholder values
const compiled = chatPrompt.compile(
  { name: "Alice" },
  { history: messages }
);

// Use as fallback
const prompt = await langfuse.prompt.get("chat", {
  type: "chat",
  fallback: messages
});

ChatMessageWithPlaceholders

Union type for chat messages that can include placeholders.

type ChatMessageWithPlaceholders =
  | { type: "chatmessage"; role: string; content: string }
  | { type: "placeholder"; name: string };
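
Usage Examples:

Narrowing on the discriminant `type` field separates messages from placeholders (the type is restated locally so the snippet is self-contained):

```typescript
// Restated here for self-containment; normally imported from '@langfuse/client'.
type ChatMessageWithPlaceholders =
  | { type: "chatmessage"; role: string; content: string }
  | { type: "placeholder"; name: string };

// Narrow each entry on its discriminant `type` field.
function describe(entry: ChatMessageWithPlaceholders): string {
  return entry.type === "placeholder"
    ? `placeholder: ${entry.name}`
    : `${entry.role}: ${entry.content}`;
}

const promptMessages: ChatMessageWithPlaceholders[] = [
  { type: "chatmessage", role: "system", content: "You are helpful" },
  { type: "placeholder", name: "history" },
  { type: "chatmessage", role: "user", content: "Hello" },
];
```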

ChatMessageOrPlaceholder

Union type representing either a chat message or a placeholder.

type ChatMessageOrPlaceholder =
  | ChatMessage
  | ({ type: ChatMessageType.Placeholder } & PlaceholderMessage);

Usage Examples:

import type { ChatMessageOrPlaceholder } from '@langfuse/client';

// Return type of compile() method
const compiled: ChatMessageOrPlaceholder[] = chatPrompt.compile(
  { user: "Alice" },
  { history: [] }
);

// Filter messages and placeholders
const actualMessages = compiled.filter(
  (item): item is ChatMessage =>
    'role' in item && 'content' in item
);

const placeholders = compiled.filter(
  (item): item is { type: ChatMessageType.Placeholder; name: string } =>
    'type' in item && item.type === ChatMessageType.Placeholder
);

PlaceholderMessage

Represents a placeholder for dynamic content insertion.

interface PlaceholderMessage {
  /** Name of the placeholder variable */
  name: string;
}

Usage Examples:

import { ChatMessageType } from '@langfuse/client';
import type { PlaceholderMessage } from '@langfuse/core';

// Create placeholder in prompt
const placeholder: PlaceholderMessage & { type: ChatMessageType.Placeholder } = {
  type: ChatMessageType.Placeholder,
  name: "conversation_history"
};

// Use in prompt creation
const prompt = await langfuse.prompt.create({
  name: "with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    placeholder,
    { role: "user", content: "Query" }
  ]
});

LangchainMessagesPlaceholder

Represents a LangChain MessagesPlaceholder object for unresolved placeholders.

type LangchainMessagesPlaceholder = {
  /** Name of the variable that will provide the messages */
  variableName: string;

  /** Whether the placeholder is optional (defaults to false) */
  optional?: boolean;
};

Usage Examples:

import type { LangchainMessagesPlaceholder } from '@langfuse/client';

// getLangchainPrompt() returns this format for unresolved placeholders
const langchainMessages = chatPrompt.getLangchainPrompt();

// LangChain MessagesPlaceholder is represented as tuple
// ["placeholder", "{variableName}"]
const placeholderTuple = langchainMessages.find(
  item => Array.isArray(item) && item[0] === "placeholder"
);
// ["placeholder", "{history}"]

// Use directly with LangChain
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

const langchainPrompt = ChatPromptTemplate.fromMessages(
  chatPrompt.getLangchainPrompt()
);

// MessagesPlaceholder automatically created for unresolved placeholders
await langchainPrompt.formatMessages({
  history: [
    { role: "user", content: "Question" },
    { role: "assistant", content: "Answer" }
  ]
});

CreateChatPromptBodyWithPlaceholders

Type for creating chat prompts that support both regular messages and placeholders.

type CreateChatPromptBodyWithPlaceholders = {
  /** Specifies this is a chat prompt */
  type: "chat";

  /** Unique name for the prompt */
  name: string;

  /** Array of chat messages and/or placeholders */
  prompt: (ChatMessage | ChatMessageWithPlaceholders)[];

  /** Configuration object (e.g., model settings) */
  config?: unknown;

  /** List of deployment labels for this prompt version */
  labels?: string[];

  /** List of tags to apply to all versions of this prompt */
  tags?: string[];

  /** Commit message for this prompt version */
  commitMessage?: string;
};

Usage Examples:

import type { CreateChatPromptBodyWithPlaceholders } from '@langfuse/client';
import { ChatMessageType } from '@langfuse/client';

// Create prompt with mixed message types
const promptBody: CreateChatPromptBodyWithPlaceholders = {
  name: "flexible-chat",
  type: "chat",
  prompt: [
    // Regular chat message (no type field needed)
    { role: "system", content: "You are {{role}}" },
    // Explicit placeholder
    { type: ChatMessageType.Placeholder, name: "examples" },
    // Another regular message
    { role: "user", content: "{{query}}" }
  ],
  labels: ["production"],
  config: { temperature: 0.7 }
};

const created = await langfuse.prompt.create(promptBody);

// Backwards compatible: ChatMessage objects automatically get type field
const simpleBody = {
  name: "simple-chat",
  type: "chat" as const,
  prompt: [
    { role: "user", content: "Hello" }
    // Automatically converted to { type: "chatmessage", role: "user", content: "Hello" }
  ]
};

LangfusePromptClient

Union type representing either a text or chat prompt client.

type LangfusePromptClient = TextPromptClient | ChatPromptClient;

Usage Examples:

import type { LangfusePromptClient } from '@langfuse/client';

// Return type of get() without type specification
const prompt: LangfusePromptClient = await langfuse.prompt.get("unknown-type");

// Type narrowing
if (prompt.type === "text") {
  // prompt is TextPromptClient
  const compiled = prompt.compile({ name: "Alice" });
} else {
  // prompt is ChatPromptClient
  const compiled = prompt.compile(
    { name: "Alice" },
    { history: [] }
  );
}

// Type guard function
function isTextPrompt(prompt: LangfusePromptClient): prompt is TextPromptClient {
  return prompt.type === "text";
}

function isChatPrompt(prompt: LangfusePromptClient): prompt is ChatPromptClient {
  return prompt.type === "chat";
}

// Use type guards
if (isTextPrompt(prompt)) {
  console.log("Text prompt:", prompt.prompt);
} else if (isChatPrompt(prompt)) {
  console.log("Chat prompt with", prompt.prompt.length, "messages");
}

Advanced Usage

Variable Substitution

Prompts support Mustache-style variable substitution with {{variable}} syntax.

Text Prompts:

const prompt = await langfuse.prompt.create({
  name: "template",
  prompt: "Hello {{name}}! You have {{count}} new messages."
});

// Compile with all variables
const full = prompt.compile({
  name: "Alice",
  count: "5"
});
// "Hello Alice! You have 5 new messages."

// Partial compilation
const partial = prompt.compile({ name: "Bob" });
// "Hello Bob! You have {{count}} new messages."

// No escaping - JSON safe
const jsonPrompt = prompt.compile({
  data: JSON.stringify({ key: "value" })
});
// Special characters are not HTML-escaped
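The substitution shown above can be approximated with a simple replacer that leaves unknown variables intact and performs no escaping (illustrative only; the SDK uses Mustache templating):

```typescript
// Illustrative {{variable}} replacer: substitutes known keys,
// leaves unmatched placeholders untouched, performs no HTML escaping.
function substitute(template: string, variables: Record<string, string>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key) =>
    key in variables ? variables[key] : match
  );
}
```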

Chat Prompts:

const chatPrompt = await langfuse.prompt.create({
  name: "chat-template",
  type: "chat",
  prompt: [
    { role: "system", content: "You are {{role}}" },
    { role: "user", content: "Help with {{task}}" }
  ]
});

const compiled = chatPrompt.compile({
  role: "expert",
  task: "coding"
});
// [
//   { role: "system", content: "You are expert" },
//   { role: "user", content: "Help with coding" }
// ]

Placeholder Resolution

Chat prompts support placeholders for dynamic message arrays.

Basic Placeholder Usage:

const prompt = await langfuse.prompt.create({
  name: "with-history",
  type: "chat",
  prompt: [
    { role: "system", content: "You are helpful" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

// Resolve placeholder
const compiled = prompt.compile(
  { query: "What is AI?" },
  {
    history: [
      { role: "user", content: "Previous question" },
      { role: "assistant", content: "Previous answer" }
    ]
  }
);
// Placeholder replaced with provided messages

// Leave placeholder unresolved
const withPlaceholder = prompt.compile({ query: "What is AI?" });
// Placeholder remains in output as { type: "placeholder", name: "history" }

// Remove placeholder with empty array
const noHistory = prompt.compile(
  { query: "What is AI?" },
  { history: [] }
);
// Placeholder is omitted from output

Multiple Placeholders:

const multiPlaceholder = await langfuse.prompt.create({
  name: "multi",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    { type: ChatMessageType.Placeholder, name: "examples" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{{query}}" }
  ]
});

const compiled = multiPlaceholder.compile(
  { query: "Help me" },
  {
    examples: [
      { role: "user", content: "Example Q" },
      { role: "assistant", content: "Example A" }
    ],
    history: [
      { role: "user", content: "Previous Q" },
      { role: "assistant", content: "Previous A" }
    ]
  }
);
// Both placeholders resolved in order

Invalid Placeholder Values:

// Non-array placeholder values are stringified
const invalid = prompt.compile(
  { query: "Test" },
  { history: "not an array" }  // Invalid type
);
// Invalid value is JSON.stringified: '"not an array"'

LangChain Integration

Seamless integration with LangChain prompt templates.

Text Prompts with LangChain:

import { PromptTemplate } from '@langchain/core/prompts';

const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });

// Convert to LangChain format ({{var}} -> {var})
const langchainFormat = textPrompt.getLangchainPrompt();

// Create LangChain template
const template = PromptTemplate.fromTemplate(langchainFormat);

// Format with LangChain
const result = await template.format({
  name: "Alice",
  location: "Paris"
});

// JSON handling - braces are automatically escaped
const jsonPrompt = await langfuse.prompt.create({
  name: "json-template",
  prompt: `{
  "user": "{{username}}",
  "metadata": {
    "timestamp": "{{timestamp}}"
  }
}`
});

const langchainJson = jsonPrompt.getLangchainPrompt();
// JSON braces doubled {{}}, variable braces single {variable}
const jsonTemplate = PromptTemplate.fromTemplate(langchainJson);
const formatted = await jsonTemplate.format({
  username: "alice",
  timestamp: "2024-01-01"
});
// Valid JSON with variables substituted
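The brace handling above can be sketched locally. `toLangchainTemplate` is a hypothetical approximation of what `getLangchainPrompt()` does for text prompts — the real implementation may differ — but it shows why JSON templates survive LangChain's f-string parsing:

```typescript
// Hypothetical {{var}} -> {var} conversion with literal-brace escaping.
function toLangchainTemplate(template: string): string {
  // 1. Protect {{var}} markers with a brace-free sentinel token
  const protectedVars = template.replace(
    /\{\{\s*(\w+)\s*\}\}/g,
    (_m, name) => `\u0000${name}\u0000`
  );
  // 2. Double every remaining literal brace (JSON structure, etc.)
  const escaped = protectedVars.replace(/\{/g, "{{").replace(/\}/g, "}}");
  // 3. Restore variables as single-brace LangChain placeholders
  return escaped.replace(/\u0000(\w+)\u0000/g, (_m, name) => `{${name}}`);
}

toLangchainTemplate("Hello {{name}}");
// -> "Hello {name}"
toLangchainTemplate('{"user": "{{username}}"}');
// -> '{{"user": "{username}"}}'
```

Protecting the variable markers first is what keeps variable braces single while JSON braces get doubled.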

Chat Prompts with LangChain:

import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });

// Convert to LangChain format
const langchainMessages = chatPrompt.getLangchainPrompt();

// Create LangChain chat template
const template = ChatPromptTemplate.fromMessages(langchainMessages);

// Format with LangChain
const formatted = await template.formatMessages({
  role: "helpful",
  query: "What is AI?",
  history: [
    { role: "user", content: "Previous Q" },
    { role: "assistant", content: "Previous A" }
  ]
});

// Unresolved placeholders become MessagesPlaceholder
const withPlaceholder = await langfuse.prompt.create({
  name: "with-placeholder",
  type: "chat",
  prompt: [
    { role: "system", content: "System" },
    { type: ChatMessageType.Placeholder, name: "history" },
    { role: "user", content: "{query}" }
  ]
});

const langchainFormat = withPlaceholder.getLangchainPrompt();
// [
//   { role: "system", content: "System" },
//   ["placeholder", "{history}"],  // LangChain MessagesPlaceholder tuple
//   { role: "user", content: "{query}" }
// ]

const promptTemplate = ChatPromptTemplate.fromMessages(langchainFormat);
// MessagesPlaceholder automatically created for "history" variable

Resolved vs Unresolved Placeholders:

// Option 1: Resolve before LangChain conversion
const resolved = chatPrompt.getLangchainPrompt({
  placeholders: {
    history: [
      { role: "user", content: "Hi" },
      { role: "assistant", content: "Hello" }
    ]
  }
});
// Placeholder replaced with messages

// Option 2: Leave unresolved, let LangChain handle it
const unresolved = chatPrompt.getLangchainPrompt();
// Placeholder becomes MessagesPlaceholder

const template = ChatPromptTemplate.fromMessages(unresolved);
await template.formatMessages({
  history: [/* messages */],  // Provided at format time
  // other variables
});

Caching Strategies

Optimize performance with intelligent caching.

Default Caching (60 seconds):

// First call fetches from API and caches
const prompt1 = await langfuse.prompt.get("my-prompt");

// Second call within 60 seconds uses cache (no API call)
const prompt2 = await langfuse.prompt.get("my-prompt");

Custom Cache TTL:

// Cache for 5 minutes
const longCached = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 300
});

// Cache for 1 hour
const veryLongCached = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 3600
});

Disable Caching:

// Always fetch fresh (no caching)
const fresh = await langfuse.prompt.get("my-prompt", {
  cacheTtlSeconds: 0
});

Stale-While-Revalidate Pattern:

// After cache expires, returns stale cache while fetching fresh in background
const prompt = await langfuse.prompt.get("my-prompt");
// If cache is expired:
// 1. Returns old cached version immediately
// 2. Fetches fresh version in background
// 3. Updates cache for next request

// Concurrent requests to expired cache trigger only one refresh
const promises = [
  langfuse.prompt.get("my-prompt"),
  langfuse.prompt.get("my-prompt"),
  langfuse.prompt.get("my-prompt")
];
const results = await Promise.all(promises);
// Only one API call made, all get the same result

Cache Invalidation:

// Cache is automatically invalidated on update
await langfuse.prompt.update({
  name: "my-prompt",
  version: 1,
  newLabels: ["production"]
});

// Next get() fetches fresh data
const fresh = await langfuse.prompt.get("my-prompt");

Cache Keys:

// Different cache keys for different retrieval options

// Key: "my-prompt-label:production"
await langfuse.prompt.get("my-prompt");

// Key: "my-prompt-label:staging"  (different key)
await langfuse.prompt.get("my-prompt", { label: "staging" });

// Key: "my-prompt-version:2"  (different key)
await langfuse.prompt.get("my-prompt", { version: 2 });

// Each has independent cache
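A sketch of how those per-option keys could be derived — the real client's key format is an internal detail and may differ:

```typescript
// Hypothetical cache-key derivation matching the examples above:
// version wins if given, otherwise label (defaulting to "production").
function cacheKey(
  name: string,
  options?: { version?: number; label?: string }
): string {
  if (options?.version !== undefined) {
    return `${name}-version:${options.version}`;
  }
  return `${name}-label:${options?.label ?? "production"}`;
}
```

Because label and version produce distinct keys, fetching `my-prompt` by label never serves a version-pinned cache entry, and vice versa.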

Fallback Handling

Provide fallback content when prompt fetch fails.

Text Fallback:

const prompt = await langfuse.prompt.get("my-prompt", {
  type: "text",
  fallback: "Default greeting: Hello {{name}}!"
});

// If "my-prompt" doesn't exist or fetch fails:
// - Returns TextPromptClient with fallback content
// - isFallback property is true
// - version is 0
// - labels reflect the provided label option or default

if (prompt.isFallback) {
  console.log("Using fallback content");
}

Chat Fallback:

const chatPrompt = await langfuse.prompt.get("conversation", {
  type: "chat",
  fallback: [
    { role: "system", content: "You are a helpful assistant" },
    { role: "user", content: "Hello {{name}}" }
  ]
});

// If "conversation" doesn't exist or fetch fails:
// - Returns ChatPromptClient with fallback messages
// - isFallback property is true
// - version is 0

if (chatPrompt.isFallback) {
  console.warn("Prompt fetch failed, using fallback");
}

Fallback Best Practices:

// Production safety: always provide fallback
const productionPrompt = await langfuse.prompt.get("critical-prompt", {
  type: "text",
  fallback: "Safe default prompt",
  maxRetries: 3,
  fetchTimeoutMs: 5000
});

// Development: fail fast without fallback
try {
  const devPrompt = await langfuse.prompt.get("test-prompt");
} catch (error) {
  console.error("Prompt not found:", error);
  // Handle error explicitly
}

// Conditional fallback
async function getPromptWithFallback(name: string) {
  const isProd = process.env.NODE_ENV === 'production';

  return await langfuse.prompt.get(name, {
    type: "text",
    fallback: isProd ? "Default safe prompt" : undefined,
    maxRetries: isProd ? 3 : 1
  });
}

Error Handling

Handle various error scenarios gracefully.

Prompt Not Found:

try {
  const prompt = await langfuse.prompt.get("non-existent");
} catch (error) {
  console.error("Prompt not found:", error.message);
  // Use fallback or default behavior
}

Network Errors:

try {
  const prompt = await langfuse.prompt.get("my-prompt", {
    fetchTimeoutMs: 1000,  // 1 second timeout
    maxRetries: 2
  });
} catch (error) {
  if (error.message.includes("timeout")) {
    console.error("Request timeout");
  } else if (error.message.includes("network")) {
    console.error("Network error");
  }
  // Fallback logic
}

Version/Label Not Found:

try {
  // Version doesn't exist
  const prompt = await langfuse.prompt.get("my-prompt", { version: 999 });
} catch (error) {
  console.error("Version not found:", error.message);
}

try {
  // Label doesn't exist
  const prompt = await langfuse.prompt.get("my-prompt", {
    label: "non-existent"
  });
} catch (error) {
  console.error("Label not found:", error.message);
}

Update Errors:

try {
  await langfuse.prompt.update({
    name: "non-existent",
    version: 1,
    newLabels: ["production"]
  });
} catch (error) {
  console.error("Update failed:", error.message);
}

try {
  await langfuse.prompt.update({
    name: "my-prompt",
    version: 999,  // Invalid version
    newLabels: ["production"]
  });
} catch (error) {
  console.error("Invalid version:", error.message);
}

TypeScript Support

Full type safety and inference for prompt operations.

Type Inference:

// Type inferred based on 'type' option
const textPrompt = await langfuse.prompt.get("greeting", { type: "text" });
// textPrompt: TextPromptClient

const chatPrompt = await langfuse.prompt.get("conversation", { type: "chat" });
// chatPrompt: ChatPromptClient

// Without type specification
const prompt = await langfuse.prompt.get("unknown");
// prompt: TextPromptClient | ChatPromptClient

Type Guards:

import { TextPromptClient, ChatPromptClient } from '@langfuse/client';

const prompt = await langfuse.prompt.get("unknown");

if (prompt instanceof TextPromptClient) {
  const text: string = prompt.compile({ name: "Alice" });
}

if (prompt instanceof ChatPromptClient) {
  const messages = prompt.compile(
    { name: "Alice" },
    { history: [] }
  );
}

Generic Types:

import type {
  LangfusePromptClient,
  TextPromptClient,
  ChatPromptClient,
  ChatMessage,
  ChatMessageWithPlaceholders,
  CreatePromptRequest
} from '@langfuse/client';

// Function with generic prompt client
function processPrompt(prompt: LangfusePromptClient) {
  if (prompt.type === "text") {
    return prompt.compile({ var: "value" });
  } else {
    return prompt.compile({ var: "value" }, {});
  }
}

// Type-safe prompt creation
const textRequest: CreatePromptRequest.Text = {
  name: "test",
  prompt: "Hello {{name}}",
  labels: ["production"]
};

const chatRequest: CreatePromptRequest.Chat = {
  name: "test-chat",
  type: "chat",
  prompt: [
    { role: "system", content: "System" }
  ]
};

Performance Considerations

Caching

  • Default TTL: 60 seconds strikes a balance between freshness and performance
  • Production: Use longer TTL (300-3600s) for stable prompts
  • Development: Use shorter TTL (10-30s) or disable (0) for rapid iteration
  • Stale-While-Revalidate: Returns immediately even on expired cache

Concurrent Requests

  • Multiple concurrent requests to same expired prompt trigger only one refresh
  • Reduces API load and improves response times
  • Automatic deduplication of in-flight refresh requests

Compilation Performance

  • Variable substitution is fast (Mustache rendering)
  • Placeholder resolution is O(n), where n is the number of messages
  • LangChain conversion adds minimal overhead

Best Practices

// ✅ Good: Reuse prompt client
const prompt = await langfuse.prompt.get("my-prompt");
const result1 = prompt.compile({ name: "Alice" });
const result2 = prompt.compile({ name: "Bob" });

// ❌ Bad: Fetch repeatedly
const result1 = (await langfuse.prompt.get("my-prompt")).compile({ name: "Alice" });
const result2 = (await langfuse.prompt.get("my-prompt")).compile({ name: "Bob" });

// ✅ Good: Cache for appropriate duration
const stablePrompt = await langfuse.prompt.get("stable", {
  cacheTtlSeconds: 3600  // 1 hour for stable prompts
});

// ✅ Good: Batch operations
const [prompt1, prompt2, prompt3] = await Promise.all([
  langfuse.prompt.get("prompt-1"),
  langfuse.prompt.get("prompt-2"),
  langfuse.prompt.get("prompt-3")
]);

// ✅ Good: Production safety with fallback
const prompt = await langfuse.prompt.get("critical", {
  type: "text",
  fallback: "Default prompt",
  maxRetries: 3,
  cacheTtlSeconds: 300
});

Migration Examples

From Hardcoded Prompts

Before:

const systemMessage = "You are a helpful assistant.";
const userMessage = `Hello ${userName}! How can I help?`;

After:

const prompt = await langfuse.prompt.get("greeting", { type: "chat" });
const messages = prompt.compile(
  { user_name: userName },
  {}
);

From Template Strings

Before:

function generatePrompt(task: string, context: string) {
  return `Generate code for: ${task}

Context: ${context}`;
}

After:

const prompt = await langfuse.prompt.get("code-generator", { type: "text" });
const generated = prompt.compile({ task, context });

From LangChain Direct Usage

Before:

import { ChatPromptTemplate } from '@langchain/core/prompts';

const template = ChatPromptTemplate.fromMessages([
  ["system", "You are a {role} assistant"],
  ["user", "{query}"]
]);

After:

const prompt = await langfuse.prompt.get("assistant", { type: "chat" });
const template = ChatPromptTemplate.fromMessages(
  prompt.getLangchainPrompt()
);
// Prompt now managed in Langfuse UI with versioning
