Common Workflows

Practical examples and patterns for common LangSmith use cases.

Production Monitoring

Monitor LLM applications in production with automatic tracing and feedback collection.

import { traceable, getCurrentRunTree } from "langsmith/traceable";
import { Client } from "langsmith";

const client = new Client();

// Trace production application
const productionBot = traceable(
  async (input: string) => {
    // Capture this run's ID so user feedback can be attached to it later
    const runId = getCurrentRunTree().id;
    const response = await processWithLLM(input); // your LLM call
    return { response, runId };
  },
  {
    name: "production-bot",
    project_name: "production",
    metadata: { environment: "prod", version: "1.0" },
    tags: ["production", "monitored"]
  }
);

// Monitor with feedback
const result = await productionBot("user query");

// Collect user feedback (run ID and feedback key are positional arguments)
await client.createFeedback(result.runId, "user_satisfaction", {
  score: 1,  // positive
  feedbackSourceType: "app"
});

Testing and Evaluation

Create datasets and evaluate model performance systematically.

import { evaluate } from "langsmith/evaluation";
import { Client } from "langsmith";

const client = new Client();

// Create test dataset
const dataset = await client.createDataset({
  datasetName: "qa-test-set",
  description: "QA pairs for testing"
});

await client.createExamples({
  datasetId: dataset.id,
  inputs: [
    { question: "What is 2+2?" },
    { question: "What is the capital of France?" }
  ],
  outputs: [
    { answer: "4" },
    { answer: "Paris" }
  ]
});

// Define evaluator
const correctnessEvaluator = ({ run, example }) => ({
  key: "correctness",
  score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0
});

// Run evaluation (myBot is your traced target function)
const results = await evaluate(myBot, {
  data: "qa-test-set",
  evaluators: [correctnessEvaluator],
  experimentPrefix: "qa-bot-v1"
});

// Each result row nests its evaluator scores under evaluationResults
const scores = results.results.map(
  (r) => r.evaluationResults.results.find((er) => er.key === "correctness")?.score
);
console.log(`Accuracy: ${scores.filter((s) => s === 1).length / scores.length}`);

A/B Testing

Compare different models or configurations to determine which performs better.

import { evaluate, evaluateComparative } from "langsmith/evaluation";

// Run two experiments
const experimentA = await evaluate(modelA, {
  data: "test-dataset",
  experimentPrefix: "model-a",
  metadata: { model: "gpt-4", temperature: 0.7 }
});

const experimentB = await evaluate(modelB, {
  data: "test-dataset",
  experimentPrefix: "model-b",
  metadata: { model: "gpt-3.5-turbo", temperature: 0.7 }
});

// Compare experiments (comparative evaluators key scores by run ID)
const comparison = await evaluateComparative(
  [experimentA.experimentName, experimentB.experimentName],
  {
    evaluators: [
      (runs, example) => {
        // Score each run side-by-side; scoreQuality is your own metric
        const scores: Record<string, number> = {};
        for (const run of runs) {
          scores[run.id] = scoreQuality(run.outputs);
        }
        return { key: "quality", scores };
      }
    ]
  }
);

Prompt Development

Version control and manage prompts in the LangSmith prompt hub.

import { Client } from "langsmith";

const client = new Client();

// Create and version prompts
await client.createPrompt("customer-support", {
  description: "Customer support bot prompt",
  tags: ["support", "v1"]
});

// Push initial version
await client.pushPrompt("customer-support", {
  object: {
    type: "chat",
    messages: [
      { role: "system", content: "You are a helpful customer support agent." },
      { role: "user", content: "{user_query}" }
    ]
  },
  description: "Initial version"
});

// Pull the latest commit and use it
const prompt = await client.pullPromptCommit("customer-support");

// Update with new version
await client.pushPrompt("customer-support", {
  object: {
    type: "chat",
    messages: [
      { role: "system", content: "You are a friendly and efficient customer support agent." },
      { role: "user", content: "{user_query}" }
    ]
  },
  description: "Made tone friendlier"
});

// View version history
for await (const commit of client.listCommits("customer-support")) {
  console.log(`${commit.created_at}: ${commit.commit_hash}`);
}

Key Features by Use Case

For Development and Debugging

  • Automatic tracing with the traceable() wrapper
  • Hierarchical trace visualization (see the sketch after this list)
  • Input/output capture with transformations
  • Error tracking and stack traces
  • Distributed tracing across services
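
For example, nesting traceable() calls is what produces the hierarchical traces listed above: a wrapped function invoked inside another wrapped function automatically becomes a child run. A minimal sketch (the function names here are illustrative):

import { traceable } from "langsmith/traceable";

// Child step: wrapped separately so it appears as a nested run
const retrieveDocs = traceable(
  async (query: string) => [`stub document for ${query}`],
  { name: "retrieve-docs", run_type: "retriever" }
);

// Parent step: calling retrieveDocs inside creates a child run in the trace
const answerQuestion = traceable(
  async (question: string) => {
    const docs = await retrieveDocs(question);
    return `Answer derived from ${docs.length} document(s)`;
  },
  { name: "answer-question" }
);

await answerQuestion("What is LangSmith?");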

For Testing and Evaluation

  • Dataset management (create, version, share)
  • Evaluation framework with custom evaluators
  • Comparative experiments for A/B testing
  • Summary evaluators for aggregate metrics (see the sketch after this list)
  • Test framework integration (Jest, Vitest)
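
Summary evaluators run once over the whole experiment rather than per example, which is how aggregate metrics are produced. A sketch building on the correctness evaluator and myBot target from earlier, assuming the runs and examples arrays are aligned pairwise:

import { evaluate } from "langsmith/evaluation";

// Summary evaluator: receives all runs and all examples, returns one aggregate score
const exactMatchRate = (runs, examples) => ({
  key: "exact_match_rate",
  score:
    runs.filter((run, i) => run.outputs?.answer === examples[i]?.outputs?.answer)
      .length / runs.length
});

await evaluate(myBot, {
  data: "qa-test-set",
  evaluators: [correctnessEvaluator],
  summaryEvaluators: [exactMatchRate],
  experimentPrefix: "qa-bot-with-summary"
});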

For Production Monitoring

  • Real-time trace collection
  • Feedback collection from users and models
  • Presigned tokens for secure feedback (see the sketch after this list)
  • Project-based organization
  • Filtering and search capabilities
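
As a sketch of the presigned-token flow: the server mints a token scoped to one run and one feedback key, and the frontend posts feedback to the token's URL without ever seeing your API key (the exact shape of the token response may vary by SDK version):

import { Client } from "langsmith";

const client = new Client();

// Server-side: mint a token tied to a specific run and feedback key
// (runId is assumed to have been captured during tracing)
const token = await client.createPresignedFeedbackToken(
  runId,
  "user_satisfaction"
);

// Client-side: POST the score directly to the presigned URL, no API key needed
await fetch(token.url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ score: 1 })
});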

For Team Collaboration

  • Shared projects and datasets
  • Prompt versioning and sharing
  • Annotation queues for human review (see the sketch after this list)
  • Public dataset sharing
  • Run sharing and permalinks
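
To make the collaboration features concrete, here is a hedged sketch that shares a run by permalink and pushes it into an annotation queue for human review (runId is again assumed to come from an earlier trace):

import { Client } from "langsmith";

const client = new Client();

// Create a public permalink for a run
const sharedUrl = await client.shareRun(runId);
console.log("Share this trace:", sharedUrl);

// Create an annotation queue and add the run for human review
const queue = await client.createAnnotationQueue({
  name: "support-bot-review",
  description: "Human review of flagged production responses"
});
await client.addRunsToAnnotationQueue(queue.id, [runId]);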

Utility Functions

Global utility functions for configuring LangSmith SDK behavior.

Override Fetch Implementation

/**
 * Override the fetch implementation used by the client
 * @param fetch - Custom fetch function (e.g., for proxies or mocking)
 */
function overrideFetchImplementation(fetch: typeof globalThis.fetch): void;

Usage Example:

import { overrideFetchImplementation } from "langsmith";

// Use custom fetch (e.g., for proxy or testing)
const customFetch: typeof fetch = (input, init) => {
  console.log("Fetching:", input);
  return fetch(input, init);
};
overrideFetchImplementation(customFetch);

Get Default Project Name

/**
 * Get the default project name from environment variables
 * @returns Project name from LANGCHAIN_PROJECT or LANGCHAIN_SESSION env vars
 */
function getDefaultProjectName(): string;

Usage Example:

import { getDefaultProjectName } from "langsmith";

// Get default project name from environment
const projectName = getDefaultProjectName();
console.log("Using project:", projectName);

UUID Generation

/**
 * Generate a random UUID v7 string
 * @returns A UUID v7 string
 */
function uuid7(): string;

/**
 * Generate a UUID v7 from a timestamp
 * @param timestamp - The timestamp in milliseconds or ISO string
 * @returns A UUID v7 string
 */
function uuid7FromTime(timestamp: number | string): string;

Usage Examples:

import { uuid7, uuid7FromTime } from "langsmith";

// Generate UUID v7
const runId = uuid7();
console.log("Run ID:", runId);

// Generate UUID v7 from timestamp
const timestampId = uuid7FromTime(Date.now());
const dateId = uuid7FromTime("2024-01-01T00:00:00Z");

Prompt Cache

LangSmith provides a built-in caching mechanism for prompts to reduce latency and API calls.

Cache Class

/**
 * Cache class for storing and retrieving prompts with TTL and refresh capabilities
 */
class Cache {
  constructor(config?: CacheConfig);

  /** Get cached value or fetch if missing/stale */
  get(key: string): Promise<PromptCommit | undefined>;

  /** Store value in cache */
  set(key: string, value: PromptCommit): void;

  /** Clear all cached entries */
  clear(): void;

  /** Stop background refresh timers */
  stop(): void;
}

interface CacheConfig {
  /** Maximum entries in cache (LRU eviction when exceeded). Default: 100 */
  maxSize?: number;
  /** Time in seconds before entry is stale. null = infinite TTL. Default: 3600 */
  ttlSeconds?: number | null;
  /** How often to check for stale entries in seconds. Default: 60 */
  refreshIntervalSeconds?: number;
  /** Function to fetch fresh data when cache miss or stale */
  fetchFunc?: (key: string) => Promise<PromptCommit>;
}

Cache Usage Example

import { Cache, Client } from "langsmith";

const client = new Client();

// Use prompt cache
const cache = new Cache({
  maxSize: 100,
  ttlSeconds: 3600,
  fetchFunc: async (key) => {
    // Fetch prompt from LangSmith
    return await client.pullPromptCommit(key);
  },
});

const prompt = await cache.get("my-prompt:latest");

// Stop cache when done
cache.stop();

Related Documentation

  • Getting Started - Installation and first steps
  • Tracing - Comprehensive tracing guide
  • Evaluation - Evaluation framework
  • Client API - Complete API reference
  • Prompts - Prompt management
  • Datasets - Dataset operations
  • Feedback - Feedback collection