LangSmith

LangSmith is a comprehensive TypeScript SDK for the LangSmith platform, enabling developers to trace, debug, evaluate, and monitor LLM applications and intelligent agents. It provides seamless integration with LangChain and standalone capabilities through decorators, wrapper functions, and a full-featured client API.

Package Information

  • Package Name: langsmith
  • Package Type: npm
  • Language: TypeScript
  • Installation: npm install langsmith

Core Imports

IMPORTANT: Subpath Exports

LangSmith uses subpath exports for optimal tree-shaking and module organization. The main export (langsmith) provides core classes and utilities. Specialized features like traceable(), evaluation, and wrappers are available through subpath imports.

// Core classes and utilities from main export
import {
  Client,
  RunTree,
  uuid7,
  Cache,
  __version__,
  type ClientConfig,
  type LangSmithTracingClientInterface,
} from "langsmith";

// Traceable decorator from subpath export
import { traceable } from "langsmith/traceable";

// Evaluation functions from subpath export
import { evaluate } from "langsmith/evaluation";

// Wrappers from subpath exports
import { wrapOpenAI } from "langsmith/wrappers/openai";
import { wrapAnthropic } from "langsmith/wrappers/anthropic";

// Anonymization from subpath export
import { createAnonymizer } from "langsmith/anonymizer";

For CommonJS:

const { Client, RunTree, uuid7, Cache, __version__ } = require("langsmith");
const { traceable } = require("langsmith/traceable");
const { evaluate } = require("langsmith/evaluation");
const { wrapOpenAI } = require("langsmith/wrappers/openai");
const { createAnonymizer } = require("langsmith/anonymizer");

Installation

Install via npm:

npm install langsmith

Or with yarn:

yarn add langsmith

Environment Setup

Configure LangSmith with environment variables:

export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_api_key
export LANGCHAIN_PROJECT=your_project_name  # Optional: defaults to "default"

Alternative configuration in code:

import { Client } from "langsmith";

const client = new Client({
  apiKey: "your_api_key",
  apiUrl: "https://api.smith.langchain.com"
});

Quick Reference

Core APIs

| API | Import Path | Purpose |
| --- | --- | --- |
| traceable() | langsmith/traceable | Wrap functions for automatic tracing |
| Client | langsmith | Main client for API operations |
| RunTree | langsmith | Manual trace tree construction |
| evaluate() | langsmith/evaluation | Run dataset evaluations |
| wrapOpenAI() | langsmith/wrappers/openai | Trace OpenAI SDK calls |
| wrapAISDK() | langsmith/experimental/vercel | Trace Vercel AI SDK |
| createAnonymizer() | langsmith/anonymizer | Redact sensitive data |
| test() | langsmith/jest or langsmith/vitest | LangSmith-tracked testing |
| Cache | langsmith | Prompt caching system |

Client Methods (Common)

  • Projects: createProject(), readProject(), listProjects(), updateProject(), deleteProject(), hasProject(), getProjectUrl()
  • Runs: createRun(), updateRun(), readRun(), listRuns(), shareRun(), unshareRun(), getRunUrl(), listGroupRuns(), getRunStats()
  • Datasets: createDataset(), readDataset(), listDatasets(), updateDataset(), deleteDataset(), hasDataset(), shareDataset(), indexDataset(), similarExamples()
  • Examples: createExample(), createExamples(), updateExample(), listExamples(), deleteExample(), deleteExamples()
  • Feedback: createFeedback(), updateFeedback(), readFeedback(), listFeedback(), deleteFeedback(), createPresignedFeedbackToken()
  • Prompts: createPrompt(), pullPrompt(), pushPrompt(), listPrompts(), deletePrompt(), likePrompt(), unlikePrompt()
  • Annotation Queues: createAnnotationQueue(), readAnnotationQueue(), listAnnotationQueues(), updateAnnotationQueue(), deleteAnnotationQueue(), addRunsToAnnotationQueue(), getRunFromAnnotationQueue(), deleteRunFromAnnotationQueue(), getSizeFromAnnotationQueue()
  • Utility: awaitPendingTraceBatches(), flush(), cleanup(), uuid7(), getDefaultProjectName()

Tracing Methods

  • traceable(): Wrap functions for automatic tracing
  • getCurrentRunTree(): Access the current run context
  • withRunTree(): Execute a function with a run context
  • isTraceableFunction(): Check whether a function is traceable
  • RunTree class: createChild(), end(), postRun(), patchRun(), toHeaders(), addEvent(), fromHeaders(), fromDottedOrder() (see the sketch below)
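
A minimal manual-tracing sketch using the RunTree methods above (the run names, inputs, and outputs are placeholder values):

import { RunTree } from "langsmith";

// Root run for the whole pipeline
const parent = new RunTree({
  name: "my-pipeline",
  run_type: "chain",
  inputs: { question: "What is LangSmith?" },
});
await parent.postRun();

// Nested child run for a single LLM call
const child = parent.createChild({
  name: "llm-call",
  run_type: "llm",
  inputs: { prompt: "What is LangSmith?" },
});
await child.postRun();
await child.end({ output: "LangSmith is an observability platform." });
await child.patchRun();

// Close out the root run
await parent.end({ answer: "LangSmith is an observability platform." });
await parent.patchRun();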

Key Interfaces

  • Client: ClientConfig, TracerSession, TracerSessionResult, Run, RunCreate, RunUpdate
  • Datasets: Dataset, Example, ExampleCreate, DatasetShareSchema, DatasetDiffInfo
  • Evaluation: EvaluateOptions, EvaluationResult, EvaluationResults, RunEvaluator, StringEvaluator
  • Tracing: TraceableConfig, TraceableFunction, RunTreeConfig, RunEvent, InvocationParamsSchema
  • Feedback: FeedbackCreate, Feedback, FeedbackIngestToken, FeedbackConfig
  • Caching: Cache, CacheConfig, CacheMetrics
  • Anonymization: Anonymizer, StringNodeRule, StringNodeProcessor, AnonymizerOptions

Complete API Reference →

Quick Start

Automatic Tracing with traceable

The simplest way to add tracing to your application:

import { traceable } from "langsmith/traceable";

// Wrap any function for automatic tracing
const chatbot = traceable(
  async (userInput: string) => {
    // Your LLM application logic
    const response = await yourLLMCall(userInput);
    return response;
  },
  { name: "chatbot", run_type: "chain" }
);

// Call the function - traces are automatically sent to LangSmith
const result = await chatbot("Hello, how are you?");

Quick Dataset Evaluation

import { evaluate } from "langsmith/evaluation";

// Define your target function
async function myBot(input: { question: string }) {
  return { answer: await generateAnswer(input.question) };
}

// Run evaluation
const results = await evaluate(myBot, {
  data: "my-qa-dataset",  // Dataset name or examples array
  evaluators: [
    (run, example) => ({
      key: "correctness",
      score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0
    })
  ]
});

Client API Usage

import { Client } from "langsmith";

const client = new Client();

// Create a project
const project = await client.createProject({
  projectName: "my-chatbot",
  description: "Production chatbot"
});

// List runs
for await (const run of client.listRuns({ projectName: "my-chatbot" })) {
  console.log(run.name, run.status);
}

// Create feedback
await client.createFeedback(runId, "user_rating", {
  score: 1,  // thumbs up
  comment: "Great response!"
});

SDK Wrappers

Automatic tracing for popular AI SDKs:

import { wrapOpenAI } from "langsmith/wrappers/openai";
import OpenAI from "openai";

// Wrap OpenAI client
const openai = wrapOpenAI(new OpenAI(), {
  project_name: "openai-project"
});

// All calls are automatically traced
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }]
});

Core Concepts

Tracing and Observability

LangSmith provides comprehensive tracing for LLM applications:

  • Automatic Tracing: Use traceable() decorator to wrap functions
  • Manual Tracing: Use RunTree for fine-grained control
  • Distributed Tracing: Propagate traces across services via headers (see the sketch after these lists)
  • Framework Integration: Built-in support for LangChain, OpenAI, Anthropic, Vercel AI

Traces capture:

  • Inputs and outputs
  • Execution time and token usage
  • Hierarchical call structure
  • Errors and exceptions
  • Custom metadata and tags
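
To propagate a trace across services, serialize the current run context into HTTP headers and rebuild it on the receiving side. A minimal sketch (service names and payloads are placeholders):

import { RunTree } from "langsmith";

// Service A: create a run and serialize its context into headers
const parent = new RunTree({ name: "service-a-handler", run_type: "chain" });
await parent.postRun();
const headers = parent.toHeaders();
// ...attach `headers` to the outgoing request to service B...

// Service B: rebuild the run tree from the incoming headers
const remote = RunTree.fromHeaders(headers);
const child = remote?.createChild({ name: "service-b-step", run_type: "chain" });
await child?.postRun();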

Projects and Runs

Projects (also called TracerSessions) organize your traces:

  • Group related runs together
  • Set project-level metadata
  • Filter and search runs by project
  • Compare experiments across projects

Runs represent individual executions:

  • Each run has a unique ID
  • Can be hierarchical (parent-child relationships)
  • Support types: llm, chain, tool, retriever, embedding, prompt, parser
  • Capture full execution context

Datasets and Evaluation

Datasets store examples for testing and evaluation:

  • Create from production data or manually
  • Version and tag datasets
  • Share datasets across teams
  • Support multiple data types: key-value, LLM format, chat format
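
A minimal sketch of creating a dataset and adding examples through the Client (the dataset name and contents are placeholders):

import { Client } from "langsmith";

const client = new Client();

// Create a dataset to hold question/answer examples
const dataset = await client.createDataset("my-qa-dataset", {
  description: "Regression questions for the chatbot",
});

// Add examples in bulk; inputs[i] pairs with outputs[i]
await client.createExamples({
  datasetId: dataset.id,
  inputs: [{ question: "What is LangSmith?" }],
  outputs: [{ answer: "An observability and evaluation platform for LLM apps." }],
});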

Evaluation framework tests applications systematically:

  • Run tests against datasets
  • Custom evaluators for your metrics
  • Comparative evaluation across experiments
  • Summary statistics and aggregations

Feedback System

Collect and analyze feedback on runs:

  • Human feedback: Manual ratings and corrections
  • Model feedback: LLM-as-judge evaluations
  • API feedback: Automated quality checks
  • Presigned tokens: Secure feedback collection without API keys
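
For presigned tokens, a minimal sketch (the run id is a placeholder for an existing run):

import { Client } from "langsmith";

const client = new Client();
const runId = "existing-run-uuid"; // id of a previously created run

// Mint a token scoped to the "user_rating" feedback key
const token = await client.createPresignedFeedbackToken(runId, "user_rating");

// Share token.url with end users; posting feedback there requires no API key
console.log(token.url);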

Prompt Management

Version control for prompts in the Prompt Hub:

  • Create and version prompts
  • Pull specific versions or latest
  • Share prompts across teams
  • Track usage and popularity
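
A sketch of pulling and pushing prompts via the client methods listed above (the prompt name is a placeholder, and pushPrompt options may vary by SDK version):

import { Client } from "langsmith";

const client = new Client();

// Pull the latest version of a prompt from the Prompt Hub
const prompt = await client.pullPrompt("my-prompt");

// Push an updated prompt object back, creating a new version
await client.pushPrompt("my-prompt", { object: prompt });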

Navigation Guide

I want to add tracing to my application

Start with these docs:

  • Tracing - Wrap functions with traceable() for automatic tracing
  • Run Trees - Manual trace construction with the RunTree class

I need to use the Client API

  • Client API - Complete API reference for projects, runs, and configuration
  • Datasets - Create and manage datasets
  • Feedback - Collect and query feedback

I want to evaluate my LLM application

  • Evaluation - Run dataset evaluations with evaluate()
  • Testing - LangSmith-tracked tests via langsmith/jest and langsmith/vitest

I'm using LangChain

  • LangChain - Built-in tracing integration for LangChain applications

I need manual tracing or distributed tracing

  • Run Trees - Fine-grained trace control and cross-service propagation via headers

I'm using Vercel AI SDK

  • Vercel - Trace Vercel AI SDK calls with wrapAISDK()

I want to see practical examples and workflows

  • Common Workflows - Production monitoring, A/B testing, prompt development, utilities

I need advanced features

  • Advanced - OpenTelemetry integration, data anonymization, and prompt caching

Common Workflows and Examples

For detailed workflow examples and patterns, see:

  • Common Workflows - Production monitoring, testing, A/B testing, prompt development
  • Getting Started - Step-by-step guide for your first traces and evaluations

The workflows documentation includes complete examples for:

  • Production monitoring with feedback collection
  • Testing and evaluation with datasets
  • A/B testing with comparative experiments
  • Prompt development and versioning
  • Utility functions (UUID generation, fetch override, project name helpers)
  • Prompt caching system

TypeScript Types

The SDK is fully typed for TypeScript:

import type {
  Run,
  Dataset,
  Example,
  Feedback,
  Prompt,
  TracerSession,
} from "langsmith/schemas";
import type { EvaluationResult } from "langsmith/evaluation";
import type { ClientConfig } from "langsmith";

All APIs include complete type definitions for inputs and outputs. See Schemas for full type reference.

Package Version

/**
 * Package version constant
 */
const __version__: string;
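
For example, to log the SDK version at startup:

import { __version__ } from "langsmith";

console.log(`langsmith SDK version: ${__version__}`);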

Next Steps

  • Getting Started - Step-by-step guide for your first traces and evaluations

Additional Resources

  • LangSmith Documentation
  • LangChain Documentation
  • GitHub Repository
  • API Reference