tessl/npm-langsmith

Workspace: tessl
Visibility: Public
Describes: npmpkg:npm/langsmith@0.4.x
tessl install tessl/npm-langsmith@0.4.3

TypeScript client SDK for the LangSmith LLM tracing, evaluation, and monitoring platform.

LangSmith TypeScript SDK

TypeScript SDK for the LangSmith platform - trace, debug, evaluate, and monitor LLM applications and intelligent agents.

Package Information

  • Package: langsmith (npm)
  • Language: TypeScript
  • Installation: npm install langsmith
  • Version: 0.4.6

Core Capabilities

LangSmith provides four primary capabilities for LLM application development:

  1. Tracing: Capture execution traces of LLM applications with hierarchical call structures
  2. Evaluation: Test applications against datasets with custom evaluators
  3. Monitoring: Track production behavior with feedback collection
  4. Management: Version control prompts and manage datasets

Essential Imports

// Core tracing
import { traceable } from "langsmith/traceable";
import { Client, RunTree } from "langsmith";

// Evaluation
import { evaluate } from "langsmith/evaluation";

// SDK wrappers
import { wrapOpenAI } from "langsmith/wrappers/openai";
import { wrapAnthropic } from "langsmith/wrappers/anthropic";
import { wrapAISDK } from "langsmith/experimental/vercel";

// Utilities
import { createAnonymizer } from "langsmith/anonymizer";

Note: LangSmith uses subpath exports. Import specialized features from their subpaths (langsmith/traceable, langsmith/evaluation, etc.), not from the main langsmith export.

For CommonJS:

const { Client, RunTree, uuid7, Cache, __version__ } = require("langsmith");
const { traceable } = require("langsmith/traceable");
const { evaluate } = require("langsmith/evaluation");
const { wrapOpenAI } = require("langsmith/wrappers/openai");
const { createAnonymizer } = require("langsmith/anonymizer");

Quick Start

5-Minute Setup

import { traceable } from "langsmith/traceable";
import { evaluate } from "langsmith/evaluation";

// 1. Set environment variables
// LANGCHAIN_API_KEY=your_api_key
// LANGCHAIN_PROJECT=your_project_name

// 2. Trace any function
const chatbot = traceable(
  async (input: string) => {
    // Your LLM logic here
    return { response: "Hello!" };
  },
  { name: "chatbot", run_type: "chain" }
);

// 3. Use it - traces automatically sent to LangSmith
await chatbot("Hi there");

// 4. Evaluate against a dataset
const results = await evaluate(chatbot, {
  data: "my-dataset",
  evaluators: [
    ({ run, example }) => ({
      key: "correct",
      score: run.outputs?.response === example?.outputs?.response ? 1 : 0
    })
  ]
});

I Want To...

Quick navigation by task:

🚀 Get Started

  • Install and configure LangSmith → Setup guide
  • See code examples → Quick reference
  • Understand key concepts → Projects, runs, datasets
  • Choose the right API → Decision trees for API selection

Other task areas covered in the docs:

  • 📊 Trace My Application
  • 🧪 Test & Evaluate
  • 🔐 Protect Sensitive Data
  • 🚢 Deploy to Production
  • 📈 Monitor & Analyze
  • 🛠️ Advanced Use Cases
  • 🔧 Troubleshooting & Help

Documentation Structure

📚 Main Sections

Guides - Step-by-step tutorials and best practices

API Reference - Complete API documentation

Integrations - Framework & SDK wrappers

Advanced Topics - Specialized features

Core Concepts - Understanding LangSmith

  • Projects • Runs • Datasets • Examples • Feedback

🎯 Learning Paths

Path 1: Beginner (15 min)

  1. Setup & Installation
  2. Your First Trace
  3. Core Concepts

Path 2: Evaluation (30 min)

  1. Create Dataset
  2. Run Evaluation
  3. Compare Models

Path 3: Production (45 min)

  1. Production Setup
  2. SDK Wrappers
  3. Monitoring Workflows

Quick Reference

Core APIs

API                 | Import Path                         | Purpose
traceable()         | langsmith/traceable                 | Wrap functions for automatic tracing
Client              | langsmith                           | Main client for API operations
RunTree             | langsmith                           | Manual trace tree construction
evaluate()          | langsmith/evaluation                | Run dataset evaluations
wrapOpenAI()        | langsmith/wrappers/openai           | Trace OpenAI SDK calls
wrapAISDK()         | langsmith/experimental/vercel       | Trace Vercel AI SDK
createAnonymizer()  | langsmith/anonymizer                | Redact sensitive data
test()              | langsmith/jest or langsmith/vitest  | LangSmith-tracked testing
Cache               | langsmith                           | Prompt caching system

Client Methods (Common)

  • Projects: createProject(), readProject(), listProjects(), updateProject(), deleteProject(), hasProject(), getProjectUrl()
  • Runs: createRun(), updateRun(), readRun(), listRuns(), shareRun(), unshareRun(), getRunUrl(), listGroupRuns(), getRunStats()
  • Datasets: createDataset(), readDataset(), listDatasets(), updateDataset(), deleteDataset(), hasDataset(), shareDataset(), indexDataset(), similarExamples()
  • Examples: createExample(), createExamples(), updateExample(), listExamples(), deleteExample(), deleteExamples()
  • Feedback: createFeedback(), updateFeedback(), readFeedback(), listFeedback(), deleteFeedback(), createPresignedFeedbackToken()
  • Prompts: createPrompt(), pullPrompt(), pushPrompt(), listPrompts(), deletePrompt(), likePrompt(), unlikePrompt()
  • Annotation Queues: createAnnotationQueue(), readAnnotationQueue(), listAnnotationQueues(), updateAnnotationQueue(), deleteAnnotationQueue(), addRunsToAnnotationQueue(), getRunFromAnnotationQueue(), deleteRunFromAnnotationQueue(), getSizeFromAnnotationQueue()
  • Utility: awaitPendingTraceBatches(), flush(), cleanup(), uuid7(), getDefaultProjectName()
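
List methods return async iterables, so results can be consumed lazily. A minimal sketch (the project name is illustrative):

import { Client } from "langsmith";

const client = new Client();

// Iterate up to ten runs in a project
for await (const run of client.listRuns({ projectName: "my-project", limit: 10 })) {
  console.log(run.name, run.status);
}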

Tracing Methods

  • traceable(): Wrap functions for automatic tracing
  • getCurrentRunTree(): Access current run context
  • withRunTree(): Execute function with run context
  • isTraceableFunction(): Check if function is traceable
  • RunTree class: createChild(), end(), postRun(), patchRun(), toHeaders(), addEvent(), fromHeaders(), fromDottedOrder()
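
A minimal sketch of the RunTree lifecycle listed above; names, inputs, and outputs are illustrative:

import { RunTree } from "langsmith";

// Create and post a root run (assumes LANGCHAIN_API_KEY is set)
const rootRun = new RunTree({
  name: "manual-pipeline",
  run_type: "chain",
  inputs: { query: "What is LangSmith?" }
});
await rootRun.postRun();

// Nest a child run under the root
const childRun = rootRun.createChild({
  name: "llm-call",
  run_type: "llm",
  inputs: { prompt: "What is LangSmith?" }
});
await childRun.postRun();

// End each run, then patch the results back to LangSmith
await childRun.end({ completion: "LangSmith is a tracing platform." });
await childRun.patchRun();
await rootRun.end({ answer: "LangSmith is a tracing platform." });
await rootRun.patchRun();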

Key Interfaces

  • Client: ClientConfig, TracerSession, TracerSessionResult, Run, RunCreate, RunUpdate
  • Datasets: Dataset, Example, ExampleCreate, DatasetShareSchema, DatasetDiffInfo
  • Evaluation: EvaluateOptions, EvaluationResult, EvaluationResults, RunEvaluator, StringEvaluator
  • Tracing: TraceableConfig, TraceableFunction, RunTreeConfig, RunEvent, InvocationParamsSchema
  • Feedback: FeedbackCreate, Feedback, FeedbackIngestToken, FeedbackConfig
  • Caching: Cache, CacheConfig, CacheMetrics
  • Anonymization: Anonymizer, StringNodeRule, StringNodeProcessor, AnonymizerOptions

Configuration Interfaces

Client Configuration

interface ClientConfig {
  apiUrl?: string;                // Default: https://api.smith.langchain.com
  apiKey?: string;                // Default: LANGCHAIN_API_KEY env var
  timeout_ms?: number;            // Default: 120000
  autoBatchTracing?: boolean;     // Default: true
  hideInputs?: boolean | ((inputs: KVMap) => KVMap);
  hideOutputs?: boolean | ((outputs: KVMap) => KVMap);
  tracingSamplingRate?: number;   // 0.0 to 1.0
}
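
A brief sketch of these options in use; the redacted field and sampling rate are illustrative choices, not defaults:

import { Client } from "langsmith";

// Redact a sensitive field before traces leave the process, and keep ~10% of traces
const client = new Client({
  hideInputs: (inputs) => ({ ...inputs, password: "[REDACTED]" }),
  tracingSamplingRate: 0.1
});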

Traceable Configuration

interface TraceableConfig {
  name?: string;                  // Run name
  run_type?: string;              // "llm" | "chain" | "tool" | "retriever" | "embedding"
  metadata?: Record<string, any>; // Additional metadata
  tags?: string[];                // Tags for filtering
  client?: Client;                // Custom client instance
  project_name?: string;          // Project name override
}
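
For example, tags and metadata set here become filterable in the LangSmith UI; the values below are illustrative:

import { traceable } from "langsmith/traceable";

// Tag a run and attach metadata for later filtering
const summarize = traceable(
  async (text: string) => ({ summary: text.slice(0, 100) }),
  {
    name: "summarize",
    run_type: "chain",
    tags: ["experimental"],
    metadata: { variant: "truncate-v1" }
  }
);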

Evaluation Options

interface EvaluateOptions {
  data: string | Example[];       // Dataset name or examples array
  evaluators: EvaluatorT[];       // Evaluator functions
  summary_evaluators?: SummaryEvaluatorT[];
  experiment_name?: string;       // Explicit experiment name
  max_concurrency?: number;       // Default: 10
  metadata?: Record<string, any>; // Experiment metadata
}

Environment Variables

# Required
LANGCHAIN_API_KEY=lsv2_pt_...    # Your LangSmith API key

# Optional
LANGCHAIN_TRACING_V2=true       # Enable tracing (recommended)
LANGCHAIN_PROJECT=my-project     # Default project name (defaults to "default")
LANGCHAIN_ENDPOINT=https://...   # Custom API endpoint
LANGCHAIN_TRACING=true          # Enable/disable tracing globally

Alternative configuration in code:

import { Client } from "langsmith";

const client = new Client({
  apiKey: "your_api_key",
  apiUrl: "https://api.smith.langchain.com"
});

Core Concepts

Tracing and Observability

LangSmith provides comprehensive tracing for LLM applications:

  • Automatic Tracing: Use traceable() decorator to wrap functions
  • Manual Tracing: Use RunTree for fine-grained control
  • Distributed Tracing: Propagate traces across services via headers (see the sketch below)
  • Framework Integration: Built-in support for LangChain, OpenAI, Anthropic, Vercel AI
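
A minimal distributed-tracing sketch, assuming a downstream HTTP service (the URL is hypothetical):

import { traceable, getCurrentRunTree } from "langsmith/traceable";

// Caller: serialize the current trace context into HTTP headers
const callDownstream = traceable(
  async (payload: object) => {
    const runTree = getCurrentRunTree();
    await fetch("https://service-b.example.com/run", {  // hypothetical service
      method: "POST",
      headers: { ...runTree.toHeaders(), "Content-Type": "application/json" },
      body: JSON.stringify(payload)
    });
  },
  { name: "call-service-b", run_type: "chain" }
);

// Callee: reconstruct the parent run from the incoming request headers,
// e.g. const parentRun = RunTree.fromHeaders(req.headers);
// (RunTree is exported from "langsmith")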

Traces capture:

  • Inputs and outputs
  • Execution time and token usage
  • Hierarchical call structure
  • Errors and exceptions
  • Custom metadata and tags

Projects and Runs

Projects (also called TracerSessions) organize your traces:

  • Group related runs together
  • Set project-level metadata
  • Filter and search runs by project
  • Compare experiments across projects

Runs represent individual executions:

  • Each run has a unique ID
  • Can be hierarchical (parent-child relationships)
  • Support types: llm, chain, tool, retriever, embedding, prompt, parser
  • Capture full execution context

Datasets and Evaluation

Datasets store examples for testing and evaluation:

  • Create from production data or manually
  • Version and tag datasets
  • Share datasets across teams
  • Support multiple data types: key-value, LLM format, chat format

Evaluation framework tests applications systematically:

  • Run tests against datasets
  • Custom evaluators for your metrics
  • Comparative evaluation across experiments
  • Summary statistics and aggregations

Feedback System

Collect and analyze feedback on runs:

  • Human feedback: Manual ratings and corrections
  • Model feedback: LLM-as-judge evaluations
  • API feedback: Automated quality checks
  • Presigned tokens: Secure feedback collection without API keys
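
A minimal sketch of recording programmatic feedback; the run ID placeholder is illustrative:

import { Client } from "langsmith";

const client = new Client();
const runId = "..."; // ID of a traced run (placeholder)

// Attach a human rating to the run
await client.createFeedback(runId, "user_rating", {
  score: 1,
  comment: "Helpful answer"
});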

Prompt Management

Version control for prompts in the Prompt Hub:

  • Create and version prompts
  • Pull specific versions or latest
  • Share prompts across teams
  • Track usage and popularity
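
A sketch of pushing and pulling a prompt; the prompt name and template object shape are illustrative assumptions:

import { Client } from "langsmith";

const client = new Client();

// Push a prompt version, then pull the latest
await client.pushPrompt("my-prompt", {
  object: { template: "Answer the question: {question}" },
  description: "QA prompt"
});
const prompt = await client.pullPrompt("my-prompt");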

Common Patterns

Pattern: Trace with Nested Calls

import { traceable } from "langsmith/traceable";

const retrieveDocs = traceable(
  async (query: string) => {
    return await vectorDB.search(query);
  },
  { name: "retrieve", run_type: "retriever" }
);

const generateAnswer = traceable(
  async (query: string, docs: string[]) => {
    return await llm.generate({ query, context: docs.join("\n") });
  },
  { name: "generate", run_type: "llm" }
);

const ragPipeline = traceable(
  async (query: string) => {
    const docs = await retrieveDocs(query);  // Traced as child
    const answer = await generateAnswer(query, docs);  // Traced as child
    return answer;
  },
  { name: "rag-pipeline", run_type: "chain" }
);

// Creates a hierarchical trace: retrieve and generate run as children of rag-pipeline
await ragPipeline("What is LangSmith?");

Pattern: Evaluate with Custom Evaluator

import { evaluate } from "langsmith/evaluation";
import { Client } from "langsmith";

const client = new Client();

// Create dataset
const dataset = await client.createDataset({
  datasetName: "qa-eval",
  description: "QA evaluation dataset"
});

await client.createExamples({
  datasetId: dataset.id,
  inputs: [{ question: "What is 2+2?" }],
  outputs: [{ answer: "4" }]
});

// Define target function
async function myBot(input: { question: string }) {
  return { answer: await generateAnswer(input.question) };
}

// Run evaluation
const results = await evaluate(myBot, {
  data: "qa-eval",
  evaluators: [
    ({ run, example }) => ({
      key: "correctness",
      score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0
    })
  ]
});

console.log(`Accuracy: ${results.results.filter(r => r.evaluationResults.results[0].score === 1).length / results.results.length}`);

Pattern: Wrapper for Automatic Tracing

import { wrapOpenAI } from "langsmith/wrappers/openai";
import OpenAI from "openai";

// Wrap OpenAI client
const openai = wrapOpenAI(new OpenAI(), {
  project_name: "openai-project"
});

// All calls automatically traced
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }]
});

Pattern: Collect User Feedback

import { Client } from "langsmith";

const client = new Client();

// Create a presigned token for public feedback (valid for 24 hours)
const token = await client.createPresignedFeedbackToken(runId, "user_rating", {
  expiration: { hours: 24 }
});

// Users can POST feedback to token.url without API key
// { score: 1, comment: "Great response!" }

Run Types

Use appropriate run types for better filtering and analytics:

  • llm: Direct language model API call
  • chain: Sequence of operations or high-level workflow
  • tool: Individual tool/function execution
  • retriever: Document or data retrieval
  • embedding: Text embedding generation
  • prompt: Prompt formatting operation
  • parser: Output parsing from LLM responses

Best Practices

For Development

  1. Use traceable() decorator for automatic tracing
  2. Add descriptive name and appropriate run_type
  3. Include relevant metadata and tags
  4. Test with real data to ensure traces capture expected information

For Production

  1. Set tracingSamplingRate to control trace volume
  2. Use hideInputs/hideOutputs for sensitive data
  3. Call await client.awaitPendingTraceBatches() before shutdown
  4. Monitor feedback and error rates
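
A minimal shutdown sketch combining sampling (item 1) and flushing (item 3); the sampling rate is illustrative:

import { Client } from "langsmith";

const client = new Client({ tracingSamplingRate: 0.25 }); // keep ~25% of traces

// Flush buffered trace batches before the process exits
process.on("beforeExit", async () => {
  await client.awaitPendingTraceBatches();
});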

For Evaluation

  1. Create versioned datasets for reproducible testing
  2. Use multiple evaluators to measure different aspects
  3. Run comparative evaluations when comparing models
  4. Store evaluation results for historical comparison

For Privacy

  1. Use processInputs/processOutputs to redact sensitive data
  2. Configure hideInputs: true for client-level hiding
  3. Use createAnonymizer() for pattern-based PII removal
  4. Review traces before sharing publicly
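
A sketch wiring pattern-based anonymization into the client; the email regex is illustrative:

import { Client } from "langsmith";
import { createAnonymizer } from "langsmith/anonymizer";

// Replace anything that looks like an email address in traced data
const anonymizer = createAnonymizer([
  { pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g, replace: "[EMAIL]" }
]);

const client = new Client({
  hideInputs: (inputs) => anonymizer(inputs),
  hideOutputs: (outputs) => anonymizer(outputs)
});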

Troubleshooting

Traces Not Appearing

  • Verify LANGCHAIN_API_KEY is set correctly
  • Check project name matches in environment and code
  • Call await client.awaitPendingTraceBatches() before app shutdown
  • Ensure network connectivity to api.smith.langchain.com

Import Errors

// ✓ Correct - use subpath exports
import { traceable } from "langsmith/traceable";
import { evaluate } from "langsmith/evaluation";

// ✗ Incorrect - won't work
import { traceable } from "langsmith";

TypeScript Type Errors

// Import types from langsmith/schemas
import type { Run, Example, Feedback } from "langsmith/schemas";

Additional Resources

  • LangSmith Documentation
  • GitHub Repository
  • API Reference
  • LangChain Documentation

Package Version

/**
 * Package version constant
 */
const __version__: string;

Usage:

import { __version__ } from "langsmith";
console.log("LangSmith SDK version:", __version__);