langsmith (npm)

TypeScript client SDK for the LangSmith platform - trace, debug, evaluate, and monitor LLM applications and intelligent agents.

```bash
npm install langsmith
```

LangSmith provides four primary capabilities for LLM application development:
```typescript
// Core tracing
import { traceable } from "langsmith/traceable";
import { Client, RunTree } from "langsmith";

// Evaluation
import { evaluate } from "langsmith/evaluation";

// SDK wrappers
import { wrapOpenAI } from "langsmith/wrappers/openai";
import { wrapAnthropic } from "langsmith/wrappers/anthropic";
import { wrapAISDK } from "langsmith/experimental/vercel";

// Utilities
import { createAnonymizer } from "langsmith/anonymizer";
```

Note: LangSmith uses subpath exports. Import specialized features from their subpaths (`langsmith/traceable`, `langsmith/evaluation`, etc.), not from the main `langsmith` export.
For CommonJS:

```javascript
const { Client, RunTree, uuid7, Cache, __version__ } = require("langsmith");
const { traceable } = require("langsmith/traceable");
const { evaluate } = require("langsmith/evaluation");
const { wrapOpenAI } = require("langsmith/wrappers/openai");
const { createAnonymizer } = require("langsmith/anonymizer");
```

Quick start:

```typescript
import { traceable } from "langsmith/traceable";
import { evaluate } from "langsmith/evaluation";

// 1. Set environment variables
// LANGCHAIN_API_KEY=your_api_key
// LANGCHAIN_PROJECT=your_project_name

// 2. Trace any function
const chatbot = traceable(
  async (input: string) => {
    // Your LLM logic here
    return { response: "Hello!" };
  },
  { name: "chatbot", run_type: "chain" }
);

// 3. Use it - traces are automatically sent to LangSmith
await chatbot("Hi there");

// 4. Evaluate against a dataset
const results = await evaluate(chatbot, {
  data: "my-dataset",
  evaluators: [
    ({ run, example }) => ({
      key: "correct",
      score: run.outputs?.response === example?.outputs?.response ? 1 : 0,
    }),
  ],
});
```

Quick navigation by task:
- `traceable()` decorator
- `wrapOpenAI()`
- `wrapAnthropic()`
- `wrapAISDK()`
- `wrapSDK()`
- `evaluate()`

Documentation sections:

- Guides - Step-by-step tutorials and best practices
- API Reference - Complete API documentation
- Integrations - Framework & SDK wrappers
- Advanced Topics - Specialized features
- Core Concepts - Understanding LangSmith
Suggested learning paths:

- Path 1: Beginner (15 min)
- Path 2: Evaluation (30 min)
- Path 3: Production (45 min)
| API | Import Path | Purpose |
|---|---|---|
| `traceable()` | `langsmith/traceable` | Wrap functions for automatic tracing |
| `Client` | `langsmith` | Main client for API operations |
| `RunTree` | `langsmith` | Manual trace tree construction |
| `evaluate()` | `langsmith/evaluation` | Run dataset evaluations |
| `wrapOpenAI()` | `langsmith/wrappers/openai` | Trace OpenAI SDK calls |
| `wrapAISDK()` | `langsmith/experimental/vercel` | Trace Vercel AI SDK calls |
| `createAnonymizer()` | `langsmith/anonymizer` | Redact sensitive data |
| `test()` | `langsmith/jest` or `langsmith/vitest` | LangSmith-tracked testing |
| `Cache` | `langsmith` | Prompt caching system |
The `Client` exposes methods grouped by resource:

- Projects: `createProject()`, `readProject()`, `listProjects()`, `updateProject()`, `deleteProject()`, `hasProject()`, `getProjectUrl()`
- Runs: `createRun()`, `updateRun()`, `readRun()`, `listRuns()`, `shareRun()`, `unshareRun()`, `getRunUrl()`, `listGroupRuns()`, `getRunStats()`
- Datasets: `createDataset()`, `readDataset()`, `listDatasets()`, `updateDataset()`, `deleteDataset()`, `hasDataset()`, `shareDataset()`, `indexDataset()`, `similarExamples()`
- Examples: `createExample()`, `createExamples()`, `updateExample()`, `listExamples()`, `deleteExample()`, `deleteExamples()`
- Feedback: `createFeedback()`, `updateFeedback()`, `readFeedback()`, `listFeedback()`, `deleteFeedback()`, `createPresignedFeedbackToken()`
- Prompts: `createPrompt()`, `pullPrompt()`, `pushPrompt()`, `listPrompts()`, `deletePrompt()`, `likePrompt()`, `unlikePrompt()`
- Annotation Queues: `createAnnotationQueue()`, `readAnnotationQueue()`, `listAnnotationQueues()`, `updateAnnotationQueue()`, `deleteAnnotationQueue()`, `addRunsToAnnotationQueue()`, `getRunFromAnnotationQueue()`, `deleteRunFromAnnotationQueue()`, `getSizeFromAnnotationQueue()`
- Utility: `awaitPendingTraceBatches()`, `flush()`, `cleanup()`, `uuid7()`, `getDefaultProjectName()`
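A short sketch of a few of these `Client` calls (the project name and logged fields are illustrative):

```typescript
import { Client } from "langsmith";

const client = new Client();

// Read a project, then page through its recent runs
const project = await client.readProject({ projectName: "my-project" });
console.log(project.id);

for await (const run of client.listRuns({ projectName: "my-project", limit: 10 })) {
  console.log(run.name, run.run_type, run.status);
}

// Flush buffered trace batches before the process exits
await client.awaitPendingTraceBatches();
```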
Tracing APIs:

- `traceable()`: Wrap functions for automatic tracing
- `getCurrentRunTree()`: Access current run context
- `withRunTree()`: Execute function with run context
- `isTraceableFunction()`: Check if function is traceable
- `RunTree` class: `createChild()`, `end()`, `postRun()`, `patchRun()`, `toHeaders()`, `addEvent()`, `fromHeaders()`, `fromDottedOrder()`
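For manual control, a minimal `RunTree` sketch (run names, inputs, and outputs are placeholders):

```typescript
import { RunTree } from "langsmith";

// Root of the trace
const parent = new RunTree({
  name: "my-pipeline",
  run_type: "chain",
  inputs: { query: "What is LangSmith?" },
});
await parent.postRun();

// Nested child run
const child = parent.createChild({
  name: "llm-call",
  run_type: "llm",
  inputs: { prompt: "What is LangSmith?" },
});
await child.postRun();

await child.end({ text: "..." }); // record outputs and end time
await child.patchRun();

await parent.end({ answer: "..." });
await parent.patchRun();
```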
Key types by area:

- Client: `ClientConfig`, `TracerSession`, `TracerSessionResult`, `Run`, `RunCreate`, `RunUpdate`
- Datasets: `Dataset`, `Example`, `ExampleCreate`, `DatasetShareSchema`, `DatasetDiffInfo`
- Evaluation: `EvaluateOptions`, `EvaluationResult`, `EvaluationResults`, `RunEvaluator`, `StringEvaluator`
- Tracing: `TraceableConfig`, `TraceableFunction`, `RunTreeConfig`, `RunEvent`, `InvocationParamsSchema`
- Feedback: `FeedbackCreate`, `Feedback`, `FeedbackIngestToken`, `FeedbackConfig`
- Caching: `Cache`, `CacheConfig`, `CacheMetrics`
- Anonymization: `Anonymizer`, `StringNodeRule`, `StringNodeProcessor`, `AnonymizerOptions`
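For instance, a custom evaluator can be fully typed against these schemas (a sketch; the `answer` field is an assumption about your dataset shape):

```typescript
import type { Run, Example } from "langsmith/schemas";
import type { EvaluationResult } from "langsmith/evaluation";

// Exact-match evaluator comparing run output to the reference example
function exactMatch(run: Run, example?: Example): EvaluationResult {
  return {
    key: "exact_match",
    score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
  };
}
```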
```typescript
interface ClientConfig {
  apiUrl?: string;            // Default: https://api.smith.langchain.com
  apiKey?: string;            // Default: LANGCHAIN_API_KEY env var
  timeout_ms?: number;        // Default: 120000
  autoBatchTracing?: boolean; // Default: true
  hideInputs?: boolean | ((inputs: KVMap) => KVMap);
  hideOutputs?: boolean | ((outputs: KVMap) => KVMap);
  tracingSamplingRate?: number; // 0.0 to 1.0
}
```

```typescript
interface TraceableConfig {
  name?: string;                  // Run name
  run_type?: string;              // "llm" | "chain" | "tool" | "retriever" | "embedding"
  metadata?: Record<string, any>; // Additional metadata
  tags?: string[];                // Tags for filtering
  client?: Client;                // Custom client instance
  project_name?: string;          // Project name override
}
```

```typescript
interface EvaluateOptions {
  data: string | Example[];       // Dataset name or examples array
  evaluators: EvaluatorT[];       // Evaluator functions
  summary_evaluators?: SummaryEvaluatorT[];
  experiment_name?: string;       // Explicit experiment name
  max_concurrency?: number;       // Default: 10
  metadata?: Record<string, any>; // Experiment metadata
}
```

Environment variables:

```bash
# Required
LANGCHAIN_API_KEY=lsv2_pt_... # Your LangSmith API key
# Optional
LANGCHAIN_TRACING_V2=true # Enable tracing (recommended)
LANGCHAIN_PROJECT=my-project # Default project name (defaults to "default")
LANGCHAIN_ENDPOINT=https://... # Custom API endpoint
LANGCHAIN_TRACING=true # Enable/disable tracing globally
```

Alternative configuration in code:
```typescript
import { Client } from "langsmith";

const client = new Client({
  apiKey: "your_api_key",
  apiUrl: "https://api.smith.langchain.com",
});
```
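Combining the `ClientConfig` options above, a sketch that redacts one input field, hides outputs, and samples traces (field names and values are illustrative):

```typescript
import { Client } from "langsmith";

const client = new Client({
  // Redact a specific field before inputs are uploaded
  hideInputs: (inputs) => ({ ...inputs, apiKey: "[redacted]" }),
  // Drop outputs entirely
  hideOutputs: true,
  // Keep roughly 10% of traces
  tracingSamplingRate: 0.1,
});
```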
LangSmith provides comprehensive tracing for LLM applications:

- Use the `traceable()` decorator to wrap functions
- Use `RunTree` for fine-grained control

Traces capture the inputs, outputs, timing, and errors of every run.

Core concepts:

- Projects (also called TracerSessions) organize your traces.
- Runs represent individual executions; run types are `llm`, `chain`, `tool`, `retriever`, `embedding`, `prompt`, and `parser`.
- Datasets store examples for testing and evaluation.
- The evaluation framework tests applications systematically.
- Feedback can be collected and analyzed on runs.
- Prompts are version-controlled in the Prompt Hub.
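A minimal sketch of prompt versioning using the `pushPrompt()`/`pullPrompt()` client methods listed above (the payload shape is an assumption for illustration):

```typescript
import { Client } from "langsmith";

const client = new Client();

// Push a new prompt version to the Prompt Hub
await client.pushPrompt("my-qa-prompt", {
  object: { template: "Answer the question: {question}" }, // illustrative payload
  description: "QA prompt",
});

// Pull the current version back
const prompt = await client.pullPrompt("my-qa-prompt");
```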
Example: hierarchical tracing of a RAG pipeline (`vectorDB` and `llm` stand in for your own retrieval and model clients):

```typescript
import { traceable } from "langsmith/traceable";

const retrieveDocs = traceable(
  async (query: string) => {
    return await vectorDB.search(query);
  },
  { name: "retrieve", run_type: "retriever" }
);

const generateAnswer = traceable(
  async (query: string, docs: string[]) => {
    return await llm.generate({ query, context: docs.join("\n") });
  },
  { name: "generate", run_type: "llm" }
);

const ragPipeline = traceable(
  async (query: string) => {
    const docs = await retrieveDocs(query);           // Traced as child
    const answer = await generateAnswer(query, docs); // Traced as child
    return answer;
  },
  { name: "rag-pipeline", run_type: "chain" }
);

// Creates a hierarchical trace: rag-pipeline > retrieve > generate
await ragPipeline("What is LangSmith?");
```

Example: running an evaluation end to end:
```typescript
import { evaluate } from "langsmith/evaluation";
import { Client } from "langsmith";

const client = new Client();

// Create a dataset
const dataset = await client.createDataset({
  datasetName: "qa-eval",
  description: "QA evaluation dataset",
});

await client.createExamples({
  datasetId: dataset.id,
  inputs: [{ question: "What is 2+2?" }],
  outputs: [{ answer: "4" }],
});

// Define the target function
async function myBot(input: { question: string }) {
  return { answer: await generateAnswer(input.question) };
}

// Run the evaluation
const results = await evaluate(myBot, {
  data: "qa-eval",
  evaluators: [
    ({ run, example }) => ({
      key: "correctness",
      score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
    }),
  ],
});

const correct = results.results.filter(
  (r) => r.evaluation_results[0].score === 1
).length;
console.log(`Accuracy: ${correct / results.results.length}`);
```

Example: wrapping the OpenAI SDK so every call is traced:

```typescript
import { wrapOpenAI } from "langsmith/wrappers/openai";
import OpenAI from "openai";

// Wrap the OpenAI client
const openai = wrapOpenAI(new OpenAI(), {
  projectName: "openai-project",
});

// All calls are automatically traced
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});
```

Example: collecting user feedback on a run via a presigned token:
```typescript
import { Client } from "langsmith";

const client = new Client();

// Create a presigned token scoped to one run and feedback key
// (runId is the id of the run being rated)
const token = await client.createPresignedFeedbackToken(runId, "user_rating", {
  expiration: { hours: 24 }, // token valid for 24 hours
});

// Users can POST feedback to token.url without an API key:
// { score: 1, comment: "Great response!" }
```

Use appropriate run types for better filtering and analytics:
- `llm`: Direct language model API call
- `chain`: Sequence of operations or high-level workflow
- `tool`: Individual tool/function execution
- `retriever`: Document or data retrieval
- `embedding`: Text embedding generation
- `prompt`: Prompt formatting operation
- `parser`: Output parsing from LLM responses

Tracing best practices:

- Use the `traceable()` decorator for automatic tracing
- Give each run a descriptive `name` and an appropriate `run_type`
- Attach `metadata` and `tags` for filtering
- Set `tracingSamplingRate` to control trace volume
- Use `hideInputs`/`hideOutputs` for sensitive data
- Call `await client.awaitPendingTraceBatches()` before shutdown

Handling sensitive data:

- Use `processInputs`/`processOutputs` to redact sensitive data
- Set `hideInputs: true` for client-level hiding
- Use `createAnonymizer()` for pattern-based PII removal (see the sketch below)

If traces are not appearing:

- Verify `LANGCHAIN_API_KEY` is set correctly
- Call `await client.awaitPendingTraceBatches()` before app shutdown
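A minimal anonymizer sketch, assuming the client-level `anonymizer` option (the regex patterns are illustrative, not production-grade PII detection):

```typescript
import { createAnonymizer } from "langsmith/anonymizer";
import { Client } from "langsmith";

// Replace email- and SSN-like strings in traced inputs/outputs
const anonymizer = createAnonymizer([
  { pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g, replace: "<email>" },
  { pattern: /\d{3}-\d{2}-\d{4}/g, replace: "<ssn>" },
]);

const client = new Client({ anonymizer });
```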
A common import gotcha:

```typescript
// ✅ Correct - use subpath exports
import { traceable } from "langsmith/traceable";
import { evaluate } from "langsmith/evaluation";

// ❌ Incorrect - won't work
import { traceable } from "langsmith";
```

Import types from `langsmith/schemas`:

```typescript
import type { Run, Example, Feedback } from "langsmith/schemas";
```
```typescript
/**
 * Package version constant
 */
const __version__: string;
```

Usage:

```typescript
import { __version__ } from "langsmith";

console.log("LangSmith SDK version:", __version__);
```