TypeScript client SDK for the LangSmith LLM tracing, evaluation, and monitoring platform.
```shell
npx @tessl/cli install tessl/npm-langsmith@0.4.0
```

LangSmith is a comprehensive TypeScript SDK for the LangSmith platform, enabling developers to trace, debug, evaluate, and monitor LLM applications and intelligent agents. It provides seamless integration with LangChain as well as standalone capabilities through decorators, wrapper functions, and a full-featured client API.

```shell
npm install langsmith
```

**IMPORTANT: Subpath Exports**

LangSmith uses subpath exports for optimal tree-shaking and module organization. The main export (`langsmith`) provides core classes and utilities. Specialized features such as `traceable()`, evaluation, and wrappers are available through subpath imports.
```typescript
// Core classes and utilities from main export
import {
  Client,
  RunTree,
  uuid7,
  Cache,
  __version__,
  type ClientConfig,
  type LangSmithTracingClientInterface,
} from "langsmith";

// Traceable decorator from subpath export
import { traceable } from "langsmith/traceable";

// Evaluation functions from subpath export
import { evaluate } from "langsmith/evaluation";

// Wrappers from subpath exports
import { wrapOpenAI } from "langsmith/wrappers/openai";
import { wrapAnthropic } from "langsmith/wrappers/anthropic";

// Anonymization from subpath export
import { createAnonymizer } from "langsmith/anonymizer";
```

For CommonJS:

```typescript
const { Client, RunTree, uuid7, Cache, __version__ } = require("langsmith");
const { traceable } = require("langsmith/traceable");
const { evaluate } = require("langsmith/evaluation");
const { wrapOpenAI } = require("langsmith/wrappers/openai");
const { createAnonymizer } = require("langsmith/anonymizer");
```

Install via npm:
```shell
npm install langsmith
```

Or with yarn:

```shell
yarn add langsmith
```

Configure LangSmith with environment variables:

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=your_api_key
export LANGCHAIN_PROJECT=your_project_name # Optional: defaults to "default"
```

Alternative configuration in code:
```typescript
import { Client } from "langsmith";

const client = new Client({
  apiKey: "your_api_key",
  apiUrl: "https://api.smith.langchain.com",
});
```

| API | Import Path | Purpose |
|---|---|---|
| `traceable()` | `langsmith/traceable` | Wrap functions for automatic tracing |
| `Client` | `langsmith` | Main client for API operations |
| `RunTree` | `langsmith` | Manual trace tree construction |
| `evaluate()` | `langsmith/evaluation` | Run dataset evaluations |
| `wrapOpenAI()` | `langsmith/wrappers/openai` | Trace OpenAI SDK calls |
| `wrapAISDK()` | `langsmith/experimental/vercel` | Trace Vercel AI SDK |
| `createAnonymizer()` | `langsmith/anonymizer` | Redact sensitive data |
| `test()` | `langsmith/jest` or `langsmith/vitest` | LangSmith-tracked testing |
| `Cache` | `langsmith` | Prompt caching system |
- **Projects:** `createProject()`, `readProject()`, `listProjects()`, `updateProject()`, `deleteProject()`, `hasProject()`, `getProjectUrl()`
- **Runs:** `createRun()`, `updateRun()`, `readRun()`, `listRuns()`, `shareRun()`, `unshareRun()`, `getRunUrl()`, `listGroupRuns()`, `getRunStats()`
- **Datasets:** `createDataset()`, `readDataset()`, `listDatasets()`, `updateDataset()`, `deleteDataset()`, `hasDataset()`, `shareDataset()`, `indexDataset()`, `similarExamples()`
- **Examples:** `createExample()`, `createExamples()`, `updateExample()`, `listExamples()`, `deleteExample()`, `deleteExamples()`
- **Feedback:** `createFeedback()`, `updateFeedback()`, `readFeedback()`, `listFeedback()`, `deleteFeedback()`, `createPresignedFeedbackToken()`
- **Prompts:** `createPrompt()`, `pullPrompt()`, `pushPrompt()`, `listPrompts()`, `deletePrompt()`, `likePrompt()`, `unlikePrompt()`
- **Annotation Queues:** `createAnnotationQueue()`, `readAnnotationQueue()`, `listAnnotationQueues()`, `updateAnnotationQueue()`, `deleteAnnotationQueue()`, `addRunsToAnnotationQueue()`, `getRunFromAnnotationQueue()`, `deleteRunFromAnnotationQueue()`, `getSizeFromAnnotationQueue()`
- **Utility:** `awaitPendingTraceBatches()`, `flush()`, `cleanup()`, `uuid7()`, `getDefaultProjectName()`
- `traceable()`: Wrap functions for automatic tracing
- `getCurrentRunTree()`: Access the current run context
- `withRunTree()`: Execute a function with a run context
- `isTraceableFunction()`: Check whether a function is traceable
- `RunTree` class: `createChild()`, `end()`, `postRun()`, `patchRun()`, `toHeaders()`, `addEvent()`, `fromHeaders()`, `fromDottedOrder()`
- **Client:** `ClientConfig`, `TracerSession`, `TracerSessionResult`, `Run`, `RunCreate`, `RunUpdate`
- **Datasets:** `Dataset`, `Example`, `ExampleCreate`, `DatasetShareSchema`, `DatasetDiffInfo`
- **Evaluation:** `EvaluateOptions`, `EvaluationResult`, `EvaluationResults`, `RunEvaluator`, `StringEvaluator`
- **Tracing:** `TraceableConfig`, `TraceableFunction`, `RunTreeConfig`, `RunEvent`, `InvocationParamsSchema`
- **Feedback:** `FeedbackCreate`, `Feedback`, `FeedbackIngestToken`, `FeedbackConfig`
- **Caching:** `Cache`, `CacheConfig`, `CacheMetrics`
- **Anonymization:** `Anonymizer`, `StringNodeRule`, `StringNodeProcessor`, `AnonymizerOptions`
**traceable**

The simplest way to add tracing to your application:
```typescript
import { traceable } from "langsmith/traceable";

// Wrap any function for automatic tracing
const chatbot = traceable(
  async (userInput: string) => {
    // Your LLM application logic
    const response = await yourLLMCall(userInput);
    return response;
  },
  { name: "chatbot", run_type: "chain" }
);

// Call the function - traces are automatically sent to LangSmith
const result = await chatbot("Hello, how are you?");
```

```typescript
import { evaluate } from "langsmith/evaluation";

// Define your target function
async function myBot(input: { question: string }) {
  return { answer: await generateAnswer(input.question) };
}

// Run evaluation
const results = await evaluate(myBot, {
  data: "my-qa-dataset", // Dataset name or examples array
  evaluators: [
    (run, example) => ({
      key: "correctness",
      score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
    }),
  ],
});
```

```typescript
import { Client } from "langsmith";

const client = new Client();

// Create a project
const project = await client.createProject({
  projectName: "my-chatbot",
  description: "Production chatbot",
});

// List runs
for await (const run of client.listRuns({ projectName: "my-chatbot" })) {
  console.log(run.name, run.status);
}

// Create feedback for a run
await client.createFeedback(runId, "user_rating", {
  score: 1, // thumbs up
  comment: "Great response!",
});
```

Automatic tracing for popular AI SDKs:
```typescript
import { wrapOpenAI } from "langsmith/wrappers/openai";
import OpenAI from "openai";

// Wrap OpenAI client
const openai = wrapOpenAI(new OpenAI(), {
  projectName: "openai-project",
});

// All calls are automatically traced
const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello!" }],
});
```

LangSmith provides comprehensive tracing for LLM applications:
- `traceable()` decorator to wrap functions
- `RunTree` for fine-grained control

Traces capture the inputs, outputs, and timing of each run.
Projects (also called TracerSessions) organize your traces.

Runs represent individual executions. Run types include llm, chain, tool, retriever, embedding, prompt, and parser.

Datasets store examples for testing and evaluation.

The evaluation framework tests applications systematically.

Feedback can be collected and analyzed on runs.

Prompts are version-controlled in the Prompt Hub.

Start with these docs:

- The `traceable()` decorator

For detailed workflow examples and patterns, see the workflows documentation, which includes complete examples for common patterns.
The SDK is fully typed for TypeScript:
```typescript
import type {
  Run,
  Dataset,
  Example,
  Feedback,
  Prompt,
  TracerSession,
  EvaluationResult,
  Client as ClientType,
} from "langsmith/schemas";
```

All APIs include complete type definitions for inputs and outputs. See Schemas for full type reference.
```typescript
/**
 * Package version constant
 */
const __version__: string;
```