tessl/npm-langsmith

TypeScript client SDK for the LangSmith LLM tracing, evaluation, and monitoring platform.


Guides

Comprehensive guides for using LangSmith effectively, from setup to production deployment.

Overview

This section provides step-by-step guides for all major LangSmith features, best practices, and common workflows.

Quick Navigation

Getting Started

  • Setup Guide - Installation, API keys, first trace
  • Quick Reference - Code snippets and patterns
  • Decision Trees - Choosing the right API and approach

Core Features

Advanced Workflows

Best Practices & Troubleshooting

Learning Paths

🎯 Path 1: First Steps (15 minutes)

  1. Setup - Install SDK and configure environment
  2. Quick Reference - Your first traced function
  3. Tracing - Understand automatic tracing

Next Steps: Explore Evaluation or Workflows

📊 Path 2: Testing & Evaluation (30 minutes)

  1. Evaluation - Create datasets and evaluators
  2. Comparative Evaluation - Compare experiments
  3. Testing - Integrate with Jest/Vitest

Next Steps: Deploy with Production Workflows

🚀 Path 3: Production Deployment (45 minutes)

  1. Tracing Best Practices - Production patterns
  2. Workflows - Monitoring, testing, A/B testing
  3. Setup - Environment configuration

Next Steps: Explore Advanced Features

Guide Details

Setup Guide

📖 Full Guide

Learn how to:

  • Install the LangSmith SDK
  • Configure API keys and environment variables
  • Verify your setup
  • Choose the right configuration for your environment

Quick Start:

```shell
npm install langsmith
export LANGCHAIN_API_KEY=your_api_key
export LANGCHAIN_PROJECT=my-project
```

Related: Client Configuration · Utilities


Quick Reference

📖 Full Guide

Code snippets for:

  • Essential imports and setup
  • Basic tracing patterns
  • Nested tracing
  • Client operations
  • Common configurations

Use When: You need a fast code example without explanation

Related: Tracing · API Reference


Tracing

📖 Full Guide

Complete tracing documentation:

  • traceable() decorator - Automatic function tracing
  • getCurrentRunTree() - Access current trace context
  • Nested tracing patterns
  • Privacy and security (hiding inputs/outputs)
  • Performance optimization
  • Distributed tracing

Key Sections:

Related: Run Trees · Integrations · API Runs


Evaluation

📖 Full Guide

Dataset-based evaluation:

  • Creating and managing datasets
  • Writing evaluators (custom, LLM-as-judge)
  • Running experiments
  • Analyzing results
  • Summary evaluators and aggregation

Key Sections:

Related: Comparative Evaluation · Datasets API · Testing


Comparative Evaluation

📖 Full Guide

Compare multiple experiments:

  • Side-by-side model comparison
  • A/B testing
  • Prompt optimization
  • Pairwise evaluators
  • Statistical significance

Use When: Comparing different models, prompts, or implementations
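
To make the pairwise idea concrete, here is an illustrative (hypothetical, not the SDK's own API) pairwise evaluator: given outputs from two experiments for the same example, it returns one score per side. The preference rule here, "prefer the shorter answer", is a stand-in for a real LLM-as-judge comparison:

```typescript
// Hypothetical pairwise scoring sketch: scores[0] is for experiment A,
// scores[1] for experiment B; a tie splits the score evenly.
type PairwiseResult = { key: string; scores: [number, number] };

function preferShorter(a: string, b: string): PairwiseResult {
  if (a.length === b.length) return { key: "preferred", scores: [0.5, 0.5] };
  return { key: "preferred", scores: a.length < b.length ? [1, 0] : [0, 1] };
}
```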

Related: Evaluation · Workflows


Testing Frameworks

📖 Full Guide

Integration with test frameworks:

  • Overview of Jest and Vitest integration
  • Test-driven evaluation
  • Choosing the right framework

Specific Integrations:

Related: Evaluation


Common Workflows

📖 Full Guide

Production-ready patterns:

  • Production Monitoring - Real-time observability
  • Testing & Evaluation - Continuous testing
  • A/B Testing - Comparative experiments
  • Prompt Development - Iterative improvement
  • Utilities - Helper functions and caching

Key Workflows:

Related: Evaluation · API Reference

Common Tasks

How do I...

Trace my application?

→ Start with Tracing Quick Start → See examples in Quick Reference

Evaluate my LLM app?

→ Follow Evaluation Guide → Create dataset with Datasets API

Compare two models?

→ Use Comparative Evaluation → See A/B Testing Workflow

Integrate with OpenAI/Anthropic?

→ Check SDK Wrappers → See OpenAI Wrapper or Anthropic Wrapper

Set up production monitoring?

→ Follow Production Workflow → Configure Client for Production

Hide sensitive data?

→ Use Privacy Features → Or Data Anonymization

Test with Jest/Vitest?

→ Check Testing Overview → Follow Jest or Vitest guide

Best Practices Summary

Tracing

  • Use descriptive names for runs
  • Choose appropriate run types
  • Add meaningful metadata and tags
  • Flush traces before serverless function ends

Evaluation

  • Create versioned datasets
  • Use multiple evaluators
  • Run comparative evaluations when comparing models or prompts
  • Store results for historical analysis

Production

  • Set sampling rate to control volume
  • Use hideInputs/hideOutputs for sensitive data
  • Monitor feedback and error rates
  • Call awaitPendingTraceBatches() before shutdown

Privacy

  • Use processInputs/processOutputs to redact data
  • Configure hideInputs: true for client-level hiding
  • Use createAnonymizer() for pattern-based PII removal
  • Review traces before sharing publicly

Related Documentation

Concepts

API Reference

Integrations

Advanced Features

Install with Tessl CLI

```shell
npx tessl i tessl/npm-langsmith
```
