Describes: npmpkg:npm/langsmith@0.4.x

tessl/npm-langsmith

tessl install tessl/npm-langsmith@0.4.3

TypeScript client SDK for the LangSmith LLM tracing, evaluation, and monitoring platform.

docs/guides/index.md

Guides

Comprehensive guides for using LangSmith effectively, from setup to production deployment.

Overview

This section provides step-by-step guides for all major LangSmith features, best practices, and common workflows.

Quick Navigation

Getting Started

  • Setup Guide - Installation, API keys, first trace
  • Quick Reference - Code snippets and patterns
  • Decision Trees - Choosing the right API and approach

Core Features

Advanced Workflows

Best Practices & Troubleshooting

Learning Paths

🎯 Path 1: First Steps (15 minutes)

  1. Setup - Install SDK and configure environment
  2. Quick Reference - Your first traced function
  3. Tracing - Understand automatic tracing

Next Steps: Explore Evaluation or Workflows

📊 Path 2: Testing & Evaluation (30 minutes)

  1. Evaluation - Create datasets and evaluators
  2. Comparative Evaluation - Compare experiments
  3. Testing - Integrate with Jest/Vitest

Next Steps: Deploy with Production Workflows

🚀 Path 3: Production Deployment (45 minutes)

  1. Tracing Best Practices - Production patterns
  2. Workflows - Monitoring, testing, A/B testing
  3. Setup - Environment configuration

Next Steps: Explore Advanced Features

Guide Details

Setup Guide

📖 Full Guide

Learn how to:

  • Install the LangSmith SDK
  • Configure API keys and environment variables
  • Verify your setup
  • Choose the right configuration for your environment

Quick Start:

```shell
npm install langsmith
export LANGCHAIN_API_KEY=your_api_key
export LANGCHAIN_PROJECT=my-project
```

Related: Client Configuration • Utilities

Quick Reference

📖 Full Guide

Code snippets for:

  • Essential imports and setup
  • Basic tracing patterns
  • Nested tracing
  • Client operations
  • Common configurations

Use When: You need a fast code example without explanation

Related: Tracing • API Reference

Tracing

📖 Full Guide

Complete tracing documentation:

  • traceable() decorator - Automatic function tracing
  • getCurrentRunTree() - Access current trace context
  • Nested tracing patterns
  • Privacy and security (hiding inputs/outputs)
  • Performance optimization
  • Distributed tracing

Key Sections:

Related: Run Trees • Integrations • API Runs

Evaluation

📖 Full Guide

Dataset-based evaluation:

  • Creating and managing datasets
  • Writing evaluators (custom, LLM-as-judge)
  • Running experiments
  • Analyzing results
  • Summary evaluators and aggregation

Key Sections:

Related: Comparative Evaluation • Datasets API • Testing

Comparative Evaluation

📖 Full Guide

Compare multiple experiments:

  • Side-by-side model comparison
  • A/B testing
  • Prompt optimization
  • Pairwise evaluators
  • Statistical significance

Use When: Comparing different models, prompts, or implementations

Related: Evaluation • Workflows

Testing Frameworks

📖 Full Guide

Integration with test frameworks:

  • Overview of Jest and Vitest integration
  • Test-driven evaluation
  • Choosing the right framework

Specific Integrations:

Related: Evaluation

Common Workflows

📖 Full Guide

Production-ready patterns:

  • Production Monitoring - Real-time observability
  • Testing & Evaluation - Continuous testing
  • A/B Testing - Comparative experiments
  • Prompt Development - Iterative improvement
  • Utilities - Helper functions and caching

Key Workflows:

Related: Evaluation • API Reference

Common Tasks

How do I...

Trace my application?

→ Start with Tracing Quick Start → See examples in Quick Reference

Evaluate my LLM app?

→ Follow Evaluation Guide → Create dataset with Datasets API

Compare two models?

→ Use Comparative Evaluation → See A/B Testing Workflow

Integrate with OpenAI/Anthropic?

→ Check SDK Wrappers → See OpenAI Wrapper or Anthropic Wrapper

Set up production monitoring?

→ Follow Production Workflow → Configure Client for Production

Hide sensitive data?

→ Use Privacy Features → Or Data Anonymization

Test with Jest/Vitest?

→ Check Testing Overview → Follow Jest or Vitest guide

Best Practices Summary

Tracing

  • Use descriptive names for runs
  • Choose appropriate run types
  • Add meaningful metadata and tags
  • Flush traces before serverless function ends

Evaluation

  • Create versioned datasets
  • Use multiple evaluators
  • Run comparative evaluations when comparing
  • Store results for historical analysis

Production

  • Set sampling rate to control volume
  • Use hideInputs/hideOutputs for sensitive data
  • Monitor feedback and error rates
  • Call awaitPendingTraceBatches() before shutdown

Privacy

  • Use processInputs/processOutputs to redact data
  • Configure hideInputs: true for client-level hiding
  • Use createAnonymizer() for pattern-based PII removal
  • Review traces before sharing publicly

Related Documentation

Concepts

API Reference

Integrations

Advanced Features