Python SDK for LangSmith Observability and Evaluation Platform


Quick Start Guide

Get started with LangSmith in minutes. This guide walks you through installation, configuration, and your first traces.

Installation

Install LangSmith using pip:

pip install langsmith
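
To confirm the installation and see which version you have, you can query the package metadata from Python using only the standard library:

from importlib.metadata import version

print(version("langsmith"))  # prints the installed version string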

Set Up Authentication

LangSmith requires an API key for authentication. Get your API key from smith.langchain.com.

Option 1: Environment Variable (Recommended)

export LANGSMITH_API_KEY="your-api-key-here"

Option 2: Direct Configuration

from langsmith import Client

client = Client(api_key="your-api-key-here")
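
If you set the environment variable from Option 1, no arguments are needed; the client reads the key from the environment:

from langsmith import Client

client = Client()  # picks up LANGSMITH_API_KEY automatically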

Your First Trace

Step 1: Configure Tracing

Configure LangSmith at application startup:

import langsmith as ls

ls.configure(
    enabled=True,
    project_name="my-first-project"
)

Step 2: Trace a Function

Use the @traceable decorator to automatically trace a function:

from langsmith import traceable

@traceable
def greet(name: str) -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"

# Call the function
result = greet("World")
print(result)  # "Hello, World!"

Step 3: View Your Trace

  1. Go to smith.langchain.com
  2. Navigate to your project "my-first-project"
  3. You'll see your trace with inputs, outputs, and timing

Tracing a Chain of Operations

Trace nested function calls to see the full execution tree:

from langsmith import traceable

@traceable(run_type="tool")
def fetch_data(query: str) -> dict:
    """Fetch data for a query."""
    return {"data": f"Results for {query}"}

@traceable(run_type="tool")
def process_data(data: dict) -> str:
    """Process the fetched data."""
    return f"Processed: {data['data']}"

@traceable(run_type="chain")
def pipeline(query: str) -> str:
    """Complete processing pipeline."""
    # Child runs are automatically created
    data = fetch_data(query)
    result = process_data(data)
    return result

# Run the pipeline
output = pipeline("test query")

This creates a trace tree where pipeline is the parent and fetch_data and process_data are children.
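
In the LangSmith UI, the resulting tree looks roughly like this:

pipeline (chain)
├── fetch_data (tool)
└── process_data (tool)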

Adding Metadata

Enrich your traces with metadata:

from langsmith import traceable, set_run_metadata

@traceable(
    tags=["production", "v1.0"],
    metadata={"model": "gpt-4", "temperature": 0.7}
)
def llm_call(prompt: str) -> str:
    """Call an LLM."""
    # call_llm_api is a placeholder for your own LLM call
    result = call_llm_api(prompt)

    # Add metadata during execution
    set_run_metadata(
        tokens_used=150,
        cost=0.003
    )

    return result

Configuration Options

import langsmith as ls

ls.configure(
    # Required
    enabled=True,  # Enable tracing
    project_name="my-project",  # Project to log to

    # Optional
    tags=["production", "api-v2"],  # Global tags
    metadata={"version": "2.0.1"},  # Global metadata
)

Async Support

LangSmith fully supports async code:

from langsmith import traceable

@traceable
async def async_process(input_data: str) -> str:
    """Process data asynchronously."""
    # async_operation is a placeholder for your own async work
    result = await async_operation(input_data)
    return result

# Use with async/await
import asyncio

async def main():
    result = await async_process("test")
    print(result)

asyncio.run(main())

Environment-Based Configuration

Configure LangSmith based on your environment:

import os
import langsmith as ls

# Only enable in production
if os.getenv("ENV") == "production":
    ls.configure(
        enabled=True,
        project_name="prod-app",
        tags=["production"]
    )
else:
    # Disable in development/testing
    ls.configure(enabled=False)
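
Equivalently, you can fold the environment check into a single call. This is a sketch using only the configure options shown above:

import os
import langsmith as ls

is_prod = os.getenv("ENV") == "production"

ls.configure(
    enabled=is_prod,  # tracing on only in production
    project_name="prod-app",
    tags=["production"] if is_prod else [],
)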

Error Handling

LangSmith is designed to be non-blocking. Tracing errors won't crash your application:

from langsmith import traceable

@traceable
def my_function(data):
    # Even if tracing fails, your function still runs
    result = process(data)
    return result

# This always succeeds even if LangSmith is unavailable
output = my_function(input_data)

To explicitly handle tracing errors:

from langsmith import Client

def handle_trace_error(error: Exception):
    print(f"Tracing error: {error}")
    # Log to your monitoring system
    log_error(error)

client = Client(tracing_error_callback=handle_trace_error)

Next Steps

Now that you have basic tracing working, explore more features:

  • Tracing Guide - Learn advanced tracing techniques
  • Manual Tracing - Use trace context manager for fine control
  • Evaluation - Evaluate your LLM applications
  • Testing - Integrate with pytest
  • Data Management - Work with projects, datasets, and feedback

Common Issues

API Key Not Found

Error: "No API key found"

Solution: Set the LANGSMITH_API_KEY environment variable or pass it directly to the Client:

from langsmith import Client

client = Client(api_key="your-key")

Traces Not Appearing

Check the following (a programmatic version follows the list):

  1. Tracing is enabled: ls.configure(enabled=True)
  2. Project name is set: project_name="my-project"
  3. API key is valid
  4. Network connectivity to smith.langchain.com
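
Items 1–3 can be verified with a small script that uses only the configure call from this guide and the standard library:

import os
import langsmith as ls

# 1–2: tracing enabled and the project name set
ls.configure(enabled=True, project_name="my-project")

# 3: the API key is at least visible to the process
assert os.getenv("LANGSMITH_API_KEY"), "LANGSMITH_API_KEY is not set"

For item 4, the quickest check is opening smith.langchain.com from the same machine.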

Import Errors

Error: "No module named 'langsmith'"

Solution: Install the package:

pip install langsmith

Complete Example

Here's a complete example putting it all together:

import langsmith as ls
from langsmith import traceable, set_run_metadata

# Configure once at startup
ls.configure(
    enabled=True,
    project_name="my-app",
    tags=["production"],
    metadata={"version": "1.0.0"}
)

@traceable(run_type="llm")
def call_llm(prompt: str) -> str:
    """Call LLM with a prompt."""
    # Your LLM call here
    response = llm.invoke(prompt)

    # Add metadata
    set_run_metadata(
        model="gpt-4",
        tokens=len(response.split())
    )

    return response

@traceable(run_type="tool")
def search(query: str) -> list:
    """Search for documents."""
    # Your search logic
    return search_db(query)

@traceable(run_type="chain")
def rag_pipeline(question: str) -> str:
    """RAG pipeline: search + generate."""
    # Search for context
    docs = search(question)

    # Generate answer with context
    prompt = f"Context: {docs}\n\nQuestion: {question}"
    answer = call_llm(prompt)

    return answer

# Use the pipeline
if __name__ == "__main__":
    question = "What is LangSmith?"
    answer = rag_pipeline(question)
    print(f"Answer: {answer}")

This creates a trace tree showing:

  • Parent: rag_pipeline
  • Children: search and call_llm

All traces are sent to LangSmith automatically.