Python SDK for LangSmith Observability and Evaluation Platform
Get started with LangSmith in minutes. This guide walks you through installation, configuration, and your first traces.
Install LangSmith using pip:
```bash
pip install langsmith
```

LangSmith requires an API key for authentication. Get your API key from smith.langchain.com.
```bash
export LANGSMITH_API_KEY="your-api-key-here"
```

You can also pass the key directly when creating a client:

```python
from langsmith import Client
client = Client(api_key="your-api-key-here")
```
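If the `LANGSMITH_API_KEY` environment variable is set (as in the export above), you should be able to omit the explicit argument; this is a sketch assuming the client falls back to the environment, as the troubleshooting section below suggests:

```python
from langsmith import Client

# Assumed fallback: with LANGSMITH_API_KEY exported, no explicit
# api_key argument is needed.
client = Client()
```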
Configure LangSmith at application startup:

```python
import langsmith as ls

ls.configure(
    enabled=True,
    project_name="my-first-project"
)
```
Use the @traceable decorator to automatically trace a function:

```python
from langsmith import traceable

@traceable
def greet(name: str) -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"

# Call the function
result = greet("World")
print(result)  # "Hello, World!"
```
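The decorator also takes keyword arguments, as the examples below show with `run_type`, `tags`, and `metadata`. As one illustration, a run-name override might look like this (the `name` parameter here is an assumption, not confirmed by this guide):

```python
from langsmith import traceable

# Assumed: a name override so the run is logged as "greeting"
# rather than the function name. Treat this parameter as illustrative.
@traceable(name="greeting")
def greet(name: str) -> str:
    return f"Hello, {name}!"
```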
Trace nested function calls to see the full execution tree:

```python
from langsmith import traceable

@traceable(run_type="tool")
def fetch_data(query: str) -> dict:
    """Fetch data for a query."""
    return {"data": f"Results for {query}"}

@traceable(run_type="tool")
def process_data(data: dict) -> str:
    """Process the fetched data."""
    return f"Processed: {data['data']}"

@traceable(run_type="chain")
def pipeline(query: str) -> str:
    """Complete processing pipeline."""
    # Child runs are automatically created
    data = fetch_data(query)
    result = process_data(data)
    return result

# Run the pipeline
output = pipeline("test query")
```

This creates a trace tree where `pipeline` is the parent and `fetch_data` and `process_data` are children.
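The resulting run hierarchy, as described above:

```
pipeline (chain)
├── fetch_data (tool)
└── process_data (tool)
```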
Enrich your traces with metadata:
```python
from langsmith import traceable, set_run_metadata

@traceable(
    tags=["production", "v1.0"],
    metadata={"model": "gpt-4", "temperature": 0.7}
)
def llm_call(prompt: str) -> str:
    """Call an LLM."""
    result = call_llm_api(prompt)
    # Add metadata during execution
    set_run_metadata(
        tokens_used=150,
        cost=0.003
    )
    return result
```

For reference, here is the full set of configuration options:

```python
import langsmith as ls
ls.configure(
    # Required
    enabled=True,                    # Enable tracing
    project_name="my-project",       # Project to log to

    # Optional
    tags=["production", "api-v2"],   # Global tags
    metadata={"version": "2.0.1"},   # Global metadata
)
```
LangSmith fully supports async code:

```python
import asyncio

from langsmith import traceable

@traceable
async def async_process(input_data: str) -> str:
    """Process data asynchronously."""
    result = await async_operation(input_data)
    return result

# Use with async/await
async def main():
    result = await async_process("test")
    print(result)
asyncio.run(main())
```
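Assuming nested async calls create child runs just as sync calls do, the nested-tracing pattern carries over directly; a minimal sketch (`async_search` is illustrative):

```python
import asyncio

from langsmith import traceable

@traceable(run_type="tool")
async def async_search(query: str) -> list:
    """Illustrative async tool call."""
    await asyncio.sleep(0)  # stand-in for real I/O
    return [f"doc for {query}"]

@traceable(run_type="chain")
async def async_pipeline(query: str) -> str:
    # The tool run should be recorded as a child of this chain run
    docs = await async_search(query)
    return f"Found {len(docs)} docs"

asyncio.run(async_pipeline("test"))
```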
Configure LangSmith based on your environment:

```python
import os

import langsmith as ls

# Only enable in production
if os.getenv("ENV") == "production":
    ls.configure(
        enabled=True,
        project_name="prod-app",
        tags=["production"]
    )
else:
    # Disable in development/testing
    ls.configure(enabled=False)
```
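The same idea collapses to a single call; a sketch assuming `configure()` accepts `enabled=False` alongside the other options, with `APP_PROJECT` as an illustrative environment variable:

```python
import os

import langsmith as ls

# One configure() call driven by the environment. APP_PROJECT is an
# illustrative variable name, not one the SDK reads on its own.
ls.configure(
    enabled=os.getenv("ENV") == "production",
    project_name=os.getenv("APP_PROJECT", "dev-app"),
)
```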
LangSmith is designed to be non-blocking. Tracing errors won't crash your application:

```python
from langsmith import traceable

@traceable
def my_function(data):
    # Even if tracing fails, your function still runs
    result = process(data)
    return result

# This always succeeds even if LangSmith is unavailable
output = my_function(input_data)
```

To explicitly handle tracing errors:
```python
from langsmith import Client

def handle_trace_error(error: Exception):
    print(f"Tracing error: {error}")
    # Log to your monitoring system
    log_error(error)
client = Client(tracing_error_callback=handle_trace_error)
```
Now that you have basic tracing working, explore more features, such as the trace context manager for fine-grained control (sketched below).
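A minimal sketch of the context-manager pattern, assuming `trace` accepts a run name, run type, and inputs, and yields a run object whose outputs you set explicitly (the exact signature may differ):

```python
import langsmith as ls

# Assumed API shape: trace(...) as a context manager that creates a run
# and records whatever is set on the yielded run object.
with ls.trace(name="manual-run", run_type="chain", inputs={"q": "hi"}) as run:
    result = "hello"
    run.end(outputs={"answer": result})
```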
Error: "No API key found"

Solution: Set the LANGSMITH_API_KEY environment variable or pass it directly to the Client:
```python
from langsmith import Client

client = Client(api_key="your-key")
```

If traces aren't appearing, check:
- Tracing is enabled: `ls.configure(enabled=True)`
- You are looking at the right project: `project_name="my-project"`

Error: "No module named 'langsmith'"
Solution: Install the package:
```bash
pip install langsmith
```
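To confirm which version is installed (assuming the package exposes `__version__`, as most packages do):

```bash
python -c "import langsmith; print(langsmith.__version__)"
```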
Here's a complete example putting it all together:

```python
import langsmith as ls
from langsmith import traceable, set_run_metadata

# Configure once at startup
ls.configure(
    enabled=True,
    project_name="my-app",
    tags=["production"],
    metadata={"version": "1.0.0"}
)

@traceable(run_type="llm")
def call_llm(prompt: str) -> str:
    """Call LLM with a prompt."""
    # Your LLM call here
    response = llm.invoke(prompt)
    # Add metadata
    set_run_metadata(
        model="gpt-4",
        tokens=len(response.split())
    )
    return response

@traceable(run_type="tool")
def search(query: str) -> list:
    """Search for documents."""
    # Your search logic
    return search_db(query)

@traceable(run_type="chain")
def rag_pipeline(question: str) -> str:
    """RAG pipeline: search + generate."""
    # Search for context
    docs = search(question)
    # Generate answer with context
    prompt = f"Context: {docs}\n\nQuestion: {question}"
    answer = call_llm(prompt)
    return answer

# Use the pipeline
if __name__ == "__main__":
    question = "What is LangSmith?"
    answer = rag_pipeline(question)
print(f"Answer: {answer}")This creates a trace tree showing:
rag_pipelinesearch and call_llmAll traces are sent to LangSmith automatically.