LangSmith provides native integration with the Vercel AI SDK through wrapper functions that add automatic tracing and monitoring to AI model invocations. The integration wraps the generateText, streamText, generateObject, and streamObject operations to provide observability.
## Installation

```bash
npm install langsmith
```

Import the wrappers from the experimental Vercel entrypoint:

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
```
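LangSmith credentials must also be available at runtime, usually via environment variables (commonly LANGSMITH_TRACING=true and LANGSMITH_API_KEY). Alternatively, a minimal sketch of passing an explicitly configured client through the config's client field, assuming standard langsmith Client constructor options:

```ts
import { Client } from "langsmith";
import { wrapAISDK } from "langsmith/experimental/vercel";
import * as ai from "ai";

// Sketch: configure a LangSmith client explicitly instead of relying on
// environment variables, then pass it as the base config's `client`.
const client = new Client({ apiKey: process.env.LANGSMITH_API_KEY });
const wrappedAI = wrapAISDK(ai, { client });
```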
For CommonJS:

```ts
const { wrapAISDK, createLangSmithProviderOptions } = require("langsmith/experimental/vercel");
```

## Quickstart

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
// Wrap the AI SDK - wrapLanguageModel is REQUIRED
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText });
// Use wrapped functions - automatic tracing enabled
const { text } = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "What is the capital of France?",
});
```
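Note that, as the signature below shows, wrapAISDK returns the wrapped functions with their original types, so they are drop-in replacements for the unwrapped AI SDK calls.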
## Architecture

The Vercel AI SDK integration is built around the following components:

- wrapAISDK - wraps the core AI SDK functions (generateText, streamText, generateObject, streamObject) with automatic LangSmith tracing
- createLangSmithProviderOptions - builds per-call configuration that is passed via providerOptions.langsmith

## wrapAISDK

Wraps Vercel AI SDK functions with automatic LangSmith tracing.

```ts
/**
* Wraps Vercel AI SDK 6 or AI SDK 5 functions with LangSmith tracing capabilities
* @param ai - Object containing AI SDK methods to wrap (MUST include wrapLanguageModel)
* @param baseLsConfig - Optional base configuration for all traced calls
* @returns Object containing wrapped versions of AI SDK functions
*/
function wrapAISDK<T>(ai: T, baseLsConfig?: WrapAISDKConfig): T;
```

IMPORTANT: The wrapAISDK function requires that the ai parameter include wrapLanguageModel from the AI SDK. This function takes the AI SDK module and returns a wrapped version where generateText, streamText, generateObject, and streamObject are automatically traced to LangSmith.
Parameters:

- ai: T - The Vercel AI SDK module (typically import * as ai from "ai")
- baseLsConfig?: WrapAISDKConfig - Optional base configuration applied to all traced operations

Returns:

An object containing wrapped versions of the AI SDK functions.

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText, streamText } from "ai";
import { openai } from "@ai-sdk/openai";
// CRITICAL: wrapLanguageModel must be included in the AI SDK module
const basicAI = wrapAISDK({ wrapLanguageModel, generateText, streamText });

// With base configuration
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText, streamText },
{
project_name: "my-ai-app",
metadata: {
environment: "production",
},
tags: ["vercel-ai", "production"],
}
);
// Use wrapped functions
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "Hello!",
});
const stream = await wrappedAI.streamText({
model: openai("gpt-4"),
prompt: "Tell me a story",
});
```

## createLangSmithProviderOptions

Creates provider options for runtime configuration of LangSmith tracing.

```ts
/**
* Wraps LangSmith config in a way that matches AI SDK provider types
* @param lsConfig - Optional LangSmith configuration
* @returns Provider options object that can be passed to AI SDK functions
*/
function createLangSmithProviderOptions<T>(
lsConfig?: WrapAISDKConfig<T>
): Record<string, JSONValue>;
```

This function creates a configuration object that can be passed via the providerOptions.langsmith parameter to override or extend the base configuration for specific calls.
Parameters:

- lsConfig?: WrapAISDKConfig<T> - Optional LangSmith-specific configuration

Returns:

A provider options object that can be passed to AI SDK functions via providerOptions.langsmith.

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText });
// Create runtime configuration
const lsConfig = createLangSmithProviderOptions({
name: "summarization",
metadata: {
userId: "user-123",
feature: "summarize",
},
processInputs: (inputs) => ({
...inputs,
prompt: "REDACTED", // Hide sensitive input
}),
});
// Use with provider options
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "Sensitive data here",
providerOptions: {
langsmith: lsConfig,
},
});
```

## convertMessageToTracedFormat

Utility function to convert Vercel AI SDK messages to LangSmith trace format.

```ts
/**
* Convert Vercel AI SDK message format to LangSmith trace format
* @param message - Message to convert
* @param metadata - Optional additional metadata to include
* @returns Converted message in LangSmith format
*/
function convertMessageToTracedFormat(
message: any,
metadata?: Record<string, unknown>
): Record<string, unknown>;
```

This utility function is used internally by wrapAISDK, but it is also exported for advanced use cases where you need to manually format messages for tracing.

```ts
import { convertMessageToTracedFormat } from "langsmith/experimental/vercel";
const message = {
role: "assistant",
content: "Hello!",
};
const traced = convertMessageToTracedFormat(message, {
model: "gpt-4",
tokens: 50,
});
```

## WrapAISDKConfig

Configuration interface for Vercel AI SDK tracing behavior.

```ts
interface WrapAISDKConfig<T = any> {
/** Custom name for the traced operation */
name?: string;
/** Custom LangSmith client instance */
client?: Client;
/** Project name for organizing traces */
project_name?: string;
/** Additional metadata to attach to traces */
metadata?: KVMap;
/** Tags for categorizing traces */
tags?: string[];
/**
* Transform inputs before logging
* Receives the raw inputs from the AI SDK call
* @param inputs - Function inputs
* @returns Transformed key-value map
*/
processInputs?: (inputs: Parameters<T>[0]) => Record<string, unknown>;
/**
* Transform outputs before logging
* Receives the outputs from the AI SDK call
* @param outputs - Function outputs wrapped in { outputs: ... }
* @returns Transformed key-value map
*/
processOutputs?: (
outputs: { outputs: Awaited<ReturnType<T>> }
) => Record<string, unknown> | Promise<Record<string, unknown>>;
/**
* Transform child LLM run inputs before logging
* @param inputs - Child LLM run inputs
* @returns Transformed key-value map
*/
processChildLLMRunInputs?: (inputs: any) => Record<string, unknown>;
/**
* Transform child LLM run outputs before logging
* @param outputs - Child LLM run outputs
* @returns Transformed key-value map
*/
processChildLLMRunOutputs?: (outputs: any) => Record<string, unknown>;
/**
* Whether to include response metadata (steps, etc.) in traces
* @default false
*/
traceResponseMetadata?: boolean;
/**
* Whether to include raw HTTP request/response details in traces
* @default false
*/
traceRawHttp?: boolean;
}
```

```ts
import { wrapAISDK, type WrapAISDKConfig } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
const config: WrapAISDKConfig = {
name: "my-ai-operation",
project_name: "production-app",
metadata: {
version: "1.0.0",
},
tags: ["production", "gpt-4"],
processInputs: (inputs) => ({
...inputs,
prompt: inputs.prompt?.substring(0, 100), // Truncate long prompts
}),
traceResponseMetadata: true,
};
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText }, config);
```
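The processInputs hook in the example above operates on the top-level traced run. For the nested model call, the interface also exposes processChildLLMRunInputs and processChildLLMRunOutputs; a minimal sketch, assuming the child run inputs include a messages field (the exact payload shape is not documented here):

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";

// Sketch: redact what the nested model invocation logs to LangSmith
// while leaving the top-level run inputs untouched.
const redactedAI = wrapAISDK(
  { wrapLanguageModel, generateText },
  {
    // Assumption: child LLM run inputs carry the chat messages sent to the model
    processChildLLMRunInputs: (inputs) => ({
      ...inputs,
      messages: "[REDACTED]",
    }),
    // Pass child outputs through unchanged
    processChildLLMRunOutputs: (outputs) => outputs,
  }
);
```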
## Examples

### Basic Text Generation

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText });
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "Explain quantum computing in simple terms",
});
console.log(result.text);
// Traces are automatically sent to LangSmith
```

### Streaming

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, streamText },
{
project_name: "streaming-demo",
tags: ["claude", "streaming"],
}
);
const { textStream } = await wrappedAI.streamText({
model: anthropic("claude-3-opus-20240229"),
prompt: "Write a short story",
});
for await (const chunk of textStream) {
process.stdout.write(chunk);
}
```
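Note that for streaming calls the trace can typically only be finalized once the stream has been fully consumed, so iterate the stream to completion (as above) before the process exits.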
### Per-Call Configuration Overrides

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
// Base configuration for all calls
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText },
{
project_name: "my-app",
tags: ["base-config"],
}
);
// Override for specific call
const lsConfig = createLangSmithProviderOptions({
name: "special-operation",
metadata: {
userId: "user-456",
priority: "high",
},
tags: ["special", "high-priority"],
});
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "Important query",
providerOptions: {
langsmith: lsConfig,
},
});
```

### Redacting Sensitive Data

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText });
// Redact sensitive information from traces
const lsConfig = createLangSmithProviderOptions({
processInputs: (inputs) => ({
prompt: "[REDACTED]",
model: inputs.model,
// Keep non-sensitive fields
}),
processOutputs: (outputs) => ({
text: outputs.outputs.text.substring(0, 100) + "...",
// Truncate output for privacy
}),
});
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "Sensitive PII data here",
providerOptions: {
langsmith: lsConfig,
},
});
```

### Structured Output

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateObject },
{
project_name: "structured-generation",
metadata: {
outputType: "structured",
},
}
);
const result = await wrappedAI.generateObject({
model: openai("gpt-4"),
schema: z.object({
name: z.string(),
age: z.number(),
email: z.string().email(),
}),
prompt: "Generate a sample user profile",
});
console.log(result.object);
// { name: "...", age: ..., email: "..." }
```

### Comparing Models

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
const wrappedAI = wrapAISDK({ wrapLanguageModel, generateText });
const prompt = "What are the key benefits of TypeScript?";
// Compare GPT-4 and Claude
const [gpt4Result, claudeResult] = await Promise.all([
wrappedAI.generateText({
model: openai("gpt-4"),
prompt,
providerOptions: {
langsmith: createLangSmithProviderOptions({
name: "gpt4-comparison",
metadata: { model: "gpt-4", experiment: "model-comparison" },
}),
},
}),
wrappedAI.generateText({
model: anthropic("claude-3-opus-20240229"),
prompt,
providerOptions: {
langsmith: createLangSmithProviderOptions({
name: "claude-comparison",
metadata: { model: "claude-3-opus", experiment: "model-comparison" },
}),
},
}),
]);
console.log("GPT-4:", gpt4Result.text);
console.log("Claude:", claudeResult.text);
// Both traced separately in LangSmith with comparison metadata
```
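In LangSmith, the shared experiment metadata key can then be used to filter the two runs and compare them side by side.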
### Next.js API Route

```ts
import { wrapAISDK, createLangSmithProviderOptions } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { NextRequest, NextResponse } from "next/server";
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText },
{
project_name: "nextjs-api",
tags: ["api-route"],
}
);
export async function POST(req: NextRequest) {
const { prompt, userId } = await req.json();
const lsConfig = createLangSmithProviderOptions({
metadata: {
userId,
endpoint: "/api/generate",
timestamp: new Date().toISOString(),
},
});
try {
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt,
providerOptions: {
langsmith: lsConfig,
},
});
return NextResponse.json({ text: result.text });
} catch (error) {
console.error("Generation failed:", error);
return NextResponse.json(
{ error: "Failed to generate text" },
{ status: 500 }
);
}
}
```
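In serverless runtimes like this one, the function instance may be suspended as soon as the response is returned; if traces appear truncated or missing, ensure pending LangSmith traces are flushed before the handler resolves.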
### Tool Calling

```ts
import { wrapAISDK } from "langsmith/experimental/vercel";
import { wrapLanguageModel, generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText },
{
project_name: "tool-calling",
}
);
const result = await wrappedAI.generateText({
model: openai("gpt-4"),
prompt: "What's the weather in San Francisco?",
tools: {
getWeather: {
description: "Get the weather for a location",
parameters: z.object({
location: z.string(),
}),
execute: async ({ location }) => {
// Tool execution is also traced
return { temperature: 72, condition: "sunny", location };
},
},
},
});
console.log(result.text);
// Both the main call and tool executions are traced
```

## Best Practices

Set common configuration (project name, tags, client) at the wrapper level, and override only when needed:

```ts
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText, streamText },
{
project_name: "my-app",
tags: ["production"],
client: customClient,
}
);
```

Use processInputs and processOutputs to redact sensitive information:

```ts
const config = {
processInputs: (inputs) => ({
...inputs,
prompt: inputs.prompt?.includes("password") ? "[REDACTED]" : inputs.prompt,
}),
};
```

Include relevant context in metadata for easier debugging and analysis:

```ts
const lsConfig = createLangSmithProviderOptions({
metadata: {
userId: user.id,
feature: "chat",
sessionId: session.id,
timestamp: new Date().toISOString(),
},
});
```

Provide meaningful names for operations to improve trace organization:

```ts
const lsConfig = createLangSmithProviderOptions({
name: "user-onboarding-welcome-message",
tags: ["onboarding", "automated"],
});
```

When debugging complex flows, enable traceResponseMetadata to capture additional details:

```ts
const wrappedAI = wrapAISDK(
{ wrapLanguageModel, generateText },
{
traceResponseMetadata: true, // Includes steps and intermediate data
}
);
```