`tessl install github:jezweb/claude-skills --skill elevenlabs-agents`

github.com/jezweb/claude-skills
Build conversational AI voice agents with the ElevenLabs Platform. Configure agents, tools, RAG knowledge bases, agent versioning with A/B testing, and MCP security. React, React Native, or Swift SDKs. Prevents 34 documented errors. Use when: building voice agents, AI phone systems, agent versioning/branching, MCP security, or troubleshooting deprecated @11labs packages, webhook errors, CSP violations, localhost allowlist issues, and tool parsing errors.
Review Score
87%
Validation Score
12/16
Implementation Score
77%
Activation Score
100%
ElevenLabs Agents Platform is a comprehensive solution for building production-ready conversational AI voice agents. The platform coordinates four core components:
ElevenLabs migrated to new scoped packages in August 2025. Current packages:
```bash
npm install @elevenlabs/react@0.12.3         # React SDK (Dec 2025: localization, Scribe fixes)
npm install @elevenlabs/client@0.12.2        # JavaScript SDK (Dec 2025: localization)
npm install @elevenlabs/react-native@0.5.7   # React Native SDK (Dec 2025: mic fixes, speed param)
npm install @elevenlabs/elevenlabs-js@2.30.0 # Base SDK (Jan 2026: latest)
npm install -g @elevenlabs/agents-cli@0.6.1  # CLI
```

DEPRECATED: `@11labs/react`, `@11labs/client` (uninstall if present)

⚠️ CRITICAL: v1 TTS models were removed on 2025-12-15. Use Turbo v2/v2.5 only.
Widget Improvements (v0.5.5):
- Text-only mode (`chat_mode: true`) no longer requires microphone access
- `end_call` system tool fix (no longer omits last message)

SDK Fixes:
Which ElevenLabs package should I use?
| Package | Environment | Use Case |
|---|---|---|
| `@elevenlabs/elevenlabs-js` | Server only (Node.js) | Full API access, TTS, voices, models |
| `@elevenlabs/client` | Browser + Server | Agents SDK, WebSocket, lightweight |
| `@elevenlabs/react` | React apps | Conversational AI hooks |
| `@elevenlabs/react-native` | Mobile | iOS/Android agents |
⚠️ Why elevenlabs-js doesn't work in browser:
- Depends on the Node.js `child_process` module (by design)
- Error: `Module not found: Can't resolve 'child_process'`
- Fix: keep `elevenlabs-js` on the server and call a proxy from the browser

Affected Frameworks:
Source: GitHub Issue #293
```bash
npm install @elevenlabs/react zod
```

```tsx
import { useConversation } from '@elevenlabs/react';

const { startConversation, stopConversation, status } = useConversation({
  agentId: 'your-agent-id',
  signedUrl: '/api/elevenlabs/auth', // Recommended (secure)
  // OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY,
  clientTools: { /* browser-side tools */ },
  onEvent: (event) => { /* transcript, agent_response, tool_call */ },
  serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
```

```bash
npm install -g @elevenlabs/agents-cli
elevenlabs auth login
elevenlabs agents init                                   # Creates agents.json, tools.json, tests.json
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env dev                         # Deploy
elevenlabs agents test "Bot"                             # Test
```

```js
import { ElevenLabsClient } from 'elevenlabs';

const client = new ElevenLabsClient({ apiKey: process.env.ELEVENLABS_API_KEY });
const agent = await client.agents.create({
  name: 'Support Bot',
  conversation_config: {
    agent: { prompt: { prompt: "...", llm: "gpt-4o" }, language: "en" },
    tts: { model_id: "eleven_turbo_v2_5", voice_id: "your-voice-id" }
  }
});
```

CRITICAL: The JS SDK uses camelCase for parameters while the Python SDK and API use snake_case. Using snake_case in JS causes silent failures where parameters are ignored.
Common Parameters:
| API/Python (snake_case) | JS SDK (camelCase) |
|---|---|
| `model_id` | `modelId` |
| `voice_id` | `voiceId` |
| `output_format` | `outputFormat` |
| `voice_settings` | `voiceSettings` |
Example:
```js
// ❌ WRONG - parameter ignored (snake_case):
const stream = await elevenlabs.textToSpeech.convert(voiceId, {
  model_id: "eleven_v3", // Silently ignored!
  text: "Hello"
});

// ✅ CORRECT - use camelCase:
const stream = await elevenlabs.textToSpeech.convert(voiceId, {
  modelId: "eleven_v3", // Works!
  text: "Hello"
});
```

Tip: Always check TypeScript types for correct parameter names. This is the most common error when migrating from the Python SDK.
Source: GitHub Issue #300
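When migrating an options object from Python or raw API examples, a small mechanical converter avoids the silent-failure trap entirely. This is a hypothetical migration helper, not part of the SDK:

```typescript
// Hypothetical helper: convert API/Python-style snake_case option keys to
// the camelCase keys the JS SDK expects. Values are passed through untouched.
export function toCamelCase(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

export function camelizeKeys(obj: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [toCamelCase(k), v])
  );
}
```

Run copied snake_case configs through `camelizeKeys` once at the boundary rather than hand-renaming each field.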
1. Personality - Identity, role, character traits
2. Environment - Communication context (phone, web, video)
3. Tone - Formality, speech patterns, verbosity
4. Goal - Objectives and success criteria
5. Guardrails - Boundaries, prohibited topics, ethical constraints
6. Tools - Available capabilities and when to use them
Template:
```json
{
  "agent": {
    "prompt": {
      "prompt": "Personality:\n[Agent identity and role]\n\nEnvironment:\n[Communication context]\n\nTone:\n[Speech style]\n\nGoal:\n[Primary objectives]\n\nGuardrails:\n[Boundaries and constraints]\n\nTools:\n[Available tools and usage]",
      "llm": "gpt-4o", // gpt-5.1, claude-sonnet-4-5, gemini-3-pro-preview
      "temperature": 0.7
    }
  }
}
```

2025 LLM Models:
- `gpt-5.1`, `gpt-5.1-2025-11-13` (Oct 2025)
- `claude-sonnet-4-5`, `claude-sonnet-4-5@20250929` (Oct 2025)
- `gemini-3-pro-preview` (2025)
- `gemini-2.5-flash-preview-09-2025` (Oct 2025)

| Mode | Behavior | Best For |
|---|---|---|
| Eager | Responds quickly | Fast-paced support, quick orders |
| Normal | Balanced (default) | General customer service |
| Patient | Waits longer | Information collection, therapy |
```json
{ "conversation_config": { "turn": { "mode": "patient" } } }
```

Workflow Features:
- `edge_order` (determinism, Oct 2025)

```json
{
  "workflow": {
    "nodes": [
      { "id": "node_1", "type": "subagent", "config": { "system_prompt": "...", "turn_eagerness": "patient" } },
      { "id": "node_2", "type": "tool", "tool_name": "transfer_to_human" }
    ],
    "edges": [{ "from": "node_1", "to": "node_2", "condition": "escalation", "edge_order": 1 }]
  }
}
```

Agent Management (2025):
- `archived: true` field (Oct 2025)

Use `{{var_name}}` syntax in prompts, messages, and tool parameters.
System Variables:
- `{{system__agent_id}}`, `{{system__conversation_id}}`
- `{{system__caller_id}}`, `{{system__called_number}}` (telephony)
- `{{system__call_duration_secs}}`, `{{system__time_utc}}`
- `{{system__call_sid}}` (Twilio only)

Custom Variables:
```js
await client.conversations.create({
  agent_id: "agent_123",
  dynamic_variables: { user_name: "John", account_tier: "premium" }
});
```

Secret Variables: `{{secret__api_key}}` (headers only, never sent to LLM)

⚠️ Error: Missing variables cause "Missing required dynamic variables" - always provide all referenced variables.
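Since a missing variable fails the conversation at start time, a pre-flight check can catch the mismatch earlier. This is a sketch of a hypothetical helper (not an SDK function) that scans a prompt for `{{var}}` placeholders and reports any not covered by your `dynamic_variables`, skipping platform-supplied `system__`/`secret__` names:

```typescript
// Hypothetical pre-flight check: find {{var}} placeholders in a prompt and
// report those missing from the dynamic_variables about to be sent.
// system__ and secret__ variables are supplied by the platform, so skip them.
export function missingDynamicVariables(
  prompt: string,
  provided: Record<string, unknown>
): string[] {
  const refs = [...prompt.matchAll(/\{\{(\w+)\}\}/g)].map((m) => m[1]);
  return [...new Set(refs)].filter(
    (name) =>
      !name.startsWith('system__') &&
      !name.startsWith('secret__') &&
      !(name in provided)
  );
}
```

Run it against every prompt, first message, and tool parameter template before calling `conversations.create`.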
Multi-Voice - Switch voices dynamically (adds ~200ms latency per switch):
```json
{ "prompt": "When speaking as customer, use voice_id 'voice_abc'. As agent, use 'voice_def'." }
```

Pronunciation Dictionary - IPA, CMU, word substitutions (Turbo v2/v2.5 only):
```json
{
  "pronunciation_dictionary": [
    { "word": "API", "pronunciation": "ey-pee-ay", "format": "cmu" },
    { "word": "AI", "substitution": "artificial intelligence" }
  ]
}
```

PATCH Support (Aug 2025) - Update dictionaries without replacement

Speed Control - 0.7x-1.2x (use 0.9x-1.1x for natural sound):

```json
{ "voice_settings": { "speed": 1.0 } }
```

Voice Cloning Best Practices:
32+ Languages with automatic detection and in-conversation switching.
Multi-Language Presets:
```json
{
  "language_presets": [
    { "language": "en", "voice_id": "en_voice", "first_message": "Hello!" },
    { "language": "es", "voice_id": "es_voice", "first_message": "¡Hola!" }
  ]
}
```

Enable agents to access large knowledge bases without loading entire documents into context.
Workflow:
Configuration:
```json
{
  "agent": { "prompt": { "knowledge_base": ["doc_id_1", "doc_id_2"] } },
  "knowledge_base_config": {
    "max_chunks": 5,
    "vector_distance_threshold": 0.8
  }
}
```

API Upload:

```js
const doc = await client.knowledgeBase.upload({ file: fs.createReadStream('docs.pdf'), name: 'Docs' });
await client.knowledgeBase.computeRagIndex({ document_id: doc.id, embedding_model: 'e5_mistral_7b' });
```

⚠️ Gotchas: RAG adds ~500ms latency. Check index status before use - indexing can take minutes.
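Because indexing can take minutes, it is worth polling until the index reports ready before pointing an agent at it. The sketch below is generic: the status-fetching call is passed in as a function, since the exact SDK method name and response shape for index status are assumptions to verify against your SDK version:

```typescript
// Poll an index-status function until it reports 'ready'. The
// { status: string } shape and the 'ready'/'failed' values are assumptions;
// check the actual RAG index status response in your SDK.
export async function waitForIndex(
  getStatus: () => Promise<{ status: string }>,
  { intervalMs = 5_000, timeoutMs = 300_000 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status } = await getStatus();
    if (status === 'ready') return; // index is usable
    if (status === 'failed') throw new Error('RAG indexing failed');
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error('Timed out waiting for RAG index');
}
```

Call it once after `computeRagIndex` and only attach the document to the agent after it resolves.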
The legacy prompt.tools array was removed on July 23, 2025. All agent configurations must use the new format.
Migration Timeline:
- July 23, 2025: legacy `prompt.tools` field removed
- `tool_ids` + `built_in_tools` replace `prompt.tools` (active now)

Old Format (no longer works):
```js
{
  agent: {
    prompt: {
      tools: [{ name: "get_weather", url: "...", method: "GET" }]
    }
  }
}
```

New Format (required):

```js
{
  agent: {
    prompt: {
      tool_ids: ["tool_abc123"],     // Client/server tools
      built_in_tools: ["end_call"]   // System tools (new field)
    }
  }
}
```

Error if both used: "A request must include either prompt.tool_ids or the legacy prompt.tools array – never both"
Note: All tools from legacy format were auto-migrated to standalone tool records.
Source: Official Migration Guide
Execute in browser or mobile app. Tool names case-sensitive.
```js
clientTools: {
  updateCart: {
    description: "Update shopping cart",
    parameters: z.object({ item: z.string(), quantity: z.number() }),
    handler: async ({ item, quantity }) => {
      // Client-side logic
      return { success: true };
    }
  }
}
```

HTTP requests to external APIs. PUT support added Apr 2025.
```json
{
  "name": "get_weather",
  "url": "https://api.weather.com/{{user_id}}",
  "method": "GET",
  "headers": { "Authorization": "Bearer {{secret__api_key}}" },
  "parameters": { "type": "object", "properties": { "city": { "type": "string" } } }
}
```

⚠️ Secret variables only in headers (not URL/body)
2025 Features:
⚠️ Historical Issue (Fixed Feb 2025):
Tool calling was broken with gpt-4o-mini due to an OpenAI API change. This was fixed in SDK v2.25.0+ (Feb 17, 2025). If using older SDK versions, upgrade to avoid silent tool execution failures on that model.
Source: Changelog Feb 17, 2025
Connect to MCP servers for databases, IDEs, data sources.
Configuration: Dashboard → Add Custom MCP Server → Configure SSE/HTTP endpoint
Approval Modes: Always Ask | Fine-Grained | No Approval
2025 Updates:
⚠️ Limitations: SSE/HTTP only. Not available for Zero Retention or HIPAA.
Built-in conversation control (no external APIs):
- `end_call`, `detect_language`, `transfer_agent`
- `transfer_to_number` (telephony)
- `dtmf_playpad`, `voicemail_detection` (telephony)
- 2025: `use_out_of_band_dtmf` flag for telephony integration
```tsx
const { startConversation, stopConversation, status, isSpeaking } = useConversation({
  agentId: 'your-agent-id',
  signedUrl: '/api/auth', // OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY
  clientTools: { /* ... */ },
  onEvent: (event) => { /* transcript, agent_response, tool_call, agent_tool_request (Oct 2025) */ },
  // onConnect / onDisconnect / onError callbacks also available
  serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
```

2025 Events:
- `agent_chat_response_part` - Streaming responses (Oct 2025)
- `agent_tool_request` - Tool interaction tracking (Oct 2025)

| Feature | WebSocket | WebRTC (Jul 2025 rollout) |
|---|---|---|
| Auth | signedUrl | conversationToken |
| Audio | Configurable (16k/24k/48k) | PCM_48000 (hardcoded) |
| Latency | Standard | Lower |
| Best For | Flexibility | Low-latency |
⚠️ WebRTC: Hardcoded PCM_48000, limited device switching
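The comparison above shows both transports authenticating with a server-issued credential rather than a raw API key. A minimal server route for the WebSocket `signedUrl` flow might look like the sketch below; the route path and env handling are assumptions, while the `GET /v1/convai/conversation/get-signed-url` REST endpoint is the documented signed-URL API:

```typescript
// Server-side: exchange your secret API key for a short-lived signed URL
// the browser can connect with. Keep the xi-api-key on the server only.
const API_BASE = 'https://api.elevenlabs.io';

export function signedUrlEndpoint(agentId: string): string {
  // Build the REST URL (query-encodes the agent id)
  const url = new URL('/v1/convai/conversation/get-signed-url', API_BASE);
  url.searchParams.set('agent_id', agentId);
  return url.toString();
}

export async function getSignedUrl(agentId: string, apiKey: string): Promise<string> {
  const res = await fetch(signedUrlEndpoint(agentId), {
    headers: { 'xi-api-key': apiKey },
  });
  if (!res.ok) throw new Error(`Signed URL request failed: ${res.status}`);
  const { signed_url } = await res.json();
  return signed_url;
}
```

Your `/api/...` route calls `getSignedUrl` and returns the result to the client, which passes it as `signedUrl` to `useConversation`.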
- `@elevenlabs/react@0.12.3`
- `@elevenlabs/client@0.12.2` - `new Conversation({...})`
- `@elevenlabs/react-native@0.5.7` - Expo SDK 47+, iOS/macOS (custom build required, no Expo Go)
- `<script src="https://elevenlabs.io/convai-widget/index.js"></script>`
- `@elevenlabs/convai-widget-embed@0.5.5` - For embedding in existing apps
- `@elevenlabs/convai-widget-core@0.5.5` - Core widget functionality

Real-time transcription with word-level timestamps. Single-use tokens, not API keys.
```tsx
const { connect, startRecording, stopRecording, transcript, partialTranscript } = useScribe({
  token: async () => (await fetch('/api/scribe/token')).json().then(d => d.token),
  commitStrategy: 'vad', // 'vad' (auto on silence) | 'manual' (explicit .commit())
  sampleRate: 16000,     // 16000 or 24000
  // onPartialTranscript / onFinalTranscript / onError callbacks also available
});
```

Events: `PARTIAL_TRANSCRIPT`, `FINAL_TRANSCRIPT_WITH_TIMESTAMPS`, `SESSION_STARTED`, `ERROR`

⚠️ Closed Beta - requires sales contact. For agents, use Agents Platform instead (LLM + TTS + two-way interaction).
⚠️ Webhook Mode Issue:

Using `speechToText.convert()` with `webhook: true` causes SDK parsing errors. The API returns only `{ request_id }` for webhook mode, but the SDK expects the full transcription schema.

Error Message:

```
ParseError: response: Missing required key "language_code"; Missing required key "text"; ...
```

Workaround - Use direct fetch API instead of SDK:
```js
const formData = new FormData();
formData.append('file', audioFile);
formData.append('model_id', 'scribe_v1');
formData.append('webhook', 'true');
formData.append('webhook_id', webhookId);

const response = await fetch('https://api.elevenlabs.io/v1/speech-to-text', {
  method: 'POST',
  headers: { 'xi-api-key': apiKey },
  body: formData,
});
const result = await response.json(); // { request_id: 'xxx' }
// Actual transcription delivered to webhook endpoint
```

Source: GitHub Issue #232 (confirmed by maintainer)
Comprehensive automated testing with 9 new API endpoints for creating, managing, and executing tests.
Test Types:
CLI Workflow:
```bash
# Create test
elevenlabs tests add "Refund Test" --template basic-llm
```

Configure in `test_configs/refund-test.json`:

```json
{
  "name": "Refund Test",
  "scenario": "Customer requests refund",
  "success_criteria": ["Agent acknowledges empathetically", "Verifies order details"],
  "expected_tool_call": { "tool_name": "lookup_order", "parameters": { "order_id": "..." } }
}
```

```bash
# Deploy and execute
elevenlabs tests push
elevenlabs agents test "Support Agent"
```

9 New API Endpoints (Aug 2025):
- `POST /v1/convai/tests` - Create test
- `GET /v1/convai/tests/:id` - Retrieve test
- `PATCH /v1/convai/tests/:id` - Update test
- `DELETE /v1/convai/tests/:id` - Delete test
- `POST /v1/convai/tests/:id/execute` - Execute test
- `GET /v1/convai/test-invocations` - List invocations (pagination, agent filtering)
- `POST /v1/convai/test-invocations/:id/resubmit` - Resubmit failed test
- `GET /v1/convai/test-results/:id` - Get results
- `GET /v1/convai/test-results/:id/debug` - Detailed debugging info

Test Invocation Listing (Oct 2025):
```js
const invocations = await client.convai.testInvocations.list({
  agent_id: 'agent_123',       // Filter by agent
  page_size: 30,               // Default 30, max 100
  cursor: 'next_page_cursor'   // Pagination
});
// Returns: test run counts, pass/fail stats, titles
```

Programmatic Testing:
```js
const simulation = await client.agents.simulate({
  agent_id: 'agent_123',
  scenario: 'Refund request',
  user_messages: ["I want a refund", "Order #12345"],
  success_criteria: ["Acknowledges request", "Verifies order"]
});
console.log('Passed:', simulation.passed);
```

Agent Tracking (Oct 2025): Tests now include `agent_id` association for better organization
2025 Features:
- `call_start_before_unix` parameter
- `aggregation_interval` (hour/day/week/month)
- `tool_latency_secs` tracking

Conversation Analysis: Success evaluation (LLM-based), data collection fields, post-call webhooks

Access: Dashboard → Analytics | Post-call Webhooks | API
Data Retention: 2 years default (GDPR). Configure: { "transcripts": { "retention_days": 730 }, "audio": { "retention_days": 2190 } }
Encryption: TLS 1.3 (transit), AES-256 (rest)
Regional: serverLocation: 'eu-residency' | 'us' | 'global' | 'in-residency'
Zero Retention Mode: Immediate deletion (no history, analytics, webhooks, or MCP)
Compliance: GDPR (1-2 years), HIPAA (6 years), SOC 2 (automatic encryption)
LLM Caching: Up to 90% savings on repeated inputs. { "caching": { "enabled": true, "ttl_seconds": 3600 } }
Model Swapping: GPT-5.1, GPT-4o/mini, Claude Sonnet 4.5, Gemini 3 Pro/2.5 Flash (2025 models)
Burst Pricing: 3x concurrency limit at 2x cost. { "burst_pricing_enabled": true }
2025 Platform Updates:
Events: audio, transcript, agent_response, tool_call, agent_chat_response_part (streaming, Oct 2025), agent_tool_request (Oct 2025), conversation_state
Custom Models: Bring your own LLM (OpenAI-compatible endpoints). { "llm_config": { "custom": { "endpoint": "...", "api_key": "{{secret__key}}" } } }
Post-Call Webhooks: HMAC verification required. Return 200 or auto-disable after 10 failures. Payload includes conversation_id, transcript, analysis.
Chat Mode: Text-only (no ASR/TTS). { "chat_mode": true }. Saves ~200ms + costs.
Telephony: SIP (sip-static.rtc.elevenlabs.io), Twilio native, Vonage, RingCentral. 2025: Twilio keypad fix (Jul), SIP TLS remote_domains validation (Oct)
Installation & Auth:
```bash
npm install -g @elevenlabs/agents-cli@0.6.1
elevenlabs auth login
elevenlabs auth residency eu-residency   # 'in-residency' | 'global'
export ELEVENLABS_API_KEY=your-api-key   # For CI/CD
```

Project Structure: `agents.json`, `tools.json`, `tests.json` + `agent_configs/`, `tool_configs/`, `test_configs/`
Key Commands:
```bash
elevenlabs agents init
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env prod --dry-run   # Preview
elevenlabs agents push --env prod             # Deploy
elevenlabs agents pull                        # Import existing
elevenlabs agents test "Bot"                  # 2025: Enhanced testing
elevenlabs tools add-webhook "Weather" --config-path tool_configs/weather.json
elevenlabs tools push
elevenlabs tests add "Test" --template basic-llm
elevenlabs tests push
```

Multi-Environment: Create `agent.dev.json`, `agent.staging.json`, `agent.prod.json` for overrides
CI/CD: GitHub Actions with --dry-run validation before deploy
.gitignore: .env, .elevenlabs/, *.secret.json
Cause: Variables referenced in prompts not provided at conversation start
Solution: Provide all variables in dynamic_variables: { user_name: "John", ... }
Cause: Tool name mismatch (case-sensitive)
Solution: Ensure tool_ids: ["orderLookup"] matches name: "orderLookup" exactly
Cause: Incorrect HMAC signature, not returning 200, or 10+ failures
Solution: Verify hmac = crypto.createHmac('sha256', SECRET).update(payload).digest('hex') and return 200
⚠️ Header Name: Use `ElevenLabs-Signature` (NOT `X-ElevenLabs-Signature` - no `X-` prefix!)
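The HMAC check above can be wrapped in a small verifier that also compares in constant time. This is a sketch assuming the `ElevenLabs-Signature` header carries a hex HMAC-SHA256 of the raw request body; confirm the exact header format (some webhook schemes prefix a timestamp) against the current docs:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verify a post-call webhook: recompute HMAC-SHA256 over the raw body and
// compare against the signature header in constant time.
export function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string,
  secret: string
): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so guard first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Verify against the raw body bytes, not a re-serialized JSON object, and return 200 promptly so the webhook is not auto-disabled.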
Cause: Background noise, inconsistent mic distance, extreme volumes in training

Solution: Use clean audio, consistent distance, avoid extremes
Cause: English-trained voice for non-English language
Solution: Use language-matched voices: { "language": "es", "voice_id": "spanish_voice" }
Cause: CLI doesn't support restricted API keys

Solution: Use unrestricted API key for CLI
Cause: Hash-based change detection missed modification
Solution: elevenlabs agents init --override + elevenlabs agents pull + push
Cause: Schema doesn't match usage
Solution: Add clear descriptions: "description": "Order ID (format: ORD-12345)"
Cause: Index still computing (takes minutes)
Solution: Check index.status === 'ready' before using
Cause: Network instability, incompatible browser, or firewall issues

Symptoms:

```
Error receiving message: received 1002 (protocol error)
Error sending user audio chunk: received 1002 (protocol error)
WebSocket is already in CLOSING or CLOSED state
```

Connection cycles: Disconnected → Connected → Disconnected rapidly

Solution: Try `connectionType: 'webrtc'`

Source: GitHub Issue #134
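For transient 1002 drops, a generic retry wrapper around whatever starts your session is often enough while the network settles. This is a sketch, not an SDK feature; `startSession` stands in for your own connect call and is assumed to reject on failure:

```typescript
// Generic exponential-backoff wrapper for flaky session starts.
export async function withReconnect<T>(
  startSession: () => Promise<T>,
  { maxAttempts = 5, baseDelayMs = 500 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await startSession();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt; // 500, 1000, 2000, ...
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw lastError;
}
```

Pair it with user-visible status so rapid connect/disconnect cycles do not look like silence.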
Cause: Agent visibility or API key config

Solution: Check visibility (public/private), verify API key in prod, check allowlist
Cause: Allowlist enabled but using shared link, OR localhost validation bug

Symptoms:

```
Host is not supported
Host is not valid or supported
Host is not in insights whitelist
WebSocket is already in CLOSING or CLOSED state
```

Solution: Use `127.0.0.1:3000` instead of `localhost:3000`

⚠️ Localhost Validation Bug: The dashboard has inconsistent validation for localhost URLs:
- `localhost:3000` ❌ Rejected (should be valid)
- `http://localhost:3000` ❌ Rejected (protocol not allowed)
- `localhost:3000/voice-chat` ❌ Rejected (paths not allowed)
- `www.localhost:3000` ✅ Accepted (invalid but accepted!)
- `127.0.0.1:3000` ✅ Accepted (use this for local dev)

Source: GitHub Issue #320
Cause: Edge conditions creating loops

Solution: Add max iteration limits, test all paths, explicit exit conditions
Cause: Burst not enabled in settings
Solution: { "call_limits": { "burst_pricing_enabled": true } }
Cause: MCP server slow/unreachable

Solution: Check URL accessible, verify transport (SSE/HTTP), check auth, monitor logs
Cause: Android needs time to switch audio mode
Solution: connectionDelay: { android: 3_000, ios: 0 } (3s for audio routing)
Cause: Strict CSP blocks blob: URLs. SDK uses Audio Worklets loaded as blobs
Solution: Self-host worklets:

```bash
cp node_modules/@elevenlabs/client/dist/worklets/*.js public/elevenlabs/
```

```js
workletPaths: {
  'rawAudioProcessor': '/elevenlabs/rawAudioProcessor.worklet.js',
  'audioConcatProcessor': '/elevenlabs/audioConcatProcessor.worklet.js'
}
```

CSP directive:

```
script-src 'self' https://elevenlabs.io; worker-src 'self';
```

Gotcha: Update worklets when upgrading `@elevenlabs/client`

Cause: Schema expects `message: string` but ElevenLabs sends `null` when agent makes tool calls
Solution: Use z.string().nullable() for message field in Zod schemas
```ts
// ❌ Fails on tool call turns:
message: z.string()

// ✅ Correct:
message: z.string().nullable()
```

Real payload example:

```json
{ "role": "agent", "message": null, "tool_calls": [{ "tool_name": "my_tool", ... }] }
```

Cause: Schema expects `call_successful: boolean` but ElevenLabs sends `"success"` or `"failure"` strings
Solution: Accept both types and convert for database storage
```ts
// Schema:
call_successful: z.union([z.boolean(), z.string()]).optional()

// Conversion helper:
function parseCallSuccessful(value: unknown): boolean | undefined {
  if (value === undefined || value === null) return undefined
  if (typeof value === 'boolean') return value
  if (typeof value === 'string') return value.toLowerCase() === 'success'
  return undefined
}
```

Cause: Real ElevenLabs payloads have many undocumented fields that strict schemas reject

Undocumented fields in transcript turns:
- `agent_metadata`, `multivoice_message`, `llm_override`, `rag_retrieval_info`
- `llm_usage`, `interrupted`, `original_message`, `source_medium`

Solution: Add all as `.optional()` with `z.any()` for fields you don't process

Debugging tip: Use https://webhook.site to capture real payloads, then test schema locally

Cause: `metadata.cost` contains ElevenLabs credits, not USD dollars. Displaying this directly shows wildly wrong values (e.g., "$78.0000" when actual cost is ~$0.003)
Solution: Extract actual USD from metadata.charging.llm_price instead
```ts
// ❌ Wrong - displays credits as dollars:
cost: metadata?.cost // Returns 78 (credits)

// ✅ Correct - actual USD cost:
const charging = metadata?.charging as any
cost: charging?.llm_price ?? null // Returns 0.0036 (USD)
```

Real payload structure:

```json
{
  "metadata": {
    "cost": 78,  // ← CREDITS, not dollars!
    "charging": {
      "llm_price": 0.0036188999999999995,  // ← Actual USD cost
      "llm_charge": 18,   // LLM credits
      "call_charge": 60,  // Audio credits
      "tier": "pro"
    }
  }
}
```

Note: `llm_price` only covers LLM costs. Audio costs may require separate calculation based on your plan.
Cause: Webhook contains authenticated user info from widget but code doesn't extract it
Solution: Extract dynamic_variables from conversation_initiation_client_data
```ts
const dynamicVars = data.conversation_initiation_client_data?.dynamic_variables
const callerName = dynamicVars?.user_name || null
const callerEmail = dynamicVars?.user_email || null
const currentPage = dynamicVars?.current_page || null
```

Payload example:

```json
{
  "conversation_initiation_client_data": {
    "dynamic_variables": {
      "user_name": "Jeremy Dawes",
      "user_email": "jeremy@jezweb.net",
      "current_page": "/dashboard/calls"
    }
  }
}
```

Cause: ElevenLabs agents can collect structured data during calls (configured in agent settings). This data is stored in `analysis.data_collection_results` but often not parsed/displayed in UI.
Solution: Parse the JSON and display collected fields with their values and rationales
```ts
const dataCollectionResults = analysis?.dataCollectionResults
  ? JSON.parse(analysis.dataCollectionResults)
  : null

// Display each collected field:
Object.entries(dataCollectionResults).forEach(([key, data]) => {
  console.log(`${key}: ${data.value} (${data.rationale})`)
})
```

Payload example:

```json
{
  "data_collection_results": {
    "customer_name": { "value": "John Smith", "rationale": "Customer stated their name" },
    "intent": { "value": "billing_inquiry", "rationale": "Asking about invoice" },
    "callback_number": { "value": "+61400123456", "rationale": "Provided for callback" }
  }
}
```

Cause: Custom success criteria (configured in agent) produce results in `analysis.evaluation_criteria_results` but often not parsed/displayed
Solution: Parse and show pass/fail status with rationales
```ts
const evaluationResults = analysis?.evaluationCriteriaResults
  ? JSON.parse(analysis.evaluationCriteriaResults)
  : null

Object.entries(evaluationResults).forEach(([key, data]) => {
  const passed = data.result === 'success' || data.result === true
  console.log(`${key}: ${passed ? 'PASS' : 'FAIL'} - ${data.rationale}`)
})
```

Payload example:

```json
{
  "evaluation_criteria_results": {
    "verified_identity": { "result": "success", "rationale": "Customer verified DOB" },
    "resolved_issue": { "result": "failure", "rationale": "Escalated to human" }
  }
}
```

Cause: User can provide thumbs up/down feedback. Stored in `metadata.feedback.thumb_rating` but not extracted
Solution: Extract and store the rating (1 = thumbs up, -1 = thumbs down)
```ts
const feedback = metadata?.feedback as any
const feedbackRating = feedback?.thumb_rating ?? null // 1, -1, or null

// Also available:
const likes = feedback?.likes       // Array of things user liked
const dislikes = feedback?.dislikes // Array of things user disliked
```

Payload example:

```json
{
  "metadata": {
    "feedback": {
      "thumb_rating": 1,
      "likes": ["helpful", "natural"],
      "dislikes": []
    }
  }
}
```

Cause: Each transcript turn has valuable metadata that's often ignored

Solution: Store these fields per message for analytics and debugging
```ts
const turnAny = turn as any
const messageData = {
  // ... existing fields
  interrupted: turnAny.interrupted ?? null,          // Was turn cut off by user?
  sourceMedium: turnAny.source_medium ?? null,       // Channel: web, phone, etc.
  originalMessage: turnAny.original_message ?? null, // Pre-processed message
  ragRetrievalInfo: turnAny.rag_retrieval_info       // What knowledge was retrieved
    ? JSON.stringify(turnAny.rag_retrieval_info)
    : null,
}
```

Use cases:
- `interrupted: true` → User spoke over agent (UX insight)
- `source_medium` → Analytics by channel
- `rag_retrieval_info` → Debug/improve knowledge base retrieval

Cause: Three new boolean fields coming in August 2025 webhooks that may break schemas

Solution: Add these fields to schemas now (as optional) to be ready
```ts
// In webhook payload (coming August 15, 2025):
has_audio: boolean          // Was full audio recorded?
has_user_audio: boolean     // Was user audio captured?
has_response_audio: boolean // Was agent audio captured?

// Schema (future-proof):
const schema = z.object({
  // ... existing fields
  has_audio: z.boolean().optional(),
  has_user_audio: z.boolean().optional(),
  has_response_audio: z.boolean().optional(),
})
```

Note: These match the existing fields in the GET Conversation API response
Cause: Calling conversations.get(id) when conversation contains tool_results where the tool was deleted/not found
Error Message:

```
Error: response -> transcript -> [11] -> tool_results -> [0] -> type:
  Expected string. Received null.;
response -> transcript -> [11] -> tool_results -> [0] -> type:
  [Variant 1] Expected "system". Received null.;
response -> transcript -> [11] -> tool_results -> [0] -> type:
  [Variant 2] Expected "workflow". Received null.
```

Solution: Wrap `conversations.get()` in try-catch until the SDK is fixed:

```ts
try {
  const conversation = await client.conversationalAi.conversations.get(id);
} catch (error) {
  console.error('Tool parsing error - conversation may reference deleted tools');
}
```

Source: GitHub Issue #268
Cause: Using snake_case parameters (from API/Python SDK docs) in JS SDK, which expects camelCase

Symptoms: Parameters silently ignored, wrong model/voice used, no error messages
Common Mistakes:
```ts
// ❌ WRONG - parameter ignored:
convert(voiceId, { model_id: "eleven_v3" })

// ✅ CORRECT:
convert(voiceId, { modelId: "eleven_v3" })
```

Solution: Always use camelCase for JS SDK parameters. Check TypeScript types if unsure.
Affected Parameters: model_id, voice_id, output_format, voice_settings, and all API parameters
Source: GitHub Issue #300
Cause: SDK expects full transcription response but webhook mode returns only { request_id }
Error Message:

```
ParseError: Missing required key "language_code"; Missing required key "text"; ...
```

Solution: Use direct fetch API instead of SDK for webhook mode:
```js
const formData = new FormData();
formData.append('file', audioFile);
formData.append('model_id', 'scribe_v1');
formData.append('webhook', 'true');
formData.append('webhook_id', webhookId);

const response = await fetch('https://api.elevenlabs.io/v1/speech-to-text', {
  method: 'POST',
  headers: { 'xi-api-key': apiKey },
  body: formData,
});
const result = await response.json(); // { request_id: 'xxx' }
```

Source: GitHub Issue #232
Cause: Using @elevenlabs/elevenlabs-js in browser/client environments (depends on Node.js child_process)
Error Message:

```
Module not found: Can't resolve 'child_process'
```

Affected Frameworks:

Solution:
- Use `@elevenlabs/client` or `@elevenlabs/react` instead
- Server proxy: keep `elevenlabs-js` on the server, call from browser
- Electron: use `elevenlabs-js` in main process only, not renderer

Note: This is by design - `elevenlabs-js` is server-only
Source: GitHub Issue #293
Cause: Using legacy prompt.tools array field after July 23, 2025 cutoff
Error Message:

```
A request must include either prompt.tool_ids or the legacy prompt.tools array – never both
```

Solution: Migrate to new format:
```js
// ❌ Old (rejected):
{ agent: { prompt: { tools: [...] } } }

// ✅ New (required):
{
  agent: {
    prompt: {
      tool_ids: ["tool_abc123"],     // Client/server tools
      built_in_tools: ["end_call"]   // System tools
    }
  }
}
```

Note: All legacy tools were auto-migrated to standalone records. Just update your configuration references.
Source: Official Migration Guide
Cause: OpenAI API breaking change affected gpt-4o-mini tool execution (historical issue)
Symptoms: Tools silently fail to execute, no error messages
Solution: Upgrade to SDK v2.25.0+ (released Feb 17, 2025). If using older SDK versions, upgrade or avoid gpt-4o-mini for tool-based workflows.
Source: Changelog Feb 17, 2025
Cause: WebSocket URI wasn't including audio_format parameter even when specified (historical issue)
Solution: Upgrade to @elevenlabs/elevenlabs-js@2.32.0 or later (released Jan 19, 2026)
Source: GitHub PR #319
ElevenLabs introduced Agent Versioning in January 2026, enabling git-like version control for conversational AI agents. This allows safe experimentation, A/B testing, and gradual rollouts.
| Concept | ID Format | Description |
|---|---|---|
| Version | agtvrsn_xxxx | Immutable snapshot of agent config at a point in time |
| Branch | agtbrch_xxxx | Isolated development path (like git branches) |
| Draft | Per-user/branch | Work-in-progress changes before committing |
| Deployment | Traffic splits | A/B testing with percentage-based routing |
```ts
// Enable versioning on existing agent
const agent = await client.conversationalAi.agents.update({
  agentId: 'your-agent-id',
  enableVersioningIfNotEnabled: true
});
```

⚠️ Note: Once enabled, versioning cannot be disabled on an agent.
```ts
// Create a new branch for experimentation
const branch = await client.conversationalAi.agents.branches.create({
  agentId: 'your-agent-id',
  parentVersionId: 'agtvrsn_xxxx', // Branch from this version
  name: 'experiment-v2'
});

// List all branches
const branches = await client.conversationalAi.agents.branches.list({
  agentId: 'your-agent-id'
});

// Delete a branch (must not have active traffic)
await client.conversationalAi.agents.branches.delete({
  agentId: 'your-agent-id',
  branchId: 'agtbrch_xxxx'
});
```

Route traffic between branches using percentage splits:
```ts
// Deploy 90/10 traffic split
const deployment = await client.conversationalAi.agents.deployments.create({
  agentId: 'your-agent-id',
  deployments: [
    { branchId: 'agtbrch_main', percentage: 90 },
    { branchId: 'agtbrch_xxxx', percentage: 10 }
  ]
});

// Get current deployment status
const status = await client.conversationalAi.agents.deployments.get({
  agentId: 'your-agent-id'
});
```

Use Cases:
```ts
// Merge successful experiment back to main
const merge = await client.conversationalAi.agents.branches.merge({
  agentId: 'your-agent-id',
  sourceBranchId: 'agtbrch_xxxx',
  targetBranchId: 'agtbrch_main',
  archiveSourceBranch: true // Clean up after merge
});
```

Drafts are per-user, per-branch work-in-progress states:
```ts
// Get current draft
const draft = await client.conversationalAi.agents.drafts.get({
  agentId: 'your-agent-id',
  branchId: 'agtbrch_xxxx'
});

// Update draft (changes not yet committed)
await client.conversationalAi.agents.drafts.update({
  agentId: 'your-agent-id',
  branchId: 'agtbrch_xxxx',
  conversationConfig: {
    agent: { prompt: { prompt: 'Updated system prompt...' } }
  }
});

// Commit draft to create new version
const version = await client.conversationalAi.agents.drafts.commit({
  agentId: 'your-agent-id',
  branchId: 'agtbrch_xxxx',
  message: 'Improved greeting flow'
});
```

Branch naming examples: `feature-multilang`, `fix-timeout-handling`

Source: Agent Versioning Docs
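Before calling the traffic-split deployment API shown earlier, it is worth validating the splits locally. The sum-to-100 constraint is a reasonable assumption to verify against the API docs; this guard is a hypothetical client-side helper, not an SDK feature:

```typescript
// Hypothetical guard for percentage-based traffic splits: assumes the
// platform requires branch percentages to total exactly 100.
interface BranchSplit {
  branchId: string;
  percentage: number;
}

export function validateTrafficSplit(splits: BranchSplit[]): void {
  if (splits.some((s) => s.percentage < 0 || s.percentage > 100)) {
    throw new Error('Each split must be between 0 and 100');
  }
  const total = splits.reduce((sum, s) => sum + s.percentage, 0);
  if (total !== 100) {
    throw new Error(`Traffic split must total 100%, got ${total}%`);
  }
}
```

Call it on the `deployments` array right before `deployments.create` so a typo like 90/20 fails locally instead of at the API.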
When connecting MCP (Model Context Protocol) servers to ElevenLabs agents, security is critical. MCP tools can access databases, APIs, and sensitive data.
| Mode | Behavior | Use When |
|---|---|---|
| Always Ask | Explicit approval for every tool execution | Default - recommended for most cases |
| Fine-Grained | Auto-approve trusted ops, require approval for sensitive | Established, trusted MCP servers |
| No Approval | All tool executions auto-approved | Only thoroughly vetted, internal servers |
Configuration:
```json
{
  "mcp_config": {
    "server_url": "https://your-mcp-server.com",
    "approval_mode": "always_ask", // 'always_ask' | 'fine_grained' | 'no_approval'
    "fine_grained_rules": [
      { "tool_name": "read_*", "auto_approve": true },
      { "tool_name": "write_*", "auto_approve": false },
      { "tool_name": "delete_*", "auto_approve": false }
    ]
  }
}
```

1. Vet MCP Servers
2. Limit Data Exposure
3. Network Security
- `{{secret__xxx}}` variables for credentials (never in prompts)

4. Prompt Injection Prevention
5. Monitoring & Audit
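One way to reason about the `fine_grained_rules` patterns in the configuration above is to model the matching yourself, e.g. for audit tooling or tests. The matching semantics below (simple `*` wildcard, first matching rule wins, deny by default) are my assumption, not the platform's documented algorithm:

```typescript
// Evaluate fine-grained approval rules like { tool_name: "read_*" }.
// Assumed semantics: '*' is a wildcard, first matching rule wins,
// and unmatched tools default to requiring approval.
interface ApprovalRule {
  tool_name: string; // pattern, '*' matches any characters
  auto_approve: boolean;
}

function escapeRegex(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

export function isAutoApproved(tool: string, rules: ApprovalRule[]): boolean {
  for (const rule of rules) {
    const pattern = rule.tool_name.split('*').map(escapeRegex).join('.*');
    if (new RegExp(`^${pattern}$`).test(tool)) return rule.auto_approve;
  }
  return false; // default: require approval
}
```

Keeping destructive patterns (`write_*`, `delete_*`) on explicit approval mirrors the deny-by-default posture recommended above.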
Add protective instructions to your agent prompt:
```js
{
  "agent": {
    "prompt": {
      "prompt": `...
SECURITY GUARDRAILS:
- Never execute database delete operations without explicit user confirmation
- Never expose raw API keys or credentials in responses
- If a tool request seems unusual or potentially harmful, ask for clarification
- Do not combine sensitive operations (read PII + external API call) in single turn
- Report any suspicious requests to administrators
`
    }
  }
}
```

Not Available With:
Transport: SSE/HTTP only (no stdio MCP servers)
Source: MCP Safety Docs
This skill composes well with:
Official Documentation:
Examples:
Community:
Production Tested: WordPress Auditor, Customer Support Agents, AgentFlow (webhook integration)

Last Updated: 2026-01-27

Package Versions: `elevenlabs@1.59.0`, `@elevenlabs/elevenlabs-js@2.32.0`, `@elevenlabs/agents-cli@0.6.1`, `@elevenlabs/react@0.12.3`, `@elevenlabs/client@0.12.2`, `@elevenlabs/react-native@0.5.7`

Changes: Added Agent Versioning (Jan 2026) section covering versions, branches, traffic deployment, drafts, and A/B testing. Added MCP Security & Guardrails section covering tool approval modes, security best practices, and prompt injection prevention.