Use when writing or reviewing C# code in this repo that builds an interactive Deepgram Voice Agent over WebSocket. Covers `ClientFactory.CreateAgentWebSocketClient()`, `SettingsSchema`, event subscriptions, microphone audio streaming, injected user messages, and function-call-related message types. Use `deepgram-dotnet-text-to-speech` for one-way synthesis and STT skills for transcription-only flows.
Full-duplex voice agent sessions over a single WebSocket.
Use a different skill when:
- You only need transcription: `deepgram-dotnet-speech-to-text` or `deepgram-dotnet-conversational-stt`.
- You only need one-way synthesis: `deepgram-dotnet-text-to-speech`.
- You are working with the Deepgram management API: `deepgram-dotnet-management-api`.

Install:

```bash
dotnet add package Deepgram
dotnet add package Deepgram.Microphone   # only if you need local mic capture
```

Initialize the library and create the client:

```csharp
using Deepgram;
using Deepgram.Models.Authenticate.v1;
Deepgram.Library.Initialize();
var options = new DeepgramWsClientOptions(null, null, true);
// Pass a real API key here (or use CreateAgentWebSocketClient() with the DEEPGRAM_API_KEY
// env var set). An empty string is shown only to make the signature explicit; the
// underlying DeepgramWsClientOptions throws if neither apiKey/accessToken nor the
// DEEPGRAM_API_KEY/DEEPGRAM_ACCESS_TOKEN env var is available.
var agentClient = ClientFactory.CreateAgentWebSocketClient(apiKey: "", options: options);
```

Or rely on the `DEEPGRAM_API_KEY` environment variable:

```csharp
using Deepgram.Models.Agent.v2.WebSocket;
var agentClient = ClientFactory.CreateAgentWebSocketClient();
```

Subscribe to the events you care about before connecting:

```csharp
await agentClient.Subscribe(new EventHandler<ConversationTextResponse>((sender, e) =>
{
Console.WriteLine(e);
}));
await agentClient.Subscribe(new EventHandler<AudioResponse>((sender, e) =>
{
if (e.Stream != null)
{
// WebSocket audio frames are raw linear16 PCM, not a WAV container.
// Save as .raw, or write a WAV header first (see examples/agent/websocket/no_mic/Program.cs).
using (var writer = new BinaryWriter(File.Open("output.raw", FileMode.Append)))
{
writer.Write(e.Stream.ToArray());
}
}
}));
```
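The comment in the `AudioResponse` handler defers WAV output to the `no_mic` example. For completeness, here is a minimal self-contained sketch (not the example's code) that wraps the collected linear16 PCM in a standard 44-byte RIFF/WAV header; the mono/16-bit/24000 Hz values match the audio settings configured below.

```csharp
using System.IO;
using System.Text;

// Sketch: wrap collected linear16 PCM bytes in a minimal WAV container.
// Assumes 16-bit mono PCM at 24000 Hz, matching this skill's settings.
static void SaveAsWav(string path, byte[] pcm, int sampleRate = 24000)
{
    const short channels = 1, bitsPerSample = 16;
    using var w = new BinaryWriter(File.Create(path), Encoding.ASCII);
    w.Write(Encoding.ASCII.GetBytes("RIFF"));
    w.Write(36 + pcm.Length);                           // RIFF chunk size
    w.Write(Encoding.ASCII.GetBytes("WAVE"));
    w.Write(Encoding.ASCII.GetBytes("fmt "));
    w.Write(16);                                        // fmt chunk size (PCM)
    w.Write((short)1);                                  // format 1 = PCM
    w.Write(channels);
    w.Write(sampleRate);
    w.Write(sampleRate * channels * bitsPerSample / 8); // byte rate
    w.Write((short)(channels * bitsPerSample / 8));     // block align
    w.Write(bitsPerSample);
    w.Write(Encoding.ASCII.GetBytes("data"));
    w.Write(pcm.Length);
    w.Write(pcm);
}
```

Call it once after the session ends, passing the bytes you appended to `output.raw`.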
Configure providers and audio, then connect:

```csharp
var settings = new SettingsSchema();
settings.Agent.Think.Provider.Type = "open_ai";
settings.Agent.Think.Provider.Model = "gpt-4o-mini";
settings.Agent.Greeting = "Hello! How can I help you today?";
settings.Agent.Listen.Provider.Type = "deepgram";
settings.Agent.Listen.Provider.Model = "nova-3";
settings.Agent.Speak.Provider.Type = "deepgram";
settings.Agent.Speak.Provider.Model = "aura-2-thalia-en";
settings.Audio.Input.Encoding = "linear16";
settings.Audio.Input.SampleRate = 24000;
settings.Audio.Output.Encoding = "linear16";
settings.Audio.Output.SampleRate = 24000;
bool connected = await agentClient.Connect(settings);
if (connected)
{
await agentClient.SendInjectUserMessage("Say hello in one sentence.");
}
```

To stream a local microphone (requires `Deepgram.Microphone` and a working PortAudio environment):

```csharp
var microphone = new Microphone(
push_callback: (audioData, length) =>
{
byte[] chunk = new byte[length];
Array.Copy(audioData, chunk, length);
agentClient.SendBinary(chunk);
},
rate: 24000,
channels: 1,
format: SampleFormat.Int16);
microphone.Start();
```
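If a session needs to survive silent stretches, `SendKeepAlive()` (listed under the send helpers below) can be sent periodically. A minimal sketch: the 5-second period is an illustrative choice, and the teardown assumes the agent client mirrors the `Connect()`/`Stop()` lifecycle of the SDK's other WebSocket clients; verify both against the interface files listed below.

```csharp
using System;
using System.Threading;

// Sketch: nudge the socket periodically during silence (period is illustrative).
// The lambda discards SendKeepAlive()'s result, so it compiles whether the
// helper returns void or Task in your SDK version.
using var keepAlive = new Timer(
    _ => agentClient.SendKeepAlive(),
    state: null,
    dueTime: TimeSpan.FromSeconds(5),
    period: TimeSpan.FromSeconds(5));

Console.ReadKey();          // run until a key is pressed

// Assumed teardown, mirroring Start()/Connect(); check Microphone.cs and
// IAgentWebSocketClient.cs for the exact lifecycle methods.
microphone.Stop();
await agentClient.Stop();
```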
Settings models:

- `SettingsSchema`
- `Audio` with `Input`, `Output`
- `Agent` with `Listen`, `Think`, `Speak`
- `Provider` (dynamic extra properties supported)

Important events:
- `ConversationTextResponse`
- `AudioResponse`
- `AgentStartedSpeakingResponse`
- `AgentAudioDoneResponse`
- `AgentThinkingResponse`
- `UserStartedSpeakingResponse`
- `FunctionCallRequestResponse`
- `SettingsAppliedResponse`
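Two of the events above are the usual turn-taking hooks: `UserStartedSpeakingResponse` fires on barge-in (a typical reaction is to stop or flush locally buffered agent audio), and `AgentAudioDoneResponse` marks the end of the agent's audio for a turn. A sketch reusing the Subscribe pattern from the quick start:

```csharp
await agentClient.Subscribe(new EventHandler<UserStartedSpeakingResponse>((sender, e) =>
{
    // Barge-in: a typical reaction is to stop/flush local playback here.
    Console.WriteLine("User started speaking");
}));

await agentClient.Subscribe(new EventHandler<AgentAudioDoneResponse>((sender, e) =>
{
    Console.WriteLine("Agent audio for this turn is complete");
}));
```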
Send helpers:

- `SendInjectUserMessage(string)`
- `SendInjectUserMessage(InjectUserMessageSchema)`
- `SendBinary(...)`
- `SendBinaryImmediately(...)`
- `SendKeepAlive()`

Source files:

- `Deepgram/ClientFactory.cs`
- `Deepgram/Clients/Agent/v2/Websocket/Client.cs`
- `Deepgram/Clients/Interfaces/v2/IAgentWebSocketClient.cs`
- `Deepgram/Models/Agent/v2/WebSocket/*.cs`
- `Deepgram.Microphone/Microphone.cs`
- https://context7.com/deepgram/deepgram-dotnet-sdk/llmstxt/developers_deepgram_llms_txt

Notes:

- Use the .NET type names (`SettingsSchema`, `ConversationTextResponse`, etc.), not the Python names.
- `Provider` is dynamic. Extra provider-specific properties are stored through `JsonExtensionData`; set them carefully.
- `FunctionCallRequestResponse` is marked `TODO: this needs to be defined`, so inspect raw payload behavior before relying on typed fields.
- There is no `SendFunctionCallResponse(...)` helper on the public interface. If you need it, send a serialized `FunctionCallResponseSchema` manually via the generic send path (see the sketch after the examples list).
- The snippets above use `linear16` / `24000` for both audio input and output.
- `Deepgram.Microphone` depends on PortAudio. Local microphone examples need the helper project/package and a working PortAudio environment.

Examples:

- `examples/agent/websocket/simple/Program.cs`
- `examples/agent/websocket/no_mic/Program.cs`
- `examples/agent/websocket/arbitrary_keys/Program.cs`
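Because `FunctionCallRequestResponse` is still TODO-typed and no `SendFunctionCallResponse(...)` helper exists, here is a defensive sketch of the function-call flow: log the raw payload, build a `FunctionCallResponseSchema` (assumed here to be a plain settable POCO), and serialize it yourself. The send call itself is left as a comment; take the generic send path's actual name from `IAgentWebSocketClient.cs` rather than from this sketch.

```csharp
using System.Text.Json;

await agentClient.Subscribe(new EventHandler<FunctionCallRequestResponse>((sender, e) =>
{
    // The typed model is marked TODO upstream: dump the raw shape first and
    // only then decide which fields are safe to read.
    Console.WriteLine(JsonSerializer.Serialize(e));

    var reply = new FunctionCallResponseSchema();
    // ...populate the fields your function result needs...
    string json = JsonSerializer.Serialize(reply);

    // No SendFunctionCallResponse(...) exists on the public interface; push
    // `json` through the client's generic send path (check
    // Deepgram/Clients/Interfaces/v2/IAgentWebSocketClient.cs for what is exposed).
}));
```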
For cross-language Deepgram product knowledge (the consolidated API reference, documentation finder, focused runnable recipes, third-party integration examples, and MCP setup), install the central skills:

```bash
npx skills add deepgram/skills
```

This SDK ships language-idiomatic code skills; `deepgram/skills` ships cross-language product knowledge (see `api`, `docs`, `recipes`, `examples`, `starters`, `setup-mcp`).