Use when writing or reviewing C# code in this repo that calls Deepgram Speech-to-Text for prerecorded or live transcription. Covers `ClientFactory.CreateListenRESTClient()` with `TranscribeUrl` / `TranscribeFile`, and `ClientFactory.CreateListenWebSocketClient()` with `Connect`, `Subscribe`, and `Send`. Use `deepgram-dotnet-audio-intelligence` for summaries/sentiment/topics overlays, `deepgram-dotnet-conversational-stt` for Flux-specific work (not fully surfaced in this SDK), and `deepgram-dotnet-voice-agent` for full-duplex assistants.
Basic transcription for prerecorded audio (REST) or live audio (WebSocket).
Use a different skill when:

- Summaries, sentiment, or topic overlays on `/listen` → `deepgram-dotnet-audio-intelligence`.
- Flux-specific work (not fully surfaced in this SDK) → `deepgram-dotnet-conversational-stt`.
- Full-duplex assistants → `deepgram-dotnet-voice-agent`.

Install:

```shell
dotnet add package Deepgram
```

Setup:

```csharp
using Deepgram;

Library.Initialize();

// Reads DEEPGRAM_API_KEY by default.
var client = ClientFactory.CreateListenRESTClient();
```

The SDK also accepts `DEEPGRAM_ACCESS_TOKEN` via `DeepgramHttpClientOptions` / `DeepgramWsClientOptions`. In this repo, async methods return `Task<T>` but do not use an `Async` suffix.
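If you need to pass credentials explicitly rather than rely on environment variables, a minimal sketch follows. The exact factory overloads and the `DeepgramHttpClientOptions` constructor shape may differ between SDK versions, so treat these signatures as assumptions:

```csharp
using Deepgram;
using Deepgram.Models.Authenticate.v1;

Library.Initialize();

// Assumed overloads: pass the API key directly instead of reading
// DEEPGRAM_API_KEY from the environment.
var options = new DeepgramHttpClientOptions("YOUR_DEEPGRAM_API_KEY");
var client = ClientFactory.CreateListenRESTClient("YOUR_DEEPGRAM_API_KEY", options);
```
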
Transcribe a hosted file by URL:

```csharp
using Deepgram;
using Deepgram.Models.Listen.v1.REST;

Library.Initialize();

var client = ClientFactory.CreateListenRESTClient();

var response = await client.TranscribeUrl(
    new UrlSource("https://dpgr.am/bueller.wav"),
    new PreRecordedSchema()
    {
        Model = "nova-3",
        SmartFormat = true,
        Punctuate = true,
        Keyterm = new List<string> { "Bueller" },
    });

Console.WriteLine(response.Results.Channels[0].Alternatives[0].Transcript);
```

Transcribe a local file:

```csharp
using Deepgram.Models.Listen.v1.REST;

var client = ClientFactory.CreateListenRESTClient();

var audioData = File.ReadAllBytes("audio.wav");
var response = await client.TranscribeFile(
    audioData,
    new PreRecordedSchema()
    {
        Model = "nova-3",
        Punctuate = true,
    });
```

You can also call `TranscribeFile(Stream source, ...)` when you already have a `Stream`.
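The `Stream` overload can look like the sketch below. The schema options mirror the byte-array variant; the `File.OpenRead` source path is a placeholder:

```csharp
using Deepgram.Models.Listen.v1.REST;

var client = ClientFactory.CreateListenRESTClient();

// Stream overload: avoids buffering the whole file in memory first.
await using var stream = File.OpenRead("audio.wav");
var response = await client.TranscribeFile(
    stream,
    new PreRecordedSchema()
    {
        Model = "nova-3",
        SmartFormat = true,
    });
```
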
Live transcription from a microphone:

```csharp
using Deepgram;
using Deepgram.Microphone;
using Deepgram.Models.Listen.v2.WebSocket;

Library.Initialize();

var liveClient = ClientFactory.CreateListenWebSocketClient();

// Register handlers before calling Connect.
await liveClient.Subscribe(new EventHandler<ResultResponse>((sender, e) =>
{
    var transcript = e.Channel.Alternatives[0].Transcript;
    if (!string.IsNullOrWhiteSpace(transcript))
    {
        Console.WriteLine(transcript);
    }
}));

bool connected = await liveClient.Connect(new LiveSchema()
{
    Model = "nova-3",
    Encoding = "linear16",
    SampleRate = 16000,
    InterimResults = true,
    UtteranceEnd = "1000",
    VadEvents = true,
});

if (connected)
{
    var microphone = new Microphone(liveClient.Send);
    microphone.Start();
    Console.ReadKey();
    microphone.Stop();
    await liveClient.Stop();
}
```

Schema options:

- REST `PreRecordedSchema`: `Model`, `Language`, `Encoding`, `SampleRate`, `Punctuate`, `SmartFormat`, `Keywords`, `Keyterm`, `Utterances`, `Paragraphs`, `Numerals`, `MultiChannel`, `Replace`, `Search`, `Tag`, `Version`.
- WebSocket `LiveSchema`: `Model`, `Encoding`, `SampleRate`, `Channels`, `InterimResults`, `UtteranceEnd`, `VadEvents`, `Punctuate`, `SmartFormat`, `Endpointing`, `Keywords`, `Keyterm`, `NoDelay`.
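For long prerecorded jobs you can have Deepgram POST the finished transcript to a webhook instead of holding the request open. A hedged sketch using `TranscribeUrlCallBack` — the parameter order, the webhook URL, and the `RequestId` property on the async response are assumptions to verify against the SDK source:

```csharp
using Deepgram.Models.Listen.v1.REST;

var client = ClientFactory.CreateListenRESTClient();

// Sketch: the CallBack variant returns quickly; Deepgram later POSTs the
// transcript to the callback URL (hypothetical endpoint shown).
var response = await client.TranscribeUrlCallBack(
    new UrlSource("https://dpgr.am/bueller.wav"),
    "https://example.com/deepgram-webhook",
    new PreRecordedSchema()
    {
        Model = "nova-3",
    });

// Assumed property: correlate the webhook delivery with this request.
Console.WriteLine(response.RequestId);
```
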
Key source files:

- `Deepgram/ClientFactory.cs`
- `Deepgram/Clients/Listen/v1/REST/Client.cs`
- `Deepgram/Clients/Listen/v2/WebSocket/Client.cs`
- `Deepgram/Models/Listen/v1/REST/PreRecordedSchema.cs`
- `Deepgram/Models/Listen/v2/WebSocket/LiveSchema.cs`
- https://context7.com/deepgram/deepgram-dotnet-sdk/llmstxt/developers_deepgram_llms_txt

Gotchas:

- No `Async` suffix: `await client.TranscribeUrl(...)`, not `TranscribeUrlAsync(...)`.
- Prerecorded models live in `Deepgram.Models.Listen.v1.REST`; live is `Deepgram.Models.Listen.v2.WebSocket`.
- Register `Subscribe(...)` handlers before `Connect(...)`.
- Raw audio needs `Encoding` + `SampleRate`. Wrong declarations produce bad transcripts or server errors.
- `Keyterm` is guarded: `Listen.v2.WebSocket.Client.Connect` throws if you use `Keyterm` with a non-nova-3 model.
- Async jobs use `TranscribeUrlCallBack` / `TranscribeFileCallBack`; sync methods reject `CallBack` in `PreRecordedSchema`.
- `Deepgram.Microphone` is optional. It is a helper package/project, not required for file or URL transcription.

Runnable examples:

- `examples/speech-to-text/rest/url/Program.cs`
- `examples/speech-to-text/rest/file/Program.cs`
- `examples/speech-to-text/websocket/file/Program.cs`
- `examples/speech-to-text/websocket/http/Program.cs`
- `examples/speech-to-text/websocket/microphone/Program.cs`
- `tests/edge_cases/stt_v1_client_example/`

For cross-language Deepgram product knowledge — the consolidated API reference, documentation finder, focused runnable recipes, third-party integration examples, and MCP setup — install the central skills:
```shell
npx skills add deepgram/skills
```

This SDK ships language-idiomatic code skills; `deepgram/skills` ships cross-language product knowledge (see `api`, `docs`, `recipes`, `examples`, `starters`, `setup-mcp`).