Use when writing or reviewing JavaScript/TypeScript in this repo that calls Deepgram Conversational STT v2 / Flux (`/v2/listen`) for turn-aware streaming transcription. Covers `client.listen.v2.createConnection()` / `connect()`, Flux models, and turn events like `TurnInfo`. Use `deepgram-js-speech-to-text` for standard v1 ASR and `deepgram-js-voice-agent` for full-duplex assistants. Triggers include "flux", "v2 listen", "conversational STT", "turn detection", "end of turn", "EOT", and "listen.v2".
Turn-aware streaming STT via `/v2/listen` for conversational audio, with explicit turn events such as `TurnInfo` and `Connected`.

Use a different skill when:

- You need standard v1 ASR: `deepgram-js-speech-to-text`.
- You are building a full-duplex voice assistant: `deepgram-js-voice-agent`.
- You need audio intelligence analysis: `deepgram-js-audio-intelligence`.

Client setup:

```js
require("dotenv").config();
const { DeepgramClient } = require("@deepgram/sdk");
const deepgramClient = new DeepgramClient({
apiKey: process.env.DEEPGRAM_API_KEY,
});
```

From `examples/26-transcription-live-websocket-v2.ts`:

```js
const deepgramConnection = await deepgramClient.listen.v2.createConnection({
model: "flux-general-en",
});
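// Register handlers before calling connect(); createConnection() is lazy.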
deepgramConnection.on("message", (data) => {
if (data.type === "Connected") {
console.log("Connected:", data);
} else if (data.type === "TurnInfo") {
console.log("Turn Info:", data);
} else if (data.type === "FatalError") {
deepgramConnection.close();
}
});
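// Start the socket, then wait for it to be open before streaming audio.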
deepgramConnection.connect();
await deepgramConnection.waitForOpen();
// Swap this for a live mic capture in real apps; the repo example uses
// `createReadStream` over a sample file.
const { createReadStream } = require("node:fs");
const audioStream = createReadStream("samples/spacewalk.wav");
audioStream.on("data", (chunk) => {
deepgramConnection.sendMedia(chunk);
});
audioStream.on("end", () => {
deepgramConnection.sendCloseStream({ type: "CloseStream" });
});
```

Key files and parameters:

- `src/api/resources/listen/resources/v2/client/Client.ts`: query parameters `model`, `encoding`, `sample_rate`, `eager_eot_threshold`, `eot_threshold`, `eot_timeout_ms`, `keyterm`, `tag`, `mip_opt_out` (illustrated in the sketch below).
- `src/api/resources/listen/resources/v2/client/Socket.ts`: `sendMedia(...)`, `sendCloseStream(...)`, `sendListenV2Configure(...)`, `waitForOpen()`.
- `src/CustomClient.ts`: `createConnection(...)` alias and a Node-only `ping(...)` helper.
- The `ListenV2Model` type only exposes `"flux-general-en"`; `flux-general-multi` / `language_hint` are not surfaced in this repo's generated types today.
- `reference.md` does not currently document `listen.v2`; use `src/CustomClient.ts` and `src/api/resources/listen/resources/v2/client/{Client,Socket}.ts` as the in-repo source of truth.
- `/llmstxt/developers_deepgram_llms_txt`: `/v2/listen` is WebSocket only.

Gotchas:

- The generated types only accept `flux-general-en`. Treat multilingual Flux support as a product capability not yet reflected in this SDK surface.
- Use `sendCloseStream`, not `sendFinalize`. Finalize is the v1 pattern.
- `ping()` is Node-only; `src/CustomClient.ts` throws in browsers because browser WS ping frames are not user-exposed.
- `createConnection()` is lazy. Call `connect()` after registering handlers.
- Omit `encoding` for containerized audio. Keep it for raw PCM/Opus streams.
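As a rough sketch only, here is how the parameters above might be combined for a raw PCM stream. The parameter and method names come from `Client.ts`, `Socket.ts`, and `src/CustomClient.ts`, but the specific values (linear16 at 16 kHz, threshold and timeout numbers, the keyterm) and the bare `ping()` keepalive call are illustrative assumptions rather than repo-verified usage; check the generated types before copying.

```js
// Hedged sketch: tune turn detection for a raw PCM stream.
// Assumes the `deepgramClient` created in the setup block above.
const tunedConnection = await deepgramClient.listen.v2.createConnection({
  model: "flux-general-en",
  encoding: "linear16", // keep encoding/sample_rate for raw PCM; omit for containerized audio
  sample_rate: 16000,
  eot_threshold: 0.7, // end-of-turn confidence gate (illustrative value)
  eot_timeout_ms: 5000, // force end of turn after this much silence (illustrative value)
  keyterm: "Deepgram", // boost a domain term (exact shape for repeated keyterms: check the types)
});

tunedConnection.on("message", (data) => {
  if (data.type === "TurnInfo") {
    console.log("Turn:", data);
  }
});

// createConnection() is lazy: handlers first, then connect.
tunedConnection.connect();
await tunedConnection.waitForOpen();

// Node-only keepalive; assumed to hang off the connection object (see src/CustomClient.ts).
// It throws in browsers because WS ping frames are not user-exposed there.
const keepalive = setInterval(() => tunedConnection.ping(), 5000);

// ...stream raw PCM chunks with tunedConnection.sendMedia(chunk)...

// When the audio source ends:
clearInterval(keepalive);
tunedConnection.sendCloseStream({ type: "CloseStream" });
```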
Examples in this repo:

- `examples/26-transcription-live-websocket-v2.ts`
- `examples/27-deepgram-session-header.ts`

For cross-language Deepgram product knowledge — the consolidated API reference, documentation finder, focused runnable recipes, third-party integration examples, and MCP setup — install the central skills:

```sh
npx skills add deepgram/skills
```

This SDK ships language-idiomatic code skills; deepgram/skills ships cross-language product knowledge (see api, docs, recipes, examples, starters, setup-mcp).