Use when writing or reviewing JavaScript/TypeScript in this repo that calls Deepgram audio analytics overlays on `/v1/listen` - summarize, topics, intents, sentiment, diarize, redact, detect_language, and entity detection. Same endpoint as plain STT, different params. Covers REST via `client.listen.v1.media.transcribeUrl` / `transcribeFile` and the WebSocket-supported subset on `client.listen.v1.createConnection()` / `connect()`. Use `deepgram-js-speech-to-text` for plain transcription and `deepgram-js-text-intelligence` for analytics on already-transcribed text. Triggers include "audio intelligence", "summarize audio", "diarize", "sentiment from audio", "redact PII", and "detect language audio".
Analytics overlays applied to /v1/listen: summaries, topics, intents, sentiment, language detection, diarization, redaction, entities. Same client surface as STT; turn features on with parameters.
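Conceptually, an overlay request is just the plain STT options bag plus extra flags. A hedged sketch (plain objects only, not sent to the API; note that `summarize` takes a version string rather than a boolean, and `redact` accepts an array of categories in practice):

```typescript
// Sketch only: plain STT options vs. the same bag with analytics overlays.
const sttOptions = {
  model: "nova-3",
  smart_format: true,
};

const intelligenceOptions = {
  ...sttOptions,
  summarize: "v2",        // versioned, not a boolean
  topics: true,
  sentiment: true,
  redact: ["pci", "ssn"], // arrays work in practice
};

console.log(Object.keys(intelligenceOptions).length); // 6
```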
Use a different skill when:

- `deepgram-js-speech-to-text` — plain transcription without analytics
- `deepgram-js-text-intelligence` — analytics on already-transcribed text
- `deepgram-js-conversational-stt`
- `deepgram-js-voice-agent`

| Feature | REST | WSS |
|---|---|---|
| `diarize` | yes | yes |
| `redact` | yes | yes |
| `detect_entities` | yes | yes |
| `punctuate`, `smart_format` | yes | yes |
| `summarize` | yes | no (not in current WSS connect args) |
| `topics` | yes | no |
| `intents` | yes | no |
| `sentiment` | yes | no |
| `detect_language` | yes | no |
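The table can be enforced mechanically. A hypothetical helper (not part of the SDK) that drops the REST-only flags from an options bag before opening a socket:

```typescript
// Hypothetical helper (not in the SDK): strip REST-only intelligence flags
// before passing options to a WebSocket connection, per the table above.
const REST_ONLY_FLAGS = new Set([
  "summarize",
  "topics",
  "intents",
  "sentiment",
  "detect_language",
]);

function toWssOptions(options: Record<string, unknown>): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(options).filter(([key]) => !REST_ONLY_FLAGS.has(key))
  );
}

// A REST options bag narrowed for the live socket: sentiment and topics are
// dropped; model, diarize, and redact pass through unchanged.
const wssOptions = toWssOptions({
  model: "nova-3",
  diarize: true,
  sentiment: true,
  topics: true,
  redact: "pci",
});
```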
```typescript
require("dotenv").config();
const { DeepgramClient } = require("@deepgram/sdk");

const deepgramClient = new DeepgramClient({
  apiKey: process.env.DEEPGRAM_API_KEY,
});
```

From examples/22-transcription-advanced-options.ts:
```typescript
const data = await deepgramClient.listen.v1.media.transcribeUrl({
  url: "https://dpgr.am/spacewalk.wav",
  model: "nova-3",
  language: "en",
  punctuate: true,
  paragraphs: true,
  utterances: true,
  smart_format: true,
  sentiment: true,
  topics: true,
  custom_topic: "custom_topic",
  custom_topic_mode: "extended",
  intents: true,
  custom_intent: "custom_intent",
  custom_intent_mode: "extended",
  detect_entities: true,
  detect_language: true,
  diarize: true,
  keyterm: ["keyword1", "keyword2"],
  redact: ["pci", "ssn"],
});
```

Start from examples/07-transcription-live-websocket.ts and keep the same socket flow, but only use WSS-supported intelligence flags such as `diarize`, `redact`, and `detect_entities` in the connection args.
```typescript
const deepgramConnection = await deepgramClient.listen.v1.createConnection({
  model: "nova-3",
  diarize: true,
  redact: "pci",
  detect_entities: true,
});
```

Intelligence parameters: `summarize`, `topics`, `intents`, `sentiment`, `detect_language`, `detect_entities`, `diarize`, `redact`, `custom_topic`, `custom_topic_mode`, `custom_intent`, `custom_intent_mode`.

Core transcription parameters: `model`, `language`, `encoding`, `sample_rate`, `punctuate`, `smart_format`, `utterances`, `paragraphs`, `multichannel`. Use `keyterm`, not `keywords`.

References: reference.md → Listen V1 Media; WSS subset behavior lives in src/CustomClient.ts and src/api/resources/listen/resources/v1/client/{Client,Socket}.ts. See also /llmstxt/developers_deepgram_llms_txt.

Gotchas:

- `summarize` on /v1/listen is versioned, not a plain boolean. The generated REST surface and examples point at "v2".
- The current WSS connect args do not accept `topics`, `intents`, `sentiment`, `summarize`, or `detect_language`.
- `redact` typing is looser in practice than in the generated alias. Examples pass arrays like `["pci", "ssn"]`, even though `ListenV1Redact` itself is just a string alias.
- Use `keyterm` for Nova-3 biasing. examples/22-transcription-advanced-options.ts explicitly notes keywords are not supported for Nova-3.
- `nova-3` is the safest choice when mixing many overlays.

Example files:

- examples/22-transcription-advanced-options.ts
- examples/04-transcription-prerecorded-url.ts
- examples/05-transcription-prerecorded-file.ts
- examples/07-transcription-live-websocket.ts

For cross-language Deepgram product knowledge — the consolidated API reference, documentation finder, focused runnable recipes, third-party integration examples, and MCP setup — install the central skills:
```shell
npx skills add deepgram/skills
```

This SDK ships language-idiomatic code skills; deepgram/skills ships cross-language product knowledge (see api, docs, recipes, examples, starters, setup-mcp).
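Finally, a worked sketch of consuming diarized output. The field paths a response carries (`results.channels[0].alternatives[0].words`, each word bearing a `speaker` index) are assumptions based on typical /v1/listen payloads — verify against reference.md before relying on them:

```typescript
// Group a diarized word list into per-speaker turns. The Word shape below is
// an assumption about the response payload, not a type exported by the SDK.
interface Word {
  word: string;
  speaker?: number;
}

function speakerTurns(words: Word[]): { speaker: number; text: string }[] {
  const turns: { speaker: number; text: string }[] = [];
  for (const w of words) {
    const speaker = w.speaker ?? 0;
    const last = turns[turns.length - 1];
    if (last && last.speaker === speaker) {
      last.text += ` ${w.word}`; // same speaker: extend the current turn
    } else {
      turns.push({ speaker, text: w.word }); // speaker change: start a new turn
    }
  }
  return turns;
}

// Mock words as a diarized response might return them.
const words: Word[] = [
  { word: "hello", speaker: 0 },
  { word: "there", speaker: 0 },
  { word: "hi", speaker: 1 },
];
console.log(speakerTurns(words));
// [ { speaker: 0, text: 'hello there' }, { speaker: 1, text: 'hi' } ]
```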