Use when writing or reviewing Python code in this repo that calls Deepgram Speech-to-Text v1 (`/v1/listen`) for prerecorded or live audio transcription. Covers `client.listen.v1.media.transcribe_url` / `transcribe_file` (REST) and `client.listen.v1.connect` (WebSocket). Use this skill for basic ASR; use `deepgram-python-audio-intelligence` for summarize/sentiment/topics/diarize overlays, `deepgram-python-conversational-stt` for turn-taking v2/Flux, and `deepgram-python-voice-agent` for full-duplex assistants. Triggers include "transcribe", "live transcription", "speech to text", "STT", "listen endpoint", "nova-3", "listen.v1".
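As a quick illustration of the REST path the description names (`client.listen.v1.media.transcribe_url`), here is a minimal sketch. The client constructor, the `model` parameter, and the response shape are assumptions based on Deepgram's documented conventions, not verified against this SDK version — check the generated reference before relying on them.

```python
# Minimal sketch of prerecorded transcription via the REST method named in the
# skill description. Constructor, parameter names, and response shape are
# assumptions -- verify against the SDK's generated API reference.
import os

def transcribe_url(audio_url: str, model: str = "nova-3") -> str:
    from deepgram import DeepgramClient  # assumed import path

    client = DeepgramClient(api_key=os.environ["DEEPGRAM_API_KEY"])
    response = client.listen.v1.media.transcribe_url(url=audio_url, model=model)
    # Assumed response shape: first channel, first alternative
    return response.results.channels[0].alternatives[0].transcript

if __name__ == "__main__" and os.environ.get("DEEPGRAM_API_KEY"):
    print(transcribe_url("https://dpgr.am/spacewalk.wav"))
```

The API call only runs when `DEEPGRAM_API_KEY` is set, so the sketch can be imported and inspected without credentials.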
Overall score: 94
Does it follow best practices? 92%
Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Quality

Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its scope (Deepgram STT v1 for prerecorded/live transcription), provides explicit trigger terms, and carefully distinguishes itself from related skills. The inclusion of specific API methods, endpoint paths, and boundary guidance with sibling skills makes it highly effective for skill selection among a large pool of options.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists specific, concrete actions and API methods: `transcribe_url` and `transcribe_file` (REST), `client.listen.v1.connect` (WebSocket), covering both prerecorded and live audio transcription. Also explicitly names the endpoint `/v1/listen`. | 3 / 3 |
| Completeness | Clearly answers both "what" (Python code calling Deepgram STT v1 for prerecorded/live transcription via REST and WebSocket) and "when" (explicit "Use when" clause at the start, plus trigger terms and boundary conditions distinguishing it from related skills). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: "transcribe", "live transcription", "speech to text", "STT", "listen endpoint", "nova-3", "listen.v1". These cover both casual and technical variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Exceptionally distinctive: explicitly delineates boundaries with three related skills (audio-intelligence, conversational-stt, voice-agent), specifying exactly which features belong to each. This makes conflict with sibling skills very unlikely. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation: 85%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-structured skill that provides excellent actionable guidance for Deepgram STT integration. Its main strength is the comprehensive, executable code examples covering all major use patterns (REST, WebSocket, async, callback). The primary weakness is moderate verbosity — some sections like the interim/final semantics explanation and the async/deferred patterns could be tightened without losing clarity, and a few inline comments explain things Claude already knows.
Suggestions
Trim the inline code comments that explain Python concepts Claude already knows (e.g., 'Mutable container so the on_message closure can update state without `global`') to save tokens.
Consider condensing the 'Async / deferred result patterns' section — the distinction between Python async/await and webhook callbacks could be conveyed more concisely without the lengthy prose explanations.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally well structured, but with some unnecessary verbosity: the interim vs. final flag semantics section is quite detailed, the async/deferred patterns section is lengthy with a comparison table, and some explanatory prose (e.g., "Mutable container so the on_message closure can update state without `global`") explains things Claude already knows. The gotchas section is valuable but could be tighter. | 2 / 3 |
| Actionability | Excellent throughout: every major use case (REST URL, REST file, WebSocket, async, callback) has fully executable, copy-paste-ready Python code with correct imports and realistic parameters. The code examples are complete and specific, not pseudocode. | 3 / 3 |
| Workflow Clarity | Clearly sequences multi-step processes: authentication → choose REST vs. WebSocket → execute → handle results. The WebSocket example includes explicit cleanup steps (send_finalize before close), the gotchas section serves as validation checkpoints, and the interim/final flag semantics provide clear decision logic for handling streaming results. | 3 / 3 |
| Progressive Disclosure | Content is well layered: quick starts at the top, detailed patterns in the middle, and references to external files (example scripts, tests, reference.md, related skills) at the bottom. The "API reference (layered)" section provides clear one-level-deep pointers to Fern-generated docs, OpenAPI specs, and product docs. Related skills are clearly signaled at the top with trigger conditions. | 3 / 3 |
| Total | | 11 / 12 — Passed |
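The interim/final decision logic the table credits can be sketched independently of the SDK. The field names (`is_final`, `speech_final`, `channel.alternatives`) follow Deepgram's documented streaming result payloads, but the exact message shape here is an assumption; the simulated messages stand in for what a real `on_message` handler would receive.

```python
# Sketch of collecting finalized transcripts from a streaming session:
# keep only messages where is_final is True, and treat speech_final as the
# end of an utterance. Message shape is an assumption modeled on Deepgram's
# documented streaming payloads.
finals = {"parts": [], "utterances": []}  # mutable state shared with the closure

def on_message(result: dict) -> None:
    alt = result["channel"]["alternatives"][0]
    if not alt["transcript"]:
        return  # ignore empty results (e.g., silence)
    if result.get("is_final"):
        finals["parts"].append(alt["transcript"])
        if result.get("speech_final"):
            # Utterance complete: join the finalized parts and reset
            finals["utterances"].append(" ".join(finals["parts"]))
            finals["parts"].clear()

# Simulated stream: one interim update, then a final that closes the utterance
on_message({"channel": {"alternatives": [{"transcript": "hello"}]},
            "is_final": False})
on_message({"channel": {"alternatives": [{"transcript": "hello world"}]},
            "is_final": True, "speech_final": True})
print(finals["utterances"])  # → ['hello world']
```

Interim results are discarded rather than accumulated, which is why the interim "hello" never appears in the output: each final message supersedes the interims that preceded it.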
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 passed, no warnings or errors.