Use when writing or reviewing Python code in this repo that calls Deepgram Conversational STT v2 / Flux (`/v2/listen`) for turn-aware streaming transcription. Covers `client.listen.v2.connect(...)`, Flux models, end-of-turn detection. Use `deepgram-python-speech-to-text` for standard v1 ASR, `deepgram-python-voice-agent` for full-duplex interactive assistants. Triggers include "flux", "v2 listen", "conversational STT", "turn detection", "end of turn", "EOT", "listen.v2", "flux-general-en", "flux-general-multi".
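For orientation, here is a rough sketch of the call pattern that description refers to; it is not taken from the skill itself. Only `client.listen.v2.connect(...)`, the `flux-general-en` model name, the string `sample_rate`, and the ~80 ms chunk guidance come from the description and the review below; the import path, parameter and event names, context-manager usage, and the chunking helper are assumptions made for illustration.

```python
# Minimal sketch, assuming the SDK entry point is client.listen.v2.connect(...)
# as the description says. Import path, parameter names, event names, and the
# handler-registration style are assumptions; check the SDK and skill docs.
from deepgram import DeepgramClient  # import path assumed


def iter_chunks(path: str, chunk_bytes: int):
    """Yield fixed-size chunks of raw PCM audio (helper written for this sketch)."""
    with open(path, "rb") as f:
        while chunk := f.read(chunk_bytes):
            yield chunk


client = DeepgramClient()  # assumed to read DEEPGRAM_API_KEY from the environment

# The skill's gotchas say sample_rate is a string and audio should go out in
# ~80 ms chunks; at 16 kHz, 16-bit mono linear16 that is 16000 * 2 * 0.08 = 2560 bytes.
with client.listen.v2.connect(
    model="flux-general-en",
    encoding="linear16",   # parameter name assumed
    sample_rate="16000",   # a string, per the skill's gotchas
) as connection:
    # Hypothetical event names; register turn and error handlers before streaming.
    connection.on("end_of_turn", lambda event: print("turn:", event))
    connection.on("error", lambda err: print("error:", err))

    for chunk in iter_chunks("audio.raw", chunk_bytes=2560):
        connection.send(chunk)  # send method name assumed
    # If the SDK does not support the context-manager form assumed here, call
    # its documented close method explicitly instead.
```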
Overall score: 88
Does it follow best practices? 82%
Impact: 100%, 1.19x average score across 3 eval scenarios
Advisory: Suggest reviewing before use
Quality
Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its scope (Deepgram Conversational STT v2/Flux), provides explicit trigger terms, and proactively disambiguates from related skills. It uses third-person voice, includes both 'Use when' guidance and a comprehensive trigger list, and is specific enough to avoid conflicts while remaining discoverable.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and entities: 'client.listen.v2.connect(...)', Flux models, end-of-turn detection, '/v2/listen' endpoint, turn-aware streaming transcription. Also distinguishes from related skills with specific alternatives. | 3 / 3 |
| Completeness | Clearly answers both 'what' (writing/reviewing Python code for Deepgram Conversational STT v2/Flux, covering specific API calls and features) and 'when' (explicit 'Use when' clause at the start, plus explicit trigger list and disambiguation from related skills). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms including abbreviations ('EOT'), API paths ('listen.v2'), model names ('flux-general-en', 'flux-general-multi'), and natural language phrases ('conversational STT', 'turn detection', 'end of turn', 'v2 listen', 'flux'). | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive — explicitly differentiates from 'deepgram-python-speech-to-text' for v1 ASR and 'deepgram-python-voice-agent' for voice agents. The niche is narrowly scoped to v2/Flux conversational STT with clear, unique trigger terms. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with excellent concrete code examples and a thorough gotchas section that captures real-world pitfalls. Its main weaknesses are the lack of error recovery/validation workflows for the streaming connection lifecycle and some verbosity in promotional/cross-reference sections that don't directly help Claude execute the task.
Suggestions
Add explicit error recovery steps: what to do on ListenV2FatalError (reconnect strategy), how to validate the connection is healthy, and a feedback loop for handling dropped connections (a rough sketch of such a reconnect loop follows after this list).
Trim the 'Central product skills' and 'When to use this product' sections — Claude can infer product boundaries from the description metadata, and the npx install block is not actionable in a coding context.
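To make the first suggestion concrete, the sketch below shows one possible shape for such a recovery loop; it is not from the skill. `ListenV2FatalError` and `client.listen.v2.connect(...)` are named in the review and description, while the import paths, backoff values, and the `stream_turns` callback are assumptions for illustration only.

```python
import time

from deepgram import DeepgramClient, ListenV2FatalError  # import paths assumed


def stream_with_retry(stream_turns, max_attempts: int = 3) -> None:
    """Re-open the Flux connection after a fatal error, with simple backoff.

    stream_turns(connection) is a caller-supplied callable (hypothetical) that
    registers handlers and sends audio on an open connection.
    """
    client = DeepgramClient()  # assumed to read DEEPGRAM_API_KEY from the environment
    for attempt in range(1, max_attempts + 1):
        try:
            # Parameter names follow the sketch earlier in this report;
            # sample_rate stays a string, per the skill's gotchas.
            with client.listen.v2.connect(
                model="flux-general-en",
                encoding="linear16",
                sample_rate="16000",
            ) as connection:
                stream_turns(connection)
            return  # clean finish, no retry needed
        except ListenV2FatalError:
            if attempt == max_attempts:
                raise  # surface the error after the final attempt
            # Simple exponential backoff before reconnecting; tune per workload.
            time.sleep(2 ** attempt)
```

A real recovery path would also have to decide how much already-sent audio to replay after reconnecting and how to stitch partial turns back together, which is the kind of guidance the suggestion asks the skill to spell out.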
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary sections like the 'Central product skills' promo block and the 'When to use this product' section which could be trimmed. The 'Related skills' and cross-references add some bloat. However, the core technical content is well-condensed. | 2 / 3 |
| Actionability | Provides fully executable code for both sync and async usage, concrete parameter tables, specific model names, exact type imports, and a complete quick-start example with event handling. The gotchas section contains highly specific, actionable details (e.g., sample_rate is a string, use 80ms chunks, exact close method). | 3 / 3 |
| Workflow Clarity | The quick start shows a clear sequence (connect → register handlers → send audio → close), but there are no explicit validation checkpoints or error recovery steps. For a WebSocket streaming workflow, there's no guidance on what to do when errors occur mid-stream, no reconnection strategy, and the error handler just prints. | 2 / 3 |
| Progressive Disclosure | References to external docs and example files are well-signaled, and the layered API reference section is good. However, there's no bundle to support the references (reference.md is mentioned but not provided), and the skill itself is somewhat long, with inline content that could be split out (e.g., the full parameters table and gotchas could live in a reference file). | 2 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.