Skill description:

> Use when writing or reviewing JavaScript/TypeScript in this repo that calls Deepgram Conversational STT v2 / Flux (`/v2/listen`) for turn-aware streaming transcription. Covers `client.listen.v2.createConnection()` / `connect()`, Flux models, and turn events like `TurnInfo`. Use `deepgram-js-speech-to-text` for standard v1 ASR and `deepgram-js-voice-agent` for full-duplex assistants. Triggers include "flux", "v2 listen", "conversational STT", "turn detection", "end of turn", "EOT", and "listen.v2".
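To make the flow named in the description concrete, here is a minimal sketch of the create → register handlers → connect → stream → close sequence. The method names (`createConnection`-style factory, `connect`, `sendCloseStream`) come from the skill text; the `FluxSocketStub` class, the event names, and all signatures below are illustrative stand-ins, not the real `@deepgram/sdk` surface:

```typescript
// Illustrative shim standing in for the real SDK socket; the actual
// @deepgram/sdk types and event names are not reproduced here.
type Handler = (payload: unknown) => void;

class FluxSocketStub {
  readonly calls: string[] = [];
  private handlers = new Map<string, Handler>();

  // Register handlers BEFORE connecting, so no early events are missed.
  on(event: string, handler: Handler): void {
    this.calls.push(`on:${event}`);
    this.handlers.set(event, handler);
  }

  connect(): void {
    this.calls.push("connect");
    // Simulate the socket opening and firing the "open" event.
    this.handlers.get("open")?.(undefined);
  }

  sendMedia(chunk: Uint8Array): void {
    this.calls.push(`media:${chunk.length}`);
  }

  // Per the skill's gotcha: end the stream with sendCloseStream,
  // not sendFinalize, so the server can flush the final turn.
  sendCloseStream(): void {
    this.calls.push("closeStream");
  }
}

// The sequence the skill's quick start describes:
// create -> register handlers -> connect -> (open) -> stream audio -> close
const socket = new FluxSocketStub();
socket.on("turnInfo", (turn) => console.log("TurnInfo:", turn));
socket.on("open", () => socket.sendMedia(new Uint8Array(320)));
socket.connect();
socket.sendCloseStream();

console.log(socket.calls.join(" | "));
```

The stub only records call order; its point is the lifecycle sequencing the skill teaches, not the wire protocol.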
Overall score: 90 (88%). Does it follow best practices?

- Impact: Pending. No eval scenarios have been run.
- Quality: Passed. No known issues.
## Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It opens with a clear 'Use when' clause, specifies concrete API methods and models, explicitly lists trigger terms users would naturally say, and proactively disambiguates from related skills to minimize conflict risk. The third-person voice is maintained throughout.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and artifacts: `client.listen.v2.createConnection()` / `connect()`, Flux models, turn events like `TurnInfo`, and clearly names the API endpoint `/v2/listen`. Also distinguishes from related skills by naming specific alternatives. | 3 / 3 |
| Completeness | Clearly answers both "what" (writing/reviewing JS/TS code calling Deepgram Conversational STT v2/Flux for turn-aware streaming transcription) and "when" (explicit "Use when" clause at the start, plus explicit trigger terms listed at the end). Also includes disambiguation guidance for related skills. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms explicitly listed: "flux", "v2 listen", "conversational STT", "turn detection", "end of turn", "EOT", "listen.v2". These are terms a developer would naturally use when working with this specific API. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive: explicitly differentiates itself from two related skills (`deepgram-js-speech-to-text` for v1 ASR and `deepgram-js-voice-agent` for full-duplex assistants), carving out a clear niche for v2/Flux conversational STT specifically. | 3 / 3 |
| **Total** | | 12 / 12 (Passed) |
## Implementation — 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted, actionable skill with executable code examples, clear workflow sequencing, and highly valuable gotchas that address real pitfalls. Its main weaknesses are minor verbosity in cross-referencing sections and the lack of supporting bundle files to back up the progressive disclosure references. The gotchas section is a standout strength, providing specific, non-obvious guidance that genuinely helps Claude avoid common mistakes.
Suggestions:

- Trim the "Central product skills" section to 1-2 lines or move it to a bundle reference file; it's promotional rather than instructional.
- Consider consolidating the 5-item "API reference (layered)" list into a more compact format, since most items are external URLs that could be a simple reference table.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but has some areas that could be tightened: the "When to use this product" section with negative cases is useful but slightly verbose, and the "Central product skills" section at the bottom adds tokens for cross-promotion rather than task execution. The API reference layered section lists 5 external links that could be more compact. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code for authentication and the main quick start flow. Key parameters, socket methods, and gotchas are specific and concrete (e.g., "Close with sendCloseStream, not sendFinalize"). The code references real file paths and real method signatures. | 3 / 3 |
| Workflow Clarity | The quick start demonstrates a clear sequence: create connection → register handlers → connect → wait for open → stream audio → close stream. The gotchas section serves as validation checkpoints (e.g., gotcha #5 about lazy createConnection, #6 about 400 errors). For a WebSocket streaming skill, the workflow is well-sequenced with error handling guidance. | 3 / 3 |
| Progressive Disclosure | References to source files (src/CustomClient.ts, src/api/resources/...) and example files are well-signaled, and external docs are layered. However, there are no bundle files provided, and the skill references a `reference.md` that doesn't document listen.v2, which is a gap. The content is reasonably structured, but the API reference section is somewhat inline-heavy for what could be a separate reference file. | 2 / 3 |
| **Total** | | 10 / 12 (Passed) |
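Gotchas #5 and #6 cited in the workflow reasoning above describe behavior worth making explicit: creating a connection object does not open the socket, and bad options surface asynchronously (e.g., as a 400-style error event) rather than as a synchronous throw. A minimal sketch under those assumptions — the `LazyConnection` class, its option names, and the error message below are all illustrative, not the SDK's actual signatures:

```typescript
// Illustrative model of the "lazy createConnection" gotcha: the
// constructor only builds state; nothing touches the network until connect().
interface ConnectionOptions {
  model: string;
  encoding: string;
  sampleRate: number;
}

class LazyConnection {
  opened = false;
  lastError: string | null = null;
  private errorHandler: ((msg: string) => void) | null = null;

  constructor(private options: ConnectionOptions) {}

  onError(handler: (msg: string) => void): void {
    this.errorHandler = handler;
  }

  connect(): void {
    // A bad option set produces an error event (the skill's gotcha #6
    // mentions 400 responses), not a throw at creation time.
    if (this.options.sampleRate <= 0) {
      this.errorHandler?.("400: invalid sample_rate");
      return;
    }
    this.opened = true;
  }
}

const conn = new LazyConnection({ model: "flux", encoding: "linear16", sampleRate: 0 });
conn.onError((msg) => { conn.lastError = msg; });
console.log("opened after create:", conn.opened); // still false: creation is lazy
conn.connect();
console.log("error:", conn.lastError);
```

The design point is that an error handler must be registered before `connect()`; otherwise the failure is silent, which is exactly the class of pitfall the gotchas section guards against.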
## Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 11 / 11 checks passed. No warnings or errors.
Revision: `c567b98`