Use when writing or reviewing Python code in this repo that calls Deepgram audio analytics overlays on `/v1/listen` - summarize, topics, intents, sentiment, diarize, redact, detect_language, entity detection. Same endpoint as plain STT but with analytics params. Covers both REST (`client.listen.v1.media.transcribe_url`/`transcribe_file`) and the WSS-supported subset (`client.listen.v1.connect`). Use `deepgram-python-speech-to-text` for plain transcription, `deepgram-python-text-intelligence` for analytics on already-transcribed text. Triggers include "diarize", "summarize audio", "sentiment from audio", "redact PII", "topic detection audio", "audio intelligence", "detect language audio".
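The analytics overlays named above are all query parameters layered onto the same `/v1/listen` endpoint used for plain transcription. A minimal sketch of building such a request URL, assuming Deepgram's documented boolean-parameter convention (`diarize=true`, etc.) and the `summarize=v2` value; the helper name is illustrative, not part of the SDK:

```python
from urllib.parse import urlencode

def build_listen_url(base="https://api.deepgram.com/v1/listen", **overlays):
    """Append analytics-overlay query params to the /v1/listen endpoint.

    Booleans are lowered to "true"/"false" as Deepgram's REST API expects.
    """
    params = {k: (str(v).lower() if isinstance(v, bool) else v)
              for k, v in overlays.items()}
    return f"{base}?{urlencode(params)}"

url = build_listen_url(summarize="v2", topics=True, sentiment=True,
                       diarize=True, detect_language=True)
print(url)
# e.g. https://api.deepgram.com/v1/listen?summarize=v2&topics=true&...
```

The SDK methods (`transcribe_url`, `transcribe_file`) accept these as options rather than a raw URL; the sketch only shows which knobs the overlays correspond to at the wire level.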
Score: 86

Quality: 82%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run.

Advisory
Suggest reviewing before use.

Quality

Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines its scope (Deepgram audio analytics overlays), lists specific capabilities, provides explicit trigger terms, and carefully delineates boundaries with related skills. The description is information-dense without being padded, uses third-person voice appropriately, and would allow Claude to confidently select or reject this skill from a large pool.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: summarize, topics, intents, sentiment, diarize, redact, detect_language, entity detection. Also specifies exact API paths and client methods (REST and WSS). | 3 / 3 |
| Completeness | Clearly answers both 'what' (audio analytics overlays on /v1/listen including summarize, topics, intents, sentiment, etc.) and 'when' (explicit 'Use when' clause at the start plus explicit trigger terms at the end). Also provides boundary guidance distinguishing it from related skills. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'diarize', 'summarize audio', 'sentiment from audio', 'redact PII', 'topic detection audio', 'audio intelligence', 'detect language audio'. These are realistic phrases a developer would use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Explicitly distinguishes itself from related skills ('deepgram-python-speech-to-text' for plain transcription, 'deepgram-python-text-intelligence' for text analytics), creating clear boundaries. The focus on audio analytics overlays on a specific endpoint is a well-defined niche. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable skill with excellent executable code examples covering the REST and WSS paths, clear feature-availability tables, and useful gotchas. Its main weaknesses are moderate verbosity (the diarization section is particularly expansive), a lack of error handling and validation checkpoints, especially around sensitive operations such as PII redaction, and references to bundle files (reference.md, example files) that do not exist in the bundle.
Suggestions

- Add error handling and validation checkpoints: at minimum, check the response status and verify that redaction output contains redacted tokens before considering the operation successful.
- Trim the diarization section: the per-word field table and groupby example could move to a separate reference file, keeping just the core diarize=True example inline.
- Include the referenced 'reference.md' file in the bundle, or remove the reference to avoid broken navigation.
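The validation checkpoint suggested above can be sketched in a few lines. This is a hypothetical helper, not part of the Deepgram SDK; the response field path follows Deepgram's documented prerecorded-response shape, and the exact redaction marker (`[REDACTED]` vs. masked asterisks) should be confirmed against a live response:

```python
def redaction_applied(response: dict) -> bool:
    """Fail closed: True only if the transcript contains a redaction marker."""
    try:
        transcript = (response["results"]["channels"][0]
                      ["alternatives"][0]["transcript"])
    except (KeyError, IndexError, TypeError):
        return False  # malformed or empty response: treat as not redacted
    # Marker format is an assumption; verify against the actual API output.
    return "REDACTED" in transcript or "****" in transcript

ok = redaction_applied({"results": {"channels": [{"alternatives": [
    {"transcript": "Card ending [REDACTED]."}]}]}})
```

Gating on a check like this before declaring success addresses the Workflow Clarity gap called out below without adding much length to the skill.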
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient with good use of tables and code examples, but includes some unnecessary explanation (e.g., the 'When to use this product' section rehashes routing logic Claude can infer, the diarization section is quite lengthy with a full field table, and the 'Central product skills' section at the bottom adds marginal value). The gotchas section is valuable but could be tighter. | 2 / 3 |
| Actionability | Provides fully executable, copy-paste-ready code examples for REST URL, REST file, diarization with word-level timings, and WSS. Parameter names are specific, response field access paths are concrete, and the redact type clarification with fallback guidance is highly actionable. | 3 / 3 |
| Workflow Clarity | The skill presents clear individual code examples but lacks validation checkpoints: there is no error handling, no guidance on checking response status, and no feedback loop for failed transcriptions or invalid parameters. For operations involving PII redaction (a sensitive/destructive operation), the absence of verification steps (e.g., confirming redaction actually occurred in the output) is a gap. | 2 / 3 |
| Progressive Disclosure | References to external docs, OpenAPI/AsyncAPI specs, and related skills are well organized in the 'API reference (layered)' section. However, the skill itself is quite long (~180 lines) with inline content that could be split out (e.g., the detailed diarization word-field table, the full WSS availability matrix). No bundle files are provided despite references to 'reference.md' and example files. | 2 / 3 |
| Total | | 9 / 12 (Passed) |
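For reference, the "groupby example" that the suggestions propose moving to a separate file amounts to only a few lines. A minimal sketch, assuming each word object in the diarized response carries `word` and `speaker` keys (per Deepgram's documented diarization output; verify against a live response):

```python
from itertools import groupby

def speaker_turns(words):
    """Collapse consecutive same-speaker words into (speaker, text) turns."""
    return [(speaker, " ".join(w["word"] for w in group))
            for speaker, group in groupby(words, key=lambda w: w["speaker"])]

turns = speaker_turns([
    {"word": "hello", "speaker": 0},
    {"word": "there", "speaker": 0},
    {"word": "hi",    "speaker": 1},
])
# -> [(0, "hello there"), (1, "hi")]
```

Keeping only the core `diarize=True` call inline and pushing this post-processing into a reference file would improve both Conciseness and Progressive Disclosure scores.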
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed. Validation for skill structure reported no warnings or errors.