
groq-core-workflow-b

Execute Groq secondary workflows: audio transcription (Whisper), vision, text-to-speech, and batch model evaluation. Trigger with phrases like "groq whisper", "groq transcription", "groq audio", "groq vision", "groq TTS", "groq speech".

67

Quality: 82%

Does it follow best practices?

Impact: No eval scenarios have been run

Security by Snyk

Advisory: Suggest reviewing before use


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that concisely communicates specific capabilities (audio transcription, vision, TTS, batch evaluation) within the Groq platform context. It includes explicit trigger phrases covering natural user language variations and is clearly distinguishable from other skills. The description uses proper third-person voice and avoids vague language.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: audio transcription (Whisper), vision, text-to-speech, and batch model evaluation. These are distinct, named capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (execute Groq secondary workflows: audio transcription, vision, TTS, batch evaluation) and 'when' (explicit trigger phrases provided). The 'Trigger with phrases like...' clause serves as an explicit 'Use when' equivalent. | 3 / 3 |
| Trigger Term Quality | Includes natural trigger terms users would say: 'groq whisper', 'groq transcription', 'groq audio', 'groq vision', 'groq TTS', 'groq speech'. These cover common variations of how users would phrase requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with the 'Groq' qualifier and specific modality terms (Whisper, TTS, vision). The combination of platform name + specific workflow types makes it very unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **12 / 12** |

Passed

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid reference skill with excellent, executable code examples covering multiple Groq secondary workflows. Its main weaknesses are the monolithic structure (all features inline rather than split into focused sub-files) and the lack of validation/error-handling within the workflow steps themselves. The content could be tightened by removing explanatory framing and splitting per-feature examples into referenced files.

Suggestions

Add inline error handling and validation to code examples (e.g., check file exists before streaming, verify transcription result is non-empty) to improve workflow clarity.

Split per-feature code examples (audio, vision, TTS, benchmarking) into separate referenced files to improve progressive disclosure and reduce the monolithic body.

Remove the Overview paragraph's marketing-style framing ('ultra-fast', '216x real-time') and let the model table speak for itself to improve conciseness.
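The first suggestion could be sketched roughly as follows. The helper names, the size limit, and the overall structure are illustrative assumptions, not code taken from the skill; the Groq API call itself is wrapped in a function so nothing here depends on network access or an API key.

```python
from pathlib import Path

# Assumed upload ceiling for illustration; check Groq's current limits.
MAX_FILE_SIZE_MB = 25


def validate_audio_file(path: str) -> Path:
    """Checkpoint 1: confirm the audio file exists and is within the
    size limit before opening a stream to the API."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"Audio file not found: {path}")
    if p.stat().st_size > MAX_FILE_SIZE_MB * 1024 * 1024:
        raise ValueError(f"File exceeds {MAX_FILE_SIZE_MB} MB limit: {path}")
    return p


def validate_transcription(text: str) -> str:
    """Checkpoint 2: verify the transcription result is non-empty
    before handing it to downstream steps."""
    if not text or not text.strip():
        raise RuntimeError("Transcription returned an empty result")
    return text.strip()


def transcribe(path: str) -> str:
    """Transcription workflow with both validation checkpoints added."""
    from groq import Groq  # assumes the groq SDK is installed

    audio = validate_audio_file(path)
    client = Groq()  # reads GROQ_API_KEY from the environment
    with audio.open("rb") as f:
        result = client.audio.transcriptions.create(
            file=f,
            model="whisper-large-v3",
        )
    return validate_transcription(result.text)
```

Factoring the checks into small helpers keeps the happy-path call readable while giving the agent explicit failure points (missing file, oversized file, empty result) instead of opaque API errors.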

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is mostly efficient with good code examples, but includes some unnecessary explanation (e.g., the Overview paragraph explaining Whisper speed stats, 'Beyond chat completions' framing). The model table and error table are useful, but some inline comments in code are redundant for Claude. | 2 / 3 |
| Actionability | Fully executable TypeScript and Python code examples covering all described workflows—transcription, translation, vision, TTS, and benchmarking. Code is copy-paste ready with proper imports, types, and realistic parameters. | 3 / 3 |
| Workflow Clarity | Steps are clearly labeled and sequenced, but the 'steps' are really independent features rather than a sequential workflow. There are no validation checkpoints—no verification that transcription succeeded, no error handling within the code examples, and no feedback loops for failures like file-too-large scenarios. | 2 / 3 |
| Progressive Disclosure | The content is fairly long (~170 lines of substantive content) and could benefit from splitting detailed code examples into separate files. The error table and model reference tables could be externalized. There is one reference to 'groq-common-errors' and external doc links, but the main body is monolithic. | 2 / 3 |
| **Total** | | **9 / 12** |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11** |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

