Transcribe audio via the OpenAI Audio Transcriptions API (Whisper).

- Overall score: 76
- Quality: 70% (does it follow best practices?)
- Impact: 91% (2.27x), average score across 3 eval scenarios
- Status: Passed, no known issues

Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./openclaw/skills/openai-whisper-api/SKILL.md
```

## Speaker-hinted transcription with custom output path
| Check | Without context | With context |
| --- | --- | --- |
| Uses transcribe.sh | 0% | 100% |
| No direct curl call | 100% | 100% |
| Speaker prompt flag | 0% | 100% |
| Custom output path flag | 0% | 100% |
| Output extension correct | 50% | 100% |
| Script accepts audio file argument | 50% | 100% |
| API key reference | 0% | 100% |
| Model not incorrectly overridden | 100% | 100% |
| transcripts/ directory created | 100% | 100% |
Without context: $0.1331 · 38s · 11 turns · 16 in / 2,030 out tokens
With context: $0.3534 · 1m 17s · 20 turns · 278 in / 4,422 out tokens
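The checks above imply a particular call shape for `transcribe.sh`. A dry-run sketch follows; the flag names `--prompt` and `--output` are assumptions inferred from the check names, not confirmed by the skill's documentation, and the command is printed rather than executed:

```shell
# Dry-run sketch only: --prompt and --output are assumed flag names,
# inferred from the eval checks, not documented by the skill itself.
audio="standup.ogg"                       # example input file
out="transcripts/standup.txt"             # custom output path, .txt extension
mkdir -p "$(dirname "$out")"              # the check expects transcripts/ to exist
cmd="./transcribe.sh --prompt 'Speakers: Alice, Bob' --output $out $audio"
echo "$cmd"                               # print rather than run the command
```

The speaker hint maps onto the API's documented `prompt` parameter, which Whisper uses to bias spelling of names and terms.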
## Language-hinted batch transcription

| Check | Without context | With context |
| --- | --- | --- |
| Uses transcribe.sh | 0% | 100% |
| No direct curl call | 100% | 100% |
| Language flag used | 50% | 100% |
| Loops over .ogg files | 100% | 100% |
| Default output naming | 46% | 0% |
| API key documented | 0% | 100% |
| Directory argument | 100% | 100% |
| Model not incorrectly overridden | 0% | 100% |
| openclaw config mentioned | 0% | 0% |
Without context: $0.2222 · 1m 9s · 15 turns · 22 in / 3,784 out tokens
With context: $0.2983 · 1m 19s · 16 turns · 1,173 in / 4,266 out tokens
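The batch scenario can be sketched as a loop over `.ogg` files with default output naming (same stem, `.txt` extension). The `--language` flag name is an assumption, and commands are echoed rather than run, so no API key or audio is needed:

```shell
# Dry-run sketch: --language is an assumed flag name; commands are echoed,
# not executed, so no API key or real audio files are required.
dir="recordings"
mkdir -p "$dir"
: > "$dir/call1.ogg"                     # placeholder inputs for the dry run
: > "$dir/call2.ogg"
for f in "$dir"/*.ogg; do
  out="${f%.ogg}.txt"                    # default output naming: same stem, .txt
  echo "./transcribe.sh --language es $f -o $out"
done
```

The language hint corresponds to the API's documented `language` parameter (an ISO-639-1 code), which improves accuracy and latency when the input language is known.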
## JSON-format transcription for data pipelines

| Check | Without context | With context |
| --- | --- | --- |
| Uses transcribe.sh | 0% | 100% |
| No direct curl call | 0% | 100% |
| JSON flag used | 0% | 100% |
| Output path flag | 0% | 100% |
| Output extension is .json | 100% | 100% |
| API key from environment | 100% | 100% |
| openclaw config mentioned | 0% | 0% |
| Audio file as argument | 100% | 100% |
| call_transcripts/ directory created | 100% | 100% |
Without context: $0.1823 · 48s · 13 turns · 20 in / 2,989 out tokens
With context: $0.2802 · 1m 2s · 18 turns · 277 in / 3,766 out tokens
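For reference, the underlying request the script presumably wraps might look like the following; the skill itself is expected to go through `transcribe.sh` rather than call curl directly. `response_format=json` is the API's documented parameter; the file names here are examples only:

```shell
# Reference sketch of the direct API call (the skill should wrap this in
# transcribe.sh). Skips the request unless a key and input file are present.
mkdir -p call_transcripts                          # the check expects this directory
if [ -n "${OPENAI_API_KEY:-}" ] && [ -f call.ogg ]; then
  curl -s https://api.openai.com/v1/audio/transcriptions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -F file=@call.ogg \
    -F model=whisper-1 \
    -F response_format=json \
    -o call_transcripts/call.json                  # .json extension per the check
fi
```

Reading the key from `$OPENAI_API_KEY` matches the "API key from environment" check; hardcoding credentials in the script would fail it.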