Local speech-to-text with the Whisper CLI (no API key).
Install with Tessl CLI
npx tessl i github:qsimeon/openclaw-engaging --skill openai-whisper76
Does it follow best practices?
Evaluation: 91% (0.91x agent success when using this skill).
Validation: skill structure.
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Use whisper to transcribe audio locally.
Quick start
whisper /path/audio.mp3 --model medium --output_format txt --output_dir .
whisper /path/audio.m4a --task translate --output_format srt
Notes
Models are downloaded to ~/.cache/whisper on first run.
--model defaults to turbo on this install.
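The quick-start commands above transcribe one file at a time. For a directory of recordings, the same CLI can be driven from a small shell loop. This is a minimal sketch, assuming the openai-whisper CLI is installed and on PATH; the transcripts directory name is an arbitrary choice, not part of the skill.

```shell
#!/bin/sh
# Batch-transcribe every .mp3 in the current directory (sketch).
# Assumes the `whisper` CLI from openai-whisper is installed.
for f in ./*.mp3; do
  # If the glob matched nothing, "$f" is the literal pattern; skip it.
  [ -e "$f" ] || continue
  whisper "$f" --model medium --output_format txt --output_dir ./transcripts
done
```

Each input produces a matching .txt file in ./transcripts; swap in --output_format srt or --task translate exactly as in the single-file examples.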
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.