Local speech-to-text with the Whisper CLI (no API key).
Install with Tessl CLI
npx tessl i github:qsimeon/openclaw-engaging --skill openai-whisper76
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Evaluation — 91%
0.91x agent success when using this skill
Validation for skill structure
CLI syntax and output control

| Check | Without context | With context |
| --- | --- | --- |
| Uses whisper CLI | 100% | 100% |
| output_format txt flag | 100% | 100% |
| output_dir flag | 100% | 100% |
| model flag present | 100% | 100% |
| Correct model for accuracy | 100% | 0% |
| Iterates over files | 100% | 100% |
| No API key required | 100% | 100% |
Without context: $0.2726 · 1m 6s · 19 turns · 25 in / 3,603 out tokens
With context: $0.2662 · 49s · 17 turns · 20 in / 2,880 out tokens
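The checks above (txt output, output directory, model selection, iterating over files) can be sketched as a small batch-transcription loop. This assumes the openai-whisper CLI is installed (`pip install -U openai-whisper`); the `audio/` folder, `transcripts/` output directory, and the `transcribe` helper name are hypothetical:

```shell
# Batch-transcribe every .m4a file in a folder to plain text.
transcribe() {
  # "medium" trades speed for accuracy; txt output avoids timestamped formats
  whisper "$1" --model medium --output_format txt --output_dir transcripts
}

for f in audio/*.m4a; do
  [ -e "$f" ] || continue   # skip the unexpanded glob when no files match
  transcribe "$f"
done
```

No API key is involved: the model weights are downloaded once and inference runs locally.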
Translation and subtitle generation

| Check | Without context | With context |
| --- | --- | --- |
| Uses whisper CLI | 100% | 100% |
| translate task flag | 100% | 100% |
| SRT output format | 100% | 100% |
| output_dir flag | 100% | 100% |
| model flag present | 100% | 0% |
| No API key required | 100% | 100% |
| Handles M4A files | 100% | 100% |
Without context: $0.2014 · 46s · 14 turns · 21 in / 2,608 out tokens
With context: $0.2833 · 51s · 20 turns · 22 in / 2,857 out tokens
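The translate-to-subtitles scenario combines the `--task translate` and `--output_format srt` flags checked above. A minimal sketch, assuming openai-whisper is installed; `interview.m4a`, the `subs/` directory, and the `make_subs` helper are hypothetical names:

```shell
# Translate non-English speech to English SRT subtitles.
make_subs() {
  # --task translate emits English regardless of the source language;
  # srt output produces numbered, timestamped subtitle blocks
  whisper "$1" --task translate --model medium --output_format srt --output_dir subs
}

if [ -e interview.m4a ]; then
  make_subs interview.m4a
fi
```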
Model size selection for speed vs accuracy

| Check | Without context | With context |
| --- | --- | --- |
| Uses whisper CLI | 100% | 100% |
| model flag in fast mode | 100% | 100% |
| model flag in accurate mode | 100% | 100% |
| Different models per mode | 100% | 100% |
| model flag present both modes | 100% | 100% |
| USAGE.md trade-offs documented | 100% | 100% |
| No API key required | 100% | 100% |
Without context: $0.2914 · 58s · 19 turns · 25 in / 3,479 out tokens
With context: $0.3103 · 1m 1s · 20 turns · 26 in / 3,301 out tokens
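The fast/accurate mode switch above can be sketched as a tiny helper that maps a mode to a model size. The model names (`tiny`, `base`, `medium`) are real whisper sizes; the mode labels and the `pick_model` helper are illustrative:

```shell
# Map a user-facing mode to a whisper model size.
pick_model() {
  case "$1" in
    fast)     echo tiny   ;;  # smallest model: quickest, least accurate
    accurate) echo medium ;;  # larger model: slower, noticeably more accurate
    *)        echo base   ;;  # middle-ground default
  esac
}

# Usage (talk.mp3 is a hypothetical file):
# whisper talk.mp3 --model "$(pick_model accurate)" --output_format txt --output_dir out
```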
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.