Use when writing or reviewing Go code in this repo that synthesizes audio with Speak v1 REST or Speak WebSockets. Route transcription work to deepgram-go-speech-to-text, voice conversation runtime work to deepgram-go-voice-agent, and repository maintenance work to deepgram-go-maintaining-sdk.
Use this skill for pkg/client/speak work (Speak v1 REST and Speak WebSocket synthesis).

Use a different skill when:

- Transcription work: deepgram-go-speech-to-text
- Voice conversation runtime work: deepgram-go-voice-agent
- Repository maintenance work: deepgram-go-maintaining-sdk

Set DEEPGRAM_API_KEY before creating Speak clients:

```sh
export DEEPGRAM_API_KEY="your_api_key"
```

Use the repo's env-backed client defaults instead of embedding secrets in code.
REST synthesis to file:

```go
package main

import (
	"context"
	"log"

	api "github.com/deepgram/deepgram-go-sdk/v3/pkg/api/speak/v1/rest"
	interfaces "github.com/deepgram/deepgram-go-sdk/v3/pkg/client/interfaces"
	speak "github.com/deepgram/deepgram-go-sdk/v3/pkg/client/speak"
)

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}

func run() error {
	ctx := context.Background()

	// NewRESTWithDefaults reads DEEPGRAM_API_KEY from the environment.
	client := speak.NewRESTWithDefaults()
	dg := api.New(client)

	// ToSave synthesizes the text and writes the resulting audio to hello.wav.
	if _, err := dg.ToSave(
		ctx,
		"hello.wav",
		"Hello from the Deepgram Go SDK.",
		&interfaces.SpeakOptions{Model: "aura-2-thalia-en"},
	); err != nil {
		return err
	}
	return nil
}
```

Streaming synthesis with callbacks or channels:
```go
package main

import (
	"context"
	"fmt"
	"log"

	speakws "github.com/deepgram/deepgram-go-sdk/v3/pkg/api/speak/v1/websocket"
	interfaces "github.com/deepgram/deepgram-go-sdk/v3/pkg/client/interfaces"
	speak "github.com/deepgram/deepgram-go-sdk/v3/pkg/client/speak"
)

func main() {
	if err := run(); err != nil {
		log.Fatal(err)
	}
}

func run() error {
	ctx := context.Background()

	// The channel handler surfaces audio and events on Go channels.
	handler := speakws.NewDefaultChanHandler()
	conn, err := speak.NewWSUsingChanWithDefaults(
		ctx,
		&interfaces.WSSpeakOptions{Model: "aura-2-thalia-en"},
		handler,
	)
	if err != nil {
		return err
	}
	defer conn.Stop()

	// Connect returns a bool, not an error.
	if ok := conn.Connect(); !ok {
		return fmt.Errorf("connect failed")
	}
	conn.Start()

	if err := conn.SpeakWithText("Streaming TTS from Go."); err != nil {
		return err
	}

	// The handler receives binary audio and flow-control events.
	// Flush asks the server to finalize and send any buffered audio.
	if err := conn.Flush(); err != nil {
		return err
	}
	return nil
}
```

Key option types:

- interfaces.SpeakOptions: Model, Encoding, Container, SampleRate
- interfaces.WSSpeakOptions: Model plus streaming audio format settings

API surface:

- REST (pkg/api/speak/v1/rest): ToStream, ToFile, ToSave; build the message API with api.New(client)
- WebSocket: SpeakWithText, Speak, Flush, Reset; Connect() returns bool, and shutdown is Stop()

Client constructors:

- speak.NewRESTWithDefaults() / speak.NewREST(...)
- speak.NewWSUsingCallback...
- speak.NewWSUsingChan...

Key files:

- README.md
- docs.go
- pkg/client/speak/client.go
- pkg/client/speak/v1/rest/client.go
- pkg/client/speak/v1/websocket/client_callback.go
- pkg/client/speak/v1/websocket/client_channel.go
- pkg/client/interfaces/v1/types-speak.go

Reference documentation:

- https://developers.deepgram.com/openapi.yaml
- https://developers.deepgram.com/asyncapi.yaml
- /llmstxt/developers_deepgram_llms_txt
- https://developers.deepgram.com/reference/text-to-speech/speak-request
- https://developers.deepgram.com/reference/text-to-speech/speak-streaming
- https://developers.deepgram.com/docs/tts-models

Runnable examples:

- examples/text-to-speech/rest/file/hello-world/main.go
- examples/text-to-speech/websocket/simple_channel/main.go
- examples/text-to-speech/websocket/simple_callback/main.go

For cross-language Deepgram product knowledge (the consolidated API reference, documentation finder, focused runnable recipes, third-party integration examples, and MCP setup), install the central skills:

```sh
npx skills add deepgram/skills
```

This SDK ships language-idiomatic code skills; deepgram/skills ships cross-language product knowledge (see api, docs, recipes, examples, starters, setup-mcp).