groq-migration-deep-dive

Migrate from OpenAI/Anthropic/other LLM providers to Groq, or migrate between Groq model generations with zero-downtime traffic shifting. Trigger with phrases like "migrate to groq", "switch to groq", "groq migration", "openai to groq", "groq replatform".

Groq Migration Deep Dive

Current State

!npm list groq-sdk openai @anthropic-ai/sdk 2>/dev/null | grep -E "groq|openai|anthropic" || echo 'No LLM SDKs found'

Overview

Migrate to Groq from OpenAI, Anthropic, or other LLM providers. Groq's OpenAI-compatible API makes migration straightforward -- the primary changes are a different SDK import, different model IDs, and different response metadata. The reward is 10-50x faster inference.

Migration Complexity

| Source | Complexity | Key Changes |
| --- | --- | --- |
| OpenAI | Low | Import, model IDs, base URL -- API shape is identical (see the sketch below) |
| Anthropic | Medium | Different API shape, message format, streaming protocol |
| Local LLMs | Medium | Remove infra, add API calls |
| Other cloud (Bedrock, Vertex) | Medium | Remove cloud SDK, add groq-sdk |
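
Because the API shape is identical, the lowest-touch path for the OpenAI row is to keep the openai SDK and change only the base URL and API key, pointing it at Groq's OpenAI-compatible endpoint. A minimal sketch for a quick proof of concept, before committing to groq-sdk:

// Keep the openai SDK, point it at Groq's OpenAI-compatible endpoint
import OpenAI from "openai";

const groqViaOpenAI = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: "https://api.groq.com/openai/v1",
});

const result = await groqViaOpenAI.chat.completions.create({
  model: "llama-3.3-70b-versatile",
  messages: [{ role: "user", content: "Hello" }],
});

For the long term, the dedicated groq-sdk shown in Step 1 is the cleaner choice.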

Instructions

Step 1: OpenAI to Groq Migration

// BEFORE: OpenAI
import OpenAI from "openai";
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const result = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});

// AFTER: Groq (minimal changes)
import Groq from "groq-sdk";
const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });
const result = await groq.chat.completions.create({
  model: "llama-3.3-70b-versatile",  // or "llama-3.1-8b-instant"
  messages: [{ role: "user", content: "Hello" }],
});

// Same response shape: result.choices[0].message.content

Step 2: Model ID Mapping

// OpenAI → Groq model equivalents
const MODEL_MAP: Record<string, string> = {
  // OpenAI → Groq (quality equivalent)
  "gpt-4o":        "llama-3.3-70b-versatile",
  "gpt-4o-mini":   "llama-3.1-8b-instant",
  "gpt-4-turbo":   "llama-3.3-70b-versatile",
  "gpt-3.5-turbo": "llama-3.1-8b-instant",

  // Anthropic → Groq (approximate)
  "claude-3-5-sonnet": "llama-3.3-70b-versatile",
  "claude-3-haiku":    "llama-3.1-8b-instant",
};

function migrateModelId(model: string): string {
  return MODEL_MAP[model] || "llama-3.3-70b-versatile";
}
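
A usage sketch at the call site, assuming the groq client from Step 1 and that requests still arrive with OpenAI model IDs during the transition:

// Translate legacy model IDs at the boundary before dispatching to Groq
const result = await groq.chat.completions.create({
  model: migrateModelId("gpt-4o-mini"), // → "llama-3.1-8b-instant"
  messages: [{ role: "user", content: "Hello" }],
});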

Step 3: Provider Abstraction Layer

// Build a provider-agnostic layer for zero-downtime migration
import Groq from "groq-sdk";
import OpenAI from "openai";

interface LLMProvider {
  name: string;
  complete(messages: any[], model: string, maxTokens: number): Promise<{
    content: string;
    model: string;
    tokens: { prompt: number; completion: number; total: number };
  }>;
}

class GroqProvider implements LLMProvider {
  name = "groq";
  private client: Groq;

  constructor() {
    this.client = new Groq();
  }

  async complete(messages: any[], model: string, maxTokens: number) {
    const result = await this.client.chat.completions.create({
      model,
      messages,
      max_tokens: maxTokens,
    });

    return {
      content: result.choices[0].message.content || "",
      model: result.model,
      tokens: {
        prompt: result.usage!.prompt_tokens,
        completion: result.usage!.completion_tokens,
        total: result.usage!.total_tokens,
      },
    };
  }
}

class OpenAIProvider implements LLMProvider {
  name = "openai";
  private client: OpenAI;

  constructor() {
    this.client = new OpenAI();
  }

  async complete(messages: any[], model: string, maxTokens: number) {
    const result = await this.client.chat.completions.create({
      model,
      messages,
      max_tokens: maxTokens,
    });

    return {
      content: result.choices[0].message.content || "",
      model: result.model,
      tokens: {
        prompt: result.usage!.prompt_tokens,
        completion: result.usage!.completion_tokens,
        total: result.usage!.total_tokens,
      },
    };
  }
}
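
If the source is Anthropic, the same interface can wrap its Messages API, whose shape differs from the OpenAI-compatible APIs (content blocks instead of choices, input/output token naming, a required max_tokens). A sketch, assuming @anthropic-ai/sdk:

// Anthropic behind the same LLMProvider interface -- note the different response shape
import Anthropic from "@anthropic-ai/sdk";

class AnthropicProvider implements LLMProvider {
  name = "anthropic";
  private client = new Anthropic(); // reads ANTHROPIC_API_KEY

  async complete(messages: any[], model: string, maxTokens: number) {
    const result = await this.client.messages.create({
      model,
      messages,
      max_tokens: maxTokens, // required by Anthropic
    });

    return {
      // Concatenate text blocks; Anthropic returns an array of content blocks
      content: result.content
        .map((block) => (block.type === "text" ? block.text : ""))
        .join(""),
      model: result.model,
      tokens: {
        prompt: result.usage.input_tokens,
        completion: result.usage.output_tokens,
        total: result.usage.input_tokens + result.usage.output_tokens,
      },
    };
  }
}

One practical caveat: Anthropic takes the system prompt as a top-level system parameter, not as a system-role message, so strip it from the messages array before calling.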

Step 4: Feature Flag Traffic Shifting

// Gradually shift traffic from OpenAI to Groq
function getProvider(): LLMProvider {
  const groqPercentage = getFeatureFlag("groq_migration_pct"); // 0-100, from your flag system

  if (Math.random() * 100 < groqPercentage) {
    return new GroqProvider();
  }
  return new OpenAIProvider();
}

// Migration schedule:
// Week 1: groq_migration_pct = 10  (canary)
// Week 2: groq_migration_pct = 50  (validate quality)
// Week 3: groq_migration_pct = 90  (near-complete)
// Week 4: groq_migration_pct = 100 (done, remove OpenAI)
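
getFeatureFlag above stands for whatever flag client the codebase already uses (LaunchDarkly, Unleash, a config service). A minimal env-var-backed stand-in for local testing -- hypothetical, not part of any SDK:

// Hypothetical stand-in: read the rollout percentage from an env var
function getFeatureFlag(name: string): number {
  const pct = Number(process.env[name.toUpperCase()] ?? "0"); // e.g. GROQ_MIGRATION_PCT=10
  return Number.isFinite(pct) ? Math.min(100, Math.max(0, pct)) : 0;
}

If per-user consistency matters, replace Math.random() with a hash of a stable user ID mapped to 0-100, so each user stays on one provider throughout the rollout.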

Step 5: Automated Migration Scanner

set -euo pipefail
echo "=== Migration Assessment ==="

echo ""
echo "--- OpenAI references ---"
# "|| true" keeps pipefail from aborting the script when grep finds no matches
grep -rn "from ['\"]openai['\"]" src/ --include="*.ts" --include="*.js" 2>/dev/null | wc -l || true
grep -rn "openai\." src/ --include="*.ts" --include="*.js" 2>/dev/null | head -5 || true

echo ""
echo "--- Model IDs to migrate ---"
grep -roh "model.*['\"]gpt-[^'\"]*['\"]" src/ --include="*.ts" --include="*.js" 2>/dev/null | sort -u || true

echo ""
echo "--- OpenAI-specific features used ---"
grep -rn "\.images\.\|\.audio\.\|\.embeddings\.\|\.moderations\.\|\.files\.\|\.fine_tuning\." \
  src/ --include="*.ts" --include="*.js" 2>/dev/null || echo "None (chat.completions only -- easy migration)"

echo ""
echo "--- API keys to update ---"
grep -rn "OPENAI_API_KEY" src/ .env* --include="*.ts" --include="*.js" 2>/dev/null | wc -l || true

Step 6: Comparison Benchmark

// Run the same prompts through both providers to compare quality + speed
async function migrationBenchmark(prompts: string[]) {
  const groq = new GroqProvider();
  const openai = new OpenAIProvider();

  for (const prompt of prompts) {
    const messages = [{ role: "user" as const, content: prompt }];

    const startGroq = performance.now();
    const groqResult = await groq.complete(messages, "llama-3.3-70b-versatile", 256);
    const groqMs = performance.now() - startGroq;

    const startOAI = performance.now();
    const oaiResult = await openai.complete(messages, "gpt-4o-mini", 256);
    const oaiMs = performance.now() - startOAI;

    console.log(`Prompt: "${prompt.slice(0, 50)}..."`);
    console.log(`  Groq:   ${groqMs.toFixed(0)}ms | ${groqResult.tokens.total} tokens`);
    console.log(`  OpenAI: ${oaiMs.toFixed(0)}ms | ${oaiResult.tokens.total} tokens`);
    console.log(`  Speedup: ${(oaiMs / groqMs).toFixed(1)}x faster with Groq`);
    console.log();
  }
}
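
An example invocation -- the prompts here are illustrative and should be replaced with samples drawn from real production traffic:

// Compare providers on a handful of representative workloads
await migrationBenchmark([
  "Summarize this support ticket in two sentences: customer reports login failures after a password reset.",
  "Extract the city and date from: 'Meet me in Austin on March 3rd.'",
  "Write a SQL query that counts orders per customer over the last 30 days.",
]);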

Step 7: Key Differences to Handle

| Feature | OpenAI | Groq |
| --- | --- | --- |
| SDK import | import OpenAI from "openai" | import Groq from "groq-sdk" |
| Env var | OPENAI_API_KEY | GROQ_API_KEY |
| Models | gpt-4o, gpt-4o-mini | llama-3.3-70b-versatile, llama-3.1-8b-instant |
| Embeddings | openai.embeddings.create() | Not available (use OpenAI or local) |
| Fine-tuning | Supported | Not available |
| Image generation | openai.images.generate() | Not available |
| Audio (STT) | openai.audio.transcriptions | groq.audio.transcriptions (faster) |
| Structured outputs | strict: true | strict: true (same format) |
| Tool calling | Supported | Supported (same format) |
| JSON mode | response_format: { type: "json_object" } | Same (parity check below) |
| Vision | gpt-4o with images | Llama 4 Scout/Maverick |
| Streaming | Supported | Supported (same SSE format) |
| Response usage | Standard fields | Adds queue_time, completion_time, total_time |
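
As the table indicates, JSON mode and tool calling carry over unchanged. A quick parity check for JSON mode, assuming the groq client from Step 1 -- the request shape is the same one OpenAI accepts:

// JSON mode: identical request shape on Groq and OpenAI
const jsonResult = await groq.chat.completions.create({
  model: "llama-3.3-70b-versatile",
  messages: [
    { role: "system", content: "Reply with a JSON object with keys city and country." },
    { role: "user", content: "Where is the Eiffel Tower?" },
  ],
  response_format: { type: "json_object" },
});

const parsed = JSON.parse(jsonResult.choices[0].message.content || "{}");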

Rollback Plan

set -euo pipefail
# Immediate rollback: flip feature flag
# groq_migration_pct = 0

# Verify:
# - All requests routing to OpenAI
# - Error rates returned to baseline
# - No Groq API calls in logs

Error Handling

| Issue | Cause | Solution |
| --- | --- | --- |
| Quality regression | Different model strengths | Tune system prompts for Llama models |
| Missing features | Groq doesn't have embeddings/images | Keep OpenAI for those features |
| Rate limits | Different limits than OpenAI | Configure per-model rate limits |
| Cost increase | Different pricing structure | Route simple tasks to the 8B model (see sketch below) |
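
For the cost-increase row, a small routing sketch, assuming callers can supply a rough complexity hint (the hint and function are illustrative, not part of any SDK):

// Route cheap, simple tasks to the 8B model; harder ones to the 70B model
function pickGroqModel(complexity: "simple" | "complex"): string {
  return complexity === "simple"
    ? "llama-3.1-8b-instant" // classification, extraction, short answers
    : "llama-3.3-70b-versatile"; // reasoning, long-form generation
}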

Resources

  • Groq Quickstart
  • Groq Models
  • Groq API Reference
  • groq-sdk npm

Next Steps

For ongoing SDK version upgrades, see groq-upgrade-migration.
