
firecrawl-cost-tuning

Optimize Firecrawl costs through crawl limits, format selection, caching, and credit monitoring. Use when analyzing Firecrawl billing, reducing API costs, or implementing credit budget alerts. Trigger with phrases like "firecrawl cost", "firecrawl billing", "reduce firecrawl costs", "firecrawl pricing", "firecrawl credits", "firecrawl budget".

Firecrawl Cost Tuning

Overview

Firecrawl charges credits per operation: 1 credit per scrape, 1 per crawled page, 1 per map call, and variable credits for extract (LLM usage). An unbounded crawl on a large site can consume thousands of credits in minutes. This skill covers concrete techniques to reduce credit consumption by 50-80%.

Credit Cost Table

| Operation | Credits | Notes |
| --- | --- | --- |
| scrapeUrl | 1 | Per page, any format |
| crawlUrl | 1 per page | Each discovered page costs 1 credit |
| mapUrl | 1 | Regardless of URLs returned |
| batchScrapeUrls | 1 per URL | Same as individual scrape |
| extract | 5+ | LLM processing adds cost |
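
Because these rates are flat per operation, you can estimate a job's worst-case cost before running it. A minimal sketch with the rates hard-coded from the table above (adjust the extract lower bound if your plan prices LLM usage differently):

// Worst-case cost estimate from the flat per-operation rates above.
const RATES = { scrape: 1, crawlPerPage: 1, map: 1, extractMin: 5 };

function estimateJobCost(opts: {
  pagesToScrape?: number;
  crawlLimit?: number;
  mapCalls?: number;
  extracts?: number;
}): number {
  return (
    (opts.pagesToScrape ?? 0) * RATES.scrape +
    (opts.crawlLimit ?? 0) * RATES.crawlPerPage +
    (opts.mapCalls ?? 0) * RATES.map +
    (opts.extracts ?? 0) * RATES.extractMin // lower bound: extract varies with LLM usage
  );
}

console.log(estimateJobCost({ mapCalls: 1, pagesToScrape: 20 })); // 21 credits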

Instructions

Step 1: Always Set Crawl Limits

import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// BAD: no limit — could crawl 100K pages
await firecrawl.crawlUrl("https://docs.large-project.org");
// Cost: potentially 100,000+ credits

// GOOD: bounded crawl
await firecrawl.crawlUrl("https://docs.large-project.org", {
  limit: 50,               // max 50 pages
  maxDepth: 2,             // only 2 levels deep
  includePaths: ["/api/*"], // only API docs
  excludePaths: ["/blog/*", "/changelog/*"],
  scrapeOptions: { formats: ["markdown"] },
});
// Cost: max 50 credits
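
If a crawl must run long, you can start it asynchronously and cancel it once it has spent what you budgeted. A sketch assuming the v1 @mendable/firecrawl-js methods asyncCrawlUrl, checkCrawlStatus, and cancelCrawl, and a creditsUsed field on the status response; verify all of these against your SDK version:

// Start the crawl without blocking, then watch its credit spend.
const started = await firecrawl.asyncCrawlUrl("https://docs.large-project.org", {
  limit: 50,
  maxDepth: 2,
});

if (started.id) {
  const status = await firecrawl.checkCrawlStatus(started.id);
  // Bail out early if the job outgrows the 50-credit budget.
  // NOTE: creditsUsed is assumed to be on the status response; confirm in your SDK.
  if (status.creditsUsed && status.creditsUsed > 50) {
    await firecrawl.cancelCrawl(started.id);
  }
}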

Step 2: Use Scrape for Known URLs Instead of Crawl

// If you know which pages you need, don't crawl — scrape them directly
const targetUrls = [
  "https://docs.example.com/api/auth",
  "https://docs.example.com/api/users",
  "https://docs.example.com/api/billing",
];

// Cost: 3 credits (one per page)
const results = await firecrawl.batchScrapeUrls(targetUrls, {
  formats: ["markdown"],
});

// vs crawling the whole docs site: potentially 500+ credits

Step 3: Map First, Then Selective Scrape

// Map costs 1 credit and returns up to 30K URLs
const map = await firecrawl.mapUrl("https://docs.example.com");
// Cost: 1 credit

// Filter to only what you need
const apiDocs = (map.links || []).filter(url => url.includes("/api/"));
console.log(`${map.links?.length} total URLs, only ${apiDocs.length} are API docs`);

// Scrape only relevant pages
const results = await firecrawl.batchScrapeUrls(apiDocs.slice(0, 20), {
  formats: ["markdown"],
});
// Cost: 1 (map) + 20 (scrape) = 21 credits
// vs blind crawl: could be 500+ credits

Step 4: Cache to Prevent Re-Scraping

import { createHash } from "crypto";

const cache = new Map<string, { content: string; timestamp: number }>();
const CACHE_TTL = 24 * 3600 * 1000; // 24 hours

async function cachedScrape(url: string): Promise<string> {
  const key = createHash("md5").update(url).digest("hex");
  const cached = cache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.content; // Free — no API call
  }

  const result = await firecrawl.scrapeUrl(url, { formats: ["markdown"] });
  if (result.markdown) {
    cache.set(key, { content: result.markdown, timestamp: Date.now() });
  }
  return result.markdown || "";
}
// Typical savings: 50-80% credit reduction for recurring scrapes
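
The in-memory map above resets on every process restart, so the savings only apply within one run. Persisting the cache keeps them across runs; a minimal sketch using a JSON file (the cache.json path and on-disk shape are illustrative, not part of Firecrawl):

import { readFileSync, writeFileSync, existsSync } from "fs";

const CACHE_FILE = "cache.json"; // hypothetical path; use whatever store fits

// Load the cache from disk at startup, if present.
function loadCache(): Map<string, { content: string; timestamp: number }> {
  if (!existsSync(CACHE_FILE)) return new Map();
  return new Map(Object.entries(JSON.parse(readFileSync(CACHE_FILE, "utf8"))));
}

// Flush the cache after each new scrape so restarts stay cheap.
function saveCache(cache: Map<string, { content: string; timestamp: number }>) {
  writeFileSync(CACHE_FILE, JSON.stringify(Object.fromEntries(cache)));
}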

Step 5: Monitor Credit Consumption

set -euo pipefail
# Check current credit balance
curl -s https://api.firecrawl.dev/v1/team/credits \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" | jq .
// Daily credit tracker
class CreditBudget {
  private dailyLimit: number;
  private usage = new Map<string, number>();

  constructor(dailyLimit = 1000) {
    this.dailyLimit = dailyLimit;
  }

  canAfford(estimatedCredits: number): boolean {
    const today = new Date().toISOString().split("T")[0];
    const used = this.usage.get(today) || 0;
    return used + estimatedCredits <= this.dailyLimit;
  }

  record(credits: number) {
    const today = new Date().toISOString().split("T")[0];
    this.usage.set(today, (this.usage.get(today) || 0) + credits);
  }

  remaining(): number {
    const today = new Date().toISOString().split("T")[0];
    return this.dailyLimit - (this.usage.get(today) || 0);
  }
}

const budget = new CreditBudget(1000);

// Before each crawl
if (!budget.canAfford(50)) {
  throw new Error(`Crawl would exceed daily credit budget; ${budget.remaining()} credits left today`);
}
await firecrawl.crawlUrl(url, { limit: 50 });
budget.record(50);
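
You can also check the account's remaining balance from code before large jobs, using the same endpoint as the curl call above. A sketch; the remainingCredits field name is an assumption about the response shape, so confirm it against the API docs:

// Fetch the team's credit balance and warn when it runs low.
async function checkBalance(threshold = 500): Promise<void> {
  const res = await fetch("https://api.firecrawl.dev/v1/team/credits", {
    headers: { Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}` },
  });
  const body = await res.json();
  // NOTE: remainingCredits is an assumed field name; check the actual response.
  const remaining = body?.data?.remainingCredits;
  if (typeof remaining === "number" && remaining < threshold) {
    console.warn(`Low Firecrawl balance: ${remaining} credits left`);
  }
}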

Step 6: Choose Minimal Formats

set -euo pipefail
# Cheapest: markdown only (1 credit, fastest)
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url":"https://example.com","formats":["markdown"]}'

# Avoid requesting screenshots, rawHtml, or extract unless needed
# Extract uses LLM calls — significantly more credits
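
The same applies in the SDK. Requesting markdown only keeps each scrape at the 1-credit floor, and the onlyMainContent scrape option trims nav and footer boilerplate from the output (it shrinks what you store and feed to downstream LLMs rather than the credit cost itself):

// Minimal-cost scrape: one format, main content only.
const page = await firecrawl.scrapeUrl("https://example.com", {
  formats: ["markdown"],  // skip rawHtml, screenshot, extract
  onlyMainContent: true,  // drop nav/footer noise; smaller output, same 1 credit
});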

Error Handling

| Issue | Cause | Solution |
| --- | --- | --- |
| 402 Payment Required | Credits exhausted | Check balance, upgrade plan, or wait for reset |
| Credits drained by one crawl | No limit set | Always set limit and maxDepth |
| Duplicate scraping costs | Same URLs scraped daily | Implement URL-keyed caching |
| High per-page cost | Requesting all formats + extract | Use formats: ["markdown"] only |
| Budget overrun | No daily cap | Implement credit budget tracker |
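
For the 402 case specifically, fail gracefully instead of retrying into the same error. A sketch; the statusCode field is an assumption about how the SDK surfaces HTTP errors, so adjust to what your version actually throws:

try {
  await firecrawl.scrapeUrl("https://example.com", { formats: ["markdown"] });
} catch (err: any) {
  // Assumed error shape: the SDK attaches the HTTP status; verify in your SDK version.
  if (err?.statusCode === 402 || /402/.test(String(err?.message))) {
    console.error("Firecrawl credits exhausted; pausing jobs until reset or top-up");
    // e.g. park the queue here instead of burning retries
  } else {
    throw err;
  }
}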

Cost Optimization Summary

| Technique | Credit Savings |
| --- | --- |
| Set crawl limit | Prevents 100x overages |
| Map + selective scrape | 50-90% vs blind crawl |
| Cache repeated scrapes | 50-80% reduction |
| Markdown-only format | Fastest, no extras |
| Batch scrape vs individual | Same cost, less overhead |

Resources

  • Firecrawl Pricing
  • Firecrawl Dashboard
  • Rate Limits

Next Steps

For reference architecture, see firecrawl-reference-architecture.
