firecrawl-hello-world

Create a minimal working Firecrawl example that scrapes a page to markdown. Use when starting a new Firecrawl integration, testing your setup, or learning the scrape/crawl/map/extract API surface. Trigger with phrases like "firecrawl hello world", "firecrawl example", "firecrawl quick start", "simple firecrawl code".

Firecrawl Hello World

Overview

Four minimal examples covering Firecrawl's core endpoints: scrape (single page), crawl (multi-page), map (URL discovery), and extract (LLM structured data). Each is a standalone snippet you can run immediately.

Prerequisites

  • @mendable/firecrawl-js installed (npm install @mendable/firecrawl-js)
  • FIRECRAWL_API_KEY environment variable set
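Both prerequisites can be satisfied from a terminal; the key value below is a placeholder, not a real credential:

```shell
# Install the SDK into the current project
npm install @mendable/firecrawl-js

# Expose your API key to the examples below (replace with your real key)
export FIRECRAWL_API_KEY="fc-YOUR_API_KEY"
```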

Instructions

Step 1: Single-Page Scrape

import FirecrawlApp from "@mendable/firecrawl-js";

const firecrawl = new FirecrawlApp({
  apiKey: process.env.FIRECRAWL_API_KEY!,
});

// Scrape one page — returns markdown, HTML, metadata, links
const result = await firecrawl.scrapeUrl("https://docs.firecrawl.dev", {
  formats: ["markdown"],
});

console.log("Title:", result.metadata?.title);
console.log("Markdown:", result.markdown?.substring(0, 500));

Step 2: Multi-Page Crawl

// Crawl a site recursively — follows links, respects robots.txt
const crawlResult = await firecrawl.crawlUrl("https://docs.firecrawl.dev", {
  limit: 10,  // max 10 pages (saves credits)
  scrapeOptions: {
    formats: ["markdown"],
  },
});

console.log(`Crawled ${crawlResult.data?.length} pages`);
for (const page of crawlResult.data || []) {
  console.log(`  ${page.metadata?.title} — ${page.metadata?.sourceURL}`);
}

Step 3: Map a Site (URL Discovery)

// Discover all URLs on a site in ~2-3 seconds (uses sitemap + SERP)
const mapResult = await firecrawl.mapUrl("https://docs.firecrawl.dev");

console.log(`Found ${mapResult.links?.length} URLs`);
mapResult.links?.slice(0, 10).forEach(url => console.log(`  ${url}`));

Step 4: LLM Extract (Structured Data)

// Extract structured data from a page using an LLM + JSON schema
const extracted = await firecrawl.scrapeUrl("https://firecrawl.dev/pricing", {
  formats: ["extract"],
  extract: {
    schema: {
      type: "object",
      properties: {
        plans: {
          type: "array",
          items: {
            type: "object",
            properties: {
              name: { type: "string" },
              price: { type: "string" },
              credits: { type: "number" },
            },
          },
        },
      },
    },
  },
});

console.log("Pricing plans:", JSON.stringify(extracted.extract, null, 2));

Output

  • Single-page markdown scraped from a live URL
  • Multi-page crawl results with titles and source URLs
  • Site map with all discovered URLs
  • Structured JSON extracted by LLM from page content

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| Cannot find module | SDK not installed | `npm install @mendable/firecrawl-js` |
| 401 Unauthorized | Missing or invalid API key | Check the `FIRECRAWL_API_KEY` env var |
| 429 Too Many Requests | Rate limit exceeded | Wait and retry with backoff |
| Empty markdown | JS-heavy page not rendered | Add `waitFor: 5000` to scrape options |
| 402 Payment Required | Credits exhausted | Check balance at firecrawl.dev/app |
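For the 429 case, a small generic wrapper can handle the retry-with-backoff for you. This is a sketch, not part of the SDK: `withBackoff` is a hypothetical helper that retries any async call when it throws an error carrying `status: 429`.

```typescript
// Hypothetical helper: retry an async call with exponential backoff on 429s.
async function withBackoff<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Only retry rate-limit errors, and only up to `retries` times.
      if (attempt >= retries || err?.status !== 429) throw err;
      const delayMs = baseMs * 2 ** attempt; // 500ms, 1s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Wrap any of the calls above, e.g. `await withBackoff(() => firecrawl.scrapeUrl(url, { formats: ["markdown"] }))`, assuming the SDK surfaces the HTTP status on the thrown error.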

Examples

Python Hello World

from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Scrape
result = firecrawl.scrape_url("https://example.com", params={
    "formats": ["markdown"]
})
print(result["markdown"][:500])

# Map
urls = firecrawl.map_url("https://example.com")
print(f"Found {len(urls.get('links', []))} URLs")

Batch Scrape Multiple URLs

// Scrape many URLs at once — more efficient than individual scrapes
const batchResult = await firecrawl.batchScrapeUrls(
  [
    "https://docs.firecrawl.dev/features/scrape",
    "https://docs.firecrawl.dev/features/crawl",
    "https://docs.firecrawl.dev/features/extract",
  ],
  { formats: ["markdown"] }
);

for (const page of batchResult.data || []) {
  console.log(`${page.metadata?.title}: ${page.markdown?.length} chars`);
}

Resources

  • Firecrawl Quickstart
  • Scrape Endpoint
  • Crawl Endpoint
  • Map Endpoint
  • Extract (JSON Mode)

Next Steps

Proceed to firecrawl-local-dev-loop for development workflow setup.

Repository: jeremylongshore/claude-code-plugins-plus-skills