JavaScript/TypeScript SDK for the Firecrawl API that enables comprehensive web scraping, crawling, and data extraction with AI-ready output formats.

npx @tessl/cli install tessl/npm-mendable--firecrawl-js@4.3.0

npm install @mendable/firecrawl-js

// Unified client (recommended)
import Firecrawl from '@mendable/firecrawl-js';
// Direct v2 client
import { FirecrawlClient } from '@mendable/firecrawl-js';
// Legacy v1 client
import { FirecrawlAppV1 } from '@mendable/firecrawl-js';

// Initialize the client
const app = new Firecrawl({ apiKey: 'your-api-key' });
// Scrape a single URL
const scrapeResult = await app.scrape('https://example.com', {
  formats: ['markdown', 'html']
});

// Crawl a website
const crawlResult = await app.crawl('https://example.com', {
  limit: 100,
  scrapeOptions: { formats: ['markdown'] }
});

The SDK provides both current (v2) and legacy (v1) API access:
- Unified client (Firecrawl): extends the v2 client with a .v1 property for backward compatibility
- Direct v2 client (FirecrawlClient): current API with modern async patterns
- Legacy v1 client (FirecrawlAppV1): feature-frozen v1 API for existing integrations

All clients support real-time job monitoring via WebSocket with automatic fallback to polling.
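
For example, either client can be constructed directly, and the legacy surface is reached through the unified client's .v1 accessor:

import Firecrawl, { FirecrawlClient } from '@mendable/firecrawl-js';

// Unified client: v2 surface plus a .v1 accessor for legacy code
const app = new Firecrawl({ apiKey: 'your-api-key' });
const legacy = app.v1; // FirecrawlAppV1 instance, feature-frozen

// Direct v2 client: same core methods, no .v1 accessor
const client = new FirecrawlClient({ apiKey: 'your-api-key' });
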
// Client configuration
interface FirecrawlClientOptions {
  apiKey?: string | null;
  apiUrl?: string | null;
  timeoutMs?: number;
  maxRetries?: number;
  backoffFactor?: number;
}
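
A sketch of constructing a client with these options; the endpoint shown and the retry/timeout interpretations in the comments are assumptions rather than documented defaults:

import Firecrawl from '@mendable/firecrawl-js';

const app = new Firecrawl({
  apiKey: process.env.FIRECRAWL_API_KEY,
  apiUrl: 'https://api.firecrawl.dev', // assumed default endpoint; override for self-hosted deployments
  timeoutMs: 60_000,   // per-request timeout (interpretation assumed)
  maxRetries: 3,       // retries for transient failures (interpretation assumed)
  backoffFactor: 0.5,  // delay multiplier between retries (interpretation assumed)
});
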
// Document structure
interface Document {
  markdown?: string;
  html?: string;
  rawHtml?: string;
  json?: unknown;
  summary?: string;
  metadata?: DocumentMetadata;
  links?: string[];
  images?: string[];
  screenshot?: string;
  attributes?: Array<{
    selector: string;
    attribute: string;
    values: string[];
  }>;
  actions?: Record<string, unknown>;
  warning?: string;
  changeTracking?: Record<string, unknown>;
}
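
Every field is optional, so consumers should guard each access. A small helper sketch, assuming the Document type is exported by the package:

import type { Document } from '@mendable/firecrawl-js';

// Summarize a scraped Document, guarding every optional field
function summarizeDocument(doc: Document): string {
  const title = doc.metadata?.title ?? doc.metadata?.ogTitle ?? 'untitled';
  const source = doc.metadata?.sourceURL ?? 'unknown source';
  const linkCount = doc.links?.length ?? 0;
  const preview = (doc.markdown ?? doc.html ?? '').slice(0, 80);
  return `${title} (${source}) | ${linkCount} links | ${preview}`;
}
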
// Document metadata
interface DocumentMetadata {
  title?: string;
  description?: string;
  language?: string;
  keywords?: string | string[];
  robots?: string;
  ogTitle?: string;
  ogDescription?: string;
  ogUrl?: string;
  ogImage?: string;
  sourceURL?: string;
  statusCode?: number;
  error?: string;
  [key: string]: unknown;
}

// Unified client class
class Firecrawl extends FirecrawlClient {
  constructor(opts?: FirecrawlClientOptions);
  get v1(): FirecrawlAppV1;
}

// Direct v2 client
class FirecrawlClient {
  constructor(options?: FirecrawlClientOptions);

  // Core methods available - see capability sections below
  scrape(url: string, options?: ScrapeOptions): Promise<Document>;
  search(query: string, req?: Omit<SearchRequest, "query">): Promise<SearchData>;
  map(url: string, options?: MapOptions): Promise<MapData>;
  crawl(url: string, req?: CrawlOptions): Promise<CrawlJob>;
  batchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeJob>;
  extract(args: any): Promise<ExtractResponse>;
  watcher(jobId: string, opts?: WatcherOptions): Watcher;
}

Single URL scraping with multiple output formats, including structured data extraction.
Key APIs:
scrape(url: string, options?: ScrapeOptions): Promise<Document>;
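
For example, reusing the app client from the quick start:

// Request multiple formats for one page and read the returned Document
const doc = await app.scrape('https://example.com/pricing', {
  formats: ['markdown', 'html'],
});

console.log(doc.metadata?.title, doc.metadata?.statusCode);
console.log(doc.markdown?.slice(0, 200));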

Recursive website crawling with configurable limits, path filtering, and webhook support.
Key APIs:
startCrawl(url: string, req?: CrawlOptions): Promise<CrawlResponse>;
getCrawlStatus(jobId: string, pagination?: PaginationConfig): Promise<CrawlJob>;
crawl(url: string, req?: CrawlOptions): Promise<CrawlJob>;
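
Beyond the waiter-style crawl() shown in the quick start, a job can be started and polled explicitly. A sketch, assuming the start response exposes an id and the job exposes status and data fields, which are not spelled out in this section:

// Start a crawl without waiting for completion
const started = await app.startCrawl('https://example.com', {
  limit: 50,
  scrapeOptions: { formats: ['markdown'] },
});

// Poll the job later; id, status, and data are assumed field names
const job = await app.getCrawlStatus(started.id);
console.log(job.status, job.data?.length ?? 0, 'pages so far');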

Concurrent processing of multiple URLs with job monitoring and error handling.
Key APIs:
startBatchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeResponse>;
getBatchScrapeStatus(jobId: string, pagination?: PaginationConfig): Promise<BatchScrapeJob>;
batchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeJob>;
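
A sketch of the waiter-style batch call with the same app client; the options field on BatchScrapeOptions and the status/data fields on the returned job are assumptions about types not detailed here:

// Scrape several URLs concurrently and wait for the whole job
const batch = await app.batchScrape(
  ['https://example.com/a', 'https://example.com/b'],
  { options: { formats: ['markdown'] } }, // shape of BatchScrapeOptions is assumed
);

console.log(batch.status);                          // assumed job status field
console.log(batch.data?.length ?? 0, 'documents');  // assumed array of scraped Documents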

Batch Operations Documentation

Web search with optional result scraping across different sources (web, news, images).
Key APIs:
search(query: string, req?: Omit<SearchRequest, "query">): Promise<SearchData>;
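
A sketch of a search call; the limit and sources request fields are assumptions suggested by the description above, and the SearchData shape is simply logged:

// Search the web and optionally other sources; request fields are assumed
const results = await app.search('firecrawl sdk changelog', {
  limit: 5,
  sources: ['web', 'news'],
});
console.log(results);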

Discover and map website URLs using sitemaps and crawling techniques.
Key APIs:
map(url: string, options?: MapOptions): Promise<MapData>;
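
A sketch of URL discovery; the shape of MapData is not detailed here, so the result is simply logged:

// Discover URLs for a site via its sitemap and crawl heuristics
const mapped = await app.map('https://example.com');
console.log(mapped);
// If MapData exposes a links array (assumption), it could be iterated here.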

LLM-powered structured data extraction using natural language prompts or schemas.
Key APIs:
startExtract(args: any): Promise<ExtractResponse>;
getExtractStatus(jobId: string): Promise<ExtractResponse>;
extract(args: any): Promise<ExtractResponse>;
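
A sketch of prompt-based extraction; since the argument is typed any, the urls and prompt keys are assumptions consistent with the description above:

// Ask for structured data in natural language; the argument shape is assumed
const extracted = await app.extract({
  urls: ['https://example.com/pricing'],
  prompt: 'List each plan name and its monthly price.',
});
console.log(extracted);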

WebSocket-based job monitoring with automatic fallback to polling for long-running operations.
Key APIs:
watcher(jobId: string, opts?: WatcherOptions): Watcher;
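
A sketch of streaming progress for a crawl job; the watcher's event names and start() method, the kind option, and the id field on the start response are all assumptions about interfaces not detailed in this section:

// Start a crawl, then stream progress instead of polling
const crawlJob = await app.startCrawl('https://example.com', { limit: 25 });

const watch = app.watcher(crawlJob.id, { kind: 'crawl' }); // id and opts shape assumed
watch.on('document', (doc) => console.log('received', doc));   // event name assumed
watch.on('done', (state) => console.log('finished', state));   // event name assumed
await watch.start();                                           // assumed method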

Real-time Monitoring Documentation

Monitor API usage, credits, tokens, and queue status for billing and optimization.
Key APIs:
getConcurrency(): Promise<ConcurrencyCheck>;
getCreditUsage(): Promise<CreditUsage>;
getTokenUsage(): Promise<TokenUsage>;
getQueueStatus(): Promise<QueueStatusResponse>;
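
For example, all four metrics can be fetched in parallel and logged (the response shapes are not detailed here):

// Fetch all four account metrics in parallel
const [concurrency, credits, tokens, queue] = await Promise.all([
  app.getConcurrency(),
  app.getCreditUsage(),
  app.getTokenUsage(),
  app.getQueueStatus(),
]);
console.log({ concurrency, credits, tokens, queue });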

Feature-frozen v1 API with additional capabilities like deep research and LLMs.txt generation.
Key APIs:
// Access via unified client
const v1Client = app.v1;