Firecrawl JavaScript SDK

JavaScript/TypeScript SDK for the Firecrawl API that enables comprehensive web scraping, crawling, and data extraction with AI-ready output formats.

Package Information

  • Package Name: @mendable/firecrawl-js
  • Package Type: npm
  • Language: JavaScript/TypeScript
  • Installation: npm install @mendable/firecrawl-js

Core Imports

// Unified client (recommended)
import Firecrawl from '@mendable/firecrawl-js';

// Direct v2 client
import { FirecrawlClient } from '@mendable/firecrawl-js';

// Legacy v1 client
import { FirecrawlAppV1 } from '@mendable/firecrawl-js';

Basic Usage

// Initialize the client
const app = new Firecrawl({ apiKey: 'your-api-key' });

// Scrape a single URL
const scrapeResult = await app.scrape('https://example.com', {
  formats: ['markdown', 'html']
});

// Crawl a website
const crawlResult = await app.crawl('https://example.com', {
  limit: 100,
  scrapeOptions: { formats: ['markdown'] }
});

Architecture

The SDK provides both current (v2) and legacy (v1) API access:

  • Unified Client (Firecrawl): Extends v2 client with .v1 property for backward compatibility
  • Direct v2 Client (FirecrawlClient): Current API with modern async patterns
  • Legacy v1 Client (FirecrawlAppV1): Feature-frozen v1 API for existing integrations

All clients support real-time job monitoring via WebSocket with automatic fallback to polling.
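Retry pacing is configurable through `timeoutMs`, `maxRetries`, and `backoffFactor` in `FirecrawlClientOptions`. As an illustrative sketch only (the exact formula below is an assumption about how these options could interact, not the SDK's actual internals), exponential backoff delays might be computed like this:

```typescript
// Illustrative only: compute exponential-backoff delays for polling retries.
// The formula baseDelayMs * backoffFactor ** attempt is an assumption, not
// taken from the SDK source.
function backoffDelays(
  maxRetries: number,
  backoffFactor: number,
  baseDelayMs = 500
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(baseDelayMs * Math.pow(backoffFactor, attempt));
  }
  return delays;
}

// With maxRetries: 3 and backoffFactor: 2, delays come out as 500, 1000, 2000 ms.
```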

Core Types

// Client configuration
interface FirecrawlClientOptions {
  apiKey?: string | null;
  apiUrl?: string | null;
  timeoutMs?: number;
  maxRetries?: number;
  backoffFactor?: number;
}

// Document structure
interface Document {
  markdown?: string;
  html?: string;
  rawHtml?: string;
  json?: unknown;
  summary?: string;
  metadata?: DocumentMetadata;
  links?: string[];
  images?: string[];
  screenshot?: string;
  attributes?: Array<{
    selector: string;
    attribute: string;
    values: string[];
  }>;
  actions?: Record<string, unknown>;
  warning?: string;
  changeTracking?: Record<string, unknown>;
}

// Document metadata
interface DocumentMetadata {
  title?: string;
  description?: string;
  language?: string;
  keywords?: string | string[];
  robots?: string;
  ogTitle?: string;
  ogDescription?: string;
  ogUrl?: string;
  ogImage?: string;
  sourceURL?: string;
  statusCode?: number;
  error?: string;
  [key: string]: unknown;
}

Main Client Classes

// Unified client class
class Firecrawl extends FirecrawlClient {
  constructor(opts?: FirecrawlClientOptions);
  get v1(): FirecrawlAppV1;
}

// Direct v2 client
class FirecrawlClient {
  constructor(options?: FirecrawlClientOptions);
  
  // Core methods; see the capability sections below for details
  scrape(url: string, options?: ScrapeOptions): Promise<Document>;
  search(query: string, req?: Omit<SearchRequest, "query">): Promise<SearchData>;
  map(url: string, options?: MapOptions): Promise<MapData>;
  crawl(url: string, req?: CrawlOptions): Promise<CrawlJob>;
  batchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeJob>;
  extract(args: any): Promise<ExtractResponse>;
  watcher(jobId: string, opts?: WatcherOptions): Watcher;
}

Capabilities

Web Scraping

Single-URL scraping with multiple output formats, including structured data extraction.

Key APIs:

scrape(url: string, options?: ScrapeOptions): Promise<Document>;

Web Scraping Documentation
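Because most `Document` fields are optional, callers typically want the richest format that actually came back. A hedged helper sketch (the local interface mirrors a subset of the `Document` type above; the preference order is our own choice, not an SDK rule):

```typescript
// Minimal local mirror of the Document fields this sketch needs (see Core Types).
interface ScrapedDoc {
  markdown?: string;
  html?: string;
  rawHtml?: string;
}

// Return the first available content format, preferring markdown.
function pickContent(doc: ScrapedDoc): string | undefined {
  return doc.markdown ?? doc.html ?? doc.rawHtml;
}

// Typical use with the client (requires a real API key):
// const doc = await app.scrape('https://example.com', { formats: ['markdown', 'html'] });
// const text = pickContent(doc);
```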

Web Crawling

Recursive website crawling with configurable limits, path filtering, and webhook support.

Key APIs:

startCrawl(url: string, req?: CrawlOptions): Promise<CrawlResponse>;
getCrawlStatus(jobId: string, pagination?: PaginationConfig): Promise<CrawlJob>;
crawl(url: string, req?: CrawlOptions): Promise<CrawlJob>;

Web Crawling Documentation
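`crawl()` waits for completion, while `startCrawl()` plus `getCrawlStatus()` lets you poll on your own schedule. A sketch of that manual loop, with the status fetcher injected so the control flow runs without network access (the `'completed'`/`'failed'` status strings are assumptions for illustration):

```typescript
// Minimal view of a crawl job's status payload (shape assumed for this sketch).
interface CrawlStatusSketch {
  status: string; // e.g. 'scraping', 'completed', 'failed' (assumed values)
}

// Poll an injected status fetcher until the job reaches a terminal state.
// In real use the fetcher would wrap client.getCrawlStatus(jobId).
async function pollUntilDone(
  fetchStatus: () => Promise<CrawlStatusSketch>,
  intervalMs = 2000,
  maxPolls = 100
): Promise<CrawlStatusSketch> {
  for (let i = 0; i < maxPolls; i++) {
    const job = await fetchStatus();
    if (job.status === "completed" || job.status === "failed") return job;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("crawl did not finish within the polling budget");
}
```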

Batch Operations

Concurrent processing of multiple URLs with job monitoring and error handling.

Key APIs:

startBatchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeResponse>;
getBatchScrapeStatus(jobId: string, pagination?: PaginationConfig): Promise<BatchScrapeJob>;
batchScrape(urls: string[], opts?: BatchScrapeOptions): Promise<BatchScrapeJob>;

Batch Operations Documentation
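Very large URL lists can be easier to manage as several smaller jobs. A client-side chunking helper (purely our own convenience code; the SDK accepts the full list directly):

```typescript
// Split a URL list into fixed-size chunks for separate batchScrape calls.
// Client-side convenience only; not part of the SDK API.
function chunkUrls(urls: string[], size: number): string[][] {
  if (size <= 0) throw new Error("chunk size must be positive");
  const chunks: string[][] = [];
  for (let i = 0; i < urls.length; i += size) {
    chunks.push(urls.slice(i, i + size));
  }
  return chunks;
}

// e.g. for (const chunk of chunkUrls(allUrls, 50)) await app.batchScrape(chunk, opts);
```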

Search

Web search with optional result scraping across different sources (web, news, images).

Key APIs:

search(query: string, req?: Omit<SearchRequest, "query">): Promise<SearchData>;

Search Documentation
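The optional second argument to `search()` is a `SearchRequest` without its `query` field. A hedged sketch of building one (the `sources` and `limit` field names are assumptions based on the capability description above, not a verified `SearchRequest` definition):

```typescript
// Sketch of the optional request object for search(query, req).
// Field names are assumptions for illustration; consult the Search docs
// for the authoritative SearchRequest shape.
interface SearchReqSketch {
  sources?: Array<"web" | "news" | "images">;
  limit?: number;
}

function buildSearchReq(
  sources: SearchReqSketch["sources"],
  limit = 10
): SearchReqSketch {
  return { sources, limit };
}

// const results = await app.search('firecrawl sdk', buildSearchReq(['news'], 5));
```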

Site Mapping

Discover and map website URLs using sitemaps and crawling techniques.

Key APIs:

map(url: string, options?: MapOptions): Promise<MapData>;

Site Mapping Documentation
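Mapped URLs often need narrowing to one section of a site. A client-side filter sketch (our own convenience, assuming the map result yields plain URL strings; adjust if `MapData` nests them):

```typescript
// Keep only mapped URLs whose path falls under a given prefix.
function filterByPath(urls: string[], pathPrefix: string): string[] {
  return urls.filter((u) => {
    try {
      return new URL(u).pathname.startsWith(pathPrefix);
    } catch {
      return false; // skip malformed URLs rather than throwing
    }
  });
}

// e.g. filterByPath(mappedUrls, '/docs') keeps only documentation pages.
```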

Data Extraction

LLM-powered structured data extraction using natural language prompts or schemas.

Key APIs:

startExtract(args: any): Promise<ExtractResponse>;
getExtractStatus(jobId: string): Promise<ExtractResponse>;
extract(args: any): Promise<ExtractResponse>;

Data Extraction Documentation
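`extract` takes an untyped `args` object. One plausible shape, sketched with heavy hedging (the `urls`/`prompt`/`schema` field names are assumptions based on the capability description, not a verified request shape):

```typescript
// Assemble an extraction request. The urls/prompt/schema field names are
// assumptions for illustration; check the Data Extraction docs for the
// authoritative shape before relying on them.
function buildExtractArgs(
  urls: string[],
  prompt: string,
  schema?: Record<string, unknown>
): Record<string, unknown> {
  const args: Record<string, unknown> = { urls, prompt };
  if (schema) args.schema = schema;
  return args;
}

// const result = await app.extract(buildExtractArgs(
//   ['https://example.com'],
//   'Extract the product name and price',
// ));
```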

Real-time Monitoring

WebSocket-based job monitoring with automatic fallback to polling for long-running operations.

Key APIs:

watcher(jobId: string, opts?: WatcherOptions): Watcher;

Real-time Monitoring Documentation

Usage Analytics

Monitor API usage, credits, tokens, and queue status for billing and optimization.

Key APIs:

getConcurrency(): Promise<ConcurrencyCheck>;
getCreditUsage(): Promise<CreditUsage>;
getTokenUsage(): Promise<TokenUsage>;
getQueueStatus(): Promise<QueueStatusResponse>;

Usage Analytics Documentation
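A common use of these endpoints is a billing guard before launching a large job. A sketch (the `remainingCredits` field name is an assumption about the `CreditUsage` payload; verify it against the docs):

```typescript
// Decide whether a planned job fits within remaining credits.
// `remainingCredits` is an assumed field of CreditUsage, used for illustration.
interface CreditSnapshot {
  remainingCredits: number;
}

function canAfford(snapshot: CreditSnapshot, estimatedCost: number): boolean {
  return snapshot.remainingCredits >= estimatedCost;
}

// const usage = await app.getCreditUsage();
// if (!canAfford({ remainingCredits: usage.remainingCredits }, 500)) { /* defer the job */ }
```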

Legacy V1 API

Feature-frozen v1 API with additional capabilities like deep research and LLMs.txt generation.

Key APIs:

// Access via unified client
const v1Client = app.v1;

Legacy V1 API Documentation

Describes: npm pkg:npm/@mendable/firecrawl-js@4.3.x