
firecrawl-load-scale

tessl install github:jeremylongshore/claude-code-plugins-plus-skills --skill firecrawl-load-scale
github.com/jeremylongshore/claude-code-plugins-plus-skills

Implement FireCrawl load testing, auto-scaling, and capacity planning strategies. Use when running performance tests, configuring horizontal scaling, or planning capacity for FireCrawl integrations. Trigger with phrases like "firecrawl load test", "firecrawl scale", "firecrawl performance test", "firecrawl capacity", "firecrawl k6", "firecrawl benchmark".

  • Review Score: 78%
  • Validation Score: 12/16
  • Implementation Score: 65%
  • Activation Score: 90%

FireCrawl Load & Scale

Overview

Load testing, scaling strategies, and capacity planning for FireCrawl integrations.

Prerequisites

  • k6 load testing tool installed
  • Kubernetes cluster with HPA configured
  • Prometheus for metrics collection
  • Test environment API keys

Load Testing with k6

Basic Load Test

// firecrawl-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 10 },   // Ramp up
    { duration: '5m', target: 10 },   // Steady state
    { duration: '2m', target: 50 },   // Ramp to peak
    { duration: '5m', target: 50 },   // Stress test
    { duration: '2m', target: 0 },    // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  const response = http.post(
    'https://api.firecrawl.com/v1/resource',
    JSON.stringify({ test: true }),
    {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${__ENV.FIRECRAWL_API_KEY}`,
      },
    }
  );

  check(response, {
    'status is 200': (r) => r.status === 200,
    'latency < 500ms': (r) => r.timings.duration < 500,
  });

  sleep(1);
}
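
The staged configuration above drives load by VU count, so the achieved request rate varies with response latency. If you need to hold a fixed request rate instead (for example, to stay inside a known rate-limit budget), k6's constant-arrival-rate executor can replace the stages block while the default function stays the same. The numbers below are placeholders, not recommendations.

// firecrawl-rate-test.js -- fixed request rate instead of staged VUs
export const options = {
  scenarios: {
    fixed_rate: {
      executor: 'constant-arrival-rate',
      rate: 50,             // iterations started per timeUnit
      timeUnit: '1s',       // i.e. roughly 50 requests per second
      duration: '10m',
      preAllocatedVUs: 20,  // VUs k6 keeps ready
      maxVUs: 100,          // upper bound if latency grows
    },
  },
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
};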

Run Load Test

# Install k6
brew install k6  # macOS
# or on Debian/Ubuntu: add the Grafana k6 apt repository, then: sudo apt-get install k6

# Run test
k6 run --env FIRECRAWL_API_KEY=${FIRECRAWL_API_KEY} firecrawl-load-test.js

# Run with output to InfluxDB
k6 run --out influxdb=http://localhost:8086/k6 firecrawl-load-test.js

Scaling Patterns

Horizontal Scaling

# kubernetes HPA
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: firecrawl-integration-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: firecrawl-integration
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: firecrawl_queue_depth
        target:
          type: AverageValue
          averageValue: 100
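
The firecrawl_queue_depth Pods metric only works if the integration exports it and a custom-metrics adapter (for example prometheus-adapter) makes it visible to the HPA; this skill assumes that wiring already exists. Below is a minimal sketch of exposing the gauge with prom-client from a Node/Express service; the jobQueue object and its size() method are assumptions about your integration, not part of FireCrawl.

import express from 'express';
import { Gauge, register } from 'prom-client';

// Stand-in for your real job queue; replace size() with the actual pending-job count.
const jobQueue = { size: () => 0 };

// Sampled each time Prometheus scrapes /metrics.
new Gauge({
  name: 'firecrawl_queue_depth',
  help: 'Pending FireCrawl requests waiting to be processed',
  collect() {
    this.set(jobQueue.size());
  },
});

const app = express();
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', register.contentType);
  res.send(await register.metrics());
});
app.listen(9100);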

Connection Pooling

import { createPool } from 'generic-pool';

// FireCrawlClient is the SDK client used elsewhere in this integration.
const firecrawlPool = createPool(
  {
    create: async () =>
      new FireCrawlClient({
        apiKey: process.env.FIRECRAWL_API_KEY!,
      }),
    destroy: async (client: FireCrawlClient) => {
      await client.close();
    },
  },
  {
    max: 20,
    min: 5,
    idleTimeoutMillis: 30000,
  }
);

async function withFireCrawlClient<T>(
  fn: (client: FireCrawlClient) => Promise<T>
): Promise<T> {
  const client = await firecrawlPool.acquire();
  try {
    return await fn(client);
  } finally {
    firecrawlPool.release(client);
  }
}
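
Callers then borrow a client per operation; scrapeUrl below is a stand-in for whatever method your FireCrawl client actually exposes.

// Acquire a pooled client, run one call, and release it even on failure.
const result = await withFireCrawlClient((client) =>
  client.scrapeUrl('https://example.com', { formats: ['markdown'] })
);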

Capacity Planning

Metrics to Monitor

| Metric | Warning | Critical |
|--------|---------|----------|
| CPU Utilization | > 70% | > 85% |
| Memory Usage | > 75% | > 90% |
| Request Queue Depth | > 100 | > 500 |
| Error Rate | > 1% | > 5% |
| P95 Latency | > 1000ms | > 3000ms |
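
The same thresholds can be checked in code for alerting or a capacity report. A small sketch follows; the numbers mirror the table above, and the metric key names are assumptions about how your metrics object is shaped.

type Severity = 'ok' | 'warning' | 'critical';

// [warning, critical] pairs copied from the table above.
const THRESHOLDS: Record<string, [number, number]> = {
  cpuPercent:    [70, 85],
  memoryPercent: [75, 90],
  queueDepth:    [100, 500],
  errorRatePct:  [1, 5],
  p95LatencyMs:  [1000, 3000],
};

function classify(metric: string, value: number): Severity {
  const [warning, critical] = THRESHOLDS[metric];
  if (value > critical) return 'critical';
  if (value > warning) return 'warning';
  return 'ok';
}

// classify('p95LatencyMs', 1200) === 'warning'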

Capacity Calculation

interface CapacityEstimate {
  currentRPS: number;
  maxRPS: number;
  headroom: number;
  scaleRecommendation: string;
}

function estimateFireCrawlCapacity(
  metrics: SystemMetrics
): CapacityEstimate {
  const currentRPS = metrics.requestsPerSecond;
  const cpuUtilization = metrics.cpuPercent;

  // Estimate max RPS based on current performance
  const maxRPS = currentRPS / (cpuUtilization / 100) * 0.7; // 70% target
  const headroom = ((maxRPS - currentRPS) / currentRPS) * 100;

  return {
    currentRPS,
    maxRPS: Math.floor(maxRPS),
    headroom: Math.round(headroom),
    scaleRecommendation: headroom < 30
      ? 'Scale up soon'
      : headroom < 50
      ? 'Monitor closely'
      : 'Adequate capacity',
  };
}
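
getSystemMetrics is not defined by this skill; one way to provide it is the Prometheus HTTP API listed in the prerequisites. The sketch below uses instant queries; the Prometheus URL, job labels, and metric names are assumptions that depend on your own instrumentation.

interface SystemMetrics {
  requestsPerSecond: number;
  p50Latency: number; // milliseconds
  cpuPercent: number;
}

// Run one instant query and return the scalar value (0 if no series matched).
// Uses the global fetch available in Node 18+.
async function promQuery(expr: string): Promise<number> {
  const url = `http://prometheus:9090/api/v1/query?query=${encodeURIComponent(expr)}`;
  const body = await (await fetch(url)).json();
  return parseFloat(body.data.result[0]?.value[1] ?? '0');
}

async function getSystemMetrics(): Promise<SystemMetrics> {
  return {
    requestsPerSecond: await promQuery(
      'sum(rate(http_requests_total{job="firecrawl-integration"}[5m]))'
    ),
    p50Latency:
      1000 *
      (await promQuery(
        'histogram_quantile(0.5, sum(rate(http_request_duration_seconds_bucket{job="firecrawl-integration"}[5m])) by (le))'
      )),
    cpuPercent:
      100 *
      (await promQuery(
        'avg(rate(container_cpu_usage_seconds_total{pod=~"firecrawl-integration.*"}[5m]))'
      )),
  };
}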

Benchmark Results Template

## FireCrawl Performance Benchmark
**Date:** YYYY-MM-DD
**Environment:** [staging/production]
**SDK Version:** X.Y.Z

### Test Configuration
- Duration: 10 minutes
- Ramp: 10 → 100 → 10 VUs
- Target endpoint: /v1/resource

### Results
| Metric | Value |
|--------|-------|
| Total Requests | 50,000 |
| Success Rate | 99.9% |
| P50 Latency | 120ms |
| P95 Latency | 350ms |
| P99 Latency | 800ms |
| Max RPS Achieved | 150 |

### Observations
- [Key finding 1]
- [Key finding 2]

### Recommendations
- [Scaling recommendation]

Instructions

Step 1: Create Load Test Script

Write k6 test script with appropriate thresholds.

Step 2: Configure Auto-Scaling

Set up HPA with CPU and custom metrics.

Step 3: Run Load Test

Execute test and collect metrics.

Step 4: Analyze and Document

Record results in benchmark template.

Output

  • Load test script created
  • HPA configured
  • Benchmark results documented
  • Capacity recommendations defined

Error Handling

| Issue | Cause | Solution |
|-------|-------|----------|
| k6 timeout | Rate limited | Reduce RPS |
| HPA not scaling | Wrong metrics | Verify metric name |
| Connection refused | Pool exhausted | Increase pool size |
| Inconsistent results | Warm-up needed | Add ramp-up phase |
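
For the first row in particular, it helps to distinguish rate limiting from other failures. A hedged variation on the load test script above counts 429 responses with a custom k6 metric so a threshold can flag them explicitly.

import http from 'k6/http';
import { Counter } from 'k6/metrics';

// Separate counter so rate limiting shows up as its own metric in the summary.
const rateLimited = new Counter('firecrawl_rate_limited');

export const options = {
  thresholds: {
    firecrawl_rate_limited: ['count<10'], // fail the run if 429s accumulate
  },
};

export default function () {
  const response = http.post(
    'https://api.firecrawl.com/v1/resource',
    JSON.stringify({ test: true }),
    {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${__ENV.FIRECRAWL_API_KEY}`,
      },
    }
  );
  if (response.status === 429) {
    rateLimited.add(1);
  }
}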

Examples

Quick k6 Test

k6 run --vus 10 --duration 30s firecrawl-load-test.js

Check Current Capacity

const metrics = await getSystemMetrics();
const capacity = estimateFireCrawlCapacity(metrics);
console.log('Headroom:', capacity.headroom + '%');
console.log('Recommendation:', capacity.scaleRecommendation);

Scale HPA Manually

kubectl scale deployment firecrawl-integration --replicas=5
kubectl get hpa firecrawl-integration-hpa

Resources

  • k6 Documentation
  • Kubernetes HPA
  • FireCrawl Rate Limits

Next Steps

For reliability patterns, see firecrawl-reliability-patterns.