clay-load-scale

tessl install github:jeremylongshore/claude-code-plugins-plus-skills --skill clay-load-scale

github.com/jeremylongshore/claude-code-plugins-plus-skills

Implement Clay load testing, auto-scaling, and capacity planning strategies. Use when running performance tests, configuring horizontal scaling, or planning capacity for Clay integrations. Trigger with phrases like "clay load test", "clay scale", "clay performance test", "clay capacity", "clay k6", "clay benchmark".

  • Review Score: 88%
  • Validation Score: 12/16
  • Implementation Score: 88%
  • Activation Score: 90%

Clay Load & Scale

Overview

Load testing, scaling strategies, and capacity planning for Clay integrations.

Prerequisites

  • k6 load testing tool installed
  • Kubernetes cluster with HPA configured
  • Prometheus for metrics collection
  • Test environment API keys

Load Testing with k6

Basic Load Test

// clay-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 10 },   // Ramp up
    { duration: '5m', target: 10 },   // Steady state
    { duration: '2m', target: 50 },   // Ramp to peak
    { duration: '5m', target: 50 },   // Stress test
    { duration: '2m', target: 0 },    // Ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  const response = http.post(
    'https://api.clay.com/v1/resource',
    JSON.stringify({ test: true }),
    {
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${__ENV.CLAY_API_KEY}`,
      },
    }
  );

  check(response, {
    'status is 200': (r) => r.status === 200,
    'latency < 500ms': (r) => r.timings.duration < 500,
  });

  sleep(1);
}
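
The staged profile above drives load by virtual-user count. If you need to hold a fixed request rate instead, for example to stay under Clay's rate limits while measuring latency, k6's constant-arrival-rate executor keeps RPS steady. A minimal sketch; the rate, duration, and VU numbers are placeholders to tune:

// Alternative options block for clay-load-test.js; the default function
// (POST + checks) stays the same
export const options = {
  scenarios: {
    steady_rps: {
      executor: 'constant-arrival-rate',
      rate: 50,             // iterations started per timeUnit
      timeUnit: '1s',
      duration: '10m',
      preAllocatedVUs: 20,  // VUs k6 keeps ready
      maxVUs: 100,          // ceiling if responses slow down
    },
  },
  thresholds: {
    http_req_duration: ['p(95)<500'],
    http_req_failed: ['rate<0.01'],
  },
};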

Run Load Test

# Install k6
brew install k6  # macOS
# or on Debian/Ubuntu, after adding the Grafana k6 apt repository: sudo apt install k6

# Run test
k6 run --env CLAY_API_KEY=${CLAY_API_KEY} clay-load-test.js

# Run with output to InfluxDB
k6 run --out influxdb=http://localhost:8086/k6 clay-load-test.js

Scaling Patterns

Horizontal Scaling

# Kubernetes HPA for the Clay integration Deployment
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: clay-integration-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: clay-integration
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Pods
      pods:
        metric:
          name: clay_queue_depth
        target:
          type: AverageValue
          averageValue: 100
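
The clay_queue_depth metric in the HPA above only works if the application exposes it and an adapter (for example the Prometheus Adapter) publishes it to the Kubernetes custom metrics API. A minimal sketch of exposing the gauge with prom-client; the metric name matches the HPA, while the port and Express route are assumptions to adapt:

// metrics.ts: queue-depth gauge scraped by Prometheus
import express from 'express';
import { Gauge, register } from 'prom-client';

const clayQueueDepth = new Gauge({
  name: 'clay_queue_depth',
  help: 'Clay requests waiting in the local queue',
});

// Call wherever the queue changes size.
export function setQueueDepth(depth: number): void {
  clayQueueDepth.set(depth);
}

const app = express();
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});
app.listen(9464);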

Connection Pooling

// generic-pool exposes createPool(factory, options); ClayClient is assumed
// to come from your Clay SDK wrapper.
import { createPool } from 'generic-pool';

const clayPool = createPool(
  {
    create: async () => {
      return new ClayClient({
        apiKey: process.env.CLAY_API_KEY!,
      });
    },
    destroy: async (client) => {
      await client.close();
    },
  },
  {
    max: 20,
    min: 5,
    idleTimeoutMillis: 30000,
  }
);

async function withClayClient<T>(
  fn: (client: ClayClient) => Promise<T>
): Promise<T> {
  const client = await clayPool.acquire();
  try {
    return await fn(client);
  } finally {
    clayPool.release(client);
  }
}
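
For example, a caller can wrap a single Clay call like this (enrichPerson is a placeholder for whatever your ClayClient exposes):

const enriched = await withClayClient((client) =>
  client.enrichPerson({ email: 'jane@example.com' })
);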

Capacity Planning

Metrics to Monitor

| Metric | Warning | Critical |
|--------|---------|----------|
| CPU Utilization | > 70% | > 85% |
| Memory Usage | > 75% | > 90% |
| Request Queue Depth | > 100 | > 500 |
| Error Rate | > 1% | > 5% |
| P95 Latency | > 1000ms | > 3000ms |
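
One way to act on these bands is a small status check that maps a live reading to ok, warning, or critical. A sketch using the thresholds from the table; the field names are illustrative:

type Status = 'ok' | 'warning' | 'critical';

// 'critical' if any reading crosses its critical bound,
// 'warning' if any crosses its warning bound, otherwise 'ok'.
function healthStatus(m: {
  cpuPercent: number;
  memoryPercent: number;
  queueDepth: number;
  errorRate: number; // fraction, e.g. 0.01 = 1%
  p95LatencyMs: number;
}): Status {
  const checks: Array<[number, number, number]> = [
    [m.cpuPercent, 70, 85],
    [m.memoryPercent, 75, 90],
    [m.queueDepth, 100, 500],
    [m.errorRate * 100, 1, 5],
    [m.p95LatencyMs, 1000, 3000],
  ];
  if (checks.some(([value, , critical]) => value > critical)) return 'critical';
  if (checks.some(([value, warning]) => value > warning)) return 'warning';
  return 'ok';
}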

Capacity Calculation

// Metrics snapshot read by the estimator (adapt to your monitoring source).
interface SystemMetrics {
  requestsPerSecond: number;
  cpuPercent: number;
}

interface CapacityEstimate {
  currentRPS: number;
  maxRPS: number;
  headroom: number;
  scaleRecommendation: string;
}

function estimateClayCapacity(
  metrics: SystemMetrics
): CapacityEstimate {
  const currentRPS = metrics.requestsPerSecond;
  const cpuUtilization = metrics.cpuPercent;

  // Estimate max RPS based on current performance
  const maxRPS = currentRPS / (cpuUtilization / 100) * 0.7; // 70% target
  const headroom = ((maxRPS - currentRPS) / currentRPS) * 100;

  return {
    currentRPS,
    maxRPS: Math.floor(maxRPS),
    headroom: Math.round(headroom),
    scaleRecommendation: headroom < 30
      ? 'Scale up soon'
      : headroom < 50
      ? 'Monitor closely'
      : 'Adequate capacity',
  };
}
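
Worked example: at 50 RPS and 50% CPU utilization, maxRPS = 50 / 0.50 * 0.7 = 70, so headroom = (70 - 50) / 50 = 40%, which lands in the "Monitor closely" band.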

Benchmark Results Template

## Clay Performance Benchmark
**Date:** YYYY-MM-DD
**Environment:** [staging/production]
**SDK Version:** X.Y.Z

### Test Configuration
- Duration: 10 minutes
- Ramp: 10 → 100 → 10 VUs
- Target endpoint: /v1/resource

### Results
| Metric | Value |
|--------|-------|
| Total Requests | 50,000 |
| Success Rate | 99.9% |
| P50 Latency | 120ms |
| P95 Latency | 350ms |
| P99 Latency | 800ms |
| Max RPS Achieved | 150 |

### Observations
- [Key finding 1]
- [Key finding 2]

### Recommendations
- [Scaling recommendation]

Instructions

Step 1: Create Load Test Script

Write k6 test script with appropriate thresholds.

Step 2: Configure Auto-Scaling

Set up HPA with CPU and custom metrics.

Step 3: Run Load Test

Execute test and collect metrics.

Step 4: Analyze and Document

Record results in benchmark template.

Output

  • Load test script created
  • HPA configured
  • Benchmark results documented
  • Capacity recommendations defined

Error Handling

| Issue | Cause | Solution |
|-------|-------|----------|
| k6 timeout | Rate limited | Reduce RPS |
| HPA not scaling | Wrong metrics | Verify metric name |
| Connection refused | Pool exhausted | Increase pool size |
| Inconsistent results | Warm-up needed | Add ramp-up phase |

Examples

Quick k6 Test

k6 run --vus 10 --duration 30s clay-load-test.js

Check Current Capacity

const metrics = await getSystemMetrics();
const capacity = estimateClayCapacity(metrics);
console.log('Headroom:', capacity.headroom + '%');
console.log('Recommendation:', capacity.scaleRecommendation);

Scale Manually

# Temporary override: the HPA reconciles the replica count back within its
# min/max bounds on its next sync
kubectl scale deployment clay-integration --replicas=5
kubectl get hpa clay-integration-hpa

Resources

  • k6 Documentation
  • Kubernetes HPA
  • Clay Rate Limits

Next Steps

For reliability patterns, see clay-reliability-patterns.