simon/skills

Auto-generated tile from GitHub (10 skills)

Quality: 94% — Does it follow best practices?
Impact: 92% (1.16x) — Average score across 44 eval scenarios
Security (by Snyk): Advisory — Suggest reviewing before use


skills/node/rules/profiling.md

name: profiling
description: Profiling and benchmarking tools
metadata: {"tags": "profiling, benchmarking, performance, flame-graphs"}

Profiling in Node.js

Flame Graphs with @platformatic/flame

Use @platformatic/flame for CPU profiling with flame graph visualization:

npx @platformatic/flame app.ts

This starts your application with profiling enabled and generates an interactive flame graph.

Markdown Output for AI Analysis

The flame CLI can output markdown reports suitable for AI-assisted performance analysis:

npx @platformatic/flame --output markdown app.ts

This enables a fully agentic workflow where you can:

  1. Profile your application
  2. Get markdown output describing hotspots
  3. Feed the report to an AI assistant for optimization suggestions

Programmatic Usage

import { profile } from '@platformatic/flame';

const stop = await profile({
  outputFile: 'profile.html',
});

// Run your workload
await runBenchmark();

await stop();

Load Testing Tools

autocannon

Use autocannon for HTTP benchmarking:

# Basic benchmark
npx autocannon http://localhost:3000

# With options
npx autocannon -c 100 -d 30 -p 10 http://localhost:3000

# POST request with body
npx autocannon -m POST -H "Content-Type: application/json" -b '{"name":"test"}' http://localhost:3000/users

Options:

  • -c - Number of concurrent connections (default: 10)
  • -d - Duration in seconds (default: 10)
  • -p - Number of pipelined requests (default: 1)
  • -m - HTTP method
  • -b - Request body

Programmatic autocannon

import autocannon from 'autocannon';

const result = await autocannon({
  url: 'http://localhost:3000',
  connections: 100,
  duration: 30,
  pipelining: 10,
});

console.log(autocannon.printResult(result));

wrk

wrk is a high-performance HTTP benchmarking tool:

# Basic benchmark
wrk -t12 -c400 -d30s http://localhost:3000

# With Lua script for custom requests
wrk -t12 -c400 -d30s -s post.lua http://localhost:3000

Options:

  • -t - Number of threads
  • -c - Number of connections
  • -d - Duration
  • -s - Lua script for custom logic

k6

k6 is ideal for complex load testing scenarios:

// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 100,
  duration: '30s',
};

export default function () {
  const res = http.get('http://localhost:3000');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
  sleep(1);
}
Run the script:

k6 run load-test.js

Built-in Node.js Profiling

CPU Profiling

# Generate V8 profiling log
node --prof app.js

# Process the log
node --prof-process isolate-*.log > profile.txt
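
Any CPU-bound script makes a convenient target for trying --prof. The recursive Fibonacci below is a hypothetical stand-in for a real hotspot, not part of the toolchain itself:

```javascript
// hot.js - a deliberately CPU-bound script to profile with `node --prof hot.js`.
// (Hypothetical workload; substitute your real application entry point.)
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// The recursive calls will dominate the processed V8 profile.
console.log(fib(30));
```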

Heap Snapshots

# Start with inspector
node --inspect app.js

# Then use Chrome DevTools (chrome://inspect) to:
# - Take heap snapshots
# - Record allocation timelines
# - Find memory leaks

Diagnostic Reports

# Generate report on signal
node --report-on-signal app.js
kill -SIGUSR2 <pid>

# Generate report on fatal errors (e.g. out of memory)
node --report-on-fatalerror app.js

# Generate report on uncaught exceptions
node --report-uncaught-exception app.js
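
Diagnostic reports can also be produced on demand from code via the built-in process.report API, without restarting with extra flags:

```javascript
// getReport() returns the diagnostic report as a plain object;
// writeReport() would instead save the same data to disk as JSON.
const report = process.report.getReport();

// The header section includes runtime details such as the Node.js version.
console.log(report.header.nodejsVersion);
```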

Profiling Workflow

  1. Establish baseline - Run autocannon to get initial metrics
  2. Profile - Use @platformatic/flame to identify hotspots
  3. Optimize - Fix the identified bottlenecks
  4. Verify - Run autocannon again to measure improvement
  5. Repeat - Continue until performance goals are met

Tool Comparison

Tool                 | Best For
-------------------- | ----------------------------------------------------
@platformatic/flame  | CPU profiling, flame graphs, AI-assisted analysis
autocannon           | Quick HTTP benchmarks, Node.js native
wrk                  | Maximum throughput testing
k6                   | Complex scenarios, CI/CD integration, scripted tests
