Use @platformatic/flame for CPU profiling with flame graph visualization:

```shell
npx @platformatic/flame app.ts
```

This starts your application with profiling enabled and generates an interactive flame graph.

flame can output markdown reports suitable for AI-assisted performance analysis:

```shell
npx @platformatic/flame --output markdown app.ts
```

This enables a fully agentic workflow for performance analysis. flame can also be driven programmatically:

```javascript
import { profile } from '@platformatic/flame';

const stop = await profile({
  outputFile: 'profile.html',
});

// Run your workload
await runBenchmark();

await stop();
```

Use autocannon for HTTP benchmarking:

```shell
# Basic benchmark
npx autocannon http://localhost:3000

# With options
npx autocannon -c 100 -d 30 -p 10 http://localhost:3000

# POST request with body
npx autocannon -m POST -H "Content-Type: application/json" -b '{"name":"test"}' http://localhost:3000/users
```

Options:

- `-c` - Number of concurrent connections (default: 10)
- `-d` - Duration in seconds (default: 10)
- `-p` - Number of pipelined requests (default: 1)
- `-m` - HTTP method
- `-b` - Request body

autocannon also has a programmatic API:

```javascript
import autocannon from 'autocannon';

const result = await autocannon({
  url: 'http://localhost:3000',
  connections: 100,
  duration: 30,
  pipelining: 10,
});

console.log(autocannon.printResult(result));
```

wrk is a high-performance HTTP benchmarking tool:
```shell
# Basic benchmark
wrk -t12 -c400 -d30s http://localhost:3000

# With Lua script for custom requests
wrk -t12 -c400 -d30s -s post.lua http://localhost:3000
```

Options:

- `-t` - Number of threads
- `-c` - Number of connections
- `-d` - Duration
- `-s` - Lua script for custom logic

k6 is ideal for complex load testing scenarios:
```javascript
// load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 100,
  duration: '30s',
};

export default function () {
  const res = http.get('http://localhost:3000');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
  sleep(1);
}
```

Run it with:

```shell
k6 run load-test.js
```

Node.js also ships with a built-in V8 sampling profiler:

```shell
# Generate V8 profiling log
node --prof app.js

# Process the log
node --prof-process isolate-*.log > profile.txt
```

For memory profiling, start the process with the inspector attached:

```shell
# Start with inspector
node --inspect app.js

# Then use Chrome DevTools (chrome://inspect) to:
# - Take heap snapshots
# - Record allocation timelines
# - Find memory leaks
```

Node.js can also generate diagnostic reports:

```shell
# Generate report on signal
node --report-on-signal app.js
kill -SIGUSR2 <pid>

# Generate report on fatal internal errors (e.g. out of memory)
node --report-on-fatalerror app.js
```

| Tool | Best For |
|---|---|
| @platformatic/flame | CPU profiling, flame graphs, AI-assisted analysis |
| autocannon | Quick HTTP benchmarks, Node.js native |
| wrk | Maximum throughput testing |
| k6 | Complex scenarios, CI/CD integration, scripted tests |
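In CI you may want to gate on the autocannon numbers rather than print them. A minimal sketch of such a gate; the field names (`latency.p99`, `requests.average`, `non2xx`) are assumptions based on autocannon's documented result shape, so verify them against the version you use:

```javascript
// Hypothetical CI gate over an autocannon-style result object.
// Field names are assumptions — check them against your autocannon version.
function checkResult(result, { maxP99Ms = 200, minRps = 1000 } = {}) {
  const failures = [];
  if (result.latency.p99 > maxP99Ms) {
    failures.push(`p99 latency ${result.latency.p99}ms exceeds ${maxP99Ms}ms`);
  }
  if (result.requests.average < minRps) {
    failures.push(`average ${result.requests.average} req/s below ${minRps} req/s`);
  }
  if (result.non2xx > 0) {
    failures.push(`${result.non2xx} non-2xx responses`);
  }
  return failures;
}

// Example with fabricated numbers standing in for a real benchmark run:
const failures = checkResult({
  latency: { p99: 180 },
  requests: { average: 1500 },
  non2xx: 0,
});
console.log(failures.length === 0 ? 'PASS' : failures.join('; ')); // prints "PASS"
```

A failing run returns one message per violated threshold, which makes it easy to fail the build with a meaningful error.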
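The `--prof` workflow above needs sustained CPU work to produce a useful profile. A tiny synthetic workload (a hypothetical `app.js`, not part of the tools above) to experiment with:

```javascript
// app.js — deliberately CPU-bound so the V8 sampling profiler
// (node --prof app.js) collects enough ticks to be interesting.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

let total = 0;
for (let i = 0; i < 5; i++) {
  total += fib(28); // naive recursion: exponential work on purpose
}
console.log(`done, total = ${total}`); // prints "done, total = 1589055"
```

After running it under `--prof` and processing the log with `--prof-process`, `fib` should dominate the ticks in the processed output.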
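The diagnostic reports above are written as JSON files (`report.*.json`), so they can be summarized programmatically. A hedged sketch; the field paths used here (`header.nodejsVersion`, `header.event`, `javascriptHeap.usedMemory`) are assumptions based on the diagnostic report format, so verify them against the report your Node.js version emits:

```javascript
// Hypothetical helper: pull a few headline fields out of a diagnostic report.
// Field paths are assumptions — inspect a real report.*.json to confirm them.
import { readFileSync } from 'node:fs';

function summarizeReport(path) {
  const report = JSON.parse(readFileSync(path, 'utf8'));
  return {
    node: report.header?.nodejsVersion,
    trigger: report.header?.event,
    heapUsedBytes: report.javascriptHeap?.usedMemory,
  };
}

// Usage (report filename pattern shown for illustration):
// console.log(summarizeReport('report.20240101.120000.12345.0.001.json'));
```

This is handy when a signal-triggered report lands on a production box and you only need a quick heap reading before pulling the full file.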