Auto-generated tile from GitHub (10 skills)

Best practices: 94% (score 92) — "Does it follow best practices?"
Impact: 92% — 1.16x average score across 44 eval scenarios
Advisory: suggest reviewing before use
A B2B SaaS platform uses a Node.js routing layer to direct API requests to customer-specific backend services. Each incoming request must resolve the customer's backend hostname, load the customer's configuration file, and then proxy the request. Under low traffic the service performs well, but during peak hours (hundreds of concurrent requests) the team is observing p99 latency spikes of 400–600ms even though the underlying operations should each take under 10ms.
The engineering team suspects a Node.js runtime bottleneck rather than a network problem, because the latency grows non-linearly with concurrency. The existing implementation was written quickly and hasn't been reviewed for Node.js internals correctness. Your task is to diagnose the source of the latency and produce an optimized replacement for the service module.
Produce the following files in your working directory:
service.js — the optimized routing service implementation. It should expose an async function handleRequest(customerId, hostname, configPath) that resolves the hostname, reads the customer config, and returns { address, config }. It should be a drop-in replacement for the existing implementation.
start.sh — a shell script that shows exactly how to launch the service with correct runtime configuration for production concurrency.
analysis.md — a brief explanation of what was wrong in the original implementation, what changes were made and why, and how to verify the service is no longer experiencing the bottleneck.
The following files are provided as inputs. Extract them before beginning.
=============== FILE: inputs/service_original.js ===============
'use strict';

const dns = require('node:dns');
const fs = require('node:fs/promises');

// Increase thread pool for more concurrency
process.env.UV_THREADPOOL_SIZE = 32;

/**
 * Resolve the customer's backend address and load its configuration.
 */
async function handleRequest(customerId, hostname, configPath) {
  // Resolve the backend hostname
  const { address } = await dns.promises.lookup(hostname);

  // Load customer configuration
  const config = await fs.readFile(configPath, 'utf8');

  return { address, config };
}
module.exports = { handleRequest };

=============== FILE: inputs/load_test.js ===============
'use strict';

// Simulates 50 concurrent requests to demonstrate the latency problem
const { handleRequest } = require('./service_original');

const CONCURRENCY = 50;
const hostnames = Array.from({ length: CONCURRENCY }, (_, i) => `customer-${i % 5}.internal.example.com`);
const configs = Array.from({ length: CONCURRENCY }, () => './inputs/sample_config.json');

async function runLoadTest() {
  const start = Date.now();
  await Promise.all(
    hostnames.map((h, i) => handleRequest(`c${i}`, h, configs[i]))
  );
  const elapsed = Date.now() - start;
  console.log(`${CONCURRENCY} concurrent requests completed in ${elapsed}ms`);
}

runLoadTest().catch(console.error);

=============== FILE: inputs/sample_config.json ===============
{
  "version": "2.1",
  "timeout": 30000,
  "retries": 3,
  "features": { "rateLimit": true, "caching": false, "tracing": true },
  "endpoints": { "primary": "/api/v2", "health": "/health" }
}
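The verification step requested for `analysis.md` can be sketched as a small timing harness that works against any `handleRequest`-shaped function, mirroring `inputs/load_test.js`. The stub service below is an assumption so the harness runs standalone; substituting the real module lets you compare before/after numbers:

```javascript
'use strict';

// Measure wall-clock time for `concurrency` simultaneous calls through any
// handleRequest-shaped function.
async function measure(fn, concurrency) {
  const start = process.hrtime.bigint();
  await Promise.all(
    Array.from({ length: concurrency }, (_, i) =>
      fn(`c${i}`, `customer-${i % 5}.internal.example.com`, './config.json'))
  );
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}

// Stand-in service that never touches the threadpool (a stub so this file
// runs standalone; swap in require('./service') to measure the real fix).
const stub = async () => ({ address: '10.0.0.1', config: '{}' });

measure(stub, 50).then((ms) => {
  // A non-blocking implementation should finish 50 concurrent calls in
  // roughly the time of one call, not 50x one call.
  console.log(`50 concurrent requests completed in ${ms.toFixed(0)}ms`);
});
```

With a threadpool-bound implementation and the default pool size of 4, the same harness shows elapsed time climbing in batches as calls queue for free threads.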
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
scenario-6
scenario-7
scenario-8
scenario-9
scenario-10
scenario-11
scenario-12
scenario-13
scenario-14
scenario-15
scenario-16
scenario-17
scenario-18
scenario-19
scenario-20
scenario-21
scenario-22
scenario-23
scenario-24
scenario-25
scenario-26
scenario-27
scenario-28
scenario-29
scenario-30
scenario-31
scenario-32
scenario-33
scenario-34
scenario-35
scenario-36
scenario-37
scenario-38
scenario-39
scenario-40
scenario-41
scenario-42
scenario-43
scenario-44
skills
documentation
fastify
init
linting-neostandard-eslint9
node
nodejs-core
rules
oauth
octocat
snipgrapher