
simon/skills

Auto-generated tile from GitHub (10 skills)

Quality: 94% — Does it follow best practices?

Impact: 92% (1.16x) — Average score across 44 eval scenarios

Security by Snyk: Advisory — Suggest reviewing before use


evals/scenario-40/task.md

Microservice Routing Layer: Latency Spike Investigation

Problem/Feature Description

A B2B SaaS platform uses a Node.js routing layer to direct API requests to customer-specific backend services. Each incoming request must resolve the customer's backend hostname, load the customer's configuration file, and then proxy the request. Under low traffic the service performs well, but during peak hours (hundreds of concurrent requests) the team is observing p99 latency spikes of 400–600ms even though the underlying operations should each take under 10ms.

The engineering team suspects a Node.js runtime bottleneck rather than a network problem, because the latency grows non-linearly with concurrency. The existing implementation was written quickly and has not been reviewed for how it uses Node.js runtime internals. Your task is to diagnose the source of the latency and produce an optimized replacement for the service module.

Output Specification

Produce the following files in your working directory:

  1. `service.js` — the optimized routing service implementation. It should expose an async function `handleRequest(customerId, hostname, configPath)` that resolves the hostname, reads the customer config, and returns `{ address, config }`. It should be a drop-in replacement for the existing implementation.

  2. `start.sh` — a shell script that shows exactly how to launch the service with correct runtime configuration for production concurrency.

  3. `analysis.md` — a brief explanation of what was wrong in the original implementation, what changes were made and why, and how to verify the service is no longer experiencing the bottleneck.
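One runtime detail relevant to the `start.sh` deliverable: libuv reads `UV_THREADPOOL_SIZE` once at process startup, so it must be in the environment before `node` launches; assigning `process.env.UV_THREADPOOL_SIZE` from inside a module (as `service_original.js` does) has no effect on the pool. A minimal sketch, assuming a hypothetical entrypoint named `server.js`:

```shell
#!/usr/bin/env sh
# UV_THREADPOOL_SIZE is read by libuv once at startup, so it must be set
# in the environment before the Node.js process begins executing.
# server.js is a hypothetical entrypoint that wires handleRequest into an
# HTTP server; substitute the actual entrypoint for your deployment.
UV_THREADPOOL_SIZE=64 node server.js
```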

Input Files

The following files are provided as inputs. Extract them before beginning.

=============== FILE: inputs/service_original.js ===============

```javascript
'use strict';

const dns = require('node:dns');
const fs = require('node:fs/promises');

// Increase thread pool for more concurrency
process.env.UV_THREADPOOL_SIZE = 32;

/**
 * Handle a single routing request.
 * @param {string} customerId
 * @param {string} hostname - customer backend hostname to resolve
 * @param {string} configPath - path to customer config file
 * @returns {Promise<{address: string, config: string}>}
 */
async function handleRequest(customerId, hostname, configPath) {
  // Resolve hostname to IP address
  const { address } = await dns.promises.lookup(hostname);

  // Load customer configuration
  const config = await fs.readFile(configPath, 'utf8');

  return { address, config };
}

module.exports = { handleRequest };
```

=============== FILE: inputs/load_test.js ===============

```javascript
'use strict';

// Simulates 50 concurrent requests to demonstrate the latency problem
const { handleRequest } = require('./service_original');

const CONCURRENCY = 50;
const hostnames = Array.from(
  { length: CONCURRENCY },
  (_, i) => `customer-${i % 5}.internal.example.com`
);
const configs = Array.from({ length: CONCURRENCY }, () => './inputs/sample_config.json');

async function runLoadTest() {
  const start = Date.now();
  await Promise.all(
    hostnames.map((h, i) => handleRequest(`c${i}`, h, configs[i]))
  );
  const elapsed = Date.now() - start;
  console.log(`${CONCURRENCY} concurrent requests completed in ${elapsed}ms`);
}

runLoadTest().catch(console.error);
```

=============== FILE: inputs/sample_config.json ===============

```json
{
  "version": "2.1",
  "timeout": 30000,
  "retries": 3,
  "features": {
    "rateLimit": true,
    "caching": false,
    "tracing": true
  },
  "endpoints": {
    "primary": "/api/v2",
    "health": "/health"
  }
}
```
