Instrument Node.js applications to generate traces, logs, and metrics for deep insights into behavior and performance.
Install the auto-instrumentation package:
```bash
npm install @opentelemetry/auto-instrumentations-node
```
Note: installing the package alone is not enough; you must also activate the SDK and enable exporters.
All environment variables that control the SDK behavior:
| Variable | Required | Default | Description |
|---|---|---|---|
| `OTEL_SERVICE_NAME` | Yes | `unknown_service` | Identifies your service in telemetry data |
| `OTEL_TRACES_EXPORTER` | Yes | `none` | Must be set to `otlp` to export traces |
| `OTEL_METRICS_EXPORTER` | No | `none` | Set to `otlp` to export metrics |
| `OTEL_LOGS_EXPORTER` | No | `none` | Set to `otlp` to export logs |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | Yes | `http://localhost:4317` | OTLP collector endpoint |
| `OTEL_EXPORTER_OTLP_HEADERS` | No | - | Headers for authentication (e.g., `Authorization=Bearer TOKEN`) |
| `OTEL_EXPORTER_OTLP_PROTOCOL` | No | `http/protobuf` with auto-instrumentations-node; `grpc` otherwise | Protocol: `grpc`, `http/protobuf`, or `http/json` |
| `OTEL_RESOURCE_ATTRIBUTES` | No | - | Additional resource attributes (e.g., `deployment.environment=production`) |
Critical: Without OTEL_TRACES_EXPORTER=otlp, the SDK defaults to none and no telemetry is exported.
Protocol mismatch pitfall: @opentelemetry/auto-instrumentations-node defaults to http/protobuf, not grpc.
When targeting a Collector gRPC receiver on port 4317, always set OTEL_EXPORTER_OTLP_PROTOCOL=grpc explicitly.
Omitting it causes a parse error on the Collector side (`Parse Error: Expected HTTP/`) and silent span loss on the SDK side.
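For example, when pointing at a local Collector gRPC receiver (the localhost endpoint below is a placeholder), set the protocol explicitly:
```bash
# The Collector's gRPC receiver listens on 4317; tell the exporter to speak gRPC.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```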
Before you start, have your OTLP endpoint (e.g., https://<region>.your-platform.com) and a service name (e.g., order-api, checkout-service) ready.
The SDK must be loaded before your application code. The method depends on your module system:
ESM projects (package.json has "type": "module", or using .mjs files):
```bash
export NODE_OPTIONS="--import @opentelemetry/auto-instrumentations-node/register"
```
CommonJS projects (the default, or using .cjs files):
```bash
export NODE_OPTIONS="--require @opentelemetry/auto-instrumentations-node/register"
```
Note: tools like npm, pnpm, and yarn are themselves Node.js applications, so you may observe instrumentation data from package managers when running commands.
export OTEL_SERVICE_NAME="my-service"This step is required - without it, no telemetry is sent:
# Required for traces
export OTEL_TRACES_EXPORTER="otlp"
# Optional: also export metrics and logs
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"export OTEL_EXPORTER_OTLP_ENDPOINT="https://<OTLP_ENDPOINT>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_AUTH_TOKEN"export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_AUTH_TOKEN,Dash0-Dataset=my-dataset"# Service identification
export OTEL_SERVICE_NAME="my-service"
# Enable exporters (required!)
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"
# Configure endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<OTLP_ENDPOINT>"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_AUTH_TOKEN"
# Activate SDK (use --import for ESM, --require for CommonJS)
export NODE_OPTIONS="--import @opentelemetry/auto-instrumentations-node/register"
node app.js
```
Node.js does not automatically load .env files. Use the --env-file flag (Node.js 20.6+):
.env.local:
```
OTEL_SERVICE_NAME=my-service
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://<OTLP_ENDPOINT>
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer YOUR_AUTH_TOKEN
NODE_OPTIONS=--import @opentelemetry/auto-instrumentations-node/register
```
Run with:
```bash
node --env-file=.env.local app.js
```
Note: the --env-file flag requires Node.js 20.6 or later.
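If you are stuck on an older Node.js release without --env-file, one workaround is to export the file from the shell before starting the app. This is a sketch that assumes simple KEY=value lines with no spaces or quoting:
```bash
# Export every assignment in .env.local into the environment, then start the app.
set -a
source .env.local
set +a
node app.js
```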
Add instrumented scripts to your package.json:
```json
{
"scripts": {
"start": "node app.js",
"start:otel": "node --env-file=.env.local app.js",
"start:otel:console": "OTEL_SERVICE_NAME=my-service OTEL_TRACES_EXPORTER=console node --import @opentelemetry/auto-instrumentations-node/register app.js",
"dev": "node --env-file=.env.local --watch app.js"
}
}
```
.env.local (create this file):
```
OTEL_SERVICE_NAME=my-service
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://<OTLP_ENDPOINT>
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer YOUR_AUTH_TOKEN
NODE_OPTIONS=--import @opentelemetry/auto-instrumentations-node/register
```
Usage:
```bash
npm run start:otel # Run with OTLP export to backend
npm run start:otel:console # Run with console output (no collector needed)
npm run dev # Development with watch mode + telemetry
```
For development without a collector, use the console exporter to see telemetry in your terminal:
```bash
export OTEL_SERVICE_NAME="my-service"
export OTEL_TRACES_EXPORTER="console"
export OTEL_METRICS_EXPORTER="console"
export OTEL_LOGS_EXPORTER="console"
export NODE_OPTIONS="--import @opentelemetry/auto-instrumentations-node/register"
node app.js
```
This prints spans, metrics, and logs directly to stdout, useful for verifying instrumentation works before configuring a remote backend.
If you set OTEL_TRACES_EXPORTER=otlp but have no collector running, you'll see connection errors. This is expected behavior:
```
Error: 14 UNAVAILABLE: No connection established. Last error: connect ECONNREFUSED 127.0.0.1:4317
```
Options:
- Run a local Collector that exposes the OTLP ports (see the sketch after this list)
- Switch to the console exporter during development (recommended for quick testing)
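As a rough sketch of the first option, the upstream Collector image can be run locally. The image name and its bundled default configuration (OTLP receivers on 4317/4318) are assumptions here; check the Collector documentation for your version:
```bash
# Assumes the image's default config enables OTLP receivers on 4317 (gRPC) and 4318 (HTTP).
docker run --rm -p 4317:4317 -p 4318:4318 otel/opentelemetry-collector:latest
```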
Set service.name, service.version, and deployment.environment.name for every deployment.
See resource attributes for the full list of required and recommended attributes.
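For example, the version and environment can be supplied through OTEL_RESOURCE_ATTRIBUTES alongside the service name (the values below are placeholders):
```bash
export OTEL_SERVICE_NAME="my-service"
# service.version and deployment.environment.name as resource attributes
export OTEL_RESOURCE_ATTRIBUTES="service.version=1.4.2,deployment.environment.name=production"
```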
See Kubernetes deployment for pod metadata injection, resource attributes, and Dash0 Kubernetes Operator guidance.
The auto-instrumentation package automatically instruments:
| Category | Libraries |
|---|---|
| HTTP | http, https, express, fastify, koa, hapi |
| Database | pg, mysql, mysql2, mongodb, redis, ioredis |
| ORM | knex, sequelize, typeorm, prisma |
| Messaging | amqplib, kafkajs |
| AWS | aws-sdk, @aws-sdk/* |
| Logging | pino, winston, bunyan |
| GraphQL | graphql |
| gRPC | @grpc/grpc-js |
Refer to OpenTelemetry documentation for the complete list.
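If you need to trim this set, the auto-instrumentations package reads OTEL_NODE_ENABLED_INSTRUMENTATIONS and OTEL_NODE_DISABLED_INSTRUMENTATIONS. Treat the sketch below as an assumption and verify the variable names and value format against the package README for your version:
```bash
# Assumes comma-separated instrumentation names without the
# "@opentelemetry/instrumentation-" prefix.
export OTEL_NODE_DISABLED_INSTRUMENTATIONS="fs,dns"
```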
Create custom spans to add business context around your own operations (logger below is a structured logger such as pino; see structured logging further down):
```js
import { trace, SpanStatusCode } from "@opentelemetry/api";
const tracer = trace.getTracer("my-service");
async function processOrder(order) {
return tracer.startActiveSpan("order.process", async (span) => {
try {
span.setAttribute("order.id", order.id);
span.setAttribute("order.total", order.total);
const result = await saveOrder(order);
return result;
} catch (error) {
span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
const ctx = span.spanContext();
logger.error({
'trace_id': ctx.traceId,
'span_id': ctx.spanId,
'exception.type': error.name,
'exception.message': error.message,
'exception.stacktrace': error.stack,
}, 'order.process.failed');
throw error;
} finally {
span.end();
}
});
}
```
Auto-instrumentation creates spans you do not control directly (e.g., the SERVER span for an HTTP request).
To enrich these spans with business context or set their status, retrieve the active span from the current context.
See adding attributes to auto-instrumented spans for when to use this pattern.
```js
import { trace } from "@opentelemetry/api";
app.post("/api/orders", async (req, res) => {
const span = trace.getActiveSpan();
span?.setAttribute("order.id", req.body.orderId);
span?.setAttribute("tenant.id", req.headers["x-tenant-id"]);
// ... handler logic
});
```
trace.getActiveSpan() returns undefined if no span is active (e.g., when instrumentation is disabled).
Always use optional chaining (?.) when calling methods on the result.
See span status code for the full rules. This section shows how to apply them in Node.js.
An ERROR status needs a descriptive message: the message field on the status object must contain the error class and a short explanation, enough to understand the failure without opening the full trace.
```js
// BAD: no status message
span.setStatus({ code: SpanStatusCode.ERROR });
// BAD: generic message with no diagnostic value
span.setStatus({ code: SpanStatusCode.ERROR, message: 'something went wrong' });
// GOOD: specific message with error class and context
span.setStatus({
code: SpanStatusCode.ERROR,
message: `TimeoutError: upstream payment service did not respond within 5s`,
});
```
Do not include stack traces in the status message.
Record those in a log record with exception.stacktrace instead.
```js
// BAD: stack trace in the status message
span.setStatus({ code: SpanStatusCode.ERROR, message: error.stack });
// GOOD: short message only
span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
```
Use OK only for confirmed success: set status to OK when application logic has explicitly verified the operation succeeded.
Leave status UNSET if the code simply did not encounter an error.
```js
// GOOD: explicit confirmation from downstream
const response = await fetch(url);
if (response.ok) {
span.setStatus({ code: SpanStatusCode.OK });
}
// BAD: setting OK speculatively
span.setStatus({ code: SpanStatusCode.OK });
return await someFunction(); // might still fail after this point
```
Configure your logging framework to serialize exceptions into a single structured field so that stack traces do not break the one-line-per-record contract. See logs for general guidance on structured logging and exception stack traces.
pino serializes errors into structured JSON by default when passed as the first argument.
The err serializer extracts message, type, and stack as separate fields, keeping each log record on a single line.
```js
import pino from 'pino';
const logger = pino();
try {
processOrder(order);
} catch (err) {
logger.error({ err, order_id: order.id }, 'order.failed');
}
```
Pass the error as { err } in the first argument, not as the message string.
If you log error.stack directly as the message, pino prints it as multi-line text.
winston does not serialize errors by default.
Enable the errors format with { stack: true } to capture the stack trace as a structured field.
```js
import winston from 'winston';
const logger = winston.createLogger({
format: winston.format.combine(
winston.format.errors({ stack: true }),
winston.format.json(),
),
transports: [new winston.transports.Console()],
});
try {
processOrder(order);
} catch (err) {
// Pass the Error itself (not nested under a meta key) so that
// format.errors({ stack: true }) can lift its message and stack into
// structured fields; attach extra context as enumerable properties.
err.order_id = order.id;
logger.error(err);
}
```
Without winston.format.errors({ stack: true }), the stack trace is silently dropped from JSON output.
The Node.js auto-instrumentation registers shutdown hooks for SIGTERM and SIGINT automatically.
No additional code is needed for normal process termination.
However, unhandled exceptions and unhandled promise rejections cause immediate process exit before the SDK flushes its buffers. Register handlers that flush the tracer provider before exiting so that spans from the failing request are not lost.
```js
import { trace } from "@opentelemetry/api";
function forceFlushAll() {
const promises = [];
let tp = trace.getTracerProvider();
// The auto-instrumentation wraps the real provider in a ProxyTracerProvider
// that does not expose forceFlush(). Unwrap it to reach the SDK provider.
if (typeof tp.forceFlush !== "function" && typeof tp.getDelegate === "function") {
tp = tp.getDelegate();
}
if (typeof tp.forceFlush === "function") promises.push(tp.forceFlush());
return Promise.allSettled(promises);
}
process.on("uncaughtException", (error) => {
logger.error({
'exception.type': error.name,
'exception.message': error.message,
'exception.stacktrace': error.stack,
}, "uncaught.exception");
forceFlushAll().finally(() => process.exit(1));
});
process.on("unhandledRejection", (reason) => {
const error = reason instanceof Error ? reason : new Error(String(reason));
logger.error({
'exception.type': error.name,
'exception.message': error.message,
'exception.stacktrace': error.stack,
}, "unhandled.rejection");
forceFlushAll().finally(() => process.exit(1));
});
```
forceFlush() on the tracer provider only flushes span processors; it does not flush the logger or meter providers.
In the auto-instrumented setup, the logger reference here is a pino/winston logger writing to stdout (see structured logging), so the log record reaches the Collector through stdout capture, not through the OTel log provider.
If you use the OTel Logs SDK directly, add its provider to forceFlushAll().
trace.getTracerProvider() returns a ProxyTracerProvider that does not expose forceFlush().
Call getDelegate() to unwrap it and reach the SDK-level provider (NodeTracerProvider) where forceFlush() is defined.
The call returns a promise; finally ensures the process exits even if the flush fails or times out.
Check that exporters are enabled:
```bash
echo $OTEL_TRACES_EXPORTER # Should be "otlp" or "console", not empty
```
The SDK defaults OTEL_TRACES_EXPORTER to none, which silently discards all telemetry.
Verify the SDK is loaded:
```bash
echo $NODE_OPTIONS # Should contain --import or --require
```
If you see:
```
Error: 14 UNAVAILABLE: connect ECONNREFUSED 127.0.0.1:4317
```
this means the SDK is working but cannot reach the collector:
- Switch to OTEL_TRACES_EXPORTER=console if no collector is available
- Verify that OTEL_EXPORTER_OTLP_ENDPOINT is correct

If using .env.local, confirm the command includes the --env-file=.env.local flag.

Symptom: the SDK loads but no instrumentation happens.
Fix: Match the flag to your module system:
"type": "module" in package.json): Use --import--requireUsually means OTEL_TRACES_EXPORTER (or metrics/logs) is not set. Set it explicitly:
export OTEL_TRACES_EXPORTER="otlp"
```