Expert guidance for configuring and deploying the OpenTelemetry Collector. Use when setting up a Collector pipeline, configuring receivers, exporters, or processors, deploying a Collector to Kubernetes or Docker, or forwarding telemetry to Dash0. Triggers on requests involving collector, pipeline, OTLP receiver, exporter, or Dash0 collector setup.
Exporters send processed telemetry to backends. Every pipeline must end with at least one exporter.
Use OTLP/gRPC for Collector-to-backend communication. gRPC provides better throughput and supports bidirectional streaming, which matters at the Collector's aggregation volume. Fall back to OTLP/HTTP only when network proxies do not support HTTP/2.
| Protocol | Exporter key | Default port | When to use |
|---|---|---|---|
| gRPC | otlp | 4317 | Default for all Collector-to-backend exports |
| HTTP | otlphttp | 4318 | Network proxies that block HTTP/2 |
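Wired together, a minimal pipeline that receives OTLP and exports over gRPC might look like this sketch (the endpoint value is a placeholder):

```yaml
# Minimal sketch: OTLP in, OTLP/gRPC out
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```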
Use the OTLP/gRPC exporter to send traces, metrics, and logs to Dash0.
```yaml
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer <AUTH_TOKEN>"
```
Replace <OTLP_ENDPOINT> with your Dash0 OTLP endpoint (e.g., ingress.eu-west-1.aws.dash0.com:4317).
Replace <AUTH_TOKEN> with your Dash0 auth token; see the Authentication section for how to set up the token securely.
Configure retry, timeout, compression, and sending queue for reliable delivery.
```yaml
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer <AUTH_TOKEN>"
    compression: gzip
    timeout: 30s
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000
      storage: file_storage
```
Enable gzip compression to reduce network bandwidth.
The Dash0 ingress endpoint supports gzip-compressed OTLP/gRPC.
```yaml
# GOOD — reduces bandwidth by 60-80 percent
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer <AUTH_TOKEN>"
    compression: gzip
```

```yaml
# BAD — uncompressed traffic wastes bandwidth
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer <AUTH_TOKEN>"
    compression: none
```
Enable retries to handle transient network errors and backend unavailability.
| Setting | Default | Recommendation |
|---|---|---|
| initial_interval | 5s | Keep default unless backend has strict rate limiting |
| max_interval | 30s | Keep default for exponential backoff ceiling |
| max_elapsed_time | 300s | Increase for backends with extended maintenance windows |
| randomization_factor | 0.5 | Keep default to spread retry storms |
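Put together, a retry configuration that spells out all four settings from the table (values mirror the defaults) might look like the following sketch:

```yaml
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    retry_on_failure:
      enabled: true
      initial_interval: 5s        # wait before the first retry
      randomization_factor: 0.5   # jitter to spread retry storms
      max_interval: 30s           # ceiling for exponential backoff
      max_elapsed_time: 300s      # give up after 5 minutes
```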
The sending queue buffers telemetry when the backend is temporarily unavailable. Without it, data is dropped during transient failures.
```yaml
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer <AUTH_TOKEN>"
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 5000
      storage: file_storage
```

| Setting | Default | Recommendation |
|---|---|---|
| num_consumers | 10 | Increase for high-throughput pipelines |
| queue_size | 1000 | Set to 5000 for production workloads |
| storage | (in-memory) | Set to file_storage for persistence across restarts |
Use file_storage with the file storage extension to persist the queue to disk.
In-memory queues lose buffered data when the Collector restarts.
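A minimal sketch of wiring up the file storage extension (available in the contrib distribution); the directory path is an example and must exist and be writable by the Collector process:

```yaml
# Persist the exporter queue to disk so it survives restarts
extensions:
  file_storage:
    directory: /var/lib/otelcol/file_storage   # example path

exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    sending_queue:
      enabled: true
      storage: file_storage   # references the extension above

service:
  extensions: [file_storage]
```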
Do not hardcode auth tokens in configuration files. Reference an environment variable instead.
Create a dedicated auth token with Ingesting permissions only.
The Collector needs to send telemetry, not query or manage the organization.
In Dash0, create the token at Settings → Auth Tokens → Create Token and select the Ingesting scope.
See auth tokens for details on available permission scopes.
```yaml
# GOOD — token from environment variable
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"
```

```yaml
# BAD — hardcoded token in config
exporters:
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer dh0_1a2b3c4d5e6f..."
```
Set the environment variable in your deployment manifest:
```yaml
env:
  - name: DASH0_AUTH_TOKEN
    valueFrom:
      secretKeyRef:
        name: dash0-credentials
        key: auth-token
```
Use the OTLP/HTTP exporter when gRPC is not available (e.g., network proxies that do not support HTTP/2).
```yaml
exporters:
  otlphttp:
    endpoint: https://<OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"
    compression: gzip
```
The OTLP/HTTP exporter uses port 4318 by default. Check your Dash0 endpoint documentation for the correct URL.
Use the debug exporter during development to print telemetry to the Collector's stdout. Do not enable the debug exporter in production — it generates excessive log output.
```yaml
# GOOD — development only
exporters:
  debug:
    verbosity: detailed
```

```yaml
# BAD — debug exporter in production pipeline
exporters:
  debug:
    verbosity: detailed
  otlp:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlp, debug]  # debug wastes CPU and I/O in production
```

| Level | Output | Use for |
|---|---|---|
| basic | One line per export (count only) | Verifying that data flows through the pipeline |
| normal | One line per telemetry item | Spot-checking individual items |
| detailed | Full telemetry item with all attributes | Debugging attribute values and structure |
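For a quick end-to-end check during development, the basic verbosity level keeps output to a single line per export batch; a minimal sketch:

```yaml
# Development-only: confirm data flows without flooding stdout
exporters:
  debug:
    verbosity: basic
```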
Send telemetry to multiple backends by listing multiple exporters in a pipeline. Each exporter receives a copy of the data independently.
```yaml
exporters:
  otlp/dash0:
    endpoint: <OTLP_ENDPOINT>
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"
  otlp/secondary:
    endpoint: secondary-backend:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter]
      exporters: [otlp/dash0, otlp/secondary]
```
Use named instances (otlp/dash0, otlp/secondary) to configure multiple exporters of the same type.
Setting tls.insecure: true sends telemetry (including potentially sensitive data) in plaintext. Only disable TLS for local development or within a trusted network with mTLS (e.g., a service mesh).
Reference auth tokens with the ${env:VARIABLE_NAME} syntax instead of hardcoding them in configuration files.
Always enable the sending_queue in production. Without a queue, transient backend failures cause immediate data loss.