Expert guidance for configuring and deploying the OpenTelemetry Collector. Use when setting up a Collector pipeline, configuring receivers, exporters, or processors, deploying a Collector to Kubernetes or Docker, or forwarding telemetry to Dash0. Triggers on requests involving collector, pipeline, OTLP receiver, exporter, or Dash0 collector setup.
Resource attributes identify what system is described by the telemetry, which very often is also the system producing the telemetry (but not always, like in case of network monitoring). They are stable for the lifetime of the process and are attached to every signal (traces, metrics, and logs) automatically. Getting them right is the single highest-impact thing you can do for observability — without them, telemetry cannot be attributed to a service, environment, or instance.
For guidance on where to place attributes across telemetry levels, see attributes.
## service.name

Every service must set `service.name`.
Without it, all telemetry falls into `unknown_service`, making it impossible to attribute to a service in dashboards, alerts, or service maps.
Set it via the OTEL_SERVICE_NAME environment variable:

```shell
export OTEL_SERVICE_NAME="order-api"
```

> [!NOTE]
> The value of `service.name` is the same in every environment and instance — set it in application code, a shared `.env` file, or a deployment descriptor.
Choose a name that is:

- **Descriptive** (`checkout-service`, not `svc-42`).
- **Consistent across environments**: variations like `checkout` vs `CheckOut` in different environments complicate querying, especially during outages.

Pick a naming convention (kebab-case, snake_case, or camelCase) and apply it consistently across the entire service fleet.
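One way to enforce such a convention is a pre-deploy check. The sketch below assumes kebab-case; the function name and the example service names are illustrative, not part of any OpenTelemetry tooling:

```shell
# Hypothetical CI check: accept only kebab-case service names
# (lowercase alphanumeric segments separated by single hyphens)
is_valid_service_name() {
  echo "$1" | grep -Eq '^[a-z][a-z0-9]*(-[a-z0-9]+)*$'
}

is_valid_service_name "checkout-service" && echo "checkout-service: ok"
is_valid_service_name "CheckOut" || echo "CheckOut: rejected"
```

Running the same check in every service's pipeline keeps casing drift from ever reaching production telemetry.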
## service.namespace

Groups related services within the same application or product.
Use it to scope services that belong together — for example, a checkout, payment, and inventory service might all share the namespace `acme-webstore`.

```shell
export OTEL_RESOURCE_ATTRIBUTES="service.namespace=acme-webstore"
```

Without a namespace, identically named services across different products become ambiguous.

> [!NOTE]
> The value of `service.namespace` is the same in every environment and instance — set it in application code or a shared `.env` file, like `service.name`.
## deployment.environment.name

Distinguish production from staging, development, and other environments. Without it, production and test telemetry are mixed together, making dashboards and alerts unreliable.

```shell
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment.name=production"
```

> [!NOTE]
> The value of `deployment.environment.name` changes per environment — inject it from the deployment pipeline (e.g., Helm values, CI/CD variable), not from application code.
See resolving configuration values for ordered lookup strategies.
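One way to inject it from the pipeline, sketched here as a Kubernetes pod-spec fragment rendered by Helm (the `.Values.environment` key is an assumed name, not a required one):

```yaml
# Pod spec fragment (sketch): the deployment pipeline supplies the value,
# the application only reads the resulting environment variable
env:
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment.name={{ .Values.environment }}"
```

The same variable can carry additional pipeline-derived attributes by appending comma-separated `key=value` pairs.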
## service.version

Set the service version to enable deployment tracking, regression detection, and version-aware analysis. This attribute is invaluable during rollouts for comparing how the new version and the old one behave.

```shell
export OTEL_RESOURCE_ATTRIBUTES="service.version=1.4.2"
```

> [!NOTE]
> The value of `service.version` changes per release — derive it from the build pipeline (e.g., git tag, CI build number); never hardcode it in application code.
See resolving configuration values for ordered lookup strategies to find the version value.
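As a sketch of deriving the value from git metadata (assuming the build step runs inside a git checkout with `git` on the PATH):

```shell
# Sketch: derive service.version in the build pipeline from the nearest git tag,
# falling back to the abbreviated commit SHA, then "unknown" outside a checkout
version="$(git describe --tags --always 2>/dev/null || echo unknown)"
export OTEL_RESOURCE_ATTRIBUTES="service.version=${version}"
echo "$OTEL_RESOURCE_ATTRIBUTES"
```

Passing the result into the deployment descriptor (rather than computing it at runtime) keeps the reported version identical across all instances of a release.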
## service.instance.id

Uniquely identifies a single instance of the service.
The triplet (`service.namespace`, `service.name`, `service.instance.id`) must be globally unique.
Without it, instance-level analysis (e.g., identifying a single unhealthy pod) is not possible.

> [!NOTE]
> The value of `service.instance.id` changes per instance — generate it at startup (e.g., UUID v4) or inject it from the deployment platform (e.g., Kubernetes downward API); never hardcode it in application code.
It must be stable for the lifetime of the process and should be an opaque identifier — do not expose infrastructure details like pod names or container IDs directly. See resolving configuration values for generation strategies (UUID v4, UUID v5, and common pitfalls).
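A minimal startup sketch, assuming either `uuidgen` or Linux's kernel UUID file is available in the container:

```shell
# Generate an opaque instance id once at process startup (UUID v4)
# and reuse it for the whole lifetime of the process
instance_id="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=${instance_id}"
```

Running this in the entrypoint (before the application starts) keeps the id stable for the process without leaking pod or container names into it.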
Setting service.instance.id does not replace the need to also set k8s.pod.uid in Kubernetes.
Both attributes serve different purposes: service.instance.id is a logical, opaque identifier, while k8s.pod.uid is used by the k8sattributes processor for Kubernetes metadata enrichment.
## k8s.* attributes

`k8s.*` attributes describe the infrastructure running the service, not the service itself.

> [!NOTE]
> Set `k8s.*` attributes via the Kubernetes downward API in pod specs or let the `k8sattributes` Collector processor resolve them automatically — never set them in application code.
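A downward-API sketch of that pattern (the env var names are illustrative; Kubernetes expands `$(VAR)` references to previously defined variables in the same container):

```yaml
# Pod spec fragment (sketch): pod identity flows in via the downward API,
# never via application code
env:
  - name: K8S_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: K8S_POD_UID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "k8s.pod.name=$(K8S_POD_NAME),k8s.pod.uid=$(K8S_POD_UID)"
```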
Follow the guidance in Kubernetes deployment.
For configuring the k8sattributes processor in the Collector, see the processors rule in the otel-collector skill.
Resource attributes are set via the OTEL_RESOURCE_ATTRIBUTES environment variable as a comma-separated list of key=value pairs.
```shell
export OTEL_RESOURCE_ATTRIBUTES="service.version=1.4.2,deployment.environment.name=production,service.instance.id=$(uuidgen)"
```

Combine this with OTEL_SERVICE_NAME (which takes precedence over service.name in OTEL_RESOURCE_ATTRIBUTES):

```shell
export OTEL_SERVICE_NAME="order-api"
export OTEL_RESOURCE_ATTRIBUTES="service.version=1.4.2,deployment.environment.name=production"
```

## .env.local example

This example is suitable for local development. Application-identity attributes are set directly; deployment-context attributes use placeholder values that the deployment pipeline would replace in production.
```shell
# Application-identity attributes (same in every environment)
OTEL_SERVICE_NAME=order-api
OTEL_RESOURCE_ATTRIBUTES=service.namespace=acme-webstore,service.version=local-dev,deployment.environment.name=development

# Exporter configuration
OTEL_TRACES_EXPORTER=otlp
OTEL_METRICS_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_ENDPOINT=https://<OTLP_ENDPOINT>
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer YOUR_AUTH_TOKEN
NODE_OPTIONS=--import @opentelemetry/auto-instrumentations-node/register
```

In a production deployment descriptor (e.g., Kubernetes manifest, Docker Compose, or CI/CD pipeline), override `service.version` and `deployment.environment.name` with values derived from the build and deployment pipeline.
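Such an override can be sketched as a Docker Compose fragment (the service name and the `CI_BUILD_VERSION` variable are assumptions standing in for your pipeline's values):

```yaml
# docker-compose fragment (sketch): production values supplied by the pipeline
services:
  order-api:
    environment:
      OTEL_SERVICE_NAME: order-api
      OTEL_RESOURCE_ATTRIBUTES: "service.namespace=acme-webstore,service.version=${CI_BUILD_VERSION},deployment.environment.name=production"
```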
## Common mistakes

- **Missing `service.name`.** Telemetry appears as `unknown_service` and cannot be attributed in service maps, dashboards, or alerts.
- **Inconsistent `service.name` casing across environments.** Variations like `checkout` vs `Checkout` create duplicate entries in service maps and break cross-environment queries.
- **Sharing one `service.instance.id` across multiple pods.** Instance-level queries return aggregated data from all pods instead of a single instance.
- **Hardcoding `service.version`.** Falls out of sync after the first deployment. Derive it from the build pipeline, git tags, or commit SHAs.
- **Omitting `deployment.environment.name`.** Production and staging telemetry are mixed, making dashboards and alerts unreliable.