Expert OpenTelemetry guidance for collector configuration, pipeline design, and production telemetry instrumentation. Use when configuring collectors, designing pipelines, instrumenting applications, implementing sampling, managing cardinality, securing telemetry, writing OTTL transformations, or setting up AI coding agent observability (Claude Code, Codex, Gemini CLI, GitHub Copilot).
Phase: RED → GREEN (TDD)
Purpose: Validate that the references/ai-agents.md reference causes the skill to materially improve responses to AI coding agent observability questions.
Run each prompt with and without references/ai-agents.md. Record baseline responses.

Prompt:
"Set up OpenTelemetry monitoring for Claude Code to track token usage and costs"
Baseline gaps to check:
- `CLAUDE_CODE_ENABLE_TELEMETRY=1` is required (telemetry is opt-in)
- `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=cumulative`
- `~/.claude/settings.json` for persistent config
- privacy flags (`OTEL_LOG_USER_PROMPTS`, `OTEL_LOG_TOOL_DETAILS`)
- `session.id` as metric dimension

With the skill, the response should cover:
- `CLAUDE_CODE_ENABLE_TELEMETRY=1` as prerequisite
- `OTEL_METRICS_EXPORTER=otlp`, `OTEL_LOGS_EXPORTER=otlp`
- `~/.claude/settings.json` persistent config format
- `OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=cumulative`
- `OTEL_METRICS_INCLUDE_SESSION_ID` and its cardinality risk

Pass criteria:
- `CLAUDE_CODE_ENABLE_TELEMETRY=1`
- `settings.json` persistent config example
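A minimal sketch of the kind of persistent config the pass criteria call for, assuming Claude Code's `settings.json` supports an `env` block (the endpoint and protocol values below are placeholders, not prescribed by this plan):

```json
{
  "env": {
    "CLAUDE_CODE_ENABLE_TELEMETRY": "1",
    "OTEL_METRICS_EXPORTER": "otlp",
    "OTEL_LOGS_EXPORTER": "otlp",
    "OTEL_EXPORTER_OTLP_PROTOCOL": "grpc",
    "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4317",
    "OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE": "cumulative"
  }
}
```

A skill-assisted response that produces something in this shape, with the opt-in flag and cumulative temporality present, would satisfy the pass criteria above.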
Prompt:
"I use Claude Code and Gemini CLI. Configure a single OTel Collector to receive telemetry from both."
Baseline gaps to check:
- distinguishing `service.name` across agents
- mapping `claude_code.*` to `gen_ai.*`
- `memory_limiter` may be missing or in the wrong position

With the skill, the response should cover:
- `memory_limiter` as first processor in every pipeline
- `resource` processor to tag `telemetry.source.type: ai-coding-agent`
- `transform` processor adding a `gen_ai.system` attribute to Claude Code data
- `batch` processor last before exporters

Pass criteria:
- `memory_limiter` is the first processor
- `resource` processor normalizes agent identity
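The pipeline shape those checks describe can be sketched as a collector config; the memory limits, the exporter endpoint, and the `service.name` value matched in the transform statement are assumptions for illustration, not values the eval prescribes:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  memory_limiter:            # must be first in every pipeline
    check_interval: 1s
    limit_mib: 512
  resource:                  # common tag for all AI-agent telemetry
    attributes:
      - key: telemetry.source.type
        value: ai-coding-agent
        action: upsert
  transform/claude-code:     # add gen_ai.system to Claude Code data
    metric_statements:
      - context: resource
        statements:
          # the "claude-code" service.name matcher is an assumption
          - set(attributes["gen_ai.system"], "claude_code") where attributes["service.name"] == "claude-code"
    log_statements:
      - context: resource
        statements:
          - set(attributes["gen_ai.system"], "claude_code") where attributes["service.name"] == "claude-code"
  batch: {}                  # last before exporters

exporters:
  otlphttp:
    endpoint: https://backend.example.com   # placeholder

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, resource, transform/claude-code, batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [memory_limiter, resource, transform/claude-code, batch]
      exporters: [otlphttp]
```

Note the processor ordering in both pipelines: `memory_limiter` first, `batch` last, which is exactly what the pass criteria test for.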
Prompt:
"Which AI coding agents support OpenTelemetry? I need traces specifically for debugging multi-step agent operations."
Points the skill-assisted response should surface:
- `gen_ai.*` SemConv, v0.34.0+
- `gen_ai.*` SemConv
- `prompt.id` as pseudo-trace correlation
- `codex exec` drops metrics
Prompt:
"Enable Claude Code telemetry but make sure no user prompts are logged"
Baseline failure modes to check:
- suggests `OTEL_LOG_USER_PROMPTS=true` without warning
- `OTEL_LOG_TOOL_DETAILS` leaking tool parameters
- misses that `OTEL_LOG_USER_PROMPTS` defaults to false

With the skill, the response should cover:
- `OTEL_LOG_USER_PROMPTS=false` (or omits it, noting the safe default)
- `OTEL_LOG_TOOL_DETAILS` — tool parameters may contain secrets/paths
- `captureContent` risk if the user later adopts GitHub Copilot
- `OTEL_METRICS_INCLUDE_SESSION_ID=false` (cardinality, not PII, but related)

Pass criteria:
- addresses `OTEL_LOG_TOOL_DETAILS` specifically
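A privacy-safe configuration in line with those expectations can be sketched as environment variables; stating the already-safe defaults explicitly is an auditability choice, and the default value of `OTEL_LOG_TOOL_DETAILS` is an assumption here:

```shell
# Enable Claude Code telemetry while keeping prompt/tool content out of logs.
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp

# Privacy flags, set explicitly even where false is the default:
export OTEL_LOG_USER_PROMPTS=false        # never log prompt content
export OTEL_LOG_TOOL_DETAILS=false        # tool parameters may contain secrets/paths
export OTEL_METRICS_INCLUDE_SESSION_ID=false  # cardinality concern, not PII
```

A response that omits `OTEL_LOG_USER_PROMPTS` entirely while noting its safe default would also pass; the hard requirement is addressing `OTEL_LOG_TOOL_DETAILS`.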
Prompt:
"What dashboards should I build for monitoring our team's AI coding agent usage?"
Points to check:
- `user.id` or `session.id` as metric dimensions (cardinality risk)
- token usage split by input/output
- `gen_ai.token.type` is limited to only `input` / `output`

Document observed agent rationalizations and counter-guidance here as they are discovered during testing.
| Rationalization | Counter |
|---|---|
| "Claude Code supports traces via the OTEL_TRACES_EXPORTER env var" | Claude Code explicitly does NOT emit traces. OTEL_TRACES_EXPORTER is ignored. Only metrics and logs are emitted. |
| "You can use session.id as a metric label to track per-user costs" | session.id is unbounded cardinality. Use log queries with distinct count instead. |
| "Qwen Code telemetry is available now" | As of 2026-03, docs exist but code is not shipped. Verify before building on it. |
| "Codex CLI telemetry works the same in exec mode" | codex exec drops ALL metrics. Interactive mode only for full telemetry. |
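Since several scenarios warn against `session.id` and `user.id` as metric dimensions, a collector-side guard can be sketched; this assumes the agents emit those fields as datapoint attributes, and the processor name is illustrative:

```yaml
processors:
  transform/drop-high-cardinality:
    metric_statements:
      - context: datapoint
        statements:
          - delete_key(attributes, "session.id")
          - delete_key(attributes, "user.id")
```

Dropping the attributes at the collector keeps per-user cost questions answerable via log queries (distinct counts) without letting unbounded identifiers into metric time series.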
- docs
- evals
  - cardinality-protection
  - claude-code-telemetry
  - collector-memory-limiter
  - scenario-1
  - scenario-2
  - scenario-3
  - scenario-4
  - tail-sampling-setup
- references