Build and deploy a Coralogix dashboard for a given service from its logs, spans, metrics, and service specs. Discovers telemetry via cx CLI commands, emits importable Coralogix JSON, verifies every PromQL and DataPrime query live through the `cx` CLI, and creates the dashboard via `cx dashboards create`. Use whenever the user asks to create, build, generate, or deploy a Coralogix dashboard, monitoring dashboard, or observability dashboard for a service, app, or pipeline.
Produces a Coralogix dashboard for a target service and deploys it via the cx CLI. Workflow: discover the service's telemetry, align on intent with the user, draft a plan, emit the JSON, live-verify every query through cx, then create the dashboard in a chosen folder.
Only use metric names, log fields, and span attributes you can cite from the service's code, README, configuration, or a live query that returned a result. Do not invent them.
Load these files for domain-specific guidance:
| Task | Reference |
|---|---|
| DataPrime query syntax | references/dataprime-reference.md |
| PromQL query syntax, counters vs gauges, histograms | references/promql-guidelines.md |
| Log field discovery, query patterns, wildfind policy | references/logs-querying.md |
| Span field discovery, latency analysis, trace queries | references/spans-querying.md |
| Dashboard-specific query gotchas (${__range}, promqlQueryType) | references/query-syntax.md |
| Widget JSON templates | references/widget-templates.md |
For choosing the right signal (metrics / logs / traces), use cx-telemetry-querying.
Beyond creating dashboards, use these commands to manage existing ones:
| Command | Purpose |
|---|---|
| `cx dashboards catalog -o json` | List all dashboards in the catalog |
| `cx dashboards get <id> -o json` | Get a dashboard definition (useful as a template) |
| `cx dashboards folders list -o json` | List dashboard folders |
| `cx dashboards folders create --name "Name"` | Create a dashboard folder |
| `cx dashboards folders create --name "Sub" --parent-id <id>` | Create a nested folder |
To duplicate or modify an existing dashboard:

```bash
cx dashboards get <dashboard-id> -o json > dashboard.json
# Edit dashboard.json (change name, modify widgets, etc.)
cx dashboards create --from-file dashboard.json
```

Track progress through this checklist:
Dashboard Progress:
- [ ] Phase 1: Discover telemetry & business meaning
- [ ] Phase 2: Gather dashboard specifications from user
- [ ] Phase 3: Draft internal dashboard plan (sections/rows/widgets)
- [ ] Phase 4: Generate the Coralogix JSON
- [ ] Phase 5: Live-verify every query through the cx CLI
- [ ] Phase 6: Self-verify structure against the checklist
- [ ] Phase 7: Deploy via `cx dashboards create`

Proceed in order. Don't jump to Phase 4 before the user approves the Phase 3 plan, and don't run Phase 7 before Phases 5 and 6 both pass.
For the target service, gather:
- **Code & README** - read README.md and the top-level entrypoint (main.*, index.*, cmd/main.go, etc.). Summarize in 2–3 sentences what it does, its key stages, and what can go wrong.
- **Metrics** - for each domain keyword (request, error, latency, dlq) run `cx metrics search --name '*<keyword>*'`. When a metric looks promising, list its labels with `cx metrics get-labels <metric>`. Only use names `cx metrics search` returns - this is what prevents invented metrics from reaching Phase 5. Cross-check the service's instrumentation (prometheus_client, promauto.NewCounter/Histogram/Gauge, OTel meters, prom-client, Micrometer, metrics.py) for semantics and histogram buckets (_sum, _count, _bucket).
- **Logs** - discover `$d.*` fields with `cx search-fields "<description>" --dataset logs` before assuming a field exists. Sample message templates and severity with `cx logs "filter \$l.applicationname == '<app>'" --limit 5 -o json`. Standard fields (`$m.severity`, `$m.timestamp`, `$l.applicationname`, `$l.subsystemname`) don't need discovery.
- **Spans** - discover span fields with `cx search-fields "<description>" --dataset spans`. Sample with `cx spans "filter \$l.serviceName == '<svc>'" --limit 5 -o json`. Error conventions vary (`$d.tags.error`, `$d.http.status_code`); check samples before filtering.
- **DLQ** - search the code for dlq/DLQ references. Note topic/queue names for DLQ panels.
- **Service specs** - check meta.yaml, Helm values.yaml, Deployment, Dockerfile, chart.yaml. Extract:
  - applicationname / subsystemname label values as they appear in Coralogix.
  - Environment names (prod, staging, dev, …).

If the signal for a question is ambiguous (e.g. "how much revenue last week"), delegate to cx-telemetry-querying first.
Produce a short internal summary before moving on. If critical telemetry is missing (e.g. no metrics), surface that to the user and ask whether they want a log-only or trace-only dashboard.
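For example, a Phase 1 pass for a hypothetical `payments` service might look like the sketch below; the keyword, metric, app, and field names are illustrative assumptions, not prescribed values:

```bash
# Hypothetical "payments" service; every name below must be re-discovered live.
cx metrics search --name '*latency*'                      # only trust names this returns
cx metrics get-labels payments_request_latency_seconds    # hypothetical metric name
cx search-fields "order identifier" --dataset logs        # does a $d.order_id field exist?
cx logs "filter \$l.applicationname == 'payments'" --limit 5 -o json
cx spans "filter \$l.serviceName == 'payments'" --limit 5 -o json
```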
Ask the user a focused set of questions (≤6). Prefer AskQuestion:

- Default time range (keep queries on `${__range}` so users can zoom).
- Slicing dimensions to expose as top-level filters (`tenant_id`, `account_id`, `subsystem_name`, `region`, `env`, …).
- Environments to exclude (`dev`, `staging`, `test`).
- Which sections should start collapsed (`collapsed: true`).

Don't block on answers you can reasonably infer - state the inference and continue.
Write a markdown plan the user can approve before JSON generation:
```markdown
## Dashboard: <Service> - <Purpose>

### Section 1: <Overview> (collapsed: false)
- Row 1: [widget type] <title> - <what it shows> - source: metrics|logs|spans
- Row 2: ...

### Section 2: <Deep dive> (collapsed: false)
...

### Section N: <Logs & errors> (collapsed: true)
...

### Top-level filters
- <label> (<source>)

### Assumptions / gaps
- ...
```

Section design:
- Order sections overview-first, with deep dives in the middle and logs & errors last.
- The logs & errors section defaults to `collapsed: true`.

Widget-type selection:
| Signal | Widget type |
|---|---|
| Single headline number (count, % success, totals) | gauge (Coralogix calls this "stat") |
| Breakdown across ≤8 categories | pieChart |
| Change over time (rate, latency, count per bucket) | lineChart |
| Top-N tables, last errors, per-entity listings | dataTable |
Don't use other widget types unless the user asks.
Wait for the user to approve or adjust the plan before emitting JSON.
Produce a single JSON document following references/widget-templates.md. Top-level skeleton:

```json
{
  "id": "<21-char-nanoid>",
  "name": "<Dashboard Name>",
  "layout": { "sections": [ ... ] },
  "variables": [],
  "variablesV2": [],
  "filters": [ ... ],
  "relativeTimeFrame": "<seconds>s",
  "annotations": [],
  "off": {},
  "actions": []
}
```

Key rules:
- Generate a fresh 21-char nanoid for every section, row, widget, and query id.
- Keep `"appearance": { "height": 19 }` unless there's a reason to change.
- Every section sets `options.custom.name`, `collapsed`, and `color`.
- Section color uses `predefined: "SECTION_PREDEFINED_COLOR_UNSPECIFIED"`.
- Top-level filters start as `equals` with empty values so users can fill in. Use `notEquals` for environment exclusions (see references/widget-templates.md).
- `relativeTimeFrame` defaults to `"172800s"` (48h) unless the user specified otherwise.

For query syntax follow references/query-syntax.md; for the full query languages load references/dataprime-reference.md and references/promql-guidelines.md.
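One way to stage the draft before verification, as a sketch: the `payments` slug is hypothetical, and the `jq` well-formedness check is an extra precaution this workflow doesn't mandate. Filter and widget shapes come from references/widget-templates.md and are elided here.

```bash
# Write the Phase 4 draft where Phase 7 expects it.
cat > /tmp/cx-dashboard-payments.json <<'JSON'
{
  "id": "aBcDeFgHiJkLmNoPqRsTu",
  "name": "Payments - Service Health",
  "layout": { "sections": [] },
  "variables": [],
  "variablesV2": [],
  "filters": [],
  "relativeTimeFrame": "172800s",
  "annotations": [],
  "off": {},
  "actions": []
}
JSON

# Fail fast on malformed JSON before spending time on live query checks.
jq empty /tmp/cx-dashboard-payments.json && echo "well-formed JSON"
```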
Every PromQL and DataPrime query in the draft has to successfully run through cx before Phase 7. This catches invented metric names, typoed field paths, and malformed pipelines.
What:
- Frequent Search (`TIER_FREQUENT_SEARCH`): hot tier for fast search on recent logs/spans.
- Archive (`TIER_ARCHIVE`): cold tier for older logs/spans (long-term).

When to choose: prefer the frequent-search tier for the short verification windows below; reach for the archive tier only when a check genuinely needs a long lookback.
The two languages are verified against different windows:
- PromQL: map the dashboard's `relativeTimeFrame` to a $RANGE token (e.g. 48h for 172800s), substitute `${__range}` with `[$RANGE]` for the CLI call, then restore `${__range}` in the JSON before Phase 6. Range vectors are window-sensitive, so the check has to match what the dashboard will evaluate.
- DataPrime: verify on a short window (now-15m → now, --limit 1). The goal is syntax / field / pipeline validation, not data-presence on the dashboard's window; a short window is faster and a cleaner fail signal.

Full procedure (CLI invocations, $RANGE mapping table, retry budget, failure modes): references/verification.md.
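A sketch of the swap, assuming a 48h range and a hypothetical metric name; the actual PromQL invocation is defined in references/verification.md:

```bash
# Dashboard form of the query; http_requests_total is a hypothetical metric.
DASH_QUERY='sum(rate(http_requests_total[${__range}]))'

# For the CLI check, swap ${__range} for the mapped $RANGE token (48h here),
# run it, then restore ${__range} in the JSON before Phase 6.
CLI_QUERY=${DASH_QUERY//'${__range}'/48h}
echo "$CLI_QUERY"    # sum(rate(http_requests_total[48h]))

# DataPrime widget queries start with "source logs"; strip that prefix and
# verify the remaining pipeline cheaply with --limit 1:
cx logs "filter \$l.applicationname == 'payments'" --limit 1 -o json
```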
If a query can't be made to pass within the retry budget, surface it to the user with the CLI error verbatim - don't ship a broken widget.
Run this checklist against the final JSON. Fix and re-check if any item fails before Phase 7.
- [ ] Range vectors use `[${__range}]` - never `[$__range]`, never `[5m]` (unless the panel is intentionally a sliding window).
- [ ] `promqlQueryType` is `PROM_QL_QUERY_TYPE_INSTANT` for single-value widgets (gauge, pieChart, dataTable). Omitted for lineChart.
- [ ] DataPrime queries use valid field paths: `$d.message` / `$l.applicationname` / unquoted severity enums (full rules: references/dataprime-reference.md).
- [ ] Every DataPrime widget query starts with `source logs` or `source spans` (dashboard widgets require the source prefix; Phase 5 verification strips it before handing the pipeline to cx logs / cx spans).
- [ ] Ratio denominators are guarded with `clamp_min(..., 1)`.
- [ ] Histogram queries use the right suffixes (`_sum`, `_count`, `_bucket`).
- [ ] Queries don't re-apply the top-level `filters` - Coralogix injects them at render time.
- [ ] Every section has `id.value`, `rows`, and `options.custom`.
- [ ] Every row has `id.value`, `appearance.height`, and `widgets`.
- [ ] Every widget has `id.value` and a definition with exactly one of gauge / pieChart / lineChart / dataTable.
- [ ] Gauge thresholds use `thresholdType: "THRESHOLD_TYPE_ABSOLUTE"` with green at high values; error/DLQ gauges use red at high values.
- [ ] Single-value widgets are emitted as `gauge`, not as a stat type.
- [ ] `filters` includes each slicing dimension from Phase 2.
- [ ] The dashboard name follows the `"<Service> - <Purpose>"` pattern.
- [ ] The logs & errors section is `collapsed: true` unless the user said otherwise.
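The division guard the checklist expects, sketched with hypothetical metric names:

```bash
# Success ratio with a guarded denominator; clamp_min keeps an idle service
# at 0% instead of NaN. The payments_* metric names are hypothetical.
RATIO_QUERY='100 * sum(rate(payments_success_total[${__range}]))
  / clamp_min(sum(rate(payments_requests_total[${__range}])), 1)'
```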
Deploy via `cx dashboards create` - don't tell the user to paste JSON into the Coralogix UI; deploy it directly.

- List existing folders with `cx dashboards folders list -o json`.
- Ask the user to pick one, or deploy to the root (omit `--folder`) if nothing fits.
- Run `cx dashboards create --from-file /tmp/cx-dashboard-<slug>.json --folder <id>`. The CLI generates the requestId envelope and prints the created dashboard ID.

Full procedure (folder-picking UX, command templates, idempotency note): references/deploy.md.
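For instance, with a placeholder folder ID and the hypothetical slug from earlier:

```bash
# Pick a destination folder, then deploy the staged draft.
cx dashboards folders list -o json
cx dashboards create --from-file /tmp/cx-dashboard-payments.json --folder <folder-id>
```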
On failure: show the CLI error verbatim and return to Phase 5. The most common cause is a query that parses locally but the live API rejects.
Finish with a report to the user in this shape:

```markdown
## Plan
<the approved Phase 3 plan>

## Verification
- PromQL queries verified: <N>/<N>
- DataPrime queries verified: <N>/<N>

## Deployed
- Dashboard: **<Name>**
- ID: `<id>`
- Folder: `<folder name or "root">`
- Profile: `<cx profile>`
```
The dashboard is live in Coralogix. Adjust filter values (e.g. `account_id`) after opening it.

Bundled references:
- references/query-syntax.md
- references/widget-templates.md
- references/verification.md
- references/deploy.md
- references/dataprime-reference.md
- references/promql-guidelines.md
- references/logs-querying.md
- references/spans-querying.md

For DataPrime command help: `cx dataprime list`, `cx dataprime show <command>`.

Related skills:
- cx-observability-setup - full monitoring setup workflow (views, webhooks, notifications, integrations)
- cx-incident-management - SLO and alert-connected dashboards, incident triage
- cx-telemetry-querying - discover the right telemetry signal before building dashboards