Use this skill when the user asks to "add a command", "implement cx <something>", "new subcommand", "new CLI command", "add a new cx command", "create a command", "add subcommand", "implement a new command", "build a cx command", "wire up a new command", "extend the CLI", "add an API to cx", "new cx feature", "integrate a Coralogix API", or wants to add new functionality to the cx CLI. Use this even when the user describes a feature that implies a new command without saying "command" explicitly.
End-to-end workflow for adding a new command to cx. Every command falls into one of two archetypes - determine which one first, then follow the corresponding steps.
docs/adding-a-command.md has copy-pasteable code templates for every step below. Read it alongside this workflow.
Before writing any code, get clarity on the domain:
- Which operations the command needs (e.g. `list`, `get`, `create`) and what flags make sense
- Whether it fits under an existing domain group:
  - `cx alerts` - alert definitions + schedulers
  - `cx notifications` - connectors, routers, presets, test
  - `cx webhooks` - outgoing webhooks + actions
  - `cx enrichments` - enrichment rules + custom enrichment tables
  - `cx integrations` - integrations + extensions, contextual-data
  - `cx iam` - api-keys, roles, scopes, users, groups, saml, ip-access
Run `cx schema` to see the full command tree as JSON.

| Archetype | When to use | Reference implementation |
|---|---|---|
| A: DataPrime-based | Querying logs, spans, or any DataPrime source | src/commands/logs/mod.rs |
| B: REST-based | Wrapping a Coralogix REST API (most new commands) | src/commands/alerts/api.rs + src/commands/alerts/mod.rs |
DataPrime commands delegate to a shared pipeline and require minimal code (~130 lines). REST commands build the full pipeline (API client, fan-out, merge, render) - more code but more control.
Important: All API integrations must use REST (HTTP). The CLI is HTTP-only by design - do not use gRPC.
Before writing any code, read these files to internalize the existing patterns. This step is critical - agents that read existing code first produce implementations that are consistent with the codebase rather than inventing new patterns.
Always read:
- `src/main.rs` - study the `Commands` enum to see how variants are structured, and the `match cli.command` dispatch block to see where your new variant fits. Note which commands early-exit (no credentials needed) vs which go through the full config resolution flow. Pay attention to wrapper groups (e.g., `Notifications`, `Iam`, `Webhooks`, `Integrations`) - these are top-level commands with nested subcommand enums that group related domains. If your command belongs under an existing group, add a new variant to that group's subcommand enum rather than creating a top-level command.
- `src/commands/mod.rs` - see existing module registrations so you add yours in the right place
- `docs/adding-a-command.md` - full guide with code templates for both archetypes

DataPrime archetype - also read:
- `src/commands/logs/mod.rs` - a complete DataPrime command; notice how little code is needed because the shared pipeline does the heavy lifting
- `src/commands/dataprime/mod.rs` - the shared pipeline your command will delegate to; understand the `run_query()` signature and what it handles (fan-out, merge, spilling, agents output)

REST archetype - also read:
- `src/commands/alerts/api.rs` - see how response types are structured, how the API struct borrows `&CxClient`, how deserialization tests are written
- `src/commands/alerts/mod.rs` - see how the handler declares `pub mod api;` and imports types via `use api::{...};`
- `src/commands/dashboards/mod.rs` - see the fan-out/merge/render pattern using `render::*` helpers, and how all three output formats are handled

Skip this step for DataPrime commands - they use the shared DataPrime pipeline.
Create src/commands/<domain>/api.rs. See docs/adding-a-command.md § "Archetype B, Step 1" for the full template.
Key conventions and why they matter:
- `#[serde(rename_all = "camelCase")]` on response types - Coralogix APIs use camelCase JSON keys
- `#[serde(default)]` on `Vec` fields - the API sometimes omits empty arrays entirely rather than sending `[]`, so this prevents deserialization failures
- `Option<T>` for fields that may be absent - be defensive; APIs evolve and fields vary across tiers
- Borrow `&CxClient` (don't own it) - the client is shared across the fan-out and must outlive individual API calls
- `const BASE_PATH` for the endpoint prefix - keeps URLs DRY

Create `src/commands/<domain>/mod.rs`. For REST commands, declare `pub mod api;` at the top so the handler can `use api::{...};` types from its sibling `api.rs`. See `docs/adding-a-command.md` for full templates of both archetypes.
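Putting these conventions together, a hypothetical `api.rs` skeleton might look like the sketch below. The domain (`widgets`), endpoint path, and the `CxClient` helper method name are all illustrative assumptions - copy the real template from `docs/adding-a-command.md`, not this:

```rust
use serde::Deserialize;

const BASE_PATH: &str = "/v1/widgets"; // hypothetical endpoint prefix

#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Widget {
    pub id: String,
    pub display_name: Option<String>, // may be absent on some tiers
    #[serde(default)]
    pub labels: Vec<String>, // API may omit empty arrays entirely
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ListWidgetsResponse {
    #[serde(default)]
    pub widgets: Vec<Widget>,
}

pub struct WidgetsApi<'a> {
    client: &'a CxClient, // borrowed: shared across the fan-out
}

impl<'a> WidgetsApi<'a> {
    pub fn new(client: &'a CxClient) -> Self {
        Self { client }
    }
}
```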
For a DataPrime command, `mod.rs` provides just two things:
- `pub fn render_<domain>_text(merged: &MergedResults) -> Result<()>` - called only for `OutputFormat::Text`; JSON and Agents output are handled by the shared pipeline
- a `run()` wrapper that calls `super::dataprime::run_query()` with your DataPrime source name

REST commands instead build the full fan-out/merge/render pipeline. Key patterns to understand:
- `render::render_table` for text output - pass column headers (without "Profile") and rows where the first element is the profile name. The helper conditionally includes the Profile column based on `include_profile`. No duplicate struct definitions needed.
- `render::render_json` for JSON output - pretty-prints a `&[Value]` array
- `let include_profile = targets.len() > 1;` - this single boolean controls all multi-profile behavior (Profile column in text, "profile" key in JSON)
- status messages go to stderr (`eprintln!`) - stdout is reserved for data so piped output isn't polluted
- for agents output, call `toon_encode` directly after any post-processing, because different commands may transform data differently before encoding

Register the module in `src/commands/mod.rs`.
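The `include_profile` pattern can be sketched in isolation. The real `render::render_table` helper lives in the cx codebase with its own signature; this simplified stand-in only illustrates how the single boolean drives the Profile column:

```rust
// Simplified stand-in for render::render_table, showing how one
// boolean controls whether the Profile column appears.
fn render_table(headers: &[&str], rows: &[Vec<String>], include_profile: bool) -> String {
    let mut out = String::new();
    // Conditionally prepend the Profile column header.
    let header_row: Vec<&str> = if include_profile {
        std::iter::once("Profile").chain(headers.iter().copied()).collect()
    } else {
        headers.to_vec()
    };
    out.push_str(&header_row.join("\t"));
    out.push('\n');
    for row in rows {
        // Rows carry the profile name in position 0; drop it when
        // only one profile was targeted.
        let cells: Vec<&str> = if include_profile {
            row.iter().map(String::as_str).collect()
        } else {
            row.iter().skip(1).map(String::as_str).collect()
        };
        out.push_str(&cells.join("\t"));
        out.push('\n');
    }
    out
}

fn main() {
    let targets = ["prod", "staging"]; // hypothetical profile names
    let include_profile = targets.len() > 1; // the single controlling boolean
    let rows = vec![vec!["prod".to_string(), "high-cpu".to_string()]];
    print!("{}", render_table(&["Name"], &rows, include_profile));
}
```

Because rows always carry the profile name first, the same row data serves both the single-profile and multi-profile paths.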
In src/main.rs, add three things. See docs/adding-a-command.md § "CLI Wiring" for templates.
- a `Commands` enum variant - DataPrime commands use inline args; REST commands reference a subcommand enum
- a subcommand enum (REST commands) with variants such as `List`, `Get`, etc.
- a dispatch arm in the `match cli.command` block. Most commands go through the full config resolution flow; only commands that don't need credentials (like `profiles`, `cleanup`) early-exit.

Every new command must add tests at three layers. See `docs/adding-a-command.md` § "Testing" for code templates and examples of each.
| Layer | Location | What it verifies |
|---|---|---|
| Unit | src/**/<file>.rs #[cfg(test)] | Pure logic - deserialization (mandatory for REST), helpers, transforms |
| Integration | tests/<command>/main.rs (wiremock) | Command runner end-to-end with mocked HTTP |
| E2E | tests/e2e/<command>/mod.rs (assert_cmd, #[ignore]d) | Real cx binary against the Coralogix test team |
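As a sketch, the mandatory REST deserialization unit test usually exercises the defensive serde attributes - the `ListWidgetsResponse` type here is hypothetical, and the real tests in `src/commands/alerts/api.rs` are the authoritative pattern:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn deserializes_response_with_omitted_array() {
        // The API may omit the array field entirely; #[serde(default)]
        // on the Vec field must make this parse cleanly instead of failing.
        let resp: ListWidgetsResponse = serde_json::from_str("{}").unwrap();
        assert!(resp.widgets.is_empty());
    }
}
```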
Things specific to this workflow that the doc doesn't emphasise:
- Model your e2e test module on `tests/e2e/alerts/mod.rs`.
- If a subcommand needs an existing resource id (e.g. `get <id>`), add a local `discover_*` fn in your e2e test module, modelled after `discover_alert_id` in `tests/e2e/alerts/mod.rs`. Cache via `OnceLock` and skip gracefully when the test team has no data - don't panic.
- Register the module in `tests/e2e.rs` via `#[path = "e2e/your_domain/mod.rs"] mod your_domain;`.

Every command needs a corresponding skill in `skills/` so AI agents know how to use it. Use the add-skill workflow to create it - it walks through the full process including reading reference implementations, writing effective trigger descriptions, and verification.
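The `OnceLock` caching shape for `discover_*` helpers can be sketched with the standard library alone. The helper name and the lookup body are illustrative - the real version shells out to the `cx` binary against the test team and parses its output:

```rust
use std::sync::OnceLock;

// Cache the discovery result (including a miss) so repeated tests
// don't re-run the lookup.
static CACHED_ID: OnceLock<Option<String>> = OnceLock::new();

// Hypothetical discover helper, modelled on the discover_alert_id
// pattern: look the id up once, then let callers skip gracefully
// on None instead of panicking.
fn discover_example_id() -> Option<String> {
    CACHED_ID.get_or_init(simulated_lookup).clone()
}

// Stand-in for the real lookup against the test team.
fn simulated_lookup() -> Option<String> {
    Some("example-id-123".to_string())
}

fn main() {
    match discover_example_id() {
        Some(id) => println!("found {id}"),
        None => eprintln!("skipping: test team has no data"),
    }
}
```

Returning `Option` (rather than unwrapping) is what lets tests skip cleanly when the test team has no data.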
Run cargo build, cargo test (unit + integration), cargo clippy,
and cargo fmt --check. Fix any issues before committing.
If you have test team credentials configured, also run the e2e suite:
`cargo test --test e2e -- --ignored --test-threads=1`

Smoke test all three output formats (text, json, agents) and
multi-profile (`-p profile1 -p profile2`).
See docs/adding-a-command.md § "PR Checklist" for the full checklist to include in your PR description.