Retrieve and investigate API metrics and request log data from Apitally. Fetches aggregated metrics, request logs, consumers, and app metadata via the Apitally CLI, stores data in a local DuckDB database, and runs SQL queries to investigate issues or answer questions. Use when the user mentions Apitally, the Apitally CLI, API metrics, API request logs, or API consumers.
The Apitally CLI retrieves API metrics and request log data from Apitally and optionally stores it in a local DuckDB database for investigation with SQL. Two main data sources:

- Metrics: aggregated request counts, error rates, and response time statistics, for questions about traffic, errors, and performance trends.
- Request logs: individual request records (optionally including headers, payloads, exceptions, application logs, and spans), for drilling into specific requests.
Run commands with npx (no install needed):

```shell
npx @apitally/cli <command> [--api-key <key>]
```

A team-scoped API key is required to use the CLI. The `auth` command saves an API key to `~/.apitally/auth.json`, which is then used by all subsequent commands unless overridden by the `--api-key` flag. If any command exits with code 3 (auth error), ask the user to run `npx @apitally/cli auth` to authenticate, then continue.
All commands output NDJSON to stdout by default. With `--db`, data is written to a DuckDB database instead (`~/.apitally/data.duckdb` by default), enabling SQL queries via the `sql` command.
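Because NDJSON is one JSON object per line, the stdout stream can also be post-processed directly, without the DuckDB step. A minimal sketch, using illustrative sample records rather than real Apitally output:

```shell
# Illustrative sample of NDJSON output (not real Apitally data)
cat <<'EOF' > /tmp/apitally_sample.ndjson
{"method":"GET","path":"/users","status_code":200}
{"method":"POST","path":"/users","status_code":500}
EOF

# Count 5xx responses line by line, the way you might post-process
# a redirected stream like `npx @apitally/cli request-logs ... > logs.ndjson`
python3 - <<'EOF'
import json

errors = 0
with open("/tmp/apitally_sample.ndjson") as f:
    for line in f:
        if json.loads(line)["status_code"] >= 500:
            errors += 1
print(errors)  # one 5xx record in the sample above
EOF
```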
Key data model concepts:

- Apps: each app is identified by an `app_id`. Each app has one or more environments (e.g. "prod", "dev"). The `env` field on request logs is a string matching the environment name.
- Consumers: `consumer_id` is a numeric internal ID (surrogate key, used in request log filters and JOINs). `identifier` is a string set by the application (e.g. email, username) to uniquely identify the consumer. `name` is a display name (auto-generated from `identifier` if not explicitly set). `group` is an optional group name.
- Endpoints: `path` is the parameterized route template (e.g. `/users/{user_id}`), good for grouping by endpoint. `url` is the full request URL with actual values and query parameters (e.g. `https://api.example.com/users/123?limit=10`).
- Application logs: returned by `request-details` as the `logs` field.
- Spans: returned by `request-details`; form a tree via `parent_span_id`.

All commands are run via `npx @apitally/cli <command>`. For full details, see references/commands.md.
- `auth [--api-key <key>]` -- configure API key
- `whoami` -- check auth, show team
- `apps [--db [<path>]]` -- list apps (get app IDs)
- `consumers <app-id> [--requests-since <dt>] [--db [<path>]]` -- list consumers for an app (get consumer IDs)
- `endpoints <app-id> [--method <methods>] [--path <pattern>] [--db [<path>]]` -- list endpoints for an app
- `metrics <app-id> --since <dt> [--until <dt>] --metrics <json> [--interval <interval>] [--group-by <json>] [--filters <json>] [--timezone <tz>] [--db [<path>]]` -- fetch aggregated metrics
- `request-logs <app-id> --since <dt> [--until <dt>] [--fields <json>] [--filters <json>] [--sample <n|rate>] [--limit <n>] [--db [<path>]]` -- fetch request logs (max 1,000,000 rows at once)
- `request-details <app-id> <request-uuid> [--db [<path>]]` -- fetch full details for a single request (including headers, payloads, exception info, application logs, and spans)
- `sql "<query>" [--db <path>]` -- run SQL against local DuckDB
- `reset-db [--db <path>]` -- drop and recreate all tables in local DuckDB

Identify the app — run `npx @apitally/cli apps` to list apps and get their IDs. If there is more than one app and the correct app can't be inferred from the user's messages, ask the user which app they mean. Use the app ID consistently for all commands and SQL WHERE conditions throughout the investigation.
Determine the time range — check if the user specified a time range (e.g. "last 24 hours", "since Monday", a specific date). If not, default to the last 7 days. Use this time range consistently for --requests-since / --since / --until flags and SQL WHERE conditions throughout the investigation.
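The default window can be computed once and reused across all commands. A minimal sketch, assuming GNU `date` and that the CLI accepts ISO 8601 UTC timestamps:

```shell
# Sketch: compute a default --since of 7 days ago in UTC.
# GNU date assumed; on macOS/BSD use: date -u -v-7d +%Y-%m-%dT%H:%M:%SZ
SINCE=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
echo "$SINCE"

# Reuse the same value for every command in the investigation, e.g.:
# npx @apitally/cli request-logs <app-id> --since "$SINCE" --db
```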
Fetch supporting data if needed — skip unless you need endpoint discovery or consumer identification.
Endpoints: use `endpoints` to discover available method/path combinations for filtering. Use `--method` and/or `--path` to filter (e.g. `--path '*users*'`).

```shell
npx @apitally/cli endpoints <app-id> [--method <methods>] [--path <pattern>]
```

Consumers: use `consumers` to map identifiers (emails, usernames, groups) to `consumer_id` values and vice versa, if the question involves consumers.
```shell
npx @apitally/cli consumers <app-id> [--requests-since "<since>"] --db
npx @apitally/cli sql "SELECT consumer_id, identifier, name, \"group\" FROM consumers WHERE app_id = <app-id> AND identifier ILIKE '%@example.com'"
```

Fetch data — choose based on the question. Always read the command reference for available options.
Metrics — for questions that can be answered with aggregated metrics: traffic volume, error rates, response time trends, throughput, endpoint comparisons. Use --group-by and --interval to break down by environment, endpoint, consumer, status code, or time period.
```shell
npx @apitally/cli metrics <app-id> --since "<since>" \
  --metrics '["requests","error_rate","response_time_p50","response_time_p95"]' \
  --group-by '["method","path"]' --interval day --db
```

Request logs — for questions that require individual request data: specific errors, exceptions, headers, payloads, traces, etc. Narrow down fields and use filters to avoid fetching unnecessarily large volumes of data. Refetching replaces existing records in DuckDB (no duplicates).
```shell
npx @apitally/cli request-logs <app-id> --since "<since>" \
  --fields '<json-array-of-field-names>' \
  --filters '<json-array-of-filter-objects>' \
  --db
```

Filter by endpoint: `--filters '[{"field":"method","op":"eq","value":"GET"},{"field":"path","op":"eq","value":"/v1/users/{user_id}"}]'`

Filter by consumer: `--filters '[{"field":"consumer_id","op":"in","value":[1,2,3]}]'`
Both — for broad investigations, start with metrics for an overview, then fetch request logs to drill into specifics.
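The `--filters` JSON used above can be fiddly to quote by hand in a shell. One optional approach (a sketch, not a CLI requirement) is to build the array with a small script first:

```shell
# Sketch: build the --filters JSON with python3 to avoid shell-quoting
# mistakes. The field names and operators are from the examples above;
# the 500 threshold and consumer IDs are illustrative.
FILTERS=$(python3 -c '
import json
print(json.dumps([
    {"field": "status_code", "op": "gte", "value": 500},
    {"field": "consumer_id", "op": "in", "value": [1, 2, 3]},
]))
')
echo "$FILTERS"

# Then pass the variable to the CLI, e.g.:
# npx @apitally/cli request-logs <app-id> --since "<since>" --filters "$FILTERS" --db
```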
Query DuckDB using the sql command — CRITICAL: The DuckDB database is persistent and retains data from previous fetches, including other sessions. You MUST filter your SQL queries to match the scope of your current investigation. Always include WHERE conditions on app_id, period_start/timestamp, and any other relevant fields. Without these filters, results will include unrelated data and will be wrong.
```shell
npx @apitally/cli sql "SELECT method, path, status_code, COUNT(*) AS n FROM request_logs WHERE app_id = <app-id> AND timestamp >= '<since>' AND status_code >= 400 GROUP BY ALL ORDER BY n DESC"
```

Read the DuckDB schema reference for available tables, columns, and relationships.
Iterate if needed — refine filters, fetch additional fields (headers, bodies, exceptions), or widen the time range as needed.
Fetch request counts grouped by endpoint and status code to find the most frequent errors:
```shell
npx @apitally/cli metrics <app-id> --since "<since>" \
  --metrics '["requests"]' \
  --group-by '["method","path","status_code"]' \
  --filters '[{"field":"status_code","op":"gte","value":400}]' --db
```

```sql
SELECT method, path, status_code, sum(requests) AS requests_sum
FROM metrics
WHERE app_id = <app-id>
  AND period_start >= '<since>'
GROUP BY method, path, status_code
ORDER BY requests_sum DESC
```

Then fetch request logs for a specific error to investigate further:
```shell
npx @apitally/cli request-logs <app-id> --since "<since>" \
  --fields '["timestamp","request_uuid","url","status_code","response_body_json","exception_type","exception_message"]' \
  --filters '[{"field":"method","op":"eq","value":"<method>"},{"field":"path","op":"eq","value":"<path>"},{"field":"status_code","op":"eq","value":<status_code>}]' \
  --limit 5
```

Use `request-details` to fetch full details (headers, body, exception, application logs, spans) for a specific request:
```shell
npx @apitally/cli request-details <app-id> <request-uuid>
```

To investigate requests from a specific consumer, join `request_logs` with `consumers`:

```sql
SELECT r.timestamp, r.method, r.url, r.status_code, r.response_time_ms
FROM request_logs r
JOIN consumers c ON r.app_id = c.app_id AND r.consumer_id = c.consumer_id
WHERE r.app_id = <app-id>
  AND r.timestamp >= '<since>'
  AND r.timestamp < '<until>'
  AND c.identifier = 'user@example.com'
ORDER BY r.timestamp ASC
```

Headers are stored as `STRUCT(name VARCHAR, value VARCHAR)[]`. Use DuckDB list comprehensions:
```sql
-- Extract a specific header value
SELECT timestamp, method, path,
  [s.value FOR s IN request_headers IF lower(s.name) = 'content-type'][1] AS content_type
FROM request_logs
WHERE app_id = <app-id>
  AND timestamp >= '<since>'
  AND request_headers IS NOT NULL
LIMIT 20
```

```sql
-- Filter by header existence
SELECT timestamp, method, path
FROM request_logs
WHERE app_id = <app-id>
  AND timestamp >= '<since>'
  AND len([s FOR s IN request_headers IF lower(s.name) = 'authorization']) > 0
```

Body fields (`request_body_json`, `response_body_json`) are of type JSON. Use DuckDB JSON operators and functions.
Note: Request/response bodies larger than 50 KB are not captured by the SDKs and will be NULL.
```sql
SELECT timestamp, method, path,
  response_body_json->>'$.error' AS error_message
FROM request_logs
WHERE app_id = <app-id>
  AND timestamp >= '<since>'
  AND response_body_json IS NOT NULL
  AND (response_body_json->>'$.error') IS NOT NULL
```

See references/duckdb_json_functions.md for more JSON functions and examples.
Datetime columns are TIMESTAMPTZ. Direct casts like `timestamp::DATE` error out (no ICU extension). Use `AT TIME ZONE 'UTC'` first:
```sql
SELECT date_trunc('day', timestamp AT TIME ZONE 'UTC') AS day, count(*) AS n
FROM request_logs
WHERE app_id = <app-id> AND timestamp >= '<since>'
GROUP BY day ORDER BY day
```

The CLI uses the following exit codes:

| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Unknown error |
| 2 | Usage error (invalid arguments) |
| 3 | Authentication error |
| 4 | Input error (invalid values) |
| 5 | API / network error |
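A wrapper script can branch on these codes. Below is a sketch in which `apitally_cli` is a mocked stand-in for `npx @apitally/cli` (so the example is self-contained); it simulates an authentication error:

```shell
# Sketch of exit-code handling. apitally_cli mocks `npx @apitally/cli`
# and always fails with code 3 (authentication error) for illustration.
apitally_cli() { return 3; }

if apitally_cli apps; then
  echo "ok"
else
  code=$?
  case $code in
    3) echo "auth error: run 'npx @apitally/cli auth', then retry" ;;
    2|4) echo "usage or input error (code $code): check arguments and values" ;;
    5) echo "API or network error: retry later" ;;
    *) echo "unknown error (code $code)" ;;
  esac
fi
```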