Generate label matchers, line filters, log aggregations, and metric queries in LogQL (Loki Query Language) following current standards and conventions. Use this skill when creating new LogQL queries, building log-analysis dashboards, writing alerting rules, or troubleshooting with Loki.
### Stage 1: Collaborative Planning

CRITICAL: Always engage the user in collaborative planning before generating queries.
Ask about: goal (error analysis, alerting, debugging), use case, log sources (labels, format), query type (log/metric), filtering needs, parsing method, aggregation, and time range.
Before generating code, present a plain-English plan and confirm with the user via AskUserQuestion:
## LogQL Query Plan
**Goal**: [Description]
**Query Structure**:
1. Select streams: `{label="value"}`
2. Filter lines: [operations]
3. Parse logs: [parser]
4. Aggregate: [function]
**Does this match your intentions?**

### Stage 2: Consult References

Once confirmed, MANDATORY: consult references before generating. Do NOT rely on prior knowledge.
| Query Complexity | File to Read |
|---|---|
| Complex aggregations (nested topk, multiple sum by, percentiles) | assets/common_queries.logql |
| Performance-critical queries (large time ranges, high-volume streams) | references/best_practices.md — sections #1-5, #15-18 |
| Alerting rules | references/best_practices.md — sections #19-21, #39 |
| Structured metadata / Loki 3.x features | references/best_practices.md — sections #35-37 |
| Template functions (line_format, label_format) | assets/common_queries.logql |
| Function/parser syntax | references/function_reference.md |
| IP filtering, pattern extraction, regex | assets/common_queries.logql |
Example paths:

```
Read(".claude/skills/logql-generator/assets/common_queries.logql")
Read(".claude/skills/logql-generator/references/best_practices.md")
```

### Stage 3: External Research

Use when local references don't cover the topic:
| Trigger | Use Tool |
|---|---|
| Loki 3.x features (approx_topk, pattern match `\|>`, vector(), structured metadata) | context7 MCP → grafana loki + topic |
| Recording rules, unclear syntax, edge cases | context7 MCP → grafana loki + topic |
| Version-specific behavior, Grafana Alloy integration | WebSearch → "Grafana Loki LogQL [topic] [year]" |
### Stage 4: Query Patterns

**Stream Selection** (be specific):

```logql
{namespace="prod", app="api", level="error"}   # not just {namespace="prod"}
```

**Log Filtering:**

```logql
{job="app"} |= "error" |= "timeout"    # Contains both
{job="app"} |~ "error|fatal|critical"  # Regex match
{job="app"} != "debug"                 # Exclude
```

**JSON/logfmt Parsing:**

```logql
{app="api"} | json | level="error" | status_code >= 500
{app="app"} | logfmt | caller="database.go"
```

**Pattern Extraction:**

```logql
{job="nginx"} | pattern "<ip> - - [<_>] \"<method> <path>\" <status> <size>"
```
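Fields extracted by `pattern` can feed label filters and aggregations directly. A sketch combining the nginx pattern above with a metric query — the `status` and `path` names come from the pattern template and are assumptions about the log format:

```logql
# Top 5 paths by 5xx rate, using pattern-extracted fields (label names assumed)
topk(5, sum by (path) (
  rate({job="nginx"}
    | pattern `<ip> - - [<_>] "<method> <path>" <status> <size>`
    | status >= 500 [5m])
))
```

Numeric comparisons like `status >= 500` work because LogQL converts the extracted string label on the fly; lines where conversion fails get an `__error__` label instead.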
**Metrics:**

```logql
# Rate
rate({job="app"} | json | level="error" [5m])

# Count by label
sum by (app) (count_over_time({namespace="prod"} | json [5m]))

# Error percentage
sum(rate({app="api"} | json | level="error" [5m])) / sum(rate({app="api"}[5m])) * 100

# Latency percentiles
quantile_over_time(0.95, {app="api"} | json | unwrap duration [5m])

# Top N
topk(10, sum by (error_type) (count_over_time({job="app"} | json | level="error" [1h])))
```
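Other unwrapped range aggregations follow the same shape as the quantile example. A sketch for average latency, assuming `duration` holds numeric seconds; if it is a Go-style duration string such as `150ms`, wrap it in the `duration_seconds()` conversion:

```logql
# Average request duration over 5m (assumes `duration` is numeric seconds)
avg_over_time({app="api"} | json | unwrap duration [5m])

# Same, converting "150ms"-style values first
avg_over_time({app="api"} | json | unwrap duration_seconds(duration) [5m])
```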
**Formatting:**

```logql
{job="app"} | json | line_format "{{.level}}: {{.message}}"
{job="app"} | json | label_format env="{{.environment}}"
```

**IP Filtering** (prefer a label filter after parsing, for precision):
```logql
{job="nginx"} | logfmt | remote_addr = ip("192.168.4.0/24")
```

### Stage 5: Incremental Query Building

When to use this stage: when the user wants to validate each step, or when debugging a query that returns no results.
Present the query construction incrementally:

## Building Your Query Step-by-Step

### Step 1: Stream Selector (verify logs exist)
```logql
{app="api"}
```
Test this first to confirm logs are flowing.

### Step 2: Add a Line Filter
```logql
{app="api"} |= "error"
```
Reduces data before parsing.

### Step 3: Add a Parser
```logql
{app="api"} |= "error" | json
```
Now you can filter on extracted labels.

### Step 4: Add a Label Filter
```logql
{app="api"} |= "error" | json | level="error"
```
Final filter on parsed data.

### Step 5: Wrap in a Metric Function
```logql
sum(count_over_time({app="api"} |= "error" | json | level="error" [5m]))
```
Complete metric query.
**Use AskUserQuestion** to offer incremental mode:
- Option: "Show step-by-step construction" vs "Show final query only"
### Stage 6: Provide Usage
1. **Final Query** with explanation
2. **How to Use**: Grafana panel, Loki alerting rules, `logcli query`, HTTP API
3. **Customization**: Labels to modify, thresholds to tune
## Advanced Techniques

### Multiple Parsers
```logql
{app="api"} | json | regexp "user_(?P<user_id>\\d+)"
```

### Unwrap Aggregations
```logql
sum(sum_over_time({app="api"} | json | unwrap duration [5m]))
```

### Pattern Match Line Filter (Loki 3.x)
```logql
{service_name=`app`} |> "<_> level=debug <_>"
```

### Complex Label Filters
```logql
{app="api"} | json | (status_code >= 400 and status_code < 500) or level="error"
```

### Day-over-Day Comparison
```logql
sum(rate({app="api"} | json | level="error" [5m])) - sum(rate({app="api"} | json | level="error" [5m] offset 1d))
```

### Keep / Drop Labels
```logql
{app="api"} | json | keep namespace, pod, level
{app="api"} | json | drop pod, instance
```

Note: LogQL has no `dedup` or `distinct` operators. Use metric aggregations like `sum by (field)` for programmatic deduplication.
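A concrete sketch of that aggregation workaround, counting distinct values of a hypothetical `user_id` field:

```logql
# Number of distinct user_id values seen in the last 5m (field name assumed)
count(sum by (user_id) (count_over_time({app="api"} | json [5m])))
```

The inner `sum by` collapses the data to one series per `user_id`; the outer `count` returns how many distinct values there were.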
### High-Cardinality Values

High-cardinality data without indexing (trace_id, user_id, request_id):

```logql
# Filter AFTER the stream selector, NOT inside it
{app="api"} | trace_id="abc123" | json | level="error"
```
### Structured Metadata (Loki 3.x)

Place structured metadata filters BEFORE parsers:

```logql
# ACCELERATED
{cluster="prod"} | detected_level="error" | logfmt | json

# NOT ACCELERATED
{cluster="prod"} | logfmt | json | detected_level="error"
```

### Approximate Top-K
```logql
approx_topk(10, sum by (endpoint) (rate({app="api"}[5m])))
```

### Avoiding "No Data"
```logql
sum(count_over_time({app="api"} | json | level="error" [5m])) or vector(0)
```

`detected_level` is available when log-level discovery is enabled (`discover_log_levels: true`; stored as structured metadata).

For comprehensive function and parser documentation, see references/function_reference.md:
- Log range aggregations: `rate()`, `count_over_time()`, `bytes_rate()`, `absent_over_time()`
- Unwrapped range aggregations: `sum_over_time()`, `quantile_over_time()`, etc.
- Vector aggregations: `sum`, `topk`, `approx_topk`, with `by`/`without` grouping
- Parsers: `json`, `logfmt` with options
- Template stages: `line_format`/`label_format`

## Alerting
```logql
# Alert when error rate exceeds 5%
(sum(rate({app="api"} | json | level="error" [5m])) / sum(rate({app="api"}[5m]))) > 0.05

# With vector() to avoid "no data" (parenthesized: comparison binds tighter than `or`)
(sum(rate({app="api"} | json | level="error" [5m])) or vector(0)) > 10
```

## Troubleshooting

| Issue | Solution |
|---|---|
| No results | Check labels exist, verify time range, test stream selector alone |
| Query slow | Use specific selectors, filter before parsing, reduce time range |
| Parse errors | Verify log format matches parser, test JSON validity |
| High cardinality | Use line filters not label filters for unique values, aggregate |
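For the parse-error case in the table above, the built-in `__error__` label lets you separate cleanly parsed lines from failing ones; a sketch:

```logql
# Keep only lines the json parser handled cleanly
{app="api"} | json | __error__=""

# Inspect only the lines that failed to parse
{app="api"} | json | __error__!=""
```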
## Version Notes

- New in Loki 3.x: pattern match line filters (`|>`, `!>`), the `approx_topk` function
- Deprecations: Promtail (use Grafana Alloy), BoltDB store (use TSDB with the v13 schema)
## Install with Tessl CLI

```shell
npx tessl i pantheon-ai/logql-generator@0.1.4
```