Log queries, filtering, pattern analysis, and log correlation. Search and analyze application and infrastructure logs.
Query, filter, and analyze Dynatrace log data using DQL for troubleshooting and monitoring.
Use this skill when users want to query, filter, or analyze Dynatrace logs. Key DQL constructs:

- `from:now() - <duration>` for time windows
- `matchesPhrase()` and `contains()` for content search

Find specific log entries by time, severity, and content.
Example:

```
fetch logs, from:now() - 1h
| filter status == "ERROR"
| fields timestamp, content, process_group = dt.process_group.detected_name
| sort timestamp desc
| limit 100
```

Narrow down logs using multiple criteria (severity, entity, content).
Example:

```
fetch logs, from:now() - 2h
| filter in(status, {"ERROR", "FATAL", "WARN"})
| summarize count(), by: {dt.process_group.id, dt.process_group.detected_name}
| fieldsAdd process_group = dt.process_group.detected_name
| sort `count()` desc
```

Identify patterns, trends, and anomalies in log data.
Example:

```
fetch logs, from:now() - 2h
| filter status == "ERROR"
| fieldsAdd
    has_exception = if(matchesPhrase(content, "exception"), true, else: false),
    has_timeout = if(matchesPhrase(content, "timeout"), true, else: false)
| summarize
    count(),
    exception_count = countIf(has_exception == true),
    timeout_count = countIf(has_timeout == true),
    by: {process_group = dt.process_group.detected_name}
```

Common DQL building blocks:

- `filter status == "ERROR"` - filter by status level
- `in(status, {"ERROR", "FATAL", "WARN"})` - multi-status filter
- `contains(content, "keyword")` - simple substring search
- `matchesPhrase(content, "exact phrase")` - full-text phrase search
- `dt.process_group.detected_name` - human-readable process group name
- `filter process_group == "service-name"` - filter by a specific entity
- `count()` - count all log entries
- `countIf(condition)` - conditional count
- `by: {dimension}` - group by entity or time bucket
- `bin(timestamp, 5m)` - time bucketing for trends
- `fields timestamp, content, status` - select specific fields
- `fieldsAdd name = expression` - add computed fields
- `if(condition, true_value, else: false_value)` - conditional logic

Simple substring search:
```
fetch logs, from:now() - 1h
| filter contains(content, "database")
| fields timestamp, content, status
```

Full-text phrase search:
```
fetch logs, from:now() - 1h
| filter matchesPhrase(content, "connection timeout")
| fields timestamp, content, process_group = dt.process_group.detected_name
```

Calculate error rates over time:
```
fetch logs, from:now() - 2h
| summarize
    total_logs = count(),
    error_logs = countIf(status == "ERROR"),
    by: {time_bucket = bin(timestamp, 5m)}
| fieldsAdd error_rate = (error_logs * 100.0) / total_logs
| sort time_bucket asc
```

Find most common errors:
```
fetch logs, from:now() - 24h
| filter status == "ERROR"
| summarize error_count = count(), by: {content}
| sort error_count desc
| limit 20
```

Filter logs by process group:
```
fetch logs, from:now() - 1h
| fieldsAdd process_group = dt.process_group.detected_name
| filter process_group == "payment-service"
| filter status == "ERROR"
| fields timestamp, content, status
| sort timestamp desc
```

Many applications emit JSON-formatted log lines. Use `parse` to extract fields instead of dumping raw content:
```
fetch logs, from:now() - 1h
| filter status == "ERROR"
| parse content, "JSON:log"
| fieldsAdd level = log[level], message = log[msg], error = log[error]
| fields timestamp, level, message, error
| sort timestamp desc
| limit 50
```

Aggregate by a parsed field:
```
fetch logs, from:now() - 4h
| filter status == "ERROR"
| parse content, "JSON:log"
| fieldsAdd message = log[msg]
| summarize error_count = count(), by: {message}
| sort error_count desc
| limit 20
```

Notes:
- `parse content, "JSON:log"` creates a record field `log`; access nested values with `log[key]`
- Filter with `contains()` before `parse` to reduce parsing overhead on the raw `content`
- Always use `from:now() - <duration>` to limit the amount of data scanned
- Use `contains()` for simple substring matches and `matchesPhrase()` for exact phrases
- Add `| limit 100` to prevent overwhelming output
- Use `dt.process_group.detected_name` or `getNodeName()` for human-readable output
- Use `bin(timestamp, 5m)` for time-series analysis
- Use `dt.process_group.id` for service correlation
- Related concepts: `bin()` and time ranges, `matchesPhrase()`, `summarize` and conditional functions
- Full-text search (`matchesPhrase`) may have performance implications on large datasets
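To make the `parse`-then-`summarize` pattern concrete, here is a minimal local Python sketch of the same logic. The sample log lines and their field names (`msg`, `level`, `error`) are assumptions for illustration, not real Dynatrace data:

```python
import json
from collections import Counter

# Sample JSON-formatted log lines (assumed shape; real log content varies)
raw_logs = [
    '{"level": "error", "msg": "connection timeout", "error": "ETIMEDOUT"}',
    '{"level": "error", "msg": "connection timeout", "error": "ETIMEDOUT"}',
    '{"level": "error", "msg": "invalid payload", "error": "EBADMSG"}',
]

# parse content, "JSON:log" -> each line becomes a record; log[msg] -> key access
records = [json.loads(line) for line in raw_logs]

# summarize error_count = count(), by: {message}
error_counts = Counter(rec["msg"] for rec in records)

# sort error_count desc
for message, count in error_counts.most_common():
    print(message, count)
```

The same shape applies to any parsed field: group records by the extracted key, count, and sort descending.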
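The `bin(timestamp, 5m)` bucketing and `countIf`-style error-rate calculation shown in the examples above can also be sketched locally in Python; the timestamps and statuses below are made-up sample data:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Made-up sample entries: (timestamp, status)
entries = [
    (datetime(2024, 1, 1, 10, 1), "INFO"),
    (datetime(2024, 1, 1, 10, 2), "ERROR"),
    (datetime(2024, 1, 1, 10, 3), "ERROR"),
    (datetime(2024, 1, 1, 10, 7), "INFO"),
    (datetime(2024, 1, 1, 10, 8), "ERROR"),
]

def bin_5m(ts):
    """Mimic DQL bin(timestamp, 5m): floor a timestamp to a 5-minute boundary."""
    return ts - timedelta(minutes=ts.minute % 5, seconds=ts.second,
                          microseconds=ts.microsecond)

# summarize total_logs = count(), error_logs = countIf(status == "ERROR"),
#           by: {time_bucket = bin(timestamp, 5m)}
buckets = defaultdict(lambda: {"total": 0, "errors": 0})
for ts, status in entries:
    b = buckets[bin_5m(ts)]
    b["total"] += 1
    if status == "ERROR":
        b["errors"] += 1

# fieldsAdd error_rate = (error_logs * 100.0) / total_logs, then sort time_bucket asc
rates = {bucket: counts["errors"] * 100.0 / counts["total"]
         for bucket, counts in sorted(buckets.items())}
```

Here the 10:00 bucket holds three entries (two errors) and the 10:05 bucket holds two entries (one error), matching what the DQL query would report per 5-minute window.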