Real User Monitoring (RUM), Web Vitals, user sessions, mobile crashes, page performance, user interactions, and frontend errors. Query web and mobile frontend telemetry.
Monitor web and mobile frontends using Real User Monitoring (RUM) with DQL queries. This skill targets the new RUM experience only; do not use classic RUM data.
This skill helps you query and analyze web and mobile frontend telemetry: Web Vitals, user sessions, mobile crashes, page performance, user interactions, and frontend errors.
Data Sources:
- `timeseries` with `dt.frontend.*` metrics (trends, alerting)
- `fetch user.events` (individual page views, requests, clicks, errors)
- `fetch user.sessions` (session-level aggregates: duration, bounce, counts)

Key metrics:

- `dt.frontend.user_action.count` - User action volume
- `dt.frontend.user_action.duration` - User action duration
- `dt.frontend.request.count` - Request volume
- `dt.frontend.request.duration` - Request latency (ms)
- `dt.frontend.error.count` - Error counts
- `dt.frontend.session.active.estimated_count` - Active sessions
- `dt.frontend.user.active.estimated_count` - Unique users
- `dt.frontend.web.page.cumulative_layout_shift` - CLS metric
- `dt.frontend.web.navigation.dom_interactive` - DOM interactive time
- `dt.frontend.web.page.first_input_delay` - FID metric (legacy; prefer INP)
- `dt.frontend.web.page.largest_contentful_paint` - LCP metric
- `dt.frontend.web.page.interaction_to_next_paint` - INP metric
- `dt.frontend.web.navigation.load_event_end` - Load event end
- `dt.frontend.web.navigation.time_to_first_byte` - Time to first byte

Common filters:

- `frontend.name` - Filter by frontend name (e.g. `my-frontend`)
- `dt.rum.user_type` - Exclude synthetic monitoring
- `geo.country.iso_code` - Geographic filtering
- `device.type` - Mobile, desktop, tablet
- `browser.name` - Browser filtering

Use these for `dt.frontend.*` timeseries splits and breakdowns:
- `frontend.name` - Frontend name
- `geo.country.iso_code`
- `device.type`
- `browser.name`
- `os.name`
- `user_type` - real_user, synthetic, robot

```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_page_summary == true
| summarize page_views = count(), by: {frontend.name}
| sort page_views desc
```

Event characteristics flags:

- `characteristics.has_page_summary` - Page views (web)
- `characteristics.has_view_summary` - Views (mobile)
- `characteristics.has_navigation` - Navigation events
- `characteristics.has_user_interaction` - Clicks, forms, etc.
- `characteristics.has_request` - Network request events
- `characteristics.has_error` - Error events
- `characteristics.has_crash` - Mobile crashes
- `characteristics.has_long_task` - Long JavaScript tasks
- `characteristics.has_csp_violation` - CSP violations

Full event model: https://docs.dynatrace.com/docs/semantic-dictionary/model/rum/user-events
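For trend queries over the `dt.frontend.*` metrics listed above, a minimal `timeseries` sketch might look like this (the metric and dimension names are taken from the lists above; the exact `interval` value is an assumption):

```dql
// Sketch: average frontend request latency per frontend in 5-minute buckets.
// Assumes dt.frontend.request.duration and frontend.name as listed above.
timeseries avg_request_duration = avg(dt.frontend.request.duration),
  by: {frontend.name},
  interval: 5m,
  from: now() - 2h
```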
Session data (`user.sessions`):

`user.sessions` contains session-level aggregates produced by the session aggregation service from `user.events`. Field names differ from `user.events` — sessions use underscores where events use dots.
Session identity and context:
- `dt.rum.session.id` — Session ID (NOT `dt.rum.session_id`)
- `dt.rum.instance.id` — Instance ID
- `frontend.name` - Array of frontends involved in the session
- `dt.rum.application.type` — web or mobile
- `dt.rum.user_type` — real_user, synthetic, or robot

Session aggregates (underscore naming — NOT dot):
| Field | Description | ⚠️ NOT this |
|---|---|---|
| `navigation_count` | Number of navigations | `navigation.count` |
| `user_interaction_count` | Clicks, form submissions | `user_interaction.count` |
| `user_action_count` | User actions | `user_action.count` |
| `request_count` | XHR/fetch requests | `request.count` |
| `event_count` | Total events in session | `event.count` |
| `page_summary_count` | Page views (web) | `page_summary.count` |
| `view_summary_count` | Views (mobile/SPA) | `view_summary.count` |
Error fields (dot naming — same as events):
- `error.count`, `error.exception_count`, `error.http_4xx_count`, `error.http_5xx_count`
- `error.anr_count`, `error.csp_violation_count`, `error.has_crash`

Session lifecycle:
- `start_time`, `end_time`, `duration` (nanoseconds)
- `end_reason` — timeout, synthetic_execution_finished, etc.
- `characteristics.is_bounce` — Boolean bounce flag
- `characteristics.has_replay` — Session replay available

User identity:
- `dt.rum.user_tag` — User identifier (typically email, username, or customerId), set via the `dtrum.identifyUser()` API call in the instrumented frontend. Not always populated — only present when the frontend explicitly calls `identifyUser()`.
- When `dt.rum.user_tag` is empty, `dt.rum.instance.id` is often the only user differentiator. The value is a random ID assigned by the RUM agent on the client side, so it is not personally identifiable but can be used to distinguish unique users when `user_tag` is not set. On web it is based on a persistent cookie, so it can be deleted by the user.
- Read `dt.rum.user_tag` from `user.sessions`, not `user.events` (where it may be empty even if the session has one).

Client/device context:
- `browser.name`, `browser.version`, `device.type`, `os.name`
- `geo.country.iso_code`, `client.ip`, `client.isp`

Synthetic-only fields:
- `dt.entity.synthetic_test`, `dt.entity.synthetic_location`, `dt.entity.synthetic_test_step`

Time window behavior:
- `fetch user.sessions, from: X, to: Y` only returns sessions that started in [X, Y] — NOT sessions that were merely active during that window.
- For correlation queries (joining `user.events` to `user.sessions` by session ID), extend the sessions window, e.g. `fetch user.sessions, from: now() - 32h` — a narrow `user.sessions` window will miss long-running sessions and produce false "orphans."

Session creation delay:
- Sessions appear in `user.sessions` only after aggregation, so a recent or still-active session may not yet have a `user.sessions` entry — this is normal, not a data gap.
- When joining `user.events` with `user.sessions`, exclude recent data (e.g., use `to: now() - 1h`) to avoid counting in-progress sessions as orphans.

Zombie sessions (events without a `user.sessions` record):
- Not every `dt.rum.session.id` in `user.events` will have a corresponding `user.sessions` record. The session aggregation service intentionally skips zombie sessions — sessions with no real user activity (zero navigations and zero user interactions).
- When joining `user.events` with `user.sessions`, expect a large number of unmatched session IDs. This is by design, not a data gap. Filter to sessions with activity before diagnosing orphans:
```dql
fetch user.events, from: now() - 2h, to: now() - 1h
| filter isNotNull(dt.rum.session.id)
| summarize navs = countIf(characteristics.has_navigation == true),
            interactions = countIf(characteristics.has_user_interaction == true),
            by: {dt.rum.session.id}
| filter navs > 0 or interactions > 0
```

Example — bounce rate and session quality:
```dql
fetch user.sessions, from: now() - 24h
| filter dt.rum.user_type == "real_user"
| summarize
    total_sessions = count(),
    bounces = countIf(characteristics.is_bounce == true),
    zero_activity = countIf(toLong(navigation_count) == 0 and toLong(user_interaction_count) == 0),
    avg_duration_s = avg(toLong(duration)) / 1000000000
| fieldsAdd bounce_rate_pct = round((bounces * 100.0) / total_sessions, decimals: 1)
```

Track Core Web Vitals, page performance, and request latency for SEO and UX optimization.
Primary Files:
- `references/WebVitals.md` - Core Web Vitals (LCP, INP, CLS)
- `references/performance-analysis.md` - Request and page performance

Common Queries:
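As one hedged sketch of a Web Vitals query (the metric name `dt.frontend.web.page.largest_contentful_paint` is listed in the data sources above; the `interval` value is an assumption):

```dql
// Sketch: average LCP per frontend over the last 24 hours.
timeseries avg_lcp = avg(dt.frontend.web.page.largest_contentful_paint),
  by: {frontend.name},
  interval: 1h,
  from: now() - 24h
```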
Understand user engagement, navigation patterns, and session characteristics. Analyze button clicks, form interactions, and user journeys.
Data source choice:
- `fetch user.sessions` for session-level analysis (bounce rate, session duration, session counts)
- `fetch user.events` for event-level detail (individual clicks, navigation timing, specific pages)

Primary Files:
- `references/user-sessions.md` - Session tracking and user analytics
- `references/performance-analysis.md` - Navigation and engagement patterns

Common Queries:
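For instance, a sketch of a unique-user estimate (field semantics as described in the user identity section above; `countDistinctExact` as the aggregation function is an assumption):

```dql
// Sketch: distinguish identified users (user_tag) from anonymous instances.
fetch user.sessions, from: now() - 24h
| filter dt.rum.user_type == "real_user"
| summarize
    tagged_users = countDistinctExact(dt.rum.user_tag),
    unique_instances = countDistinctExact(dt.rum.instance.id),
    by: {dt.rum.application.type}
```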
- Bounce rate (`user.sessions` with `characteristics.is_bounce`)
- Session activity (`navigation_count`, `user_interaction_count`)
- User interactions (`user.events` with `characteristics.has_user_interaction`)

Monitor error rates, analyze exceptions, and correlate frontend issues with backend.
Primary Files:
- `references/error-tracking.md` - Error analysis and debugging
- `references/performance-analysis.md` - Trace correlation

Common Queries:
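For example, a minimal error-rate sketch (uses only the `characteristics.has_error` flag and fields listed above):

```dql
// Sketch: share of events flagged as errors, per frontend.
fetch user.events, from: now() - 2h
| summarize
    total_events = count(),
    error_events = countIf(characteristics.has_error == true),
    by: {frontend.name}
| fieldsAdd error_rate_pct = round((error_events * 100.0) / total_events, decimals: 1)
| sort error_rate_pct desc
```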
Track mobile app performance, startup times, and crash analytics for iOS and Android. Analyze app version performance and device-specific issues.
Primary Files:
- `references/mobile-monitoring.md` - App starts, crashes, and mobile-specific metrics

Common Queries:
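For example, a hedged crash-count sketch (uses the `characteristics.has_crash` flag and dimensions listed above):

```dql
// Sketch: mobile crashes broken down by OS and device type.
fetch user.events, from: now() - 24h
| filter characteristics.has_crash == true
| summarize crash_count = count(), by: {frontend.name, os.name, device.type}
| sort crash_count desc
```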
Deep performance diagnostics including JavaScript profiling, main thread blocking, UI jank analysis, and geographic performance.
Primary Files:
- `references/performance-analysis.md` - Advanced diagnostics and long tasks

Common Queries:
Best Practices:

- Use metrics for trends, events for debugging
- Filter by frontend in multi-app environments: use `frontend.name` for clarity
- Match interval to time range
- Exclude synthetic traffic when analyzing real users: filter on `dt.rum.user_type` to focus on genuine behavior
- Combine metrics with events for complete insights
- Extend the `user.sessions` time window for correlation queries: `user.sessions` only returns sessions that started in the query window, so use a wider window than for `user.events`

Start by segmenting the problem by page, browser, geo location, and `dt.rum.user_type`.
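A starting-point segmentation sketch (all fields are taken from the data sources above):

```dql
// Sketch: page views segmented by page, browser, country, and user type.
fetch user.events, from: now() - 2h
| filter characteristics.has_page_summary == true
| summarize page_views = count(),
            by: {page.url.path, browser.name, geo.country.iso_code, dt.rum.user_type}
| sort page_views desc
| limit 50
```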
Heuristics:
```dql
fetch user.events
| filter frontend.name == "my-frontend" and characteristics.has_request == true
| filter page.url.path == "/checkout"
| summarize avg_ttfb = avg(request.time_to_first_byte), avg_duration = avg(duration)
```

If TTFB is high, analyze backend spans by correlating frontend events with backend traces using `dt.rum.trace_id`.
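As a sketch of the frontend half of that correlation (only `dt.rum.trace_id` and the request fields above come from this document; picking the slowest requests is an assumption about the workflow):

```dql
// Sketch: collect trace IDs of the slowest frontend requests,
// then look each one up in backend trace/span data.
fetch user.events, from: now() - 2h
| filter frontend.name == "my-frontend" and characteristics.has_request == true
| filter isNotNull(dt.rum.trace_id)
| sort duration desc
| fields dt.rum.trace_id, request.url.path, duration
| limit 10
```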
Long tasks by page:
```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_long_task == true
| summarize
    long_task_count = count(),
    total_blocking_time = sum(duration),
    by: {frontend.name, page.url.path}
| sort total_blocking_time desc
| limit 20
```

Long tasks by script source:
```dql
fetch user.events, from: now() - 2h
| filter frontend.name == "my-frontend"
| filter characteristics.has_long_task == true
| summarize
    long_task_count = count(),
    total_blocking_time = sum(duration),
    by: {long_task.attribution.container_src}
| sort total_blocking_time desc
| limit 20
```

Largest JavaScript files by decoded body size:

```dql
fetch user.events
| filter frontend.name == "my-frontend"
| filter characteristics.has_request
| filter endsWith(url.full, ".js")
| summarize dls = max(performance.decoded_body_size), by: url.full
| sort dls desc
| limit 20
```

Largest resources overall:

```dql
fetch user.events
| filter frontend.name == "my-frontend"
| filter characteristics.has_request
| summarize dls = max(performance.decoded_body_size), by: url.full
| sort dls desc
| limit 20
```

Cache status of requests:

```dql
fetch user.events, from: now() - 2h
| filter frontend.name == "my-frontend"
| filter characteristics.has_request == true
| fieldsAdd cache_status = if(
    performance.incomplete_reason == "local_cache" or performance.transfer_size == 0 and
    (performance.encoded_body_size > 0 or performance.decoded_body_size > 0),
    "cached",
    else: if(performance.transfer_size > 0, "network", else: "uncached")
  )
| summarize
    request_count = count(),
    avg_duration = avg(duration),
    by: {url.domain, cache_status}
```

Compression expansion (wasted bytes):

```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_request == true
| filter isNotNull(performance.encoded_body_size) and isNotNull(performance.decoded_body_size)
| filter performance.encoded_body_size > 0
| fieldsAdd
    expansion_ratio = performance.decoded_body_size / performance.encoded_body_size,
    wasted_bytes = performance.decoded_body_size - performance.encoded_body_size
| summarize
    requests = count(),
    avg_expansion_ratio = avg(expansion_ratio),
    total_wasted_bytes = sum(wasted_bytes),
    by: {request.url.host, request.url.path}
| sort total_wasted_bytes desc
| limit 50
```

Compare by location and domain when TTFB is high but backend performance is good:
```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_request == true
| summarize
    request_count = count(),
    avg_duration = avg(duration),
    p75_duration = percentile(duration, 75),
    p95_duration = percentile(duration, 95),
    by: {geo.country.iso_code, request.url.domain}
| sort p95_duration desc
| limit 50
```

Analyze DNS time:
```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_request == true
| filter isNotNull(performance.domain_lookup_start) and isNotNull(performance.domain_lookup_end)
| fieldsAdd dns_ms = performance.domain_lookup_end - performance.domain_lookup_start
| summarize
    request_count = count(),
    avg_dns_ms = avg(dns_ms),
    p75_dns_ms = percentile(dns_ms, 75),
    p95_dns_ms = percentile(dns_ms, 95),
    by: {request.url.domain}
| sort p95_dns_ms desc
| limit 50
```

Analyze by protocol (http/1.1, h2, h3):
```dql
fetch user.events
| filter characteristics.has_request
| summarize cnt = count(), by: {url.domain, performance.next_hop_protocol}
| sort cnt desc
| limit 50
```

Analyze request performance by domain:
```dql
fetch user.events, from: now() - 2h
| filter characteristics.has_request == true
| summarize
    request_count = count(),
    avg_duration = avg(duration),
    p75_duration = percentile(duration, 75),
    p95_duration = percentile(duration, 95),
    by: {request.url.domain}
| sort p95_duration desc
| limit 50
```

When queries return no data, follow this diagnostic workflow:
1. Validate timeframe: use `now()-1h` to `now()` or similar; widen to `now()-24h` for initial exploration
2. Verify frontend configuration: confirm the `frontend.name` filter is correct
3. Check data availability: run `fetch user.events | limit 1`
4. Review query syntax
When to Ask User for Clarification:
When query results seem unexpected or suspicious:
Unexpected High Values:
Unexpected Low Values:
- Check that the `dt.rum.user_type` filter isn't excluding real users

Inconsistent Data:
```
Query returns unexpected results
│
├─ Is this a zero-result scenario?
│   ├─ YES → Follow "Handling Zero Results" workflow
│   └─ NO → Continue
│
├─ Can I validate the result independently?
│   ├─ YES → Run validation query
│   │   ├─ Validation confirms result → Report findings
│   │   └─ Validation contradicts → Investigate further
│   └─ NO → Continue
│
├─ Is the anomaly clearly explained by data?
│   ├─ YES → Report with explanation
│   └─ NO → Continue
│
├─ Do I need domain knowledge to interpret?
│   ├─ YES → Ask user for context
│   │   Example: "The error rate is 15%. Is this expected for your frontend?"
│   └─ NO → Continue
│
└─ Is the issue ambiguous or requires clarification?
    ├─ YES → Ask specific question with data context
    │   Example: "I see two frontends named 'web-app'. Which frontend name should I use?"
    └─ NO → Investigate and report findings with caveats
```

For Performance Issues:
For Data Availability Issues:
For Unexpected Patterns:
Always ask the user when:
Example clarifying questions:
- "Did you mean `checkout-web` or `checkout-mobile`?"
- "Should I filter on `dt.rum.user_type == "real_user"` to focus on real users?"

Use frontend-observability skill when:
Do NOT use for:
- `references/WebVitals.md` - Core Web Vitals monitoring
- `references/user-sessions.md` - Session and user analytics
- `references/error-tracking.md` - Error analysis and debugging
- `references/mobile-monitoring.md` - Mobile app performance and crashes
- `references/performance-analysis.md` - Advanced performance diagnostics