Investigate failing ThousandEyes synthetic tests with MCP tools. Use when a user wants ThousandEyes test triage, service-map or trace-ID correlation, distributed-tracing checks, correlation across Observability Platforms, or evidence-backed root-cause analysis with optional code fixes.
Use this skill to diagnose a failing ThousandEyes synthetic test end-to-end. Prefer evidence from ThousandEyes first, then correlate into available Observability Platforms. Only edit code after explicit user approval.
For `http-server` tests, always inspect `distributedTracing` and call `get_service_map` before jumping to Observability Platform correlation. If a `traceId` is available, verify and evaluate that trace across every enumerated Observability Platform MCP integration that supports trace or telemetry correlation.

Inputs:
- `testId` or an exact/partial test name
- `aid`
- a time window or `start_date` / `end_date` (default 24h)

Load `reference.md` for metric names, trace rules, and generic Observability Platform correlation guidance. Load `examples.md` only when formatting the final output.
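The default 24h window can be resolved with a small helper; a minimal sketch, assuming ISO-8601 timestamp strings (the helper name `resolve_window` is illustrative, not part of the skill or the ThousandEyes API):

```python
from datetime import datetime, timedelta, timezone

def resolve_window(start_date=None, end_date=None, default_hours=24):
    """Resolve an explicit start/end pair, defaulting to the last 24 hours.

    Accepts ISO-8601 strings; returns an (start, end) ISO-8601 pair.
    """
    end = datetime.fromisoformat(end_date) if end_date else datetime.now(timezone.utc)
    start = (datetime.fromisoformat(start_date) if start_date
             else end - timedelta(hours=default_hours))
    return start.isoformat(), end.isoformat()
```

Explicit `start_date` / `end_date` arguments pass through unchanged; only missing bounds are defaulted.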
1. Call `list_network_app_synthetics_tests` to find the test.
2. Call `get_network_app_synthetics_test` to confirm `testId`, type, target, enabled state, agent list, and key options that affect diagnosis.
3. Pull metrics with `get_network_app_synthetics_metrics` (`filter_dimension=TEST`, `filter_values=[testId]`).
4. For `http-server` tests, inspect the test config for `distributedTracing`.
5. Call `get_service_map` with the same test and failing time window.

If `get_service_map` is unavailable or incomplete:
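The branching in the triage steps above can be sketched as a pure decision helper. This is a sketch under assumptions: the `dict` shape mirrors the test fields named above (`type`, `enabled`, `distributedTracing`), not the actual tool output schema:

```python
def next_diagnosis_step(test: dict) -> str:
    """Pick the next MCP tool (or action) for a failing synthetic test.

    Assumes `test` carries the fields confirmed by
    get_network_app_synthetics_test: type, enabled, distributedTracing.
    """
    if not test.get("enabled", False):
        # A disabled test produces no fresh data; report instead of querying.
        return "report: test is disabled"
    if test.get("type") == "http-server":
        # http-server tests: check distributedTracing before platform correlation.
        if test.get("distributedTracing"):
            return "get_service_map"
        return "inspect config: distributedTracing disabled"
    # Other test types go straight to metrics.
    return "get_network_app_synthetics_metrics"
```

Keeping the decision logic pure (no tool calls inside) makes the triage order easy to unit-test independently of the MCP client.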
- Extract the `traceId` from the service-map output if present.
- Fall back to `traceparent` data available in the test/request context.
- Correlate the recovered `traceId` across the Observability Platforms.

Always return:
- the test's tracing configuration (`distributedTracing` enabled/disabled)

Use the templates in `examples.md` when the user wants a structured report.
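Recovering the trace ID from a `traceparent` value follows the W3C Trace Context format (`version-traceId-parentId-flags`). A minimal sketch of that extraction:

```python
import re

# W3C traceparent: 2-hex version, 32-hex trace ID, 16-hex parent ID, 2-hex flags.
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})-"
    r"(?P<parent_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def trace_id_from_traceparent(traceparent: str):
    """Return the 32-hex-digit trace ID from a traceparent header, or None."""
    m = TRACEPARENT_RE.match(traceparent.strip())
    if not m or m.group("trace_id") == "0" * 32:
        # Malformed header, or the all-zero trace ID (invalid per the spec).
        return None
    return m.group("trace_id")
```

The returned trace ID is what you would then feed into each Observability Platform's trace-correlation tooling.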