**Skill description under review:**

Use this skill for any question involving telemetry data: "investigate an issue", "debug a problem", "find out why something is slow", "check error rates", "analyze user behavior", "understand a production incident", "query telemetry data", "look at logs", "search logs", "find errors", "find stack traces", "filter by severity", "check traces", "examine spans", "investigate request latency", "debug service-to-service calls", "look up a trace ID", "analyze RUM data", "check frontend performance", "frontend errors", "Core Web Vitals", "JavaScript exceptions", "query metrics", "check CPU usage", "run a PromQL query", "check error rate", "look up a metric", "check memory usage", "how do I write a DataPrime query", "DataPrime syntax", or wants to answer questions using observability data from logs, metrics, traces, RUM, or APM.
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/cx-telemetry-querying/SKILL.md`

## Quality
### Discovery — 64%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description excels at providing trigger terms and natural user phrases, making it very likely to be selected when relevant queries arise. However, it is almost entirely composed of trigger phrases with very little explanation of what the skill actually does — what actions it performs, what tools it uses, or what outputs it produces. Adding a concise capability summary would significantly improve it.
#### Suggestions

- Add a clear 'what it does' statement at the beginning, e.g., 'Queries and analyzes observability data from logs, metrics, traces, and RUM using DataPrime and PromQL. Helps debug production incidents, investigate latency, and understand error patterns.'
- Restructure to separate capabilities from triggers — list concrete actions first (e.g., 'Writes DataPrime queries, builds PromQL expressions, analyzes span traces'), then follow with the 'Use when...' trigger list.
- Use third-person voice consistently (e.g., 'Investigates issues using telemetry data' rather than the implicit second-person framing of 'Use this skill for any question').
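The first two suggestions can be combined into a rewritten description. A hypothetical SKILL.md frontmatter sketch follows; the field names and the exact wording are illustrative assumptions, not taken from the skill under review:

```yaml
# Hypothetical frontmatter sketch — field names and wording are assumptions,
# not the actual contents of cx-telemetry-querying/SKILL.md.
name: cx-telemetry-querying
description: >
  Queries and analyzes observability data from logs, metrics, traces, RUM,
  and APM using DataPrime and PromQL. Writes DataPrime queries, builds
  PromQL expressions, and analyzes span data to debug production incidents,
  investigate request latency, and explain error patterns. Use when the
  user wants to investigate an issue, check error rates, look at logs,
  examine spans or traces, check CPU or memory usage, analyze RUM data,
  or asks about DataPrime syntax.
```

This ordering (capabilities first, trigger phrases second) addresses both the Specificity and Completeness findings while keeping the strong trigger-term coverage.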
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists many example queries and domains (logs, metrics, traces, RUM, APM, DataPrime) but doesn't clearly describe what concrete actions the skill performs — it focuses on trigger phrases rather than capabilities like 'queries telemetry databases', 'builds PromQL queries', or 'analyzes span data'. | 2 / 3 |
| Completeness | The 'when' is extensively covered with explicit trigger phrases and a 'Use this skill for...' clause. However, the 'what does this do' is weak — it never clearly states what the skill actually does (e.g., queries observability platforms, writes DataPrime queries, analyzes telemetry). The description is almost entirely trigger-focused with minimal capability explanation. | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural user phrases: 'debug a problem', 'find out why something is slow', 'check error rates', 'look at logs', 'check CPU usage', 'Core Web Vitals', 'JavaScript exceptions', 'DataPrime syntax' — these are highly natural terms users would actually say. | 3 / 3 |
| Distinctiveness / Conflict Risk | The telemetry/observability domain is fairly specific, and mentions of DataPrime, PromQL, RUM, and APM help distinguish it. However, the extremely broad scope ('any question involving telemetry data') and overlap with generic debugging/logging skills could cause conflicts with other monitoring or debugging-related skills. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
### Implementation — 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong routing/orchestration skill that clearly guides Claude through telemetry investigation workflows. Its main strengths are excellent actionability with concrete CLI commands, clear multi-step discovery workflows with fallback paths, and well-organized progressive disclosure to reference files. The main weakness is moderate redundancy—the examples section and key principles largely restate content already covered in the routing guide and discovery workflow.
#### Suggestions

- Consider trimming or removing the Examples section, as it mostly restates the routing guide and discovery workflow; alternatively, make examples show novel edge cases not covered above.
- Remove the 'Key Principles' section at the end, since each principle is already embedded in the workflow above, saving ~10 lines of redundant tokens.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and well-structured, but includes some redundancy — the examples section largely restates the routing guide and discovery workflow already covered above. The 'Key Principles' section at the end also repeats guidance already given. Some tightening is possible. | 2 / 3 |
| Actionability | Provides concrete, copy-paste-ready CLI commands throughout (`cx metrics search`, `cx search-fields`, `cx spans`), specific flag usage, and clear decision criteria. The discovery workflow gives executable steps, not vague descriptions. | 3 / 3 |
| Workflow Clarity | The discovery workflow is clearly sequenced (Steps 1-4) with explicit decision points ('If a matching metric is found, load X and continue'). The fallback/pivoting section provides explicit error recovery guidance ('try at least two pillars before concluding'). Validation is addressed through the 'validate with code' step for ambiguous results. | 3 / 3 |
| Progressive Disclosure | The skill serves as a clear routing/overview document that points to specific reference files (dataprime-reference.md, logs-querying.md, spans-querying.md, etc.) via a well-organized loading table. References are one level deep and clearly signaled. It also routes to other workflow skills for non-investigative intents. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

#### Validation for skill structure — 10 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **10 / 11 (Passed)** |
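The `frontmatter_unknown_keys` warning can typically be resolved by nesting unrecognized top-level keys under a metadata block. A hypothetical sketch follows; the `owner` key and the exact nesting shape are assumptions for illustration, since the review does not name the offending keys:

```yaml
# Before — `owner` is a hypothetical example of an unknown top-level key
# that would trigger the frontmatter_unknown_keys warning.
name: cx-telemetry-querying
owner: platform-team

# After — the unrecognized key moved under metadata, as the warning suggests.
name: cx-telemetry-querying
metadata:
  owner: platform-team
```

Check the skill spec for the exact set of recognized top-level keys before moving anything, since the accepted schema may differ from this sketch.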