Distributed traces, spans, service dependencies, performance analysis, and failure detection. Query trace data, analyze request flows, and investigate span-level details.
Overall score: 77%. Does it follow best practices?

Impact: — (no eval scenarios have been run)
Status: Passed; no known issues
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/dt-obs-tracing/SKILL.md`

Quality
Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description with excellent specificity and domain-relevant trigger terms that carve out a distinct niche in distributed tracing and observability. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill over others.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about distributed tracing, latency issues across services, span analysis, or debugging request flows through microservices.'
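Applied to this skill, the revised SKILL.md frontmatter might read as follows. This is an illustrative sketch: the `name` value and the exact 'Use when' wording are assumptions drawn from the suggestion above, not the skill's current text:

```yaml
---
name: dt-obs-tracing
description: >
  Distributed traces, spans, service dependencies, performance analysis, and
  failure detection. Query trace data, analyze request flows, and investigate
  span-level details. Use when the user asks about distributed tracing, latency
  issues across services, span analysis, or debugging request flows through
  microservices.
---
```

The added final sentence gives the agent explicit trigger conditions without disturbing the strong keyword coverage the description already has.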
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Query trace data', 'analyze request flows', 'investigate span-level details', plus domain concepts like 'distributed traces', 'spans', 'service dependencies', 'performance analysis', and 'failure detection'. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the domain terms. Per the rubric guidelines, a missing 'Use when' caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'distributed traces', 'spans', 'service dependencies', 'performance analysis', 'failure detection', 'request flows', 'trace data'. These are terms engineers naturally use when debugging distributed systems. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche focused on distributed tracing and span-level analysis. Terms like 'spans', 'distributed traces', 'service dependencies', and 'request flows' are specific to observability/tracing and unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, comprehensive reference skill for distributed tracing in Dynatrace. Its greatest strengths are the abundance of executable DQL queries and excellent progressive disclosure to reference files. The main weakness is verbosity: the core concepts section explains things Claude likely knows, and the document could be 30-40% shorter while retaining all actionable content.
Suggestions
Trim the 'Understanding Traces and Spans' section significantly — Claude doesn't need explanations of what HTTP requests, RPC calls, or messaging interactions are. Keep only the Dynatrace-specific span.kind values and root span concept.
Consider condensing the attribute table to only non-obvious attributes (e.g., remove trace.id/span.id descriptions like 'Unique trace identifier') and focus on Dynatrace-specific fields like dt.smartscape.service and request.is_root_span.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long (~400+ lines) and includes some explanatory content Claude likely already knows (e.g., explaining what spans are, what HTTP requests are). The attribute tables and concept sections add value but could be tighter. The core query patterns are efficient, but the overall document could be trimmed significantly. | 2 / 3 |
| Actionability | Nearly every section includes fully executable DQL queries that are copy-paste ready. The queries are specific, complete, and cover a wide range of use cases from basic span access to complex trace aggregation with root-detection strategies. Field names, filter patterns, and output fields are all explicit. | 3 / 3 |
| Workflow Clarity | The skill is primarily a reference/query catalog rather than a multi-step workflow, so explicit validation checkpoints are less critical. However, the sampling/extrapolation section describes a multi-step process without clear validation, and the trace aggregation patterns lack guidance on verifying results. The 'Best Practices' section provides useful ordering guidance but no explicit feedback loops. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure structure. The main SKILL.md provides concise overviews with executable examples for each topic, then clearly signals 12 reference files with descriptive labels using '📖 Learn more' callouts. References are one level deep and well-organized in a final references section. Navigation is intuitive and topics are logically grouped. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
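As an illustration of the copy-paste-ready query pattern the review praises, here is a minimal DQL sketch in the style the skill catalogs. It builds on attributes the review mentions (`request.is_root_span`, `dt.smartscape.service`); the `request.is_failed` field and exact pipeline shape are assumptions and may differ in a given Dynatrace environment:

```
fetch spans
| filter request.is_root_span == true and request.is_failed == true
| summarize failed_requests = count(), by: { dt.smartscape.service }
| sort failed_requests desc
| limit 10
```

A query like this surfaces the services producing the most failed root spans, a typical entry point before drilling into span-level details.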
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 checks passed.
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (641 lines); consider splitting into references/ and linking | Warning |
| Total | | 10 / 11 (Passed) |
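One way to act on the `skill_md_line_count` warning is to move detail sections into linked reference files, as the warning suggests. A possible layout (a sketch; the file names are hypothetical, chosen to match topics the review mentions):

```
skills/dt-obs-tracing/
├── SKILL.md                      # concise overview, core queries, links
└── references/
    ├── span-attributes.md        # full attribute table
    ├── trace-aggregation.md      # aggregation and root-detection patterns
    └── sampling-extrapolation.md # sampling workflow and validation steps
```

Keeping SKILL.md to the core query patterns and linking out preserves the skill's strong progressive disclosure while clearing the line-count warning.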