Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems.
Overall score: 80

Quality: 68% (Does it follow best practices?)
Impact: 100%, 1.29x average score across 3 eval scenarios
Advisory: Suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/observability-monitoring/skills/distributed-tracing/SKILL.md`

Quality
Discovery
100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates specific capabilities (distributed tracing with named tools), includes natural trigger terms developers would use, and explicitly states both what the skill does and when to use it. The mention of specific tools (Jaeger, Tempo) and domain-specific terminology (distributed tracing, observability, request flows) makes it highly distinctive and easy to match to the right user requests.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'implement distributed tracing with Jaeger and Tempo', 'track requests across microservices', and 'identify performance bottlenecks'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (implement distributed tracing with Jaeger and Tempo to track requests and identify bottlenecks) and 'when' (explicit 'Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'distributed tracing', 'Jaeger', 'Tempo', 'microservices', 'request flows', 'observability', 'performance bottlenecks', 'debugging microservices'. Good coverage of terms a developer would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with specific tool names (Jaeger, Tempo) and a clear niche (distributed tracing for microservices). Unlikely to conflict with general monitoring, logging, or other observability skills due to the specific focus on tracing and named tools. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
37%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive reference document for distributed tracing but fails as a SKILL.md by being excessively verbose, lacking workflow structure, and inlining too much detail. It explains concepts Claude already understands, provides redundant multi-language examples that should be in reference files, and lacks a clear step-by-step implementation workflow with validation checkpoints. The actionability is strong with executable code, but the content would benefit greatly from aggressive trimming and restructuring.
Suggestions
- Remove the 'Distributed Tracing Concepts' section and the best practices list entirely; Claude already knows these. Cut the content to under 100 lines, focusing on the specific workflow.
- Add a clear numbered workflow: 1) deploy Jaeger/Tempo, 2) verify the deployment (`kubectl get pods`), 3) instrument one service, 4) verify traces appear in the UI, 5) add context propagation, 6) verify the end-to-end trace. Include validation at each step.
- Move the multi-language instrumentation examples (Python, Node.js, Go) into references/instrumentation.md and keep only one concise example inline, with a pointer to the reference file.
- Move the Tempo configuration and sampling strategies into their respective reference files, keeping only a brief mention and a link in the main skill.
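The context-propagation step in the suggested workflow hinges on passing a trace ID between services. As a stdlib-only sketch (not the OpenTelemetry SDK itself, which the skill's examples presumably use), the W3C `traceparent` header that Jaeger and Tempo ingest via OpenTelemetry can be built and parsed like this; the helper names `make_traceparent` and `parse_traceparent` are hypothetical:

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Return (trace_id, span_id, sampled) or None if malformed."""
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        return None
    trace_id, span_id, flags = m.groups()
    return trace_id, span_id, int(flags, 16) & 1 == 1

# A downstream service reuses the incoming trace_id with a fresh span_id,
# which is what lets Jaeger/Tempo stitch both spans into one trace.
incoming = make_traceparent()
trace_id, parent_span, sampled = parse_traceparent(incoming)
outgoing = make_traceparent(trace_id=trace_id)
```

The key design point, regardless of SDK, is that the trace ID is preserved across every hop while each service mints its own span ID; verifying this end to end is the final step of the workflow above.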
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~300+ lines, explaining basic tracing concepts Claude already knows (what a trace is, what a span is), providing full instrumentation examples in three languages (Python, Node.js, Go) when one with a note would suffice, and including a 'Distributed Tracing Concepts' section that is unnecessary for Claude. The best practices list is generic advice Claude already knows. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste ready code examples across multiple languages and deployment methods (Kubernetes, Docker Compose), with concrete configuration files, specific commands, and real query examples for trace analysis. | 3 / 3 |
| Workflow Clarity | There is no clear sequential workflow for implementing distributed tracing. The content reads as a reference dump of configurations and code snippets without a defined order of operations, validation checkpoints, or feedback loops. For a complex multi-step process like setting up distributed tracing across microservices, there should be explicit steps with verification at each stage. | 1 / 3 |
| Progressive Disclosure | References to external files exist (references/jaeger-setup.md, references/instrumentation.md, assets/jaeger-config.yaml.template) and related skills are listed, but the main file is a monolithic wall of content that should have much more pushed into those reference files. The inline content is far too detailed for an overview document. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 Passed. Validation for skill structure reported no warnings or errors.