Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems.
Score: 79

Quality: 68% (does it follow best practices?)
Impact: 99% (1.16x average score across 3 eval scenarios)
Advisory: suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/observability-monitoring/skills/distributed-tracing/SKILL.md`

Quality

Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates specific capabilities (distributed tracing with named tools), includes natural trigger terms users would employ, and explicitly states both what the skill does and when to use it. The mention of specific technologies (Jaeger, Tempo) and domain terminology (distributed tracing, observability, microservices) makes it highly distinctive and easy for Claude to select appropriately.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'implement distributed tracing with Jaeger and Tempo', 'track requests across microservices', and 'identify performance bottlenecks'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (implement distributed tracing with Jaeger and Tempo to track requests and identify bottlenecks) and 'when' (explicit 'Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'distributed tracing', 'Jaeger', 'Tempo', 'microservices', 'request flows', 'observability', 'performance bottlenecks', 'debugging microservices'. Good coverage of terms a user working in this domain would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with specific tool names (Jaeger, Tempo) and a clear niche (distributed tracing for microservices). Unlikely to conflict with general monitoring, logging, or other observability skills due to the specific technology and use-case focus. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive reference document but fails as an actionable skill guide. It is far too verbose, explaining concepts Claude already knows and inlining extensive code in three languages that should be in reference files. The biggest weakness is the lack of a clear workflow with validation steps—there's no sequential process for implementing tracing, just disconnected reference sections.
Suggestions
- Add a clear numbered workflow (e.g., 1. Deploy Jaeger, 2. Verify UI accessible at :16686, 3. Instrument one service, 4. Verify traces appear, 5. Add context propagation, 6. Verify cross-service traces) with explicit validation checkpoints at each step.
- Remove the 'Distributed Tracing Concepts' section entirely; Claude already knows what traces, spans, and context are.
- Keep only one language example inline (e.g., Python) and move the Node.js and Go examples to references/instrumentation.md, which is already referenced.
- Move the full Kubernetes manifests and Docker Compose configs to reference files, keeping only the minimal deployment command inline.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~300+ lines, explaining basic tracing concepts Claude already knows (what a trace, span, and context are), providing full instrumentation examples in three languages (Python, Node.js, Go) when one would suffice with references for others, and including a 'Distributed Tracing Concepts' section that adds little value. The Docker Compose port mappings and full Kubernetes YAML manifests could be referenced rather than inlined. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste-ready code examples across multiple languages and deployment methods. Kubernetes manifests, Docker Compose files, and instrumentation code are all concrete and complete with specific configuration values. | 3 / 3 |
| Workflow Clarity | There is no clear sequential workflow for implementing distributed tracing. The content reads as a reference document with disconnected sections rather than a guided process. There are no validation checkpoints (e.g., verify Jaeger is running before instrumenting, verify traces appear after instrumentation) and no feedback loops for error recovery beyond a brief troubleshooting section. | 1 / 3 |
| Progressive Disclosure | There are some references to external files (references/jaeger-setup.md, references/instrumentation.md, assets/jaeger-config.yaml.template), which is good, but the main file still contains massive amounts of inline content that should be in those referenced files. The three full language examples and full Kubernetes manifests should be offloaded to reference files with only one concise example kept inline. | 2 / 3 |
| Total | | 7 / 12 Passed |
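The validation checkpoint flagged above ("verify Jaeger is running before instrumenting") could be a small pre-flight probe. Port 16686 is Jaeger's default query/UI port; the helper name here is illustrative:

```python
# Hypothetical pre-flight check: confirm the Jaeger UI responds before
# instrumenting any services.
import urllib.error
import urllib.request


def jaeger_ui_reachable(host: str = "localhost", port: int = 16686,
                        timeout: float = 2.0) -> bool:
    """Return True if the Jaeger UI answers at http://host:port/."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False


if not jaeger_ui_reachable():
    print("Jaeger UI not reachable; deploy Jaeger before instrumenting services.")
```

A checkpoint like this turns the reference-style content into the guided, verifiable workflow the review asks for.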
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.