Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems.
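For context on what the skill implements: requests are correlated across services by propagating a trace context header, where every hop reuses the trace ID and mints a new span ID. Below is a minimal stdlib-only sketch of the W3C `traceparent` format that both Jaeger and Tempo accept. In practice the OpenTelemetry SDK generates and parses this header; the helper names here are illustrative, not part of any library.

```python
import re
import secrets

def make_traceparent(trace_id=None):
    """Build a W3C traceparent header: version-traceid-spanid-flags.

    A service keeps the incoming trace_id (if any) and mints a fresh
    span_id, so every hop shows up under one trace in Jaeger/Tempo.
    """
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = secrets.token_hex(8)                # 16 hex chars
    return f"00-{trace_id}-{span_id}-01", trace_id

def parse_traceparent(header):
    """Return the trace context fields, or None if the header is malformed."""
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        return None
    return {"trace_id": m.group(1), "parent_span_id": m.group(2), "flags": m.group(3)}

# Service A starts a trace; service B continues it from the incoming header.
header_a, trace_id = make_traceparent()
ctx = parse_traceparent(header_a)
header_b, trace_id_b = make_traceparent(trace_id=ctx["trace_id"])
```

Because service B reuses the trace ID, its spans attach to service A's trace in the tracing backend.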
Overall score: 68% — Does it follow best practices?

Evals
Impact: — (no eval scenarios have been run)
Advisory: suggest reviewing before use.
Optimize this skill with Tessl:
npx tessl skill review --optimize ./plugins/observability-monitoring/skills/distributed-tracing/SKILL.md

Quality
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates specific capabilities (distributed tracing with named tools), includes natural trigger terms developers would use, and explicitly states both what the skill does and when to use it. The mention of specific tools (Jaeger, Tempo) and domain terminology (distributed tracing, observability, request flows) makes it highly distinctive and easy for Claude to select appropriately.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'implement distributed tracing with Jaeger and Tempo', 'track requests across microservices', and 'identify performance bottlenecks'. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (implement distributed tracing with Jaeger and Tempo to track requests and identify bottlenecks) and 'when' (explicit 'Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'distributed tracing', 'Jaeger', 'Tempo', 'microservices', 'request flows', 'observability', 'performance bottlenecks', 'debugging microservices'. Good coverage of terms a developer would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with specific tool names (Jaeger, Tempo) and a clear niche (distributed tracing for microservices). Unlikely to conflict with general monitoring, logging, or other observability skills due to the specific focus on tracing and named tools. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Implementation — 37%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a comprehensive but bloated reference document rather than an efficient, workflow-oriented skill. Its main strength is highly actionable, executable code examples across multiple languages and deployment targets. However, it suffers from excessive verbosity (explaining concepts Claude knows, three redundant language examples inline), lack of a clear sequential workflow with validation steps, and dead references to non-existent bundle files.
Suggestions
- Add a clear sequential workflow (e.g., 1. Deploy collector → 2. Verify the collector is running → 3. Instrument one service → 4. Verify traces appear → 5. Add propagation) with explicit validation checkpoints at each step.
- Move the multi-language instrumentation examples (Python, Node.js, Go) into separate referenced files and keep only one primary example inline to dramatically reduce token count.
- Remove the 'Distributed Tracing Concepts' section and the 'When to Use' bullet list — Claude already understands these concepts, and the description already covers when to use the skill.
- Either create the referenced bundle files (references/jaeger-setup.md, references/instrumentation.md, assets/jaeger-config.yaml.template) or remove the dead references.
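The validation checkpoints suggested above can be sketched as small probes against Jaeger's query service. This is an illustrative sketch, assuming a default Jaeger deployment with the query API on port 16686; the function names are hypothetical, and the `/api/services` endpoint is Jaeger's internal query API rather than a stable public contract.

```python
import json
import urllib.error
import urllib.request

def jaeger_ready(base_url="http://localhost:16686", timeout=2):
    """Checkpoint after deploying the collector: is the Jaeger query
    API answering before we start instrumenting services?"""
    try:
        with urllib.request.urlopen(f"{base_url}/api/services", timeout=timeout) as resp:
            return resp.status == 200 and "data" in json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return False

def traces_appearing(service, base_url="http://localhost:16686", timeout=2):
    """Checkpoint after instrumenting one service: does it show up in
    Jaeger's service list before we add propagation elsewhere?"""
    try:
        with urllib.request.urlopen(f"{base_url}/api/services", timeout=timeout) as resp:
            return service in (json.load(resp).get("data") or [])
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

A workflow step would then be gated on these probes, e.g. refuse to proceed to instrumentation until `jaeger_ready()` returns True.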
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~300+ lines, with significant redundancy across three language examples (Python, Node.js, Go) that all demonstrate the same concept. It explains basic tracing concepts (what a trace is, what a span is) that Claude already knows, and includes a 10-item best-practices list that is mostly common knowledge. The 'When to Use' section restates the description. | 1 / 3 |
| Actionability | The skill provides fully executable, copy-paste-ready code examples across multiple languages and deployment methods (Kubernetes YAML, Docker Compose, Python/Node.js/Go instrumentation). Commands and configurations are concrete and specific, with real endpoints, ports, and package imports. | 3 / 3 |
| Workflow Clarity | There is no clear sequential workflow for implementing distributed tracing end-to-end. The content is organized as a reference catalog of configurations and code snippets rather than a guided process. There are no validation checkpoints (e.g., verify traces appear before proceeding to instrumentation), no feedback loops, and the troubleshooting section is vague bullet points without actionable resolution steps. | 1 / 3 |
| Progressive Disclosure | References to external files like `references/jaeger-setup.md`, `references/instrumentation.md`, and `assets/jaeger-config.yaml.template` are mentioned, but no bundle files exist, making these dead references. The main file contains too much inline content (three full language examples, full Kubernetes manifests) that should be in referenced files, although the structure has some logical sections. | 2 / 3 |
| Total | | 7 / 12 — Passed |
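As a starting point for the dead `assets/jaeger-config.yaml.template` reference, one option is a minimal OpenTelemetry Collector pipeline that receives OTLP and forwards traces to Tempo. This is a sketch, assuming a Tempo distributor reachable as `tempo:4317` inside the cluster; adjust the endpoint and TLS settings for the actual deployment.

```yaml
receivers:
  otlp:
    protocols:
      grpc:   # apps export spans to the collector on 4317
      http:   # or 4318

exporters:
  otlp:
    endpoint: tempo:4317   # assumed Tempo address; change per environment
    tls:
      insecure: true       # acceptable for in-cluster traffic only

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```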
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 passed. No warnings or errors.