
distributed-tracing

Implement distributed tracing with Jaeger and Tempo to track requests across microservices and identify performance bottlenecks. Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems.

Overall: 79 (1.16x)

Quality: 68% (Does it follow best practices?)

Impact: 99% (1.16x), average score across 3 eval scenarios

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/observability-monitoring/skills/distributed-tracing/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly communicates specific capabilities (distributed tracing with named tools), includes natural trigger terms developers would use, and explicitly states both what the skill does and when to use it. The mention of specific tools (Jaeger, Tempo) and domain terminology (distributed tracing, observability, request flows) makes it highly distinctive and easy to match.

Dimension: Specificity
Reasoning: Lists multiple specific, concrete actions: 'implement distributed tracing with Jaeger and Tempo', 'track requests across microservices', and 'identify performance bottlenecks'. These are concrete, actionable capabilities.
Score: 3 / 3

Dimension: Completeness
Reasoning: Clearly answers both 'what' (implement distributed tracing with Jaeger and Tempo to track requests and identify bottlenecks) and 'when' (explicit 'Use when debugging microservices, analyzing request flows, or implementing observability for distributed systems').
Score: 3 / 3

Dimension: Trigger Term Quality
Reasoning: Includes strong natural keywords users would say: 'distributed tracing', 'Jaeger', 'Tempo', 'microservices', 'request flows', 'observability', 'performance bottlenecks', 'debugging microservices'. Good coverage of terms a developer would naturally use.
Score: 3 / 3

Dimension: Distinctiveness / Conflict Risk
Reasoning: Highly distinctive, with specific tool names (Jaeger, Tempo) and a clear niche (distributed tracing for microservices). Unlikely to conflict with general monitoring, logging, or other observability skills due to the specific focus on tracing and named tools.
Score: 3 / 3

Total: 12 / 12 (Passed)

Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a comprehensive but bloated reference document rather than an efficient, actionable guide. While the code examples are executable and concrete, the content suffers from excessive verbosity (explaining basic concepts, providing three full language examples inline), lack of a coherent workflow with validation steps, and insufficient use of progressive disclosure to keep the main file lean. It would benefit greatly from being restructured into a concise overview with references to detailed sub-documents.

Suggestions

Remove the 'Distributed Tracing Concepts' section and the generic 'Best Practices' list — Claude already knows these concepts. Focus only on project-specific conventions or non-obvious gotchas.

Add a clear sequential workflow (e.g., 1. Deploy collector → 2. Verify collector is running → 3. Instrument one service → 4. Verify traces appear in UI → 5. Add propagation → 6. Verify cross-service traces) with explicit validation checkpoints at each step.
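The propagation and cross-service verification steps in that workflow can be illustrated with a minimal, hypothetical sketch. It assumes W3C Trace Context (`traceparent`) headers; the helper names are invented for illustration and are not from the reviewed skill:

```python
import re
import secrets

# W3C Trace Context header: traceparent = version-traceid-spanid-flags
TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a traceparent header; fresh IDs are generated when none are given."""
    trace_id = trace_id or secrets.token_hex(16)   # 32 hex chars
    span_id = span_id or secrets.token_hex(8)      # 16 hex chars
    return f"00-{trace_id}-{span_id}-{'01' if sampled else '00'}"

def propagate(incoming_header):
    """Continue a trace across a service hop: keep the trace ID, mint a new span ID."""
    m = TRACEPARENT_RE.match(incoming_header)
    if not m:
        return make_traceparent()  # no valid context: start a new trace
    trace_id, _parent_span_id, flags = m.groups()
    return make_traceparent(trace_id, secrets.token_hex(8), flags == "01")

# Service A starts a trace; service B continues it under the same trace ID.
a = make_traceparent()
b = propagate(a)
assert a.split("-")[1] == b.split("-")[1]  # same trace ID across the hop
assert a.split("-")[2] != b.split("-")[2]  # new span ID per hop
```

The final assertions are exactly the cross-service check step 6 calls for: one trace ID shared by both services, distinct span IDs per hop.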

Move the multi-language instrumentation examples (Python, Node.js, Go) and the Tempo Kubernetes config into separate reference files, keeping only one concise example inline with links to the others.

Add verification commands after setup steps (e.g., 'kubectl get pods -n observability' to confirm Jaeger is running, curl commands to verify the collector endpoint is accepting traces).
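A sketch of what such a verification helper could look like, using only the standard library. The ports are Jaeger's documented defaults (14269 collector admin, 16686 query UI/API, 4318 OTLP over HTTP), but treat the exact endpoints as assumptions to confirm against your deployment:

```python
import urllib.request
import urllib.error

def endpoint_is_up(url, timeout=2.0):
    """Return True if the endpoint answers any HTTP status, False if unreachable."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response, so the process is listening
    except (urllib.error.URLError, OSError):
        return False  # connection refused or timed out

# Jaeger's documented default ports; adjust to your deployment.
checks = {
    "collector admin/health": "http://localhost:14269/",
    "query UI / API":         "http://localhost:16686/",
    "OTLP over HTTP":         "http://localhost:4318/",
}
for name, url in checks.items():
    print(f"{name}: {'up' if endpoint_is_up(url) else 'DOWN'} ({url})")
```

In a Kubernetes deployment the same checkpoints would sit behind `kubectl port-forward` or a Service, so the URLs above would point at cluster addresses instead of localhost.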

Dimension: Conciseness
Reasoning: The skill is extremely verbose at ~300+ lines: it explains basic tracing concepts Claude already knows (what a trace is, what a span is), provides full instrumentation examples in three languages (Python, Node.js, Go) when one would suffice with references for the others, and includes a 'Distributed Tracing Concepts' section that adds little value. The best-practices list is generic advice Claude already knows.
Score: 1 / 3

Dimension: Actionability
Reasoning: The skill provides fully executable, copy-paste-ready code examples across multiple languages and deployment methods (Kubernetes, Docker Compose). Configuration files are complete YAML/Python/JS/Go with specific endpoints, ports, and parameters.
Score: 3 / 3

Dimension: Workflow Clarity
Reasoning: There is no clear sequential workflow for implementing distributed tracing. The content reads as a reference dump of disconnected sections (setup, instrumentation, propagation, sampling) without a coherent step-by-step process, validation checkpoints, or feedback loops. For a complex multi-step operation like setting up distributed tracing across microservices, there is no verification that traces are actually flowing.
Score: 1 / 3

Dimension: Progressive Disclosure
Reasoning: There are some references to external files (references/jaeger-setup.md, references/instrumentation.md, assets/jaeger-config.yaml.template) and related skills, but the main file is a monolithic wall of content that should have been split. The three full language examples and both the Jaeger and Tempo setup configs should live in separate reference files, with SKILL.md providing a concise overview.
Score: 2 / 3

Total: 7 / 12 (Passed)
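The trace-flow verification the review finds missing could be sketched like this. It assumes Jaeger's query API, which returns matching traces under a top-level `data` key; the helper names and the default base URL are illustrative:

```python
import json
import urllib.request

def fetch_traces(service, base_url="http://localhost:16686", limit=5, timeout=5.0):
    """Query Jaeger for recent traces of one service; returns the parsed payload."""
    url = f"{base_url}/api/traces?service={service}&limit={limit}"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def has_traces(payload):
    """True if the Jaeger query response contains at least one trace."""
    return bool(payload.get("data"))

# Offline check of the response handling (no running Jaeger needed):
assert has_traces({"data": [{"traceID": "abc123", "spans": []}]})
assert not has_traces({"data": []})
```

Run after instrumenting each service, a check like `has_traces(fetch_traces("my-service"))` would give the workflow the explicit "verify traces appear" checkpoint the review asks for.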

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: wshobson/agents (Reviewed)
