Skill description under review (quoted verbatim, including its truncation and boilerplate): "this skill automates the setup of distributed tracing for microservices. it helps developers implement end-to-end request visibility by configuring context propagation, span creation, trace collection, and analysis. use this skill when the user re... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose."
Optimize this skill with Tessl
npx tessl skill review --optimize ./plugins/performance/distributed-tracing-setup/skills/setting-up-distributed-tracing/SKILL.md

Quality
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description starts reasonably well by naming the domain and some concrete actions related to distributed tracing setup, but it is severely undermined by a truncated 'Use when' clause and meaningless boilerplate filler ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose'). The generic ending provides zero actionable trigger guidance and would make it difficult for Claude to reliably select this skill.
Suggestions
Replace the boilerplate 'Use when appropriate context detected. Trigger with relevant phrases based on skill purpose' with explicit trigger conditions, e.g., 'Use when the user asks about distributed tracing, OpenTelemetry setup, request tracing across microservices, or observability instrumentation.'
Complete the truncated clause ('when the user re...') to provide the full intended trigger guidance.
Add common user-facing trigger terms such as 'OpenTelemetry', 'Jaeger', 'Zipkin', 'observability', 'trace spans', and 'request tracing' to improve keyword coverage.
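As an illustration of the three suggestions above, a rewritten description might look like the following frontmatter sketch. The wording is hypothetical (not taken from the skill itself), assuming the standard SKILL.md `name`/`description` frontmatter layout:

```yaml
# Hypothetical SKILL.md frontmatter; the description text is an
# illustrative rewrite, not the skill's actual content.
name: setting-up-distributed-tracing
description: >
  Automates distributed tracing setup for microservices: configures context
  propagation, span creation, trace collection, and analysis. Use when the
  user asks about distributed tracing, OpenTelemetry setup, Jaeger or Zipkin,
  request tracing across services, trace spans, or observability
  instrumentation.
```

Note how the rewrite replaces the truncated clause and boilerplate with explicit trigger conditions and the missing keyword coverage ('OpenTelemetry', 'Jaeger', 'Zipkin', 'observability').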
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (distributed tracing for microservices) and lists some actions (configuring context propagation, span creation, trace collection, and analysis), but the truncation ('when the user re...') and generic boilerplate at the end undermine the specificity. | 2 / 3 |
| Completeness | While the 'what' is partially addressed, the 'when' clause is truncated and then replaced with meaningless boilerplate ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose'), which provides no explicit trigger guidance. This effectively means the 'when' is missing. | 1 / 3 |
| Trigger Term Quality | It includes some relevant keywords like 'distributed tracing', 'microservices', 'context propagation', 'span creation', and 'trace collection', but the boilerplate 'Trigger with relevant phrases based on skill purpose' adds no actual trigger terms, and common user phrases like 'OpenTelemetry', 'Jaeger', 'tracing setup', or 'observability' are missing. | 2 / 3 |
| Distinctiveness / Conflict Risk | The domain of distributed tracing for microservices is fairly specific and unlikely to conflict with many other skills, but the generic boilerplate ending ('Use when appropriate context detected') could cause false triggers, and the truncated description reduces clarity. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation — 0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is almost entirely generic boilerplate with no actionable content. It describes what distributed tracing is and what the skill would theoretically do, but provides zero concrete code examples, configuration snippets, commands, or specific guidance. Every section reads like a template placeholder rather than useful instruction for Claude.
Suggestions
Replace the abstract descriptions with concrete, executable code examples showing OpenTelemetry setup (e.g., Python/Node.js tracer initialization, span creation, exporter configuration for Jaeger/Zipkin)
Remove all generic boilerplate sections (Prerequisites, Instructions, Output, Error Handling, Resources) that contain only placeholder text and add no value
Add a clear step-by-step workflow with validation checkpoints, such as verifying the tracer is connected to the backend and confirming spans appear in the UI
Include specific configuration file examples (e.g., collector config YAML, docker-compose for Jaeger) that Claude can adapt rather than vague references to 'configuration generation'
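The kind of concrete, adaptable example the review is asking for can be quite small. As a stdlib-only sketch (illustrative function names, not OpenTelemetry's actual API), here is the W3C `traceparent` context-propagation mechanism that tracer setup ultimately relies on:

```python
# Minimal sketch of W3C Trace Context propagation, the mechanism that
# OpenTelemetry automates across service boundaries. The helper names
# below are hypothetical, chosen for illustration.
import re
import secrets

def make_traceparent(trace_id=None, span_id=None):
    """Build a traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

def parse_traceparent(header):
    """Extract trace context from an incoming request header; None if malformed."""
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        return None
    return {"trace_id": m.group(1), "parent_span_id": m.group(2), "flags": m.group(3)}

# A downstream service starts its own span but keeps the caller's trace_id,
# which is what links spans from different services into one trace.
incoming = make_traceparent()
ctx = parse_traceparent(incoming)
outgoing = make_traceparent(trace_id=ctx["trace_id"])
```

An instruction built around a snippet like this (or its real OpenTelemetry equivalent) gives Claude something to adapt, rather than a description of what adaptation would look like.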
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive filler content. Sections like 'Overview', 'How It Works', 'When to Use This Skill', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all generic boilerplate that provide no specific, useful information. Claude already knows what distributed tracing is and doesn't need explanations of the concept. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance anywhere. The examples describe what the skill 'will do' in vague terms but never show actual code snippets, configuration files, or specific commands. Statements like 'Generate code snippets for OpenTelemetry instrumentation' describe rather than instruct. | 1 / 3 |
| Workflow Clarity | The multi-step processes described are entirely abstract ('Invoke this skill when the trigger conditions are met', 'Provide necessary context and parameters'). There are no concrete steps, no validation checkpoints, and no feedback loops. The 'How It Works' section lists abstract phases without any actionable detail. | 1 / 3 |
| Progressive Disclosure | No bundle files exist, yet the content is a monolithic wall of generic text with no meaningful structure. References to 'Project documentation' and 'Related skills and commands' are vague pointers to nothing. The content that is present is mostly filler that could be removed entirely rather than organized better. | 1 / 3 |
| Total | | 4 / 12 Passed |
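For the 'specific configuration file examples' suggestion, even a minimal local backend definition would be more useful than a vague reference. A sketch, assuming Jaeger's all-in-one development image and its documented default ports:

```yaml
# Hypothetical docker-compose sketch for a local Jaeger backend.
# Ports are Jaeger's documented defaults; treat the image tag and the
# need for COLLECTOR_OTLP_ENABLED as assumptions to verify against the
# Jaeger version in use.
services:
  jaeger:
    image: jaegertracing/all-in-one:latest
    environment:
      COLLECTOR_OTLP_ENABLED: "true"
    ports:
      - "16686:16686"  # Jaeger UI
      - "4317:4317"    # OTLP gRPC receiver
      - "4318:4318"    # OTLP HTTP receiver
```

Pairing a file like this with a validation checkpoint ('open http://localhost:16686 and confirm spans appear') would address both the actionability and workflow-clarity gaps scored above.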
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |