Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.
Install with Tessl CLI
npx tessl i github:wshobson/agents --skill service-mesh-observability
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill

Agent success when using this skill
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that follows best practices. It uses third person voice, lists specific capabilities, includes a clear 'Use when...' clause with multiple trigger scenarios, and targets a distinct technical domain (service mesh observability) that minimizes conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'distributed tracing, metrics, and visualization', along with specific use cases such as 'debugging latency issues' and 'implementing SLOs for service communication'. | 3 / 3 |
| Completeness | Clearly answers both what ('comprehensive observability for service meshes including distributed tracing, metrics, and visualization') and when ('Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'service mesh', 'distributed tracing', 'metrics', 'monitoring', 'latency issues', 'SLOs', 'observability'; these are terms practitioners naturally use when seeking this capability. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused specifically on service mesh observability, with distinct triggers like 'mesh monitoring', 'service communication', and 'distributed tracing' that differentiate it from general monitoring or logging skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 52%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill excels at providing concrete, executable configurations for service mesh observability tools, with comprehensive coverage of Istio, Linkerd, Jaeger, and OpenTelemetry. However, it lacks workflow guidance on how to sequence these deployments, validate successful setup, or troubleshoot common issues. The conceptual sections add unnecessary tokens while the practical deployment order and verification steps are missing.
Suggestions:

- Add a clear workflow section showing deployment order (e.g., 1. install Prometheus, 2. verify the metrics endpoint, 3. deploy Jaeger, 4. validate tracing) with explicit validation commands at each step.
- Remove or significantly condense the 'Three Pillars' and 'Golden Signals' sections; Claude already knows these concepts, so focus on mesh-specific implementation details.
- Add verification commands after each template (e.g., `kubectl get pods -n istio-system` to confirm the deployment, or curling the metrics endpoint to validate scraping).
- Split detailed templates (the Grafana JSON and OTel config) into separate reference files and keep SKILL.md as a quick-start overview with links.
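The "deploy, then validate" pattern the first and third suggestions call for can be sketched as a small shell helper. This is a minimal illustration, not part of the reviewed skill; the helper name and the commented kubectl/curl usage are assumptions about what such a workflow might look like:

```shell
# Hypothetical helper: run a validation command repeatedly until it
# succeeds, or give up after a fixed number of attempts.
wait_for() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then
      return 0   # validation command succeeded
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1       # exhausted all attempts
}

# Against a real cluster, a sequenced workflow might then read
# (commands assumed, not taken from the skill):
#   kubectl apply -f prometheus.yaml
#   wait_for 30 kubectl -n istio-system get deploy prometheus
#   wait_for 30 curl -fsS http://localhost:9090/metrics
```

Each deployment step is followed by an explicit check, so a broken component is caught before the next one is layered on top of it.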
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary conceptual content (Three Pillars diagram, Golden Signals table) that Claude already knows. The templates themselves are efficient, but the framing adds token overhead. | 2 / 3 |
| Actionability | Provides fully executable YAML configurations, PromQL queries, and bash commands that are copy-paste ready. Templates cover complete deployment scenarios with specific image versions and port configurations. | 3 / 3 |
| Workflow Clarity | No clear sequencing for multi-step processes. Templates are presented as isolated configurations without guidance on deployment order, dependencies between components, or validation steps to verify successful setup. | 1 / 3 |
| Progressive Disclosure | Content is organized into logical sections with external resource links, but the skill is monolithic, with 7 detailed templates inline. Advanced configurations (Kiali, OTel) could be split into separate reference files. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
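The review credits the skill's SLO and PromQL material; as a rough, self-contained illustration of the SLO side, an error-budget burn rate can be computed from an observed error ratio. The numbers and variable names below are hypothetical, not taken from the skill; in practice the error ratio would come from a PromQL ratio query against the mesh's request metrics:

```shell
# Hypothetical numbers: a 99.9% availability SLO leaves a 0.1% error
# budget; burn rate = observed error ratio / error budget.
slo_target=0.999
observed_error_ratio=0.0005   # assumed sample value

burn_rate=$(awk -v err="$observed_error_ratio" -v slo="$slo_target" \
  'BEGIN { printf "%.2f", err / (1 - slo) }')
echo "burn rate: $burn_rate"   # prints "burn rate: 0.50"
```

A burn rate below 1.0 means the service is consuming its error budget slower than the SLO window allows; multi-window burn-rate thresholds are the usual basis for SLO alerting.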
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.