
service-mesh-observability

Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.

Score: 85

Quality: 64% (Does it follow best practices?)

Impact: 98% (1.24x), average score across 6 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/cloud-infrastructure/skills/service-mesh-observability/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its scope around service mesh observability, lists concrete capabilities, and includes an explicit 'Use when' clause with natural trigger terms. It follows the third-person voice convention and is concise without being vague. The description effectively differentiates itself from general monitoring or logging skills through its specific focus on service mesh contexts.

Specificity: 3 / 3
Lists multiple specific concrete actions: 'distributed tracing, metrics, and visualization' as capabilities, and 'setting up mesh monitoring, debugging latency issues, implementing SLOs for service communication' as use cases. These are concrete, actionable items.

Completeness: 3 / 3
Clearly answers both 'what' (implement observability for service meshes including distributed tracing, metrics, visualization) and 'when' (explicit 'Use when' clause covering mesh monitoring setup, debugging latency, implementing SLOs).

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'service mesh', 'distributed tracing', 'metrics', 'monitoring', 'latency issues', 'SLOs', 'service communication', 'observability', 'visualization'. These cover the domain well with terms practitioners naturally use.

Distinctiveness / Conflict Risk: 3 / 3
The combination of 'service mesh' + 'observability' creates a clear niche. Terms like 'distributed tracing', 'SLOs for service communication', and 'mesh monitoring' are highly specific and unlikely to conflict with general monitoring or generic observability skills.

Total: 12 / 12

Passed

Implementation

29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent concrete, executable templates for service mesh observability tools but suffers from being a monolithic dump of configurations without workflow guidance. It wastes significant tokens on concepts Claude already knows (three pillars, golden signals) while lacking the sequential workflow and validation steps needed to actually deploy these components successfully. The content would benefit greatly from restructuring into a concise overview with linked reference files.

Suggestions

Remove the 'Core Concepts' section entirely (three pillars diagram, golden signals table) — Claude knows these — and replace with a brief workflow sequence: install metrics → configure tracing → set up dashboards → add alerts, with validation at each step.

Add explicit validation steps between templates, e.g., 'Verify Prometheus is scraping: kubectl port-forward svc/prometheus 9090 && curl localhost:9090/api/v1/targets' before proceeding to alerting rules.
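A sketch of what those checks might look like, assuming Prometheus and Jaeger run in the istio-system namespace under their default service names and ports (all of these names are assumptions to adjust, not taken from the skill):

# Sketch only: namespace, service names, and ports are assumptions,
# not taken from the skill. Adjust to the actual deployment.

# Verify Prometheus has healthy scrape targets before adding alert rules.
kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
sleep 2
curl -s localhost:9090/api/v1/targets | grep -o '"health":"up"' | wc -l

# Verify Jaeger is receiving traces before relying on them for debugging.
kubectl -n istio-system port-forward svc/jaeger-query 16686:16686 &
sleep 2
curl -s localhost:16686/api/services   # expect mesh services listed under "data"

kill %1 %2   # stop the background port-forwards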

Move large templates (Grafana dashboard JSON, Jaeger deployment, OTel collector config) into separate referenced files like GRAFANA_DASHBOARDS.md and TRACING_SETUP.md, keeping only the most essential quick-start template inline.

Add a clear numbered workflow that sequences the templates and explains which are alternatives (Istio vs Linkerd, Jaeger vs OTel) versus complementary components.
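As a sketch of how such a numbered workflow might read (the tool choices, chart names, and flags below are illustrative assumptions, not the skill's actual content):

# Illustrative workflow sketch; commands and chart names are assumptions.

# 1. Install a mesh with metrics enabled (alternatives, pick one):
istioctl install --set profile=default
# linkerd install | kubectl apply -f -

# 2. Install the metrics backend, then validate scraping (see check above):
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack

# 3. Configure tracing (alternatives: Jaeger all-in-one, or an OpenTelemetry
#    Collector exporting to a tracing backend), then validate trace ingestion.

# 4. Import Grafana dashboards, and only then add alerting rules, once
#    metrics and traces are confirmed flowing.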

Conciseness: 1 / 3
The skill is extremely verbose at ~300 lines, with significant content Claude already knows (three pillars of observability diagram, golden signals table, basic concepts). The ASCII art diagram and conceptual explanations waste tokens on well-known DevOps concepts. Much of this could be condensed to key templates and queries only.

Actionability: 3 / 3
The skill provides fully executable YAML manifests, PromQL queries, bash commands, and JSON dashboard configurations that are copy-paste ready. Each template is concrete and deployable with specific image versions, port numbers, and configuration values.

Workflow Clarity: 1 / 3
There is no clear workflow sequence connecting the templates. The skill presents 7 templates as independent blocks with no guidance on ordering, dependencies between components, or validation steps. There's no verification that Prometheus is scraping correctly before setting up alerts, no check that Jaeger is receiving traces, and no feedback loops for debugging failed deployments.

Progressive Disclosure: 1 / 3
The content is a monolithic wall of YAML/JSON templates with no references to external files. The Grafana dashboard JSON alone is ~40 lines that could be in a separate file. There's no separation between quick-start content and advanced configurations, and no navigation structure to help find specific templates.

Total: 6 / 12

Passed
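For a concrete flavor of the 'copy-paste ready' queries credited under Actionability above, a p99 latency query over Istio's standard request-duration histogram might look like the following (the metric name is Istio's standard one, but whether the skill uses this exact query is an assumption):

# Sketch: p99 request latency per destination service, queried from a
# port-forwarded Prometheus at localhost:9090. The query is illustrative,
# not taken from the skill itself.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket[5m])) by (le, destination_service))'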

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: wshobson/agents (Reviewed)

