
service-mesh-observability

Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.

Overall score: 71

Quality: 57% (Does it follow best practices?)

Impact: 95%

Evals: 1.39x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./tests/ext_conformance/artifacts/agents-wshobson/cloud-infrastructure/skills/service-mesh-observability/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its scope around service mesh observability, lists concrete capabilities, and includes an explicit 'Use when' clause with natural trigger terms. It follows third-person voice correctly and is concise without being vague. The description effectively differentiates itself from general monitoring or infrastructure skills through its specific focus on service mesh contexts.

Specificity: 3 / 3
Lists multiple specific concrete actions: 'distributed tracing, metrics, and visualization' as capabilities, and 'setting up mesh monitoring, debugging latency issues, implementing SLOs for service communication' as use cases. These are concrete, actionable items.

Completeness: 3 / 3
Clearly answers both 'what' (implement observability for service meshes including distributed tracing, metrics, and visualization) and 'when' (explicit 'Use when' clause covering mesh monitoring setup, debugging latency issues, and implementing SLOs).

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'service mesh', 'distributed tracing', 'metrics', 'monitoring', 'latency issues', 'SLOs', 'service communication', 'observability', 'visualization'. These cover the domain well with terms practitioners naturally use.

Distinctiveness / Conflict Risk: 3 / 3
The combination of 'service mesh' + 'observability' creates a clear niche. Terms like 'distributed tracing', 'SLOs for service communication', and 'mesh monitoring' are highly specific and unlikely to conflict with general monitoring or generic infrastructure skills.

Total: 12 / 12

Passed

Implementation: 14%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a large reference dump of service mesh observability configurations rather than a focused, actionable guide. It suffers from excessive verbosity with inlined boilerplate YAML/JSON that should be in separate files, lacks any workflow sequencing or validation steps, and explains concepts Claude already understands. The content would benefit greatly from being restructured into a concise overview with clear workflows and references to detailed configuration files.

Suggestions

Add a clear sequential workflow (e.g., 1. Install metrics stack → 2. Verify scraping → 3. Configure tracing → 4. Validate traces appear → 5. Set up dashboards → 6. Configure alerts) with explicit validation checkpoints at each step.
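A minimal sketch of such a workflow, assuming an Istio mesh, the kube-prometheus-stack Helm chart, and Istio's bundled Jaeger sample addon (the release name, namespaces, and pinned Istio version are illustrative, not prescribed by the skill):

# 1. Install the metrics stack
helm install kps prometheus-community/kube-prometheus-stack -n monitoring --create-namespace

# 2. Install the mesh and wait for the control plane before continuing
istioctl install --set profile=demo -y
kubectl -n istio-system rollout status deploy/istiod

# 3. Deploy the tracing backend (Istio sample addon; pin to your release)
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/jaeger.yaml

# 4. Checkpoint: surface misconfiguration before building dashboards and alerts
istioctl analyze
kubectl get pods -n istio-system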

Move large configuration blocks (Grafana dashboard JSON, Jaeger deployment, OTel collector config) into separate referenced files and keep only concise summaries or key snippets inline.
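One way the split could look (file and directory names below are hypothetical):

# service-mesh-observability/
#   SKILL.md                           <- overview, workflow, key snippets only
#   references/grafana-dashboard.json  <- full dashboard JSON moved out of SKILL.md
#   references/jaeger-deployment.yaml  <- full Jaeger deployment spec
#   references/otel-collector.yaml     <- OpenTelemetry collector config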

Remove the 'Core Concepts' section entirely (three pillars, golden signals table) — Claude already knows these concepts. Replace with a brief note on which specific metrics/labels are mesh-specific.
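The replacement note could be only a few lines. A sketch assuming Istio's standard Envoy telemetry (Linkerd and Consul expose different metric names):

# Mesh-specific metrics worth calling out:
#   istio_requests_total                        request counter per source/destination pair
#   istio_request_duration_milliseconds_bucket  latency histogram for quantile queries
#   istio_tcp_sent_bytes_total                  TCP traffic counter
# Mesh-specific labels: source_workload, destination_workload, destination_service,
# response_flags, connection_security_policy (reports mTLS status)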

Add verification commands after each template (e.g., 'kubectl get pods -n istio-system' to confirm deployment, 'curl prometheus:9090/api/v1/targets' to verify scraping) to create feedback loops.
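For example (the port-forward target assumes a Prometheus Operator install, which creates the prometheus-operated service; adjust for other setups):

# Confirm mesh and observability pods are running
kubectl get pods -n istio-system

# Confirm Prometheus is actually scraping the sidecars
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &
curl -s http://localhost:9090/api/v1/targets | grep -c '"health":"up"'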

Conciseness: 1 / 3
The skill is extremely verbose at 300+ lines. It includes unnecessary conceptual explanations (three pillars of observability, golden signals table) that Claude already knows, an ASCII art diagram, and extensive boilerplate YAML/JSON that could be referenced externally. The Grafana dashboard JSON alone is massive and adds little instructional value.

Actionability: 2 / 3
The templates provide real, deployable YAML manifests and executable PromQL queries, which is good. However, much of it is boilerplate configuration rather than targeted guidance on what to customize or adapt. The Linkerd CLI commands are concrete and useful, but the skill reads more like a reference dump than actionable instructions for specific tasks.
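For reference, the kind of content being described: a mesh latency query against Istio's sidecar metrics (the destination_workload value is a placeholder, and Prometheus is assumed to be port-forwarded to localhost:9090), plus the Linkerd CLI equivalents, which require the viz extension:

# P99 latency for one destination workload over the last 5 minutes
curl -sG http://localhost:9090/api/v1/query --data-urlencode \
  'query=histogram_quantile(0.99, sum(rate(istio_request_duration_milliseconds_bucket{destination_workload="checkout"}[5m])) by (le))'

# Linkerd: golden signals per deployment, and a live request tap
linkerd viz stat deploy -n my-app
linkerd viz tap deploy/web -n my-app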

Workflow Clarity: 1 / 3
There is no clear workflow or sequencing. The skill presents templates in isolation without explaining the order of operations (e.g., install Prometheus first, then configure Istio telemetry, then set up Jaeger, then verify traces are flowing). There are no validation checkpoints or feedback loops for verifying that observability is working correctly after each step.

Progressive Disclosure: 1 / 3
This is a monolithic wall of content with everything inlined. The massive Grafana dashboard JSON, full Jaeger deployment spec, and OTel collector config should all be in separate referenced files. There are no internal cross-references or navigation aids beyond flat section headers. The external resources at the end are just links, not structured references to companion files.

Total: 5 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: Dicklesworthstone/pi_agent_rust (Reviewed)

