
service-mesh-observability

Implement comprehensive observability for service meshes including distributed tracing, metrics, and visualization. Use when setting up mesh monitoring, debugging latency issues, or implementing SLOs for service communication.

Score: 85 (1.24x impact)

Quality: 64%
Does it follow best practices?

Impact: 98% (1.24x)
Average score across 6 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/cloud-infrastructure/skills/service-mesh-observability/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly defines its domain (service mesh observability), lists specific capabilities (distributed tracing, metrics, visualization), and provides explicit trigger guidance via a 'Use when' clause with practical scenarios. It uses proper third-person voice and includes domain-specific terminology that practitioners would naturally use.

Specificity: 3 / 3
Lists multiple specific concrete actions: 'distributed tracing, metrics, and visualization' as capabilities, and 'setting up mesh monitoring, debugging latency issues, implementing SLOs for service communication' as use cases. These are concrete, actionable items.

Completeness: 3 / 3
Clearly answers both 'what' (implement observability for service meshes including distributed tracing, metrics, visualization) and 'when' (explicit 'Use when' clause covering mesh monitoring setup, debugging latency, implementing SLOs).

Trigger Term Quality: 3 / 3
Includes strong natural keywords users would say: 'service mesh', 'distributed tracing', 'metrics', 'monitoring', 'latency issues', 'SLOs', 'service communication', 'observability'. These are terms practitioners naturally use when dealing with this domain.

Distinctiveness / Conflict Risk: 3 / 3
The combination of 'service mesh' + 'observability' creates a clear niche. Terms like 'distributed tracing', 'SLOs for service communication', and 'mesh monitoring' are highly specific and unlikely to conflict with general monitoring or generic infrastructure skills.

Total: 12 / 12 (Passed)

Implementation

29%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides excellent concrete, executable templates for service mesh observability tools but suffers from poor organization and verbosity. It lacks any workflow sequencing or validation steps, making it unclear how to set up observability end to end. Its monolithic structure, padded with conceptual explanations Claude doesn't need, wastes significant token budget.

Suggestions

Add a clear numbered workflow showing the order of deployment (e.g., 1. Deploy Prometheus, 2. Verify scraping works with `kubectl port-forward`, 3. Deploy Jaeger, 4. Verify traces appear, etc.) with explicit validation at each step.

Remove the 'Core Concepts' section entirely (three pillars diagram, golden signals table) — Claude already knows these concepts and they consume ~30 lines of tokens.

Move large configuration blocks (Grafana dashboard JSON, Jaeger deployment, OTel collector config) to separate referenced files like GRAFANA_DASHBOARDS.md and OTEL_CONFIG.md, keeping only the most essential template inline.

Add verification commands after each template (e.g., `kubectl get pods -n istio-system` to confirm deployment, `curl localhost:9090/api/v1/targets` to verify Prometheus scraping).
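Taken together, the workflow and verification suggestions amount to a deploy-then-verify loop. A minimal sketch, assuming an Istio-style `istio-system` namespace and Prometheus on its default port 9090; the manifest filenames here are hypothetical, not taken from the skill:

```shell
# Hypothetical setup sequence with a validation checkpoint after each step.
# prometheus.yaml and jaeger.yaml are illustrative filenames.

# 1. Deploy Prometheus and wait until its pods are ready
kubectl apply -f prometheus.yaml -n istio-system
kubectl rollout status deployment/prometheus -n istio-system

# 2. Verify scraping before moving on: port-forward and check target health
kubectl port-forward svc/prometheus 9090:9090 -n istio-system &
curl -s localhost:9090/api/v1/targets | grep -c '"health":"up"'

# 3. Only then deploy Jaeger, and again wait for readiness before
#    checking that traces appear
kubectl apply -f jaeger.yaml -n istio-system
kubectl rollout status deployment/jaeger -n istio-system
```

The point of the `rollout status` and `curl` checks is that each step fails fast, instead of surfacing as missing metrics or traces several steps later.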

Conciseness: 1 / 3
The skill is extremely verbose at over 300 lines, with significant content Claude already knows (the three-pillars-of-observability diagram, the golden signals table, basic concepts). The ASCII art diagram and conceptual explanations waste tokens on well-known DevOps concepts. Much of this could be condensed to just the templates and queries.

Actionability: 3 / 3
The skill provides fully executable YAML manifests, PromQL queries, bash commands, and JSON dashboard configurations that are copy-paste ready. Each template is concrete and deployable, with specific image versions, port numbers, and configuration values.
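For context, the kind of copy-paste-ready query being credited here is a PromQL golden-signal check run against the Prometheus HTTP API. A sketch, assuming Istio's standard `istio_requests_total` metric and a port-forwarded Prometheus on localhost:9090:

```shell
# Illustrative golden-signal queries; metric names assume Istio's
# standard telemetry, and Prometheus is assumed reachable on 9090.

# Request rate per destination service
curl -s 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(istio_requests_total[5m])) by (destination_service)'

# Error rate: fraction of 5xx responses over all requests
curl -s 'localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(istio_requests_total{response_code=~"5.."}[5m])) / sum(rate(istio_requests_total[5m]))'
```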

Workflow Clarity: 1 / 3
There is no clearly sequenced workflow for setting up observability. Templates are presented as isolated blocks without ordering, dependencies between them (e.g., Prometheus must be running before alerts work), or validation steps. No verification checkpoints exist to confirm each component is working before proceeding.

Progressive Disclosure: 1 / 3
The entire content is a monolithic wall of templates and configurations with no references to external files. The Grafana dashboard JSON alone is ~40 lines that could live in a separate file. There is no overview-then-drill-down structure; everything is dumped inline.

Total: 6 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: wshobson/agents (Reviewed)

