Golang everyday observability — the always-on signals in production. Covers structured logging with slog, Prometheus metrics, OpenTelemetry distributed tracing, continuous profiling with pprof/Pyroscope, server-side RUM event tracking, alerting, and Grafana dashboards. Apply when instrumenting Go services for production monitoring, setting up metrics or alerting, adding OpenTelemetry tracing, correlating logs with traces, migrating legacy loggers (zap/logrus/zerolog) to slog, adding observability to new features, or implementing GDPR/CCPA-compliant tracking with Customer Data Platforms (CDP). Not for temporary deep-dive performance investigation (→ See golang-benchmark and golang-performance skills).
Overall: 87

Quality: 86% (Does it follow best practices?)
Impact: 91% (1.10x average score across 3 eval scenarios)
Advisory: Suggest reviewing before use

Discovery: 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that covers all evaluation dimensions at the highest level. It lists specific tools and technologies, includes abundant natural trigger terms, explicitly states both what it does and when to use it, and even delineates boundaries with related skills to minimize conflict. The 'Not for' clause with skill cross-references is a particularly strong practice.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: structured logging with slog, Prometheus metrics, OpenTelemetry distributed tracing, continuous profiling with pprof/Pyroscope, server-side RUM event tracking, alerting, and Grafana dashboards. Very detailed and actionable. | 3 / 3 |
| Completeness | Clearly answers both 'what' (structured logging, metrics, tracing, profiling, alerting, dashboards) and 'when' with an explicit 'Apply when...' clause listing specific trigger scenarios. Also includes a 'Not for' exclusion clause that further clarifies scope. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: slog, Prometheus, OpenTelemetry, pprof, Pyroscope, Grafana, metrics, alerting, tracing, logs, zap, logrus, zerolog, GDPR, CCPA, CDP. These are terms developers naturally use when seeking observability help for Go services. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche (Go production observability) and explicit boundary-setting via the 'Not for temporary deep-dive performance investigation' note and cross-references to related skills. This significantly reduces conflict risk with the adjacent golang-benchmark and golang-performance skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured observability skill with strong progressive disclosure and actionable code examples. Its main weaknesses are moderate verbosity (some explanatory text Claude doesn't need, repeated information across sections) and missing validation checkpoints in the instrumentation workflow. The common mistakes section and signal correlation examples are particularly valuable.
Suggestions:
- Remove the opening definition of observability and trim the best-practices summary to avoid repeating what's covered in the detailed guides and reference files.
- Add explicit validation steps to the instrumentation workflow (e.g., 'verify metrics appear at the /metrics endpoint', 'confirm traces appear in the collector UI') to close the feedback-loop gap.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes some unnecessary explanation that Claude would already know (e.g., 'Observability is the ability to understand a system's internal state from its external outputs', or explaining in prose what each signal answers when the table already covers it). The best-practices summary repeats information found in the detailed sections. However, it's not egregiously verbose. | 2 / 3 |
| Actionability | The skill provides concrete, executable Go code examples for signal correlation (otelslog bridge, exemplars), common mistakes with clear good/bad patterns, a specific migration strategy with named bridge packages, and a definition-of-done checklist. Code examples are copy-paste ready with real imports and API calls. | 3 / 3 |
| Workflow Clarity | The migration strategy has clear sequential steps (1-4), and the definition-of-done checklist provides a verification framework. However, the main instrumentation workflow lacks explicit validation checkpoints — there's no 'verify your metrics are being scraped' or 'confirm traces appear in your collector' step. The modes section mentions a 'sequential instrumentation guide' but doesn't provide one in the body. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure — the main file provides a concise overview with the five-signals table, best-practices summary, correlation examples, and common mistakes, then clearly links to seven dedicated reference files (logging.md, metrics.md, tracing.md, etc.) with descriptive summaries of what each contains. References are one level deep and well signaled. | 3 / 3 |
| Total | | 10 / 12 Passed |
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure (9 / 11 passed; the two warnings are shown below)
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
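A frontmatter shape that would clear both warnings might look like the sketch below. The key names and values are illustrative assumptions, not taken from the skill; the actual skill spec should be checked before adopting them.

```yaml
---
name: golang-observability
description: Everyday observability for Go services (slog, Prometheus, OpenTelemetry, pprof).
metadata:
  # string keys mapping to string values only, per the metadata_field check
  maturity: "stable"
  min-go-version: "1.21"
---
```

Unknown top-level keys move under `metadata` (quoted as strings), rather than living beside `name` and `description`.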