Observability patterns for Python applications. Triggers on: logging, metrics, tracing, opentelemetry, prometheus, observability, monitoring, structlog, correlation id.
Does it follow best practices?
Impact: 100%. 1.17x average score across 3 eval scenarios. Passed; no known issues.
Optimize this skill with Tessl:
npx tessl skill review --optimize ./data/skills-md/0xdarkmatter/claude-mods/python-observability-patterns/SKILL.md

Quality
Discovery: 62%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has excellent trigger term coverage and clear distinctiveness, making it easy for Claude to identify when this skill is relevant. However, it is severely lacking in specificity—it names no concrete actions or capabilities—and the completeness suffers from a weak 'what' component and only implicit trigger guidance rather than explicit 'Use when...' scenarios.
Suggestions
- Add specific concrete actions the skill performs, e.g., 'Configures structured logging with structlog, sets up distributed tracing with OpenTelemetry, instruments Prometheus metrics, and implements correlation ID propagation.'
- Replace the 'Triggers on:' list with an explicit 'Use when...' clause that describes scenarios, e.g., 'Use when the user needs to add observability to a Python application, configure logging pipelines, set up tracing, or integrate monitoring tools like Prometheus or OpenTelemetry.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description says 'Observability patterns for Python applications', which is vague and abstract. It does not list any concrete actions like 'configure structured logging', 'set up distributed tracing', or 'instrument metrics collection'. It only names a domain without describing what the skill actually does. | 1 / 3 |
| Completeness | The 'what' is weak (just 'observability patterns') and the 'when' is partially addressed via the 'Triggers on:' list, which serves as an implicit trigger clause. However, there is no explicit 'Use when...' guidance explaining the scenarios, and the 'what' lacks substance, capping this at 2. | 2 / 3 |
| Trigger Term Quality | The description includes a strong set of natural trigger terms: 'logging', 'metrics', 'tracing', 'opentelemetry', 'prometheus', 'observability', 'monitoring', 'structlog', 'correlation id'. These cover both general and specific terms users would naturally use. | 3 / 3 |
| Distinctiveness Conflict Risk | The combination of Python-specific observability with explicit trigger terms like 'opentelemetry', 'prometheus', 'structlog', and 'correlation id' creates a clear niche that is unlikely to conflict with other skills. The domain is well-scoped. | 3 / 3 |
| Total | | 9 / 12 Passed |
Implementation: 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid observability patterns skill with strong actionability—every section provides real, executable code. The progressive disclosure is well-structured with clear references to deeper content. The main weaknesses are a lack of integration workflow (how to combine logging + metrics + tracing) and missing validation/verification steps for confirming the observability stack is working correctly.
Suggestions
- Add a brief integration section showing how to combine structlog, Prometheus, and OpenTelemetry in a single application with correlation IDs flowing through all three.
- Include verification steps for each pattern (e.g., 'curl /metrics to confirm counters increment', 'check OTLP collector logs for spans').
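The integration gap called out above centers on correlation IDs flowing through all three signals. A minimal stdlib-only sketch of the propagation pattern follows; in the skill's actual stack, structlog's `bind()` and real web middleware would play these roles, and every name here is illustrative:

```python
import contextvars
import json
import logging
import uuid

# Correlation ID kept in a context variable so it follows a request
# through sync and async code without being passed explicitly.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    """Inject the current correlation ID into every log record."""
    def filter(self, record):
        record.correlation_id = correlation_id.get()
        return True

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line, structlog-style."""
    def format(self, record):
        return json.dumps({
            "event": record.getMessage(),
            "level": record.levelname.lower(),
            "correlation_id": record.correlation_id,
        })

def configure_logging():
    """Wire the filter and formatter onto a single app logger."""
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    handler.addFilter(CorrelationFilter())
    log = logging.getLogger("app")
    log.handlers = [handler]
    log.setLevel(logging.INFO)
    return log

def handle_request(log):
    # At the app edge (middleware), mint or accept an ID once; every
    # log line inside the request then carries it automatically. The
    # same ID would also be attached to span attributes and exemplars.
    token = correlation_id.set(str(uuid.uuid4()))
    try:
        log.info("request started")
        log.info("request finished")
    finally:
        correlation_id.reset(token)

if __name__ == "__main__":
    handle_request(configure_logging())
```

The context-variable approach is what makes one ID usable across logging, metrics, and tracing: each subsystem reads `correlation_id.get()` at emit time rather than threading the value through call signatures.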
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary elements like the Quick Reference tables that restate what's already demonstrated in the code sections. The code examples themselves are lean and well-chosen, though the missing `import time` and `import logging` are minor issues. | 2 / 3 |
| Actionability | All code examples are concrete, executable, and copy-paste ready. The structlog configuration, Prometheus metrics middleware, and OpenTelemetry tracing setup are complete, real-world patterns with specific library usage, metric names, and endpoint configurations. | 3 / 3 |
| Workflow Clarity | The skill presents individual patterns clearly but lacks workflow sequencing: there's no guidance on how to combine these patterns together, no validation steps (e.g., verifying the metrics endpoint works, confirming traces appear), and no error handling guidance for when exporters fail or connections drop. | 2 / 3 |
| Progressive Disclosure | The skill provides a clear overview with executable quick-start examples for each pattern, then points to one-level-deep references for detailed content (./references/structured-logging.md, ./references/metrics.md, etc.). Navigation is well-signaled with related skills and prerequisites. | 3 / 3 |
| Total | | 10 / 12 Passed |
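The validation step the review finds missing (scraping /metrics before and after a request to confirm a counter moved) can be automated. A minimal sketch, assuming the Prometheus text exposition format; the parsing is deliberately simplified and would misread quoted label values that contain spaces:

```python
def parse_metrics(text: str) -> dict[str, float]:
    """Parse Prometheus text exposition into {series: value}.
    Skips comments (# HELP / # TYPE) and blank lines; splits each
    sample on its last space, which is where the value sits."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

def counter_increased(before: str, after: str, series: str) -> bool:
    """True if `series` (e.g. 'http_requests_total{path="/"}') grew
    between two /metrics snapshots taken around a test request."""
    return parse_metrics(after).get(series, 0.0) > parse_metrics(before).get(series, 0.0)
```

In practice the two snapshots would come from `urllib.request.urlopen("http://localhost:8000/metrics")` (endpoint and port are assumptions about the app under test), taken before and after issuing one request to the instrumented route.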
Validation: 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
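The frontmatter_unknown_keys warning can be reproduced locally before re-running the validator. A minimal sketch with a hand-rolled parser; the allow-list below is a guess for illustration, not the official Tessl key set:

```python
# Hypothetical allow-list: the real spec's accepted keys may differ.
ALLOWED_KEYS = {"name", "description", "metadata"}

def unknown_frontmatter_keys(skill_md: str) -> set[str]:
    """Return top-level frontmatter keys outside the allow-list.
    Assumes simple `key: value` lines between the opening and closing
    `---` fences; indented (nested) keys are ignored."""
    lines = skill_md.splitlines()
    if not lines or lines[0].strip() != "---":
        return set()  # no frontmatter block at all
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing fence reached
        if line[:1] not in (" ", "\t") and ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return keys - ALLOWED_KEYS
```

Moving any reported key under a `metadata:` block (as the warning suggests) makes it a nested key, which this check, like the validator, would no longer flag at the top level.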