Observability patterns for Python applications. Triggers on: logging, metrics, tracing, opentelemetry, prometheus, observability, monitoring, structlog, correlation id.
Overall score: 76
Quality: 63% (Does it follow best practices?)
Impact: 100% (1.17x average score across 3 eval scenarios)
Validation: Passed (No known issues)
Optimize this skill with Tessl:
npx tessl skill review --optimize ./data/skills-md/0xdarkmatter/claude-mods/python-observability-patterns/SKILL.md

Quality
Discovery — 54%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has excellent trigger term coverage with specific technology names and common user vocabulary, but it severely lacks specificity about what concrete actions the skill performs. The 'what' portion is essentially just a domain label ('observability patterns') without listing actionable capabilities like configuring logging, setting up tracing pipelines, or instrumenting metrics. Adding concrete actions and an explicit 'Use when...' clause would significantly improve this description.
Suggestions
Replace the vague 'Observability patterns for Python applications' with specific actions like 'Configures structured logging with structlog, instruments distributed tracing with OpenTelemetry, sets up Prometheus metrics collection, and implements correlation IDs for Python applications.'
Add an explicit 'Use when...' clause describing scenarios, e.g., 'Use when the user needs to add observability to a Python service, configure logging frameworks, set up distributed tracing, or integrate monitoring tools.'
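Taken together, the two suggestions imply a frontmatter along these lines (a hypothetical sketch; the exact field names depend on the skill spec):

```yaml
---
name: python-observability-patterns
description: >
  Configures structured logging with structlog, instruments distributed
  tracing with OpenTelemetry, sets up Prometheus metrics collection, and
  implements correlation IDs for Python applications. Use when the user
  needs to add observability to a Python service, configure logging
  frameworks, set up distributed tracing, or integrate monitoring tools.
---
```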
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description says 'Observability patterns for Python applications' which is vague and abstract. It does not list any concrete actions like 'configure structured logging', 'set up distributed tracing', or 'instrument metrics collection'. It only names a domain without describing what the skill actually does. | 1 / 3 |
| Completeness | The 'what' is weak (just 'observability patterns') and the 'when' is partially addressed via 'Triggers on:' with a list of keywords. While the trigger list serves as implicit guidance for when to use it, there is no explicit 'Use when...' clause describing scenarios, which caps this at 2. | 2 / 3 |
| Trigger Term Quality | The description includes a strong set of natural trigger terms: 'logging', 'metrics', 'tracing', 'opentelemetry', 'prometheus', 'observability', 'monitoring', 'structlog', 'correlation id'. These cover both general and specific terms users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The Python + observability niche is somewhat specific, and terms like 'opentelemetry', 'prometheus', 'structlog', and 'correlation id' help distinguish it. However, 'logging', 'monitoring', and 'metrics' are generic enough to potentially overlap with general Python development or DevOps skills. | 2 / 3 |
| Total | | 8 / 12 Passed |
Implementation — 72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid observability patterns skill with excellent actionability—every section contains production-ready, executable code. The progressive disclosure is well-structured with clear references to deeper content. The main weaknesses are minor verbosity (the quick reference tables add little value given the code examples) and the lack of an integration workflow showing how to combine logging, metrics, and tracing together with verification steps.
Suggestions
Add a brief integration workflow section showing the recommended order for adding observability to a project, with verification steps (e.g., 'confirm logs appear in JSON format', 'verify /metrics endpoint returns data').
Remove or condense the Quick Reference tables since the information is already demonstrated in the code examples above them.
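The 'confirm logs appear in JSON format' verification step can itself be shown as code. A minimal sketch using only the standard library (`JsonFormatter` here is a hypothetical stand-in; the skill configures structlog's JSON rendering, but the check is the same for any JSON log pipeline):

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    """Hypothetical minimal JSON formatter standing in for structlog's renderer."""
    def format(self, record):
        return json.dumps({"level": record.levelname, "event": record.getMessage()})

# Route logs into a buffer so the output can be inspected.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("observability-check")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("service started")

# Verification: every emitted line must parse as JSON with the expected keys.
for line in buffer.getvalue().splitlines():
    parsed = json.loads(line)
    assert "level" in parsed and "event" in parsed
```

A similar smoke test for the second check would fetch the /metrics endpoint and assert that a known metric name appears in the response body.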
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary elements like the Quick Reference tables that restate what's already demonstrated in the code sections. The code examples themselves are lean and well-chosen, though the overall document is somewhat long for what it teaches. | 2 / 3 |
| Actionability | All code examples are fully executable and copy-paste ready, covering structlog configuration, FastAPI middleware for request context and metrics, and OpenTelemetry tracing setup. Each section provides concrete, working code with realistic usage patterns including output examples. | 3 / 3 |
| Workflow Clarity | The sections are well-organized by concern (logging, context propagation, metrics, tracing) but there's no explicit workflow for integrating these together, no sequencing guidance for setting up observability in a new project, and no validation steps to verify that logging/metrics/tracing are working correctly. | 2 / 3 |
| Progressive Disclosure | The skill provides a clear overview with working examples for each major area, then points to one-level-deep references for detailed content (structured-logging.md, metrics.md, tracing.md). The See Also section with prerequisites, related skills, and integration skills is well-organized for navigation. | 3 / 3 |
| Total | | 10 / 12 Passed |
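The missing integration workflow noted under Workflow Clarity also touches correlation IDs. Their core mechanism can be sketched with the standard library alone, assuming a contextvars-based design rather than the skill's actual FastAPI middleware (not reproduced here):

```python
import contextvars
import uuid

# Context variable holding the current request's correlation ID;
# contextvars propagate correctly across async tasks.
correlation_id = contextvars.ContextVar("correlation_id", default=None)

def start_request(incoming_id=None):
    """Set the correlation ID at the service edge (e.g. in middleware)."""
    cid = incoming_id or uuid.uuid4().hex
    correlation_id.set(cid)
    return cid

def log_event(event):
    """Code deeper in the stack picks up the ID without passing it around."""
    return {"event": event, "correlation_id": correlation_id.get()}

cid = start_request("abc123")
assert log_event("user.lookup")["correlation_id"] == "abc123"
```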
Validation — 90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
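The one warning concerns unknown frontmatter keys. A hypothetical fix (the offending key names are not listed in this report) is to nest any non-standard keys under a `metadata` block:

```yaml
---
name: python-observability-patterns
description: ...
metadata:
  # previously top-level keys the spec does not recognize, moved here
  author: 0xdarkmatter
  tags: [logging, metrics, tracing]
---
```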