Use when setting up monitoring systems, logging, metrics, tracing, or alerting. Invoke for dashboards, Prometheus/Grafana, load testing, profiling, capacity planning.
Install with Tessl CLI
npx tessl i github:jeffallan/claude-skills --skill monitoring-expert
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Agent success when using this skill
Discovery: 72%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at trigger term coverage and distinctiveness, providing excellent keywords for skill selection. However, it lacks specificity about what actions the skill actually performs: it reads more like a topic list than a capability description. The "when" guidance is present, but the "what" is underdeveloped.
Suggestions
Add concrete action verbs describing what the skill does, e.g., 'Configures monitoring systems, sets up logging pipelines, creates dashboards, and defines alerting rules.'
Transform the topic list into capability statements, e.g., 'Implements Prometheus/Grafana stacks, conducts load testing, performs application profiling, and plans infrastructure capacity.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (monitoring systems) and lists several related areas (logging, metrics, tracing, alerting, dashboards), but doesn't describe concrete actions; it only lists topics/tools without specifying what actions can be performed with them. | 2 / 3 |
| Completeness | Has explicit "Use when" and "Invoke for" clauses addressing when to use it, but the "what does this do" is weak: it lists topics but never explains what actions or capabilities the skill provides (e.g., "configure", "set up", "analyze", "create"). | 2 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: "monitoring", "logging", "metrics", "tracing", "alerting", "dashboards", "Prometheus", "Grafana", "load testing", "profiling", "capacity planning". These are all terms users would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche in the observability/monitoring domain with specific tool mentions (Prometheus, Grafana) and distinct concepts (tracing, capacity planning) that are unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **10 / 12 (Passed)** |
Implementation: 42%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill excels at progressive disclosure with a well-organized reference table, but fails to provide any actionable content in the main file itself. It reads more like a table of contents than a skill, with all concrete guidance deferred to reference files. The lack of even a single code example or specific command significantly limits its immediate utility.
Suggestions
Add at least one concrete, executable code example for each pillar (e.g., a Pino structured logging setup, a Prometheus counter implementation)
Replace the abstract workflow steps with specific commands or code snippets that demonstrate each phase
Remove the 'Role Definition' section and 'Knowledge Reference' list - Claude doesn't need to be told it's an SRE or given a keyword list
Add validation checkpoints to the workflow, such as 'Verify logs appear in aggregator before proceeding to dashboards'
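To illustrate the first suggestion, here is a minimal sketch of a structured-logging setup. The suggestions name Pino (a Node.js library); this sketch substitutes Python's standard-library `logging` module, and the logger name and fields shown are illustrative, not taken from the skill itself.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line for log aggregators."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Emits a JSON line such as:
# {"level": "INFO", "logger": "checkout", "message": "order placed"}
logger.info("order placed")
```

An example this concrete in the skill's main file would let an agent copy a working pattern instead of inferring one from topic keywords.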
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient but includes some unnecessary elements, like the "Role Definition" section explaining what an SRE does and the "Knowledge Reference" list at the end, which adds little value. The "When to Use This Skill" section largely duplicates the description. | 2 / 3 |
| Actionability | The skill provides no concrete code examples, commands, or executable guidance. It describes what to do at a high level ("Add logging, metrics, traces") but never shows how. All actionable content is deferred to reference files. | 1 / 3 |
| Workflow Clarity | The 5-step core workflow provides a clear sequence but lacks any validation checkpoints, feedback loops, or specific actions. Steps like "Instrument" and "Collect" are too abstract to guide actual implementation. | 2 / 3 |
| Progressive Disclosure | Excellent use of a reference table with clear topics, file paths, and "Load When" guidance. References are one level deep and well organized for discovery. The structure appropriately separates overview from detailed content. | 3 / 3 |
| **Total** | | **8 / 12 (Passed)** |
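As a sketch of what the suggested "Prometheus counter implementation" example might look like, the toy class below renders a counter in the Prometheus text exposition format using only the standard library. In real instrumentation the official `prometheus_client` package would be used; the metric name here is hypothetical.

```python
class Counter:
    """Toy counter that renders in the Prometheus text exposition format."""
    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        # Counters are monotonically increasing by definition.
        if amount < 0:
            raise ValueError("counters can only increase")
        self.value += amount

    def expose(self):
        # HELP and TYPE comment lines precede the sample, per the format.
        return (
            f"# HELP {self.name} {self.help_text}\n"
            f"# TYPE {self.name} counter\n"
            f"{self.name} {self.value}\n"
        )

requests_total = Counter("http_requests_total", "Total HTTP requests served.")
requests_total.inc()
requests_total.inc()
```

Pairing each workflow step with a snippet like this would directly address the Actionability gap noted above.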
Validation: 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.