monitoring-expert

tessl i github:jeffallan/claude-skills --skill monitoring-expert

Use when setting up monitoring systems, logging, metrics, tracing, or alerting. Invoke for dashboards, Prometheus/Grafana, load testing, profiling, capacity planning.

Overall: 60%

Validation: 69%
Criteria | Description | Result
metadata_version | 'metadata' field is not a dictionary | Warning
license_field | 'license' field is missing | Warning
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning
body_examples | No examples detected (no code fences and no 'Example' wording) | Warning
body_output_format | No obvious output/return/format terms detected; consider specifying expected outputs | Warning

Total: 11 / 16 (Passed)

Implementation: 42%

This skill excels at progressive disclosure with a well-structured reference table, but it critically lacks actionability: it is essentially a table of contents with no executable examples. The MUST DO/MUST NOT DO constraints are useful but abstract. The skill would benefit greatly from at least one concrete code example for each pillar (logging, metrics, tracing) to make it immediately useful.

Suggestions:

- Add executable code examples for at least one logging, metrics, and tracing implementation (e.g., Pino structured logging setup, Prometheus counter creation, OpenTelemetry span creation); see the sketch after this list
- Include a concrete health check endpoint example that can be copy-pasted
- Add validation steps to the core workflow, such as 'Verify metrics are being scraped: curl localhost:9090/metrics'
- Remove or condense the 'Role Definition' and 'Knowledge Reference' sections: Claude doesn't need to be told it's an SRE or given a keyword list
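
As an illustration of the first two suggestions, the sketch below shows roughly what an embedded example could look like: Pino structured logging, a Prometheus counter via prom-client, an OpenTelemetry span, and a copy-pasteable health check endpoint on an Express server. The package choices, service name, port, and route names (/healthz, /metrics, /orders) are illustrative assumptions, not content taken from the reviewed skill.

```typescript
import pino from "pino";
import express from "express";
import { Counter, collectDefaultMetrics, register } from "prom-client";
import { trace } from "@opentelemetry/api";

// Structured JSON logging with Pino (service name is an illustrative label)
const logger = pino({ name: "checkout-service", level: "info" });

// Prometheus counter plus Node.js default process metrics
collectDefaultMetrics();
const ordersProcessed = new Counter({
  name: "orders_processed_total",
  help: "Total number of processed orders",
  labelNames: ["status"],
});

// OpenTelemetry tracer (assumes an SDK and exporter are configured elsewhere)
const tracer = trace.getTracer("checkout-service");

const app = express();

// Copy-pasteable health check endpoint
app.get("/healthz", (_req, res) => res.status(200).json({ status: "ok" }));

// Expose metrics for Prometheus to scrape
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", register.contentType);
  res.end(await register.metrics());
});

app.post("/orders", (_req, res) => {
  // Wrap the unit of work in a span so it appears in traces
  tracer.startActiveSpan("process-order", (span) => {
    logger.info({ route: "/orders" }, "processing order");
    ordersProcessed.inc({ status: "ok" });
    span.end();
    res.status(202).json({ accepted: true });
  });
});

app.listen(3000, () => logger.info("listening on :3000"));
```

With the service running, curl localhost:3000/metrics should list orders_processed_total, which is the kind of validation checkpoint the third suggestion asks for.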

Dimension | Reasoning | Score
Conciseness | The skill includes some unnecessary content like the 'Role Definition' section explaining what an SRE does, and the 'Knowledge Reference' section is just a keyword list. However, the core workflow and reference table are reasonably efficient. | 2 / 3
Actionability | The skill provides no concrete code examples, commands, or executable guidance. It describes what to do at a high level ('Add logging, metrics, traces') but never shows how. All actual implementation details are deferred to reference files. | 1 / 3
Workflow Clarity | The 5-step core workflow provides a clear sequence, but lacks validation checkpoints or feedback loops. For monitoring setup involving production systems, there's no guidance on verifying instrumentation works before proceeding. | 2 / 3
Progressive Disclosure | Excellent use of a reference table with clear 'Load When' guidance. References are one level deep, well-organized by topic, and the main skill serves as a proper overview pointing to detailed materials. | 3 / 3

Total: 8 / 12 (Passed)

Activation: 72%

The description excels at trigger term coverage and distinctiveness, providing excellent keywords for skill selection. However, it lacks specificity about what concrete actions the skill enables; it reads more like a topic list than a capability description. The 'when' guidance is present, but the 'what' is underdeveloped.

Suggestions:

- Add concrete action verbs describing what the skill does, e.g., 'Configure monitoring systems, set up logging pipelines, create alerting rules, build observability dashboards'
- Transform the topic list into capability statements, e.g., 'Configures Prometheus/Grafana stacks, designs metrics collection, implements distributed tracing'

Dimension | Reasoning | Score
Specificity | Names the domain (monitoring systems) and lists several related areas (logging, metrics, tracing, alerting, dashboards), but doesn't describe concrete actions; it only lists topics/tools without specifying what actions can be performed with them. | 2 / 3
Completeness | Has explicit 'Use when' and 'Invoke for' clauses addressing when to use it, but the 'what does this do' is weak; it lists topics but never explains what actions or capabilities the skill provides (e.g., 'configure', 'set up', 'analyze', 'create'). | 2 / 3
Trigger Term Quality | Excellent coverage of natural terms users would say: 'monitoring', 'logging', 'metrics', 'tracing', 'alerting', 'dashboards', 'Prometheus', 'Grafana', 'load testing', 'profiling', 'capacity planning'. These are all terms users naturally use when needing this skill. | 3 / 3
Distinctiveness / Conflict Risk | Clear niche in the observability/monitoring domain with specific tool mentions (Prometheus, Grafana) and distinct concepts (tracing, capacity planning) that are unlikely to conflict with other skills. | 3 / 3

Total: 10 / 12 (Passed)
