This skill should be used when the user says "set up monitoring", "arn infra monitor", "infra monitor", "configure monitoring", "set up observability", "add logging", "configure alerting", "set up alerts", "infrastructure monitoring", "add metrics", "set up cloudwatch", "configure grafana", "observability setup", "logging setup", "alerting setup", "health checks", "monitor infrastructure", "arn-infra-monitor", "set up cloud monitoring", "configure notifications", "prometheus", "datadog", "new relic", "sentry", or wants to set up logging, metrics collection, and alerting for their deployed infrastructure.
Quality: 58% — Does it follow best practices?
Impact: — (No eval scenarios have been run)
Validation: Passed — No known issues
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/arn-infra/skills/arn-infra-monitor/SKILL.md
```

## Quality
### Discovery — 54%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a long list of trigger terms with almost no explanation of what the skill actually does. Trigger-term coverage is excellent, but the absence of any concrete capability description makes it impossible to understand the skill's specific actions or to differentiate it from other infrastructure-related skills. The description reads as a keyword dump rather than a useful skill summary.
#### Suggestions

- Add specific concrete actions the skill performs, e.g., "Configures CloudWatch alarms, sets up Grafana dashboards, deploys Prometheus exporters, and creates alerting rules for deployed infrastructure."
- Restructure to lead with a "what it does" section listing capabilities, then follow with a "Use when..." clause containing the trigger terms, rather than making the entire description a trigger-term list.
- Narrow the scope, or clarify which monitoring platforms are actually supported, to reduce conflict risk with potential tool-specific skills.
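Applying the first two suggestions, the frontmatter description could be restructured along these lines (the wording below is hypothetical, not drawn from the skill):

```yaml
# SKILL.md frontmatter — hypothetical restructured description
description: >
  Configures logging, metrics collection, and alerting for deployed
  infrastructure: CloudWatch alarms, Grafana dashboards, Prometheus
  exporters, and notification channels. Use when the user says
  "set up monitoring", "configure alerting", "add metrics",
  "set up cloudwatch", "configure grafana", or similar.
```

This leads with concrete capabilities, then folds the highest-value trigger terms into a trailing "Use when..." clause.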
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lacks concrete actions. It never states what the skill actually does — no specific capabilities like "configures CloudWatch alarms", "sets up Grafana dashboards", or "deploys Prometheus exporters" are mentioned. The only hint of actions is the vague phrase "set up logging, metrics collection, and alerting for their deployed infrastructure" at the very end. | 1 / 3 |
| Completeness | The "when" is thoroughly covered with explicit trigger phrases, but the "what" is extremely weak — the description never clearly explains what concrete actions the skill performs beyond the vague mention of "logging, metrics collection, and alerting". It answers when but barely answers what. | 2 / 3 |
| Trigger Term Quality | The description includes an extensive list of natural trigger terms users would say, covering multiple monitoring tools (CloudWatch, Grafana, Prometheus, Datadog, New Relic, Sentry) and common phrases like "set up monitoring", "add logging", "configure alerting", "health checks", and "add metrics". This provides excellent keyword coverage. | 3 / 3 |
| Distinctiveness / Conflict Risk | While the monitoring/observability domain is somewhat specific, the description is so broad (covering CloudWatch, Grafana, Prometheus, Datadog, New Relic, Sentry, logging, alerting, metrics) that it could easily conflict with more specialized skills for individual tools or with general infrastructure/DevOps skills. | 2 / 3 |
| **Total** | | 8 / 12 — Passed |
### Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured orchestration skill with a clear 8-step workflow, good error handling, and appropriate experience-level adaptation. Its main weaknesses are verbosity (the full document is quite long, with the experience-level patterns repeated across steps) and a lack of executable code examples — the skill describes what to generate but includes no concrete IaC snippets. The referenced bundle files are not provided, making it impossible to verify that the progressive-disclosure structure works in practice.
#### Suggestions

- Include at least one concrete, executable IaC example (e.g., a minimal CloudWatch alarm in Terraform) to demonstrate the expected output format rather than relying entirely on the specialist agent.
- Extract the detailed alert-threshold table, metrics lists, and log-retention table into a reference file (e.g., `monitoring-defaults.md`) to reduce the main skill's length while keeping it a concise orchestration guide.
- Consolidate the repeated beginner/intermediate/expert presentation patterns into a single reusable pattern or reference, rather than spelling them out in full for each step.
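To make the first suggestion concrete: a sketch of the kind of minimal IaC example the skill could embed, here a CloudWatch CPU alarm in Terraform. The resource names (`aws_instance.app`, `aws_sns_topic.alerts`) and the 80% threshold are illustrative assumptions, not values taken from the skill itself.

```hcl
# Hypothetical minimal alarm: fire when average CPU on one EC2 instance
# stays above 80% for two consecutive 5-minute evaluation periods.
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "high-cpu-utilization"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2

  dimensions = {
    InstanceId = aws_instance.app.id # assumes an aws_instance.app resource exists
  }

  alarm_actions = [aws_sns_topic.alerts.arn] # assumes an SNS topic for notifications
}
```

Embedding one snippet like this gives the specialist agent a concrete output format to imitate, without the skill having to enumerate every platform.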
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is quite long (~250+ lines), with some sections that could be tightened. The tables and stack recommendations are useful, but there is redundancy in the experience-level presentation patterns (beginner/intermediate/expert blocks repeated across steps). The prerequisites section is efficient, but the overall document could be more concise while preserving the same guidance. | 2 / 3 |
| Actionability | The skill provides concrete glob patterns, specific alert thresholds, and structured task delegation to the specialist agent. However, it lacks executable code examples — no actual IaC snippets (Terraform, CloudFormation, etc.) are shown. The specialist-agent invocation is described in prose/template form rather than executable form, and the monitoring configuration is described at a high level rather than with copy-paste-ready code. | 2 / 3 |
| Workflow Clarity | The 8-step workflow is clearly sequenced, with logical progression from assessment → recommendation → configuration → generation → approval → summary. It includes validation checkpoints (the Step 7 approval gate with options to adjust or regenerate), error handling with fallback patterns, and explicit re-run safety notes. The feedback loop for specialist-agent failure is well defined. | 3 / 3 |
| Progressive Disclosure | The skill references external files like `observability-stack-guide.md`, `alerting-patterns.md`, `experience-derivation.md`, and various manifest files, which is good progressive-disclosure design. However, no bundle files are provided to verify these references exist, and the main SKILL.md itself is quite long — the detailed logging configuration, metrics lists, and alert-threshold tables could be extracted into reference files to keep the main skill leaner. | 2 / 3 |
| **Total** | | 9 / 12 — Passed |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
#### Validation for skill structure — 10 / 11 Passed

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | 10 / 11 Passed |
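The `frontmatter_unknown_keys` warning is typically resolved by moving non-standard keys under a dedicated map. A sketch, assuming the spec permits a `metadata` block and using a hypothetical `owner` key as the offender:

```yaml
# Before — hypothetical unknown top-level key triggers the warning
name: arn-infra-monitor
owner: platform-team

# After — unknown key moved under metadata
name: arn-infra-monitor
metadata:
  owner: platform-team
```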