Setup synthetic monitoring for proactive performance tracking including uptime checks, transaction monitoring, and API health. Use when implementing availability monitoring or tracking critical user journeys. Trigger with phrases like "setup synthetic monitoring", "monitor uptime", or "configure health checks".
Overall score: 57

Quality — 50% (Does it follow best practices?)
Impact — Pending (No eval scenarios have been run)
Issues — Passed (No known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./plugins/performance/synthetic-monitoring-setup/skills/setting-up-synthetic-monitoring/SKILL.md

Quality
Discovery — 100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly defines its scope around synthetic monitoring, provides concrete capabilities, and includes explicit trigger guidance with both a 'Use when' clause and example phrases. It uses proper third-person voice and is concise without being vague. Minor improvement could include mentioning specific file types or tools, but overall it performs well across all dimensions.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'setup synthetic monitoring', 'uptime checks', 'transaction monitoring', and 'API health'. These are distinct, concrete capabilities within the monitoring domain. | 3 / 3 |
| Completeness | Clearly answers both 'what' (setup synthetic monitoring for uptime checks, transaction monitoring, API health) and 'when' (explicit 'Use when' clause plus 'Trigger with phrases like' providing concrete trigger examples). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'synthetic monitoring', 'monitor uptime', 'configure health checks', 'availability monitoring', 'critical user journeys'. Good coverage of natural terms and variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to synthetic monitoring specifically, with distinct triggers like 'synthetic monitoring', 'uptime checks', and 'health checks' that are unlikely to conflict with general application monitoring or logging skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation — 0%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads like a high-level product description or marketing document rather than actionable instructions for Claude. It contains no executable code, no concrete configuration examples, no tool-specific commands, and no validation steps. The content is padded with sections that explain obvious concepts and generic best practices while failing to provide the specific, copy-paste-ready guidance that would make it useful.
Suggestions
- Replace abstract descriptions with concrete, executable configuration examples (e.g., actual Datadog synthetic test YAML, Pingdom API calls, or New Relic monitor definitions) for at least one monitoring platform.
- Add explicit validation steps with commands to verify monitoring is working (e.g., 'Run `datadog-ci synthetics run-tests --config synthetics.json` to validate test configuration before deploying').
- Remove the 'Overview', 'How It Works', 'When to Use This Skill', 'Integration', and generic 'Best Practices' sections entirely—these waste tokens on information Claude already knows or can infer from the skill description.
- Create bundle files (e.g., example endpoint YAML, sample transaction scripts, alert rule templates) and reference them from a concise SKILL.md that serves as a quick-start guide with pointers to detailed examples.
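To make the first and last suggestions concrete: a bundle file the skill could ship might contain a small, runnable uptime-check script rather than abstract prose. The sketch below is purely illustrative (the function name, endpoint, and threshold are placeholders invented for this example, not taken from the reviewed skill); it shows the level of copy-paste-ready detail the review is asking for.

```python
import http.client
import time
from urllib.parse import urlparse


def check_endpoint(url, timeout=5.0, expected_status=200):
    """Run a single synthetic uptime check: GET the URL, record the
    latency, and compare the HTTP status to the expected one."""
    parsed = urlparse(url)
    conn_cls = (http.client.HTTPSConnection
                if parsed.scheme == "https"
                else http.client.HTTPConnection)
    start = time.monotonic()
    status = None
    try:
        conn = conn_cls(parsed.netloc, timeout=timeout)
        conn.request("GET", parsed.path or "/")
        status = conn.getresponse().status
        conn.close()
    except OSError:
        pass  # connection refused or timed out: status stays None
    return {
        "url": url,
        "status": status,
        "latency_s": round(time.monotonic() - start, 3),
        "ok": status == expected_status,
    }


if __name__ == "__main__":
    # Placeholder endpoint; a real skill bundle would read targets
    # from an endpoints file instead of hard-coding them here.
    print(check_endpoint("https://example.com/"))
```

A SKILL.md rewritten along these lines would stay short and point at such a bundle file, leaving the detailed scenarios (transaction scripts, alert thresholds) to the referenced examples.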
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with sections like 'Overview', 'How It Works', 'When to Use This Skill', and 'Integration' that explain concepts Claude already knows or restate the description. The 'Best Practices' are generic platitudes. Nearly every section could be cut or condensed dramatically—the entire file reads like a product brochure rather than actionable instructions. | 1 / 3 |
| Actionability | No concrete code, commands, configuration snippets, or executable examples anywhere. The 'Examples' section describes what the skill 'will do' in abstract terms without showing any actual monitoring configuration (no YAML, no API calls, no CLI commands). The 'Instructions' are vague steps like 'Configure monitoring frequency and locations' with no specifics. | 1 / 3 |
| Workflow Clarity | The workflow steps are abstract and lack any validation checkpoints. Steps like 'Design monitoring scenarios' and 'Configure alerting for failures and degradation' provide no concrete sequence, no tool-specific commands, and no feedback loops. The error handling section is a generic checklist of things to verify without actionable remediation steps. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no bundle files to reference. The 'Resources' section lists generic topic names without actual links or file references. There's no separation of concerns—everything is inline but nothing is detailed enough to be useful. The reference to '${CLAUDE_SKILL_DIR}/monitoring/endpoints.yaml' is mentioned but no bundle file exists to support it. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
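Both warnings above concern the SKILL.md frontmatter. One possible cleaned-up shape, assuming the standard `name`/`description`/`allowed-tools` keys and using standard tool names (the values shown are illustrative, not taken from the reviewed skill):

```yaml
---
name: setting-up-synthetic-monitoring
description: Setup synthetic monitoring for proactive performance tracking including uptime checks, transaction monitoring, and API health. Use when implementing availability monitoring or tracking critical user journeys.
allowed-tools: Read, Write, Bash
---
```

Any keys the validator flags as unknown would be dropped or moved under a metadata section, and unusual entries in `allowed-tools` replaced with recognized tool names.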
Version: 3a2d27d