Build real-time API monitoring dashboards with metrics, alerts, and health checks. Use when tracking API health and performance metrics. Trigger with phrases like "monitor the API", "add API metrics", or "setup API monitoring".
Score: 78
Quality: 75% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Validation: Passed (no known issues)

To optimize this skill with Tessl, run:

`npx tessl skill review --optimize ./plugins/api-development/api-monitoring-dashboard/skills/monitoring-apis/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates its purpose, lists concrete capabilities, and provides explicit trigger guidance with natural user phrases. It covers all key dimensions well: specificity, trigger terms, completeness, and distinctiveness. A minor improvement would be to mention specific file types or technologies, but overall it is well crafted.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Build real-time API monitoring dashboards', 'metrics', 'alerts', and 'health checks'. These are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (build real-time API monitoring dashboards with metrics, alerts, and health checks) and 'when' (explicit 'Use when' clause and 'Trigger with phrases' providing concrete trigger guidance). | 3 / 3 |
| Trigger Term Quality | Includes natural trigger phrases users would say: 'monitor the API', 'add API metrics', 'setup API monitoring'. Also contains keywords like 'dashboards', 'alerts', 'health checks', 'API health', and 'performance metrics' that users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to API monitoring dashboards specifically, with distinct triggers around 'API metrics', 'API monitoring', and 'API health'. Unlikely to conflict with general dashboard skills or generic API skills due to the specific monitoring/metrics/alerts focus. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a well-structured overview of API monitoring setup with good coverage of topics (metrics, health checks, dashboards, alerting, SLOs) and a useful error handling table. However, it lacks executable code examples throughout—the instructions read more like a requirements document than actionable guidance. The referenced bundle files (implementation.md, errors.md, examples.md) don't exist, which undermines the progressive disclosure strategy and leaves the skill without the concrete implementation details it promises.
Suggestions

- Add executable code snippets for at least the metrics middleware and health check endpoint (e.g., a complete Express.js middleware with prom-client, or a Python FastAPI middleware with prometheus_client) to raise actionability.
- Include validation checkpoints in the workflow, such as 'Verify metrics are being scraped: curl localhost:9090/api/v1/targets' and 'Confirm dashboard loads: check for non-empty panels after sending test traffic'.
- Provide the referenced bundle files (implementation.md, errors.md, examples.md) or remove the references and inline the essential content to avoid broken progressive disclosure.
- Replace the conceptual examples section with concrete, copy-paste-ready Grafana dashboard JSON snippets or PromQL queries (e.g., the actual PromQL for p99 latency or error rate percentage).
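The first suggestion asks for executable middleware code. Below is a minimal, framework-agnostic sketch of the recording logic only; metric and route names such as `http_requests_total` and `/users` are illustrative assumptions, and a production setup would use prom-client (Node) or prometheus_client (Python) instead of hand-rolled counters.

```python
# Sketch of the metrics a request middleware would record per response:
# a labeled request counter and cumulative latency histogram buckets,
# plus a stubbed health-check payload. Pure stdlib, for illustration.
import time
from collections import defaultdict

# Histogram bucket upper bounds in seconds (Prometheus-style, cumulative).
BUCKETS = (0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5, float("inf"))

request_total = defaultdict(int)    # (method, route, status) -> count
latency_buckets = defaultdict(int)  # (route, le) -> count

def record_request(method: str, route: str, status: int, duration_s: float) -> None:
    """Record one completed request, as a middleware hook would on response."""
    request_total[(method, route, status)] += 1
    for le in BUCKETS:
        if duration_s <= le:
            latency_buckets[(route, le)] += 1  # cumulative buckets

def health_check() -> dict:
    """Payload for a /health endpoint; real dependency checks are stubbed."""
    return {"status": "ok", "uptime_s": round(time.monotonic(), 1)}

def render_metrics() -> str:
    """Render the counter in Prometheus text exposition format."""
    lines = ["# TYPE http_requests_total counter"]
    for (method, route, status), n in sorted(request_total.items()):
        lines.append(
            f'http_requests_total{{method="{method}",route="{route}",status="{status}"}} {n}'
        )
    return "\n".join(lines)

# Simulate two requests a middleware would have observed.
record_request("GET", "/users", 200, 0.042)
record_request("GET", "/users", 500, 0.003)
print(render_metrics())
```

In Express or FastAPI, `record_request` would be invoked from a middleware hook after each response, and `render_metrics` would back a `/metrics` endpoint for Prometheus to scrape.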
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately efficient but includes some unnecessary verbosity. The overview restates what the instructions cover, the prerequisites list tools Claude already knows about, and the examples section describes dashboards conceptually rather than providing concrete configurations. Some tightening is possible. | 2 / 3 |
| Actionability | Instructions describe what to implement at a conceptual level (e.g., 'implement metrics middleware that records...', 'configure histogram buckets...') but provide no executable code, no concrete configuration snippets, and no copy-paste-ready examples. The references to implementation.md could contain the actual code, but no bundle files are provided to verify this. | 2 / 3 |
| Workflow Clarity | Steps are sequenced logically (examine existing setup → implement middleware → create endpoints → configure dashboards → define alerts → add synthetic monitoring → SLO tracking), but there are no validation checkpoints, no feedback loops for verifying that metrics are actually being collected correctly, and no verification step after dashboard/alert configuration. | 2 / 3 |
| Progressive Disclosure | The skill references three external files (implementation.md, errors.md, examples.md), which is good structure, but none of these bundle files actually exist. The main SKILL.md also contains substantial inline content (error handling table, examples section) that partially duplicates what the referenced files would cover, creating an awkward split. | 2 / 3 |
| Total | | 8 / 12 Passed |
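The review's actionability critique asks for "the actual PromQL for p99 latency or error rate percentage". One possible form, assuming the service exposes a `http_request_duration_seconds` histogram and an `http_requests_total` counter with a `status` label (these metric names are assumptions, not taken from the skill itself):

```promql
# p99 request latency over the last 5 minutes
histogram_quantile(0.99,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le))

# Error rate as a percentage of all requests
100 * sum(rate(http_requests_total{status=~"5.."}[5m]))
    / sum(rate(http_requests_total[5m]))
```

Queries like these can be pasted directly into a Grafana panel or the Prometheus expression browser once the metric names are confirmed against what the service actually exports.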
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
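Both warnings concern the skill's frontmatter. A hedged sketch of what a cleaned-up SKILL.md header might look like; the tool names and the `metadata` nesting are assumptions to verify against the skill spec the validator enforces:

```yaml
---
name: monitoring-apis
description: Build real-time API monitoring dashboards with metrics, alerts, and health checks. ...
allowed-tools: Read, Write, Bash   # keep only tool names the spec recognizes
metadata:
  # unknown top-level keys the validator flagged can move under metadata
  custom-key: value
---
```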