Build real-time API monitoring dashboards with metrics, alerts, and health checks. Use when tracking API health and performance metrics. Trigger with phrases like "monitor the API", "add API metrics", or "setup API monitoring".
Impact: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize ./plugins/api-development/api-monitoring-dashboard/skills/monitoring-apis/SKILL.md
```

## Quality

### Discovery — 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates what the skill does (builds real-time API monitoring dashboards with metrics, alerts, and health checks), when to use it, and includes explicit trigger phrases. It is well-scoped to a specific niche, uses third-person voice correctly, and provides enough detail for Claude to distinguish it from other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Build real-time API monitoring dashboards', 'metrics', 'alerts', and 'health checks'. These are distinct, concrete capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (build real-time API monitoring dashboards with metrics, alerts, and health checks) and 'when' (an explicit 'Use when' clause and 'Trigger with phrases like' providing concrete trigger guidance). | 3 / 3 |
| Trigger Term Quality | Includes natural trigger phrases users would say: 'monitor the API', 'add API metrics', 'setup API monitoring'. Also contains keywords like 'dashboards', 'alerts', 'health checks', 'API health', and 'performance metrics' that users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly scoped to API monitoring dashboards specifically, with distinct triggers around 'API metrics', 'API monitoring', and 'API health'. Unlikely to conflict with general dashboard skills or general API skills, given the specific combination of monitoring + API + dashboards. | 3 / 3 |
| Total | | 12 / 12 — Passed |
### Implementation — 42%

Reviews the quality of the instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is well-organized with good progressive disclosure and a logical structure, but critically lacks actionable, executable content. The instructions read as a high-level architecture document rather than a skill that Claude can follow to produce working code. The error handling table is a strength, but the absence of any concrete code examples, configuration snippets, or validation steps significantly undermines its utility.
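As an illustration of the kind of concrete snippet this review says is missing, the three key panels could be backed by PromQL queries along these lines (the metric names, such as `http_requests_total` and `http_request_duration_seconds`, follow common Prometheus naming conventions and are assumptions, not names taken from the skill itself):

```promql
# Request rate (requests per second, by route)
sum(rate(http_requests_total[5m])) by (route)

# p95 latency, from a histogram metric
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, route))

# Error rate (share of 5xx responses)
sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m]))
```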
#### Suggestions

- Add executable code examples for at least the metrics middleware and the health check endpoint (e.g., a complete Express.js middleware using prom-client, or a Python FastAPI middleware using prometheus_client).
- Include a concrete Grafana dashboard JSON snippet or PromQL queries for the key panels (request rate, latency percentiles, error rate) rather than just describing them.
- Add explicit validation checkpoints, e.g. 'Verify the metrics endpoint returns data: curl localhost:9090/metrics | grep http_request_duration' and 'Test the health endpoint: curl localhost:3000/health and confirm dependency statuses'.
- Replace the conceptual examples section with actual input/output examples showing what the generated files should contain.
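To make the first suggestion concrete, here is a minimal sketch of what such a metrics middleware records, written as a plain WSGI wrapper using only the Python standard library. In a real skill the in-memory dicts would be replaced by prometheus_client `Counter` and `Histogram` objects; all names below are illustrative, not taken from the skill under review:

```python
import time
from collections import defaultdict

# In-memory stand-ins for Prometheus counter/histogram metrics.
REQUEST_COUNT = defaultdict(int)      # (method, path, status) -> count
REQUEST_LATENCY = defaultdict(list)   # (method, path) -> [seconds, ...]

def metrics_middleware(app):
    """Wrap a WSGI app so every request records a count and a latency sample."""
    def wrapped(environ, start_response):
        start = time.perf_counter()
        status_holder = {}

        def capturing_start_response(status, headers, exc_info=None):
            # Capture the numeric status code ("200 OK" -> "200").
            status_holder["status"] = status.split(" ", 1)[0]
            return start_response(status, headers, exc_info)

        response = app(environ, capturing_start_response)
        elapsed = time.perf_counter() - start
        method = environ.get("REQUEST_METHOD", "GET")
        path = environ.get("PATH_INFO", "/")
        REQUEST_COUNT[(method, path, status_holder.get("status", "?"))] += 1
        REQUEST_LATENCY[(method, path)].append(elapsed)
        return response
    return wrapped
```

With prometheus_client installed, the same shape applies: increment a `Counter` labelled by method/path/status, observe elapsed time on a `Histogram`, and expose both on a /metrics endpoint.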
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is moderately efficient but includes some unnecessary verbosity. The overview restates what the description already covers, the prerequisites list tools Claude already knows about, and the examples section describes dashboards conceptually rather than providing concrete configurations. However, it avoids explaining basic concepts such as what Prometheus or Grafana are. | 2 / 3 |
| Actionability | Despite listing 9 steps, the skill provides no executable code, no concrete configuration snippets, no actual Grafana dashboard JSON, and no real middleware implementation. Everything is described abstractly ('implement metrics middleware that records...', 'configure histogram buckets...') without copy-paste-ready examples. The actual implementation is deferred to a referenced file. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence, from examining the existing setup through implementation to alerting and SLO tracking. However, there are no validation checkpoints: no step says 'verify metrics are being scraped', 'confirm the health endpoint returns the expected format', or 'test that alerts fire correctly'. For a multi-step process involving infrastructure configuration, this lack of verification steps is a significant gap. | 2 / 3 |
| Progressive Disclosure | The skill uses progressive disclosure effectively, with clear references to separate files: implementation.md for the full guide, errors.md for comprehensive error patterns, and examples.md for additional examples. References are one level deep and clearly signaled. The main file serves as a well-structured overview with appropriate sections. | 3 / 3 |
| Total | | 8 / 12 — Passed |
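The missing health-endpoint format the Workflow Clarity row alludes to could be pinned down with a sketch like the following: a handler that aggregates per-dependency statuses into one response. The dependency names and probe functions are hypothetical placeholders, assuming the convention that the endpoint returns 200 only when every dependency is healthy:

```python
import json

# Hypothetical dependency probes; real checks would ping the database, cache, etc.
def check_database():
    return {"status": "ok"}

def check_cache():
    return {"status": "ok"}

def health_check():
    """Return (http_status, body) for a /health endpoint.

    Overall status is 'ok' only when every dependency reports 'ok';
    otherwise the endpoint responds 503 with status 'degraded'.
    """
    dependencies = {
        "database": check_database(),
        "cache": check_cache(),
    }
    healthy = all(dep["status"] == "ok" for dep in dependencies.values())
    body = {"status": "ok" if healthy else "degraded", "dependencies": dependencies}
    return (200 if healthy else 503), json.dumps(body)
```

A skill that specifies this shape can then add the verification step the review asks for: curl the endpoint and confirm each dependency status appears in the JSON body.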
### Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 of 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them under `metadata` | Warning |
| Total | 9 / 11 | Passed |
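The two warnings could be resolved with a frontmatter cleanup along these lines. This is a sketch only: the tool names and extra keys shown are illustrative assumptions, not values read from the skill or its spec — check the validator's schema for the exact recognized names:

```yaml
---
name: monitoring-apis
description: Build real-time API monitoring dashboards with metrics, alerts, and health checks. ...
# Keep only tool names the validator recognizes (illustrative list):
allowed-tools: Read, Write, Bash
# Move unrecognized top-level keys under metadata, per the warning:
metadata:
  category: api-development
---
```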