Skill description under review: "Monitor use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like 'monitor system health', 'set up alerts', or 'track metrics'."
Overall quality: 41% — Does it follow best practices?

Impact: Pending — no eval scenarios have been run.

Advisory: suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/database/database-transaction-monitor/skills/monitoring-database-transactions/SKILL.md`

## Quality
### Discovery — 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague in its capability listing, relying on generic phrases like 'comprehensive guidance and automation' without specifying concrete actions (e.g., configuring Prometheus alerts, setting up health check endpoints, creating Grafana dashboards). While it does include some trigger phrases, the overall lack of specificity makes it difficult to distinguish from other DevOps or infrastructure-related skills.
Suggestions:

- Replace 'health monitoring and alerting with comprehensive guidance and automation' with specific concrete actions, e.g., 'Configures health check endpoints, sets up alerting rules in Prometheus/Grafana, creates monitoring dashboards, defines SLIs/SLOs, and troubleshoots metric collection pipelines.'
- Expand trigger terms to include common variations users would naturally say, such as 'observability', 'uptime monitoring', 'dashboards', 'Prometheus', 'Grafana', 'logging', 'APM', 'service health', or 'incident alerting'.
- Clarify the scope to distinguish it from related skills — specify whether this covers infrastructure monitoring, application monitoring, log aggregation, or all of the above, to reduce conflict risk with other DevOps skills.
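As a sketch of how the first suggestion might land in practice, here is a hypothetical SKILL.md frontmatter with a more concrete description and expanded trigger terms. The key names (`name`, `description`) follow common SKILL.md conventions; the exact wording is illustrative, not taken from the reviewed skill:

```yaml
---
name: monitoring-database-transactions
description: >
  Monitors database transaction health: configures health check endpoints,
  sets up alerting rules in Prometheus/Grafana, creates monitoring dashboards,
  and detects long-running or idle-in-transaction sessions. Use when the user
  says "monitor system health", "set up alerts", "track metrics",
  "observability", "uptime monitoring", "Prometheus", or "Grafana".
---
```

A description like this names concrete actions and tools, which is what the specificity and trigger-term dimensions below are scoring.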
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'health monitoring and alerting with comprehensive guidance and automation' without listing concrete actions. No specific capabilities are enumerated—what kind of monitoring? What metrics? What alerting systems? | 1 / 3 |
| Completeness | It has a weak 'what' (health monitoring and alerting) and does include trigger phrases suggesting 'when', but the 'what' is too vague to be truly useful. The 'Use when' equivalent is present via 'Trigger with phrases like...' but the capability description lacks substance. | 2 / 3 |
| Trigger Term Quality | Includes some relevant trigger phrases like 'monitor system health', 'set up alerts', and 'track metrics', which are natural terms users might say. However, it misses common variations like 'observability', 'dashboards', 'Prometheus', 'Grafana', 'uptime', 'logging', 'APM', or specific tool names. | 2 / 3 |
| Distinctiveness / Conflict Risk | The term 'monitoring and observability' provides some domain specificity, but the description is broad enough that it could overlap with infrastructure management, DevOps, logging, or performance tuning skills. No clear niche is carved out. | 2 / 3 |
| Total | | 7 / 12 — Passed |
### Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides genuinely useful database monitoring knowledge with real SQL queries and good error handling coverage, but suffers from being overly monolithic and inconsistently actionable. Several steps describe what to build rather than providing executable code, and the lack of validation checkpoints for destructive operations (session termination) is a notable gap. The content would benefit significantly from splitting engine-specific details into separate files and providing complete, executable monitoring scripts rather than descriptions of what scripts should do.
Suggestions:

- Split database-engine-specific queries and instructions into separate reference files (e.g., POSTGRESQL.md, MYSQL.md, MONGODB.md) and keep SKILL.md as a concise overview with links.
- Provide a complete, executable monitoring script (shell or Python) for at least one database engine instead of describing what the script should do in steps 6, 8, and 10.
- Add explicit validation checkpoints before destructive operations in step 9 — e.g., verify session age and confirm the session is truly idle before terminating, and log the result of termination.
- Condense the narrative examples into structured input/output format showing the monitoring query output and the corresponding alert or action taken.
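To make the second and third suggestions concrete, here is a minimal Python sketch of what an executable checkpoint might look like for PostgreSQL. The threshold, dataclass shape, and helper names are illustrative assumptions, not content from the reviewed skill; the decision logic is kept as pure functions so it can be tested without a live database:

```python
from dataclasses import dataclass

# Illustrative threshold — not taken from the reviewed skill.
IDLE_IN_TXN_LIMIT_SECS = 300

@dataclass
class Session:
    pid: int
    state: str            # e.g. "idle in transaction", "active"
    idle_seconds: float   # time since the session's last state change

def should_terminate(session: Session) -> bool:
    """Validation checkpoint before the destructive step: only flag
    sessions that are genuinely idle in a transaction AND older than
    the configured threshold."""
    return (
        session.state == "idle in transaction"
        and session.idle_seconds > IDLE_IN_TXN_LIMIT_SECS
    )

def terminate_sql(session: Session) -> str:
    """Build the PostgreSQL termination statement; the caller should
    log the boolean returned by pg_terminate_backend rather than
    fire-and-forget."""
    return f"SELECT pg_terminate_backend({session.pid});"
```

In a real script, the `Session` rows would come from `pg_stat_activity`, and the termination result would be logged, which addresses the missing verification/feedback loop called out in the workflow-clarity dimension below.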
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably focused on database-specific knowledge Claude wouldn't inherently know (system catalog queries, specific privilege requirements), but includes some unnecessary verbosity in the examples section (narrative descriptions of scenarios) and the output section restates what the instructions already cover. The error handling table, while useful, could be tighter. | 2 / 3 |
| Actionability | Provides real SQL queries for PostgreSQL and MySQL which are executable, but several steps remain at the description level without concrete code (steps 6, 8, 10 describe what to build without providing actual scripts or query templates). The monitoring scripts and dashboard queries promised in the output section are never actually provided as executable examples. | 2 / 3 |
| Workflow Clarity | Steps are numbered and sequenced logically from baseline establishment through alerting and remediation. However, there are no explicit validation checkpoints — step 9 involves destructive operations (terminating sessions) without a verification/confirmation step, and there's no feedback loop for confirming that monitoring is working correctly before relying on it. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed content. The full SQL queries, error handling table, multiple database engine variations, and narrative examples are all inline, making this quite long. Engine-specific queries could be split into separate reference files, and the examples section could be a separate document. | 1 / 3 |
| Total | | 7 / 12 — Passed |
### Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | `allowed-tools` contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
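One way the two warnings might be resolved is sketched below: restrict `allowed-tools` to recognized tool names and move unrecognized top-level keys under `metadata`. The specific tool names and the `owner` key are hypothetical examples, not values read from this skill's frontmatter:

```yaml
---
name: monitoring-database-transactions
description: ...
allowed-tools: Bash, Read, Grep   # keep only tool names the spec recognizes
metadata:
  owner: database-team            # formerly an unknown top-level key
---
```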
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.