Install: `tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill monitoring-database-health`

Description: "Monitor use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like 'monitor system health', 'set up alerts', or 'track metrics'."
Validation

Score: 81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation

Score: 20%

This skill is a generic template with no database health monitoring-specific content. It reads like a project management checklist that could apply to any task, lacking executable queries, specific metrics to monitor, alert thresholds, or actual health check implementations. The content wastes significant tokens on obvious guidance while providing zero actionable database monitoring instructions.
Suggestions

- Add concrete SQL queries for common health checks (connection counts, slow queries, replication lag, disk usage, lock contention)
- Include specific alert threshold examples and monitoring tool configurations (e.g., Prometheus queries, Grafana dashboard JSON snippets)
- Replace generic steps with database-specific workflows, such as 'Check replication status: `SELECT * FROM pg_stat_replication;`' together with guidance on interpreting the expected output
- Remove boilerplate sections like 'Understanding of the system architecture' and 'Review plan with team or stakeholders' that Claude already knows to consider
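To make the first two suggestions concrete, here is a minimal sketch of the kind of content the skill could ship: named health-check queries paired with alert thresholds and a small evaluator. The queries assume PostgreSQL, and every threshold value is illustrative, not a recommendation.

```python
# Sketch of concrete health checks a database-monitoring skill could include.
# SQL assumes PostgreSQL; threshold values are hypothetical examples.

HEALTH_CHECKS = {
    "connection_count": {
        "sql": "SELECT count(*) FROM pg_stat_activity;",
        "warn_above": 80,   # hypothetical: warn when nearing max_connections
        "crit_above": 95,
    },
    "replication_lag_seconds": {
        "sql": "SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp());",
        "warn_above": 30,
        "crit_above": 300,
    },
}

def evaluate(check_name: str, value: float) -> str:
    """Map a measured value to an alert level using the check's thresholds."""
    thresholds = HEALTH_CHECKS[check_name]
    if value > thresholds["crit_above"]:
        return "critical"
    if value > thresholds["warn_above"]:
        return "warning"
    return "ok"
```

For example, `evaluate("connection_count", 90)` returns `"warning"`. Pairing each query with an interpretation rule like this is what turns a generic checklist into an actionable monitoring workflow.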
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with generic boilerplate that Claude already knows. Phrases like 'Review current configuration, setup, and baseline metrics' and 'Identify specific requirements, goals, and constraints' are vague filler that waste tokens without providing database-specific guidance. | 1 / 3 |
| Actionability | No concrete code, commands, or database-specific examples. Everything is abstract guidance like 'Execute implementation in non-production environment first' without showing actual health check queries, monitoring scripts, or alert configurations. | 1 / 3 |
| Workflow Clarity | Steps are numbered and sequenced, but they're generic project management steps rather than database health monitoring workflows. No specific validation checkpoints for database operations, no actual health check procedures, and no concrete feedback loops. | 2 / 3 |
| Progressive Disclosure | References external files appropriately (templates, docs, examples directories), but the main content is a wall of generic text. The 'Overview' and 'Examples' sections at the bottom are empty placeholders that add no value. | 2 / 3 |
| Total | | 6 / 12 |
Activation

Score: 67%

The description adequately covers the 'what' and 'when' with explicit trigger phrases, which is its main strength. However, it lacks specificity in concrete actions and could include more natural keyword variations that users commonly use when discussing monitoring and observability tools.
Suggestions

- Add more specific concrete actions like 'configure alerting thresholds', 'create monitoring dashboards', 'integrate with Prometheus/Grafana', or 'set up health checks'
- Expand trigger terms to include common variations like 'observability', 'uptime monitoring', 'logging', 'APM', or specific tool names users might mention
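Putting both suggestions together, a revised description might read as follows. This is a sketch only; the exact wording and frontmatter layout are assumptions, not the skill's actual file.

```yaml
# Illustrative revision of the skill's frontmatter (hypothetical, not the real file).
name: monitoring-database-health
description: >
  Monitor database and system health: set up health checks, configure alerting
  thresholds, create monitoring dashboards, and track metrics. Use when the user
  mentions observability, uptime monitoring, logging, APM, performance monitoring,
  or tools such as Prometheus, Grafana, or Datadog. Trigger with phrases like
  "monitor system health", "set up alerts", or "track metrics".
```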
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (monitoring/observability) and mentions some actions ('health monitoring', 'alerting', 'track metrics'), but lacks concrete specific actions like 'configure dashboards', 'set threshold alerts', or 'integrate with Prometheus'. | 2 / 3 |
| Completeness | Explicitly answers both what ('health monitoring and alerting with comprehensive guidance and automation') and when ('Trigger with phrases like...' provides clear usage triggers), meeting the requirement for explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some natural trigger phrases ('monitor system health', 'set up alerts', 'track metrics') but is missing common variations users might say like 'observability', 'logging', 'uptime', 'Grafana', 'Datadog', or 'performance monitoring'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Somewhat specific to the monitoring/observability domain, but 'track metrics' and 'alerting' could overlap with analytics, logging, or DevOps skills. Could benefit from more specific tool names or use cases. | 2 / 3 |
| Total | | 9 / 12 |
Reviewed