Skill description under review:

> "This skill enables AI assistant to create application performance monitoring (apm) dashboards. it is triggered when the user requests the creation of a new apm dashboard, monitoring dashboard, or a dashboard for application performance. the skill helps ... Use when generating or creating new content. Trigger with phrases like 'generate', 'create', or 'scaffold'."
Eval scenarios: Pending (no eval scenarios have been run)
Known issues: Passed (no known issues)
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/performance/apm-dashboard-creator/skills/creating-apm-dashboards/SKILL.md
```

Quality
Discovery: 42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description identifies a clear domain (APM dashboards) but suffers from an incomplete capability list (truncated with '...') and overly generic trigger terms that would cause conflicts with other creation-focused skills. The 'Use when' clause exists but provides triggers too broad to be useful for skill selection.
Suggestions

- Complete the truncated description and list specific actions like 'configure metrics panels', 'set up alerting thresholds', 'define service dependencies'
- Replace generic triggers ('generate', 'create') with APM-specific terms like 'monitor application', 'track performance', 'observability dashboard', 'metrics visualization'
- Add file types or tool integrations if applicable (e.g., 'Datadog', 'Grafana', 'New Relic') to improve distinctiveness
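Applied together, these suggestions might produce frontmatter along the following lines. This is a sketch only; the description wording and trigger phrasing are illustrative, not taken from the skill itself:

```yaml
---
name: creating-apm-dashboards
description: >
  Creates APM dashboards for Grafana, Datadog, or New Relic: configures
  metrics panels, sets up alerting thresholds, and defines service
  dependencies. Use when the user asks to monitor an application, track
  performance, or build an observability or metrics-visualization dashboard.
---
```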
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (APM dashboards) and mentions 'create' as an action, but lacks specific concrete actions like 'configure metrics', 'set up alerts', or 'define thresholds'. The description is cut off with 'helps ...', leaving capabilities incomplete. | 2 / 3 |
| Completeness | Has a 'Use when' clause with trigger phrases, but the 'what' portion is incomplete (truncated with '...'). The triggers are also generic ('generate', 'create') rather than specific to APM dashboards, weakening the guidance. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'apm dashboard', 'monitoring dashboard', 'application performance', 'generate', 'create', 'scaffold', but is missing common variations users might say, like 'metrics', 'observability', 'performance tracking', or specific tool names. | 2 / 3 |
| Distinctiveness / Conflict Risk | The generic triggers 'generate', 'create', 'scaffold' would conflict with virtually any content-creation skill. While 'APM dashboard' is specific, the broad trigger terms undermine distinctiveness significantly. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 12%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is highly verbose and abstract, explaining concepts Claude already understands while failing to provide any concrete, executable guidance. It describes what the skill does rather than instructing how to do it, with no actual dashboard configuration examples, JSON schemas, or platform-specific code. The content reads like marketing copy rather than actionable technical documentation.
Suggestions

- Replace abstract examples with actual executable dashboard configurations (e.g., complete Grafana JSON, Datadog dashboard YAML) that Claude can use as templates
- Remove verbose sections like 'Overview', 'When to Use This Skill', and 'Integration' that explain obvious concepts; start directly with concrete usage
- Add validation steps showing how to verify generated configurations before deployment (e.g., grafana-cli validate, datadog-ci dashboards validate)
- Create separate reference files for platform-specific configurations (GRAFANA.md, DATADOG.md, NEWRELIC.md) and link to them from a concise main skill file
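As a concrete illustration of the first suggestion, a copy-paste-ready Grafana template could start from a minimal JSON skeleton like this. The field names follow Grafana's dashboard JSON model; the panel title, layout values, and Prometheus query are placeholders, not taken from the skill:

```json
{
  "title": "Service APM Overview",
  "schemaVersion": 39,
  "panels": [
    {
      "type": "timeseries",
      "title": "Request latency (p95)",
      "gridPos": { "x": 0, "y": 0, "w": 12, "h": 8 },
      "targets": [
        {
          "expr": "histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))"
        }
      ]
    }
  ]
}
```

A template at this level of completeness is something an agent can extend panel by panel, rather than re-deriving the schema from prose.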
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with unnecessary explanations of concepts Claude already knows. Sections like 'Overview', 'How It Works', 'When to Use This Skill', and 'Integration' explain obvious concepts without adding actionable value. The 'Prerequisites' and 'Instructions' sections are generic boilerplate. | 1 / 3 |
| Actionability | No concrete code, commands, or executable examples provided. The examples describe what the skill 'will do' abstractly but never show actual dashboard configurations, JSON schemas, or copy-paste ready templates for Grafana/Datadog/New Relic. | 1 / 3 |
| Workflow Clarity | Steps are listed in the 'How It Works' section (Identify, Define, Generate, Deploy) but lack validation checkpoints, specific commands, or feedback loops. No guidance on how to verify the generated configuration is valid before deployment. | 2 / 3 |
| Progressive Disclosure | Monolithic wall of text with no references to external files for detailed configurations, API references, or platform-specific guides. All content is inline despite being lengthy, and the 'Resources' section mentions documentation without actual links. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
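The progressive-disclosure point amounts to restructuring the skill into a short entry file plus platform-specific references, roughly like this (a hypothetical layout, not the skill's current structure):

```
creating-apm-dashboards/
├── SKILL.md        # concise: triggers, workflow, links to references
├── GRAFANA.md      # complete Grafana dashboard JSON templates
├── DATADOG.md      # Datadog dashboard YAML/JSON templates
└── NEWRELIC.md     # New Relic dashboard configuration templates
```

The main file then stays short enough to load on every trigger, while the bulky platform templates are read only when needed.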
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
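Both warnings point at the frontmatter. Assuming standard skill frontmatter with `name`, `description`, and `allowed-tools` keys, the fix would look something like this sketch; the unrecognized tool name and the custom keys shown are hypothetical examples, not read from the skill:

```yaml
---
name: creating-apm-dashboards
description: Creates APM dashboards for common monitoring platforms.
# Before: allowed-tools listed an unrecognized tool name, and custom keys
# (e.g. author, version) sat at the top level of the frontmatter.
allowed-tools: Read, Write, Bash   # keep to tool names the agent recognizes
metadata:
  author: example-maintainer       # unknown keys moved under metadata
  version: "1.0"
---
```

Moving the unknown keys under `metadata` follows the validator's own suggestion, and trimming `allowed-tools` to recognized names clears the first warning.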