
creating-apm-dashboards

Execute this skill enables AI assistant to create application performance monitoring (apm) dashboards. it is triggered when the user requests the creation of a new apm dashboard, monitoring dashboard, or a dashboard for application performance. the skill helps ... Use when generating or creating new content. Trigger with phrases like 'generate', 'create', or 'scaffold'.

Install with Tessl CLI

npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill creating-apm-dashboards

Does it follow best practices?


Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description has a clear domain focus (APM dashboards) but is undermined by an incomplete sentence ('helps ...') and overly generic trigger terms that would conflict with other creation skills. The 'Use when' clause exists but provides triggers too broad to reliably select this skill over others.

Suggestions

Complete the truncated sentence to fully describe what the skill does (e.g., 'helps configure metrics, set up alerts, and create performance visualizations').

Replace generic triggers ('generate', 'create') with APM-specific phrases like 'monitor application performance', 'track response times', 'observability dashboard', or 'metrics visualization'.

Add specific file types or tool integrations if applicable (e.g., 'Datadog', 'Grafana', 'New Relic') to improve distinctiveness.
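Taken together, these suggestions imply a frontmatter description along the following lines. This is a hypothetical rewrite, not the maintainer's wording; the `name`/`description` fields follow the usual SKILL.md frontmatter shape:

```yaml
name: creating-apm-dashboards
description: >
  Creates application performance monitoring (APM) dashboards: configures
  metrics, sets up alerts, and generates performance visualizations for
  tools such as Grafana, Datadog, or New Relic. Use when the user asks to
  "monitor application performance", "track response times", or build an
  "observability dashboard" or "metrics visualization".
```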

Dimension / Reasoning / Score

Specificity

Names the domain (APM dashboards) and mentions 'create' as an action, but the description is incomplete ('helps ...') and doesn't list specific concrete actions like configuring metrics, setting up alerts, or defining visualizations.

2 / 3

Completeness

Has a 'Use when' clause with trigger phrases, but the 'what' portion is incomplete (literally cuts off at 'helps ...'), and the triggers are generic ('generate', 'create') rather than specific to APM dashboards.

2 / 3

Trigger Term Quality

Includes some relevant keywords like 'apm dashboard', 'monitoring dashboard', 'application performance', 'generate', 'create', 'scaffold', but missing common variations users might say like 'metrics dashboard', 'observability', 'performance tracking', or specific tool names.

2 / 3

Distinctiveness Conflict Risk

The APM/monitoring dashboard focus provides some distinctiveness, but the generic triggers 'generate', 'create', 'scaffold' would conflict with many other creation-focused skills. The incomplete description weakens its ability to stand out.

2 / 3

Total: 8 / 12 (Passed)

Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill content is almost entirely boilerplate with no actionable guidance. It describes what an APM dashboard skill would do conceptually but provides zero executable code, configuration examples, or specific instructions. The content wastes tokens on explanations Claude doesn't need while failing to deliver the concrete templates and commands that would make this skill useful.

Suggestions

Add complete, executable dashboard configuration examples for at least one platform (e.g., a full Grafana JSON dashboard config or Datadog dashboard definition)

Remove boilerplate sections like 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' that contain only generic placeholder text

Provide specific metric queries and panel configurations (e.g., Prometheus queries for latency percentiles, error rate calculations)
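For instance, the queries this suggestion asks for might look like the following PromQL (assuming a conventional `http_request_duration_seconds` histogram and `http_requests_total` counter; these metric names are illustrative, not anything the skill defines):

```promql
# 95th-percentile request latency over the last 5 minutes
histogram_quantile(0.95,
  sum(rate(http_request_duration_seconds_bucket[5m])) by (le))

# Error rate: share of requests answered with a 5xx status
sum(rate(http_requests_total{status=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))
```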

Include a concrete workflow with validation steps, such as how to test the generated configuration before deployment
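As a concrete illustration of the last two suggestions, the skill could generate a minimal Grafana-style dashboard definition and sanity-check it before deployment. This is a sketch only: the fields shown (`title`, `panels`, `type`, `targets`, `expr`) follow Grafana's dashboard JSON model, but a real config would also need datasource UIDs, panel layout, and schema versioning, and the metric names are conventional examples rather than anything the skill specifies.

```python
import json

# Prometheus queries for the two headline panels (conventional metric
# names, not mandated by the skill).
LATENCY_P95 = (
    'histogram_quantile(0.95, '
    'sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'
)
ERROR_RATE = (
    'sum(rate(http_requests_total{status=~"5.."}[5m])) '
    '/ sum(rate(http_requests_total[5m]))'
)


def build_dashboard(service: str) -> dict:
    """Assemble a minimal Grafana-style dashboard definition."""
    return {
        "title": f"{service} APM overview",
        "panels": [
            {"title": "p95 latency", "type": "timeseries",
             "targets": [{"expr": LATENCY_P95}]},
            {"title": "Error rate", "type": "timeseries",
             "targets": [{"expr": ERROR_RATE}]},
        ],
    }


def validate(dashboard: dict) -> None:
    """Cheap pre-deployment checks: required keys plus a JSON round-trip."""
    assert dashboard.get("title"), "dashboard needs a title"
    assert dashboard.get("panels"), "dashboard needs at least one panel"
    for panel in dashboard["panels"]:
        assert panel.get("targets"), f"panel {panel['title']!r} has no query"
    json.loads(json.dumps(dashboard))  # must survive serialization


dash = build_dashboard("checkout-service")
validate(dash)
print(len(dash["panels"]))  # 2
```

A validation step like this is exactly the kind of feedback loop the review notes is missing: the generated config is checked locally before any deploy call is made.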

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive padding and explanations of concepts Claude already knows. Sections like 'Overview', 'How It Works', 'When to Use This Skill', 'Best Practices', 'Integration', 'Prerequisites', 'Instructions', 'Output', 'Error Handling', and 'Resources' are mostly filler with no actionable content.

1 / 3

Actionability

No concrete code, commands, or executable guidance provided. Examples describe what the skill 'will do' abstractly but never show actual dashboard configurations, JSON schemas, or copy-paste ready templates for Grafana or Datadog.

1 / 3

Workflow Clarity

Steps are vague and abstract ('Identify Requirements', 'Define Dashboard Components', 'Generate Configuration', 'Deploy Dashboard') with no specific instructions, validation checkpoints, or concrete actions. No feedback loops or error recovery guidance.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files for detailed configurations, API references, or platform-specific guides. All content is inline but lacks substance, and 'Resources' section points to nothing specific.

1 / 3

Total: 4 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 13 / 16 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

metadata_version

'metadata' field is not a dictionary

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 13 / 16 (Passed)
