dt-obs-hosts

Host and process metrics including CPU, memory, disk, network, containers, and process-level telemetry. Monitor infrastructure health and resource utilization.


Quality: 52% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security by Snyk: Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/dt-obs-hosts/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description provides a reasonable enumeration of metric categories but reads more like a topic label than an actionable skill description. It lacks concrete actions (what does it actually do with these metrics?) and has no explicit trigger guidance for when Claude should select this skill. The second sentence is generic and adds little discriminative value.

Suggestions

Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about server health, CPU usage, memory consumption, disk space, network throughput, or container resource limits.'

Replace the vague 'Monitor infrastructure health and resource utilization' with concrete actions like 'Query host metrics, set up alerts for resource thresholds, diagnose performance bottlenecks, and analyze container resource consumption.'

Include common user-facing synonyms and variations such as 'server monitoring', 'RAM usage', 'load average', 'I/O', and 'system performance' to improve trigger term coverage.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (host/process metrics) and lists specific metric categories (CPU, memory, disk, network, containers, process-level telemetry), but doesn't describe concrete actions; it lists what it covers rather than what it does (e.g., 'collect', 'alert on', 'visualize', 'query'). | 2 / 3 |
| Completeness | Describes the 'what' at a high level (host and process metrics monitoring) but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2; since the 'what' is also weak (more a topic listing than an action description), this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like CPU, memory, disk, network, containers, infrastructure, and resource utilization that users might naturally use, but is missing common variations like 'server monitoring', 'host metrics', 'load average', 'RAM', 'bandwidth', 'I/O', and tool-specific terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of specific metric types (CPU, memory, disk, network, containers, process-level telemetry) provides some distinctiveness, but 'monitor infrastructure health and resource utilization' is broad enough to overlap with application monitoring, cloud monitoring, or observability skills. | 2 / 3 |
| **Total** | | **7 / 12** |

Passed

Implementation

72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured infrastructure monitoring skill with strong actionability through executable DQL queries and excellent progressive disclosure. Its main weaknesses are moderate verbosity (redundant sections, some unnecessary enumeration) and lack of validation/feedback loops in workflows. The skill would benefit from trimming redundant overview sections and adding verification steps to key workflows.

Suggestions

Merge or eliminate the 'What This Skill Does' and 'When to Use This Skill' sections — the workflows themselves make the skill's purpose clear, and these sections add ~30 lines of redundant content.

Add validation steps to workflows, such as checking for empty results, verifying expected entity counts, or confirming metric data freshness before drawing conclusions from query output.
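Such a validation step can be sketched in DQL. A minimal example, assuming the metric key `dt.host.cpu.usage` (the actual metric keys come from the skill's own queries, not from this review):

```dql
// Sanity-check the result set before drawing conclusions:
// an empty result should prompt a wider timeframe or a filter
// review, not a "no hosts affected" verdict.
timeseries cpu = avg(dt.host.cpu.usage), by: { dt.entity.host }
| summarize hosts_returned = count()
```

If `hosts_returned` is 0, the workflow should loop back and adjust the query rather than report a clean bill of health.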

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | Reasonably well-organized, but includes some unnecessary content: the 'What This Skill Does' and 'When to Use This Skill' sections largely duplicate each other and explain things Claude could infer, and the 'Common Query Patterns' section partially repeats patterns already shown in the workflows. The DQL examples themselves are lean, and the reference structure avoids bloating the main file. | 2 / 3 |
| Actionability | Provides fully executable DQL queries for every workflow, with specific metric names, field references, filter thresholds, and concrete examples. The queries are copy-paste ready, with real attribute names and functions like `getNodeName()`, `arrayAvg()`, and proper aggregation patterns. | 3 / 3 |
| Workflow Clarity | The 8 workflows are clearly sequenced and well-labeled, but lack validation checkpoints and feedback loops. For infrastructure monitoring operations, there's no guidance on verifying query results, handling empty results, or iterating when data-quality issues are found. The workflows read more like individual query recipes than true multi-step processes with error recovery. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure: a clear main file covering 80% of use cases, well-signaled references to 4 specific reference files, and a dedicated 'When to Load References' section that gives precise criteria for when to consult each file. Navigation is one level deep and clearly organized. | 3 / 3 |
| **Total** | | **10 / 12** |
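As an illustration of the copy-paste-ready query style described above, a threshold-filter pattern using `arrayAvg()` might look like the following sketch (the metric key and the 80% threshold are illustrative assumptions, not taken from the skill):

```dql
// Hosts whose average CPU usage over the query window exceeds 80%.
timeseries cpu = avg(dt.host.cpu.usage), by: { dt.entity.host }
| fieldsAdd avg_cpu = arrayAvg(cpu)
| filter avg_cpu > 80
| sort avg_cpu desc
```

A skill scoring 3/3 here would pair each such query with the workflow step it serves, so an agent can run it without first inventing metric names or thresholds.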

Passed

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure: no warnings or errors.

Repository: Dynatrace/dynatrace-for-ai (Reviewed)
