Problem entities, root cause analysis (RCA), impact assessment, and problem correlation. Query and analyze Dynatrace-detected problems and incidents.
Overall: 58

Quality: 66% — Does it follow best practices?
Impact: — (no eval scenarios have been run)
Validation: Passed — no known issues
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/dt-obs-problems/SKILL.md

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does a good job listing specific capabilities and anchoring itself to the Dynatrace platform, which provides strong distinctiveness. However, it lacks an explicit 'Use when...' clause, which limits completeness, and could benefit from more natural trigger terms that users would commonly say when experiencing incidents or outages.
Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Dynatrace problems, incidents, outages, alerts, or wants to investigate root causes of issues.'
- Include more natural user-facing trigger terms such as 'alerts', 'outages', 'errors', 'downtime', 'what caused the issue', or 'incident investigation' to improve keyword coverage.
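Applying both suggestions, a revised skill description might look like the sketch below. The frontmatter keys follow the common SKILL.md convention; treat the exact field names and wording as illustrative assumptions, not the skill's actual metadata.

```yaml
---
name: dt-obs-problems
description: >
  Query and analyze Dynatrace-detected problems and incidents: problem
  entities, root cause analysis (RCA), impact assessment, and problem
  correlation. Use when the user asks about Dynatrace problems, incidents,
  alerts, outages, errors, downtime, or wants to investigate what caused
  an issue.
---
```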
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'problem entities', 'root cause analysis (RCA)', 'impact assessment', 'problem correlation', and 'query and analyze Dynatrace-detected problems and incidents'. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific capabilities, but lacks an explicit 'Use when...' clause or equivalent trigger guidance for when Claude should select this skill. | 2 / 3 |
| Trigger Term Quality | Includes relevant terms like 'problems', 'incidents', 'root cause analysis', 'RCA', and 'Dynatrace', but misses common user variations like 'alerts', 'outages', 'errors', 'downtime', or 'what went wrong'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Dynatrace-detected problems' creates a clear niche tied to a specific platform, making it highly distinguishable from generic monitoring or incident management skills. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, highly actionable skill with excellent executable DQL query examples covering a wide range of problem analysis scenarios. Its main weakness is verbosity: the introductory and explanatory sections add tokens without proportional value for Claude, and the document could be more concise. The progressive disclosure structure is reasonable, with references to external files, though the main body carries more content than ideal.
Suggestions

- Remove or drastically shorten the 'What are Problems?' and 'Event Kinds' sections — Claude doesn't need conceptual explanations of Dynatrace problems; focus on the query patterns and field references.
- Add a brief investigation workflow (e.g., 'Triage active problems → Identify root cause → Assess blast radius → Correlate with logs') to give a clear sequence for multi-step problem analysis.
- Move the Problem Categories table and Event Kinds table to a reference file to reduce the main skill's token footprint.
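The suggested triage → root cause → blast radius sequence could be expressed as a short chain of DQL queries. This is a sketch only: the `dt.davis.problems` source is standard in Dynatrace Grail, but field names such as `event.status`, `root_cause_entity_name`, and `affected_entity_ids`, and the problem ID `P-12345`, are assumptions that should be verified against the skill's field reference before use.

```
// 1. Triage: list currently active problems, newest first
fetch dt.davis.problems
| filter event.status == "ACTIVE"
| sort timestamp desc

// 2. Root cause: inspect the root-cause entity for one problem
//    (P-12345 is a hypothetical display ID)
fetch dt.davis.problems
| filter display_id == "P-12345"
| fields event.name, root_cause_entity_name, affected_entity_ids

// 3. Blast radius: rank active problems by number of affected entities
fetch dt.davis.problems
| filter event.status == "ACTIVE"
| fieldsAdd affected_count = arraySize(affected_entity_ids)
| sort affected_count desc
```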
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill includes some unnecessary explanatory content (e.g., the 'What are Problems?' section explaining what Dynatrace problems are, the event.kind table, and the problem lifecycle section) that Claude likely doesn't need. However, the query patterns and field references are dense and useful. The 'Common Field Name Mistakes' table is valuable. Overall, it could be tightened by ~30%. | 2 / 3 |
| Actionability | The skill provides numerous fully executable DQL queries covering active problems, root cause analysis, blast radius, recurring causes, entity filtering, and more. The DO/DON'T examples for entity filtering and the correct/incorrect status values are highly actionable. Every pattern includes copy-paste ready code. | 3 / 3 |
| Workflow Clarity | The best practices section provides good guidance on query development order ('start simple', 'test fields first', 'test incrementally'), but there's no explicit validation workflow with feedback loops for problem analysis. For a query/analysis skill rather than a destructive operation skill, this is adequate, but it could benefit from a clearer step-by-step investigation workflow (e.g., triage → identify root cause → assess blast radius → correlate with logs). | 2 / 3 |
| Progressive Disclosure | The skill references several external files (references/problem-trending.md, problem-correlation.md, impact-analysis.md, problem-merging.md) and related skills, which is good structure. However, no bundle files were provided, so we can't verify these references exist. The Problem Trending section includes a substantial inline summary before pointing to the reference file, which is a good pattern, but the main body is quite long (~250 lines of content), and some sections, like the event.kind table and problem categories, could be moved to reference files. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 11 / 11 checks passed. No warnings or errors.