
datadog-automation

Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools first for current schemas.

Score: 75 (1.28x)

Quality: 66% (Does it follow best practices?)

Impact: 89% (1.28x). Average score across 3 eval scenarios.

Security (by Snyk): Advisory. Review suggested before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./.trae/skills/datadog-automation/SKILL.md

Quality

Discovery: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly enumerates specific Datadog capabilities and includes excellent natural trigger terms. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill. The procedural note about searching tools first is a nice operational detail.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about Datadog monitoring, log searches, metric queries, alert management, or dashboard operations.'
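Concretely, the revised SKILL.md frontmatter might look like this sketch (the exact wording is illustrative, not the skill's actual frontmatter):

```yaml
name: datadog-automation
description: >
  Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs,
  manage monitors/dashboards, create events and downtimes. Use when the user
  asks about Datadog monitoring, log searches, metric queries, alert
  management, or dashboard operations. Always search tools first for current
  schemas.
```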

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions: 'query metrics, search logs, manage monitors/dashboards, create events and downtimes.' Also includes a procedural instruction to 'search tools first for current schemas.' | 3 / 3 |
| Completeness | Clearly answers 'what does this do' with specific Datadog actions, but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the domain terms. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'Datadog', 'metrics', 'logs', 'monitors', 'dashboards', 'events', 'downtimes', 'Rube MCP', 'Composio'. These cover the main terms a user working with Datadog would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific combination of 'Datadog', 'Rube MCP (Composio)', and the enumerated Datadog-specific operations. Unlikely to conflict with other skills. | 3 / 3 |

Total: 11 / 12 (Passed)

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a comprehensive reference for Datadog automation via Rube MCP with good structural organization and useful pitfall warnings. However, it's overly verbose given that the skill itself instructs users to call RUBE_SEARCH_TOOLS for current schemas, making much of the parameter documentation redundant. It would benefit from more concrete executable examples and validation steps for destructive operations.

Suggestions

Add concrete, complete tool invocation examples for at least 2-3 key workflows (e.g., show a full DATADOG_QUERY_METRICS call with actual parameter values and expected response structure)
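Such an example might be sketched as below. The argument names (`from`, `to`, `query`) mirror the Datadog v1 metrics query API, but they are assumptions here and should be confirmed against the schema returned by RUBE_SEARCH_TOOLS:

```python
import time

def build_query_metrics_call(query: str, window_s: int = 3600) -> dict:
    """Build a hypothetical DATADOG_QUERY_METRICS invocation payload.

    'from'/'to' are epoch seconds and 'query' is a Datadog metric query
    string; verify these names against the current tool schema before use.
    """
    now = int(time.time())
    return {
        "tool": "DATADOG_QUERY_METRICS",
        "arguments": {
            "from": now - window_s,  # start of the query window (epoch seconds)
            "to": now,               # end of the query window
            "query": query,          # e.g. "avg:system.cpu.user{env:prod} by {host}"
        },
    }

call = build_query_metrics_call("avg:system.cpu.user{env:prod} by {host}")
```

Pairing a call like this with the expected response shape (a `series` list of points) would make the workflow copy-paste-ready rather than abstract.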

Add explicit validation/confirmation steps before destructive operations like DATADOG_DELETE_DASHBOARD (e.g., 'First GET the dashboard to confirm identity, then confirm with user before deleting')
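A minimal guard for that delete flow could be sketched as follows; `get_dashboard` and `delete_dashboard` are hypothetical stand-ins for the real fetch and DATADOG_DELETE_DASHBOARD tool calls, whose actual names and schemas should come from RUBE_SEARCH_TOOLS:

```python
from typing import Callable

def confirmed_delete(
    dashboard_id: str,
    expected_title: str,
    get_dashboard: Callable[[str], dict],
    delete_dashboard: Callable[[str], None],
    confirm: Callable[[str], bool],
) -> bool:
    """Delete a dashboard only after verifying identity and user approval."""
    dash = get_dashboard(dashboard_id)      # 1. fetch first to confirm identity
    title = dash.get("title", "")
    if title != expected_title:             # 2. refuse on any title mismatch
        return False
    if not confirm(f"Delete dashboard '{title}' ({dashboard_id})?"):
        return False                        # 3. require explicit user approval
    delete_dashboard(dashboard_id)          # 4. only then perform the delete
    return True
```

The same fetch-confirm-act pattern applies to any other destructive operation in the skill.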

Trim parameter documentation significantly, since the skill already instructs agents to call RUBE_SEARCH_TOOLS for current schemas; focus only on non-obvious gotchas rather than listing all parameters

Consider splitting detailed workflow sections into a separate reference file, keeping SKILL.md as a concise overview with the quick reference table and setup instructions

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is reasonably well-structured but quite verbose for what it conveys. Much of the parameter documentation and pitfalls sections repeat information that Claude could discover by calling RUBE_SEARCH_TOOLS (which the skill itself instructs to do first). The quick reference table at the end largely duplicates the workflow sections above it. | 2 / 3 |
| Actionability | The skill provides tool names, parameter lists, and some query syntax examples, but lacks fully executable step-by-step examples showing actual tool invocations with concrete parameter values. The monitor query syntax examples are helpful, but most workflows describe what to do abstractly rather than showing complete, copy-paste-ready tool calls. | 2 / 3 |
| Workflow Clarity | Workflows are clearly sequenced with numbered steps and labeled as Required/Optional, which is good. However, destructive operations like DELETE dashboard lack validation checkpoints or confirmation steps, and there are no feedback loops for error recovery (e.g., what to do if a monitor creation fails due to a type mismatch). Missing validation for destructive operations caps this at 2. | 2 / 3 |
| Progressive Disclosure | The content is organized with clear sections and headers, but it's a monolithic document (~200 lines) that could benefit from splitting detailed workflow sections into separate files. The quick reference table and common patterns could serve as the main SKILL.md, with workflows linked out. The external toolkit docs link is good but underutilized. | 2 / 3 |

Total: 8 / 12 (Passed)
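One way to close the error-recovery gap noted under Workflow Clarity is a validate-then-retry step around monitor creation. In this sketch the monitor-type list and the `create_monitor` callable are illustrative, not the actual tool schema:

```python
from typing import Callable

# Illustrative subset of Datadog monitor types; the authoritative list
# should come from the schema returned by RUBE_SEARCH_TOOLS.
KNOWN_MONITOR_TYPES = {"metric alert", "log alert", "query alert", "event-v2 alert"}

def create_monitor_with_recovery(
    payload: dict,
    create_monitor: Callable[[dict], dict],
    fallback_type: str = "query alert",
) -> dict:
    """Validate the monitor type up front and retry once on a type rejection.

    `create_monitor` stands in for the real monitor-creation tool call.
    """
    if payload.get("type") not in KNOWN_MONITOR_TYPES:
        payload = {**payload, "type": fallback_type}  # correct type before calling
    try:
        return create_monitor(payload)
    except ValueError:
        # Server rejected the type anyway: retry once with the fallback type
        # rather than leaving the agent with no next step.
        return create_monitor({**payload, "type": fallback_type})
```

Spelling out a feedback loop like this in the workflow steps would give agents a defined action when creation fails instead of a dead end.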

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: Lingjie-chen/MT5 (Reviewed)

