Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes. Always search tools first for current schemas.
Install with Tessl CLI
npx tessl i github:davepoon/buildwithclaude --skill datadog-automation69
Quality
56%
Does it follow best practices?
Impact
89%
1.28x
Average score across 3 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./plugins/all-skills/skills/datadog-automation/SKILL.md
Discovery
50%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at listing specific Datadog capabilities and is clearly distinguishable due to explicit platform naming. However, it critically lacks any 'Use when...' guidance, which is essential for Claude to know when to select this skill. The trigger terms are adequate but could include more user-facing vocabulary.
Suggestions
- Add a 'Use when...' clause with explicit triggers, e.g. 'Use when the user asks about Datadog monitoring, APM, log analysis, or observability dashboards.'
- Include additional natural trigger terms users might say: 'alerts', 'APM', 'observability', 'monitoring', 'incident management'.
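A hedged sketch of how the frontmatter description might read with an explicit trigger clause added (the wording below is illustrative, combining the skill's existing description with the suggested triggers, and is not the skill's actual frontmatter):

```yaml
---
name: datadog-automation
description: >
  Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs,
  manage monitors/dashboards, create events and downtimes. Use when the user
  asks about Datadog monitoring, alerts, APM, log analysis, observability
  dashboards, or incident management. Always search tools first for current
  schemas.
---
```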
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'query metrics, search logs, manage monitors/dashboards, create events and downtimes.' These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Describes what the skill does well, but completely lacks a 'Use when...' clause or any explicit trigger guidance. The rubric states that missing trigger guidance caps completeness at 2, and this description has no "when" guidance at all. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Datadog', 'metrics', 'logs', 'monitors', 'dashboards', 'events', and 'downtimes', but misses common variations users might say, such as 'alerts', 'APM', 'observability', or 'monitoring'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche, with 'Datadog' and 'Rube MCP (Composio)' as distinct identifiers. Unlikely to conflict with other monitoring or logging skills thanks to the explicit platform mention. | 3 / 3 |
| Total | 9 / 12 | Passed |
Implementation
62%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with excellent workflow clarity and good organization. The main weaknesses are the lack of executable code examples (only query syntax snippets are provided) and the document's length, which would benefit from progressive disclosure into separate files. The pitfalls sections add genuine value for avoiding common mistakes.
Suggestions
- Add complete, executable tool invocation examples showing full parameter objects for at least one workflow (e.g., a complete DATADOG_CREATE_MONITOR call with all required fields).
- Consider splitting the Core Workflows section into separate files (e.g., MONITORS.md, DASHBOARDS.md), keeping only brief summaries with links in the main SKILL.md.
- Remove either the Quick Reference table or the detailed parameter lists in Core Workflows; having both is redundant.
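To make the first suggestion concrete, here is a minimal sketch of what a full DATADOG_CREATE_MONITOR parameter object could look like. The field names mirror Datadog's public create-monitor API (`name`, `type`, `query`, `message`, `tags`, `options`); the exact schema exposed through Rube MCP should still be confirmed by searching tools first, as the skill itself advises, and the metric, service, and notification handle below are purely illustrative.

```python
import json

# Illustrative DATADOG_CREATE_MONITOR payload. Field names follow Datadog's
# public create-monitor API; values (metric, service, @slack handle) are
# made up for the example.
payload = {
    "name": "High error rate on checkout service",
    "type": "metric alert",
    "query": "avg(last_5m):sum:trace.http.request.errors{service:checkout} > 10",
    "message": "Error rate exceeded threshold. Notify @slack-oncall.",
    "tags": ["service:checkout", "team:payments"],
    "options": {
        "thresholds": {"critical": 10, "warning": 5},
        "notify_no_data": False,
    },
}

# Serialize for the tool call body.
print(json.dumps(payload, indent=2))
```

Including one worked payload like this alongside the parameter descriptions is usually enough for an agent to generalize to the other workflows.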
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Reasonably efficient, but includes some redundancy: the Quick Reference table duplicates information already covered in the Core Workflows sections. Some explanations, such as 'Muting a monitor suppresses notifications but the monitor still evaluates', are useful context Claude might not know, but the overall document could be tightened. | 2 / 3 |
| Actionability | Provides tool names and parameter lists, but lacks executable code examples. The 'Common Patterns' section shows query syntax snippets but not complete tool invocations. Users would benefit from concrete JSON payloads or full tool call examples rather than parameter descriptions alone. | 2 / 3 |
| Workflow Clarity | Excellent workflow structure, with clear numbered sequences, explicit [Required] vs. [Optional] markers, and well-documented pitfalls for each workflow. The Setup section includes a validation checkpoint (confirm ACTIVE status before proceeding), and each workflow clearly indicates when to use it and the tool sequence. | 3 / 3 |
| Progressive Disclosure | Well organized with clear sections, but quite long (200+ lines); detailed workflow sections could be split into separate files. The Quick Reference table at the end is good, but the Core Workflows section is dense and could be externalized with links from a leaner overview. | 2 / 3 |
| Total | 9 / 12 | Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to `metadata` | Warning |
| Total | 10 / 11 Passed | |
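One common way the frontmatter warning above gets resolved is to nest the unrecognized keys under `metadata` rather than leaving them at the top level. The report does not show which keys triggered the warning, so the `author` field below is hypothetical:

```yaml
---
name: datadog-automation
description: "Automate Datadog tasks via Rube MCP (Composio): query metrics, search logs, manage monitors/dashboards, create events and downtimes."
# Hypothetical unknown key, moved from the top level into `metadata`:
metadata:
  author: davepoon
---
```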
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.