Create a new Go core check that collects metrics and sends them to Datadog
Score: 62
Quality: 55% (does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Validation: Passed (no known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.claude/skills/create-core-check/SKILL.md

Quality
Discovery: 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific technology stack (Go + Datadog) and a general action (creating a core check that collects metrics), but it lacks explicit trigger guidance ('Use when...'), comprehensive action details, and sufficient keyword coverage. It reads more like a task summary than a skill description optimized for selection among many skills.
Suggestions
- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to create a new Datadog Agent core check, scaffold a Go-based metric collector, or add a custom integration to the Datadog Agent.'
- List more specific concrete actions, such as 'scaffolds Go check boilerplate, configures check YAML, registers metrics, writes unit tests, and integrates with the Datadog Agent.'
- Include natural trigger-term variations like 'Datadog Agent', 'custom check', 'golang check', 'agent integration', and 'metric collector' to improve keyword coverage. A combined rewrite is sketched after this list.
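Put together, those suggestions could yield frontmatter along these lines. Illustrative wording only: the skill name is taken from the optimize command's path above; the description text is a proposed rewrite, not the skill's actual file.

```yaml
---
name: create-core-check
description: >
  Create a new Go core check for the Datadog Agent that collects metrics and
  sends them to Datadog. Scaffolds the Go check boilerplate, configures the
  check YAML, registers metrics, writes unit tests, and integrates the check
  with the Agent. Use when the user asks to create a Datadog Agent core check,
  scaffold a golang metric collector, or add a custom check or agent
  integration.
---
```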
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Go core check, Datadog) and a couple of actions (collects metrics, sends them), but doesn't list multiple specific concrete actions like scaffolding files, configuring YAML, or writing tests. | 2 / 3 |
| Completeness | Describes what the skill does but has no explicit 'Use when...' clause or equivalent trigger guidance; per the rubric, a missing 'Use when' clause caps completeness at 2, and the 'what' itself is also thin, so this lands at 1. | 1 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Go', 'core check', 'metrics', and 'Datadog' that users might say, but misses common variations like 'agent check', 'custom check', 'Datadog Agent', 'golang', or 'metric collection'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Go core check' and 'Datadog' provides some specificity, but it could overlap with other Datadog-related skills (e.g., Python checks, integration checks, general Datadog configuration) since the scope isn't precisely delineated. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill with excellent actionability and workflow clarity. It provides a clear 7-step process for creating Datadog core checks, with smart use of codebase references as living documentation. The main weaknesses are moderate verbosity (the Sender Methods Reference and Important Notes sections could be externalized) and lack of bundle files to support progressive disclosure for a skill of this length (~150 lines).
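For orientation, this is the shape of check that workflow produces. A minimal sketch following the classic datadog-agent corechecks pattern; exact import paths and the registration hook vary across Agent versions (newer Agents wire factories through the component system), so the skill's own codebase references remain the source of truth.

```go
package examplecheck

import (
	"github.com/DataDog/datadog-agent/pkg/collector/check"
	core "github.com/DataDog/datadog-agent/pkg/collector/corechecks"
)

const checkName = "example"

// ExampleCheck embeds CheckBase, which provides the check ID, interval
// handling, and configuration plumbing common to all core checks.
type ExampleCheck struct {
	core.CheckBase
}

// Run is called by the collector on every check interval.
func (c *ExampleCheck) Run() error {
	// Older Agents fetch the sender via aggregator.GetSender(c.ID()) instead.
	sender, err := c.GetSender()
	if err != nil {
		return err
	}
	// Gauge args: metric name, value, hostname ("" = agent host), tags.
	sender.Gauge("example.metric", 1.0, "", []string{"env:dev"})
	sender.Commit() // nothing reaches the aggregator until Commit
	return nil
}

func newCheck() check.Check {
	return &ExampleCheck{CheckBase: core.NewCheckBase(checkName)}
}

func init() {
	// Classic registration hook; newer Agents register factories through
	// the component/fx system described in the skill's references.
	core.RegisterCheck(checkName, newCheck)
}
```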
Suggestions
- Move the Sender Methods Reference table and the Important Notes section into a separate bundle file (e.g., REFERENCE.md) and link to it from the main skill to improve progressive disclosure and reduce token usage. A sketch of the material such a reference would cover follows this list.
- Remove or condense the opening paragraph explaining what core checks are; Claude can infer this from the instructions themselves.
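To make the externalization suggestion concrete, the substance of a Sender Methods Reference is the Agent's submission API. A hedged sketch: method names and the (name, value, hostname, tags) shape match the Agent's sender.Sender interface, but the import paths shown here have moved between Agent versions.

```go
package examplecheck

import (
	"github.com/DataDog/datadog-agent/pkg/aggregator/sender"
	"github.com/DataDog/datadog-agent/pkg/metrics/servicecheck"
)

// submitExamples shows the common submission methods. An empty hostname
// means "the agent's own host".
func submitExamples(s sender.Sender) {
	s.Gauge("myapp.queue.depth", 12, "", []string{"queue:default"})
	s.Count("myapp.requests", 42, "", nil)              // delta since last run
	s.Rate("myapp.errors", 3, "", nil)                  // converted to per-second
	s.MonotonicCount("myapp.bytes.sent", 4096, "", nil) // ever-increasing counter
	s.Histogram("myapp.latency", 0.250, "", nil)        // statistical distribution
	s.ServiceCheck("myapp.can_connect", servicecheck.ServiceCheckOK, "", nil, "")
	s.Commit()
}
```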
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient and well structured, but includes some information Claude already knows (e.g., an explanation of what core checks are, and a Sender Methods Reference table that could be discovered from code), and the 'Important Notes' section partially restates what's already in the step-by-step instructions. The reference table and notes section add ~40 lines that could be trimmed or moved to a bundle file. | 2 / 3 |
| Actionability | The skill provides highly concrete guidance: specific file paths to read as references, exact directory structures, specific commands for testing/building/linting, a clear table mapping check types to reference files, and concrete test-flow steps with mock sender patterns. The instructions tell Claude to read actual codebase files and follow their patterns, which is the most actionable approach for a codebase-specific skill. | 3 / 3 |
| Workflow Clarity | The 7-step workflow is clearly sequenced with logical dependencies (gather info → read references → create check → register → config → test → verify). Step 7 includes explicit verification with three distinct validation commands (test, build, lint) and instructs reporting results back to the user. The multi-instance BuildID ordering constraint and platform stub requirements are called out as explicit checkpoints. | 3 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with no bundle files for supporting content. The Sender Methods Reference table and Important Notes section could be split into separate reference files. The skill references codebase files effectively as external references, but the inline reference material makes the SKILL.md longer than necessary. For a skill of this complexity, having at least the API reference in a separate bundle file would improve organization. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
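The "mock sender patterns" credited above typically look like the following. A sketch assuming the Agent's pkg/aggregator/mocksender helper and the ExampleCheck from the earlier snippet; helper names and paths may differ slightly by Agent version.

```go
package examplecheck

import (
	"testing"

	"github.com/DataDog/datadog-agent/pkg/aggregator/mocksender"
)

func TestExampleCheckRun(t *testing.T) {
	c := newCheck().(*ExampleCheck)

	// NewMockSender registers a testify-based mock for this check's ID,
	// so the check's GetSender call returns the mock, not a real sender.
	mock := mocksender.NewMockSender(c.ID())
	mock.SetupAcceptAll() // accept any submission without expectations

	if err := c.Run(); err != nil {
		t.Fatalf("Run returned error: %v", err)
	}

	// Assert the metric was submitted with the expected arguments,
	// and that the run was committed exactly once.
	mock.AssertMetric(t, "Gauge", "example.metric", 1.0, "", []string{"env:dev"})
	mock.AssertNumberOfCalls(t, "Commit", 1)
}
```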
Validation: 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure: 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 9 / 11 (Passed) |
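Both warnings are frontmatter fixes. A hypothetical sketch of the repair, since the report doesn't show which keys or tool names triggered them: keep only spec-defined keys at the top level, park custom keys under metadata, and limit allowed-tools to names the target agent actually exposes.

```yaml
---
name: create-core-check
description: Create a new Go core check that collects metrics and sends them to Datadog.
# Custom keys such as these (hypothetical examples) belong under `metadata`,
# not at the top level of the frontmatter:
metadata:
  author: platform-team
  category: datadog-agent
# List only tools the agent recognizes (e.g., Read, Grep, Glob, Bash);
# unusual or misspelled names trigger the allowed_tools_field warning.
allowed-tools: Read, Grep, Glob, Bash
---
```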