
quality-gate-size-analysis

Analyze static quality gate on-disk size changes, correlate with Confluence exception records and GitHub PRs by milestone

Overall score: 65

Quality: 57% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is highly specific and distinctive, naming concrete actions and a unique combination of tools (quality gates, Confluence, GitHub PRs). However, it critically lacks a 'Use when...' clause, making it unclear when Claude should select this skill. The trigger terms are domain-specific jargon that may not match how users naturally phrase requests.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about binary size regressions, quality gate failures related to size, or needs to cross-reference size changes with Confluence exceptions or GitHub milestones.'

Include natural language trigger variations users might say, such as 'size regression', 'build size', 'binary bloat', 'size budget', or 'artifact size tracking'.
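Concretely, both suggestions could land in the skill's frontmatter description. A hypothetical sketch (the exact trigger wording is the author's call):

```yaml
---
name: quality-gate-size-analysis
description: >
  Analyze static quality gate on-disk size changes, correlating them with
  Confluence exception records and GitHub PRs by milestone. Use when the
  user asks about binary size regressions, build size, binary bloat, size
  budget violations, or artifact size tracking, or needs size changes
  cross-referenced with Confluence exceptions or GitHub milestones.
---
```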

Specificity: 3 / 3
Lists multiple specific concrete actions: analyzing static quality gate on-disk size changes, correlating with Confluence exception records, and correlating with GitHub PRs by milestone. These are distinct, concrete operations.

Completeness: 1 / 3
Describes what the skill does but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and since the 'when' is entirely absent, this scores a 1.

Trigger Term Quality: 2 / 3
Contains some relevant domain-specific keywords like 'quality gate', 'on-disk size', 'Confluence', 'GitHub PRs', 'milestone', but these are fairly technical terms. Missing common user-facing variations or simpler trigger phrases a user might naturally say (e.g., 'size regression', 'binary size', 'build size tracking').

Distinctiveness / Conflict Risk: 3 / 3
Highly specific niche combining static quality gates, on-disk size changes, Confluence exception records, and GitHub PRs by milestone. This very particular combination of tools and concerns makes it extremely unlikely to conflict with other skills.

Total: 9 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, domain-specific skill that provides highly actionable guidance with concrete metric names, queries, CLI commands, and specific identifiers (folder IDs, tag values). Its main weaknesses are the lack of explicit validation checkpoints between steps (important given the multi-source correlation workflow) and the monolithic structure that could benefit from supporting reference files for detailed extraction rules and report templates.
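One way to act on that observation is a split into supporting reference files, sketched here as a hypothetical bundle layout (file names invented for illustration):

```
.claude/skills/quality-gate-size-analysis/
├── SKILL.md                       # workflow overview and step sequencing
└── references/
    ├── confluence-extraction.md   # detailed field extraction rules
    └── report-template.md         # full report template specification
```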

Suggestions

Add explicit validation checkpoints between steps — e.g., 'Verify at least N metric data points returned before proceeding' or 'Confirm exception page count matches expected range' — to catch data quality issues early in the pipeline.

Add a feedback loop for the correlation step: if a metric spike has no matching PR within the expected date range, expand the search window and document the mismatch before flagging it as a gap.

Conciseness: 2 / 3
The skill is reasonably efficient and provides domain-specific details Claude wouldn't know (metric names, tag filters, Confluence folder IDs, CQL syntax). However, some sections are slightly verbose — e.g., the introductory sentence repeats the title, and some explanatory text could be trimmed. Overall mostly efficient but not maximally lean.

Actionability: 3 / 3
The skill provides concrete, executable commands (Datadog metric queries, CQL queries, gh CLI commands with exact flags and JSON fields), specific metric names, tag values, folder IDs, and field names to extract from Confluence pages. The guidance is highly specific and copy-paste ready.

Workflow Clarity: 2 / 3
The five-step workflow is clearly sequenced and logically ordered (query metrics → fetch exceptions → pull PRs → correlate → report). However, there are no explicit validation checkpoints or feedback loops — e.g., no step to verify Datadog query results before proceeding, no error handling for missing Confluence pages or PRs without milestones. The 'Identify gaps' section in Step 4c partially serves as validation but is positioned as report content rather than a verification gate.

Progressive Disclosure: 2 / 3
The content is well-structured with clear headers and sub-sections, making it navigable. However, at ~120 lines it's a substantial single file with no references to supporting documents. The detailed Confluence field extraction instructions and the full report template specification could potentially be split into separate reference files, but given no bundle files exist, the monolithic approach is the only option. The organization within the file is good but could benefit from separation.

Total: 9 / 12 (Passed)
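As a concrete illustration of the copy-paste-ready style the reviewer describes, here is a hedged sketch of consuming `gh pr list --json number,title,mergedAt,milestone` output for the correlation step. The sample record is invented, and the skill's exact flags and fields may differ:

```python
import json
from datetime import datetime

# Invented sample of gh's JSON output; a real run would pipe in something like
# `gh pr list --state merged --json number,title,mergedAt,milestone`.
raw = """[
  {"number": 123, "title": "Trim embedded Python stdlib",
   "mergedAt": "2024-05-10T12:00:00Z", "milestone": {"title": "7.55.0"}}
]"""

prs = [
    {
        "number": pr["number"],
        "title": pr["title"],
        # normalize the ISO timestamp so it can be compared against metric spikes
        "merged_at": datetime.fromisoformat(pr["mergedAt"].replace("Z", "+00:00")),
        "milestone": pr["milestone"]["title"] if pr["milestone"] else None,
    }
    for pr in json.loads(raw)
]
```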

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria: frontmatter_unknown_keys
Description: Unknown frontmatter key(s) found; consider removing or moving to metadata
Result: Warning

Total: 10 / 11 (Passed)

Repository: DataDog/datadog-agent (Reviewed)
