
agent-ops-docker-review

Docker image reviews, optimization, and step-building guidance. Analyzes Dockerfiles for best practices, security issues, and anti-patterns.

Score: 70

Quality: 63% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./analysis/agent-ops-docker-review/SKILL.md

Quality

Discovery: 57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description adequately identifies its domain and lists several relevant capabilities around Docker image analysis and optimization. However, it lacks an explicit 'Use when...' clause, which limits its completeness score and makes it harder for Claude to know exactly when to select this skill. Adding natural trigger terms and explicit usage guidance would meaningfully improve selection accuracy.

Suggestions

- Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to review, optimize, or troubleshoot a Dockerfile, or mentions Docker image size, build performance, or container security.'
- Include additional natural trigger terms users might say, such as 'container', 'multi-stage build', 'image size', 'docker-compose', 'layer caching', or '.dockerfile'.
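Combining both suggestions, a revised frontmatter description might look like the following sketch. This is hypothetical: the field names follow the usual SKILL.md frontmatter convention, and the exact schema used by this skill is not shown in the review.

```yaml
---
name: agent-ops-docker-review
description: >
  Docker image reviews, optimization, and step-building guidance.
  Analyzes Dockerfiles for best practices, security issues, and
  anti-patterns such as unpinned base images, bloated layers, and
  exposed secrets. Use when the user asks to review, optimize, or
  troubleshoot a Dockerfile, or mentions containers, docker-compose,
  multi-stage builds, image size, or layer caching.
---
```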

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (Docker images/Dockerfiles) and some actions (reviews, optimization, step-building guidance, analyzes for best practices/security/anti-patterns), but doesn't list multiple granular concrete actions like 'reduce image layers, pin base image versions, identify exposed secrets'. | 2 / 3 |
| Completeness | Clearly answers 'what does this do' (reviews, optimization, analyzes Dockerfiles for best practices/security/anti-patterns), but lacks an explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2. | 2 / 3 |
| Trigger Term Quality | Includes relevant keywords like 'Docker image', 'Dockerfiles', 'optimization', 'security issues', and 'best practices', but misses common user variations like 'container', 'docker-compose', '.dockerfile', 'image size', 'multi-stage build', or 'layer caching'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Docker/Dockerfile analysis is a clear niche that is unlikely to conflict with other skills. The specific mention of Dockerfiles, image optimization, and security anti-patterns makes it distinctly identifiable. | 3 / 3 |

Total: 9 / 12 (Passed)
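The granular concrete actions the Specificity row asks for (pinning base images, reducing layers, avoiding exposed secrets) can be illustrated in one place. The Dockerfile below is the editor's illustrative sketch, not an example taken from the skill itself; the digest placeholder and paths are hypothetical.

```dockerfile
# Pin the base image by digest, not a floating tag like "latest"
FROM python:3.12-slim@sha256:<digest> AS build

WORKDIR /app

# Copy the dependency manifest before the source so code edits
# don't invalidate the cached dependency layer
COPY requirements.txt .
RUN python -m venv /venv && \
    /venv/bin/pip install --no-cache-dir -r requirements.txt

COPY . .

# Never bake secrets into layers; mount them at build time instead:
# RUN --mount=type=secret,id=api_key ./configure.sh

# Multi-stage build: the final image ships only runtime artifacts
FROM python:3.12-slim@sha256:<digest>
COPY --from=build /venv /venv
COPY --from=build /app /app
USER nobody
CMD ["/venv/bin/python", "/app/main.py"]
```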

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive and highly actionable Docker review skill with excellent concrete examples, clear workflows, and good safety constraints. Its main weakness is that it's far too long for a single SKILL.md — the language templates, detailed report formats, and scan procedures should be split into referenced files to improve token efficiency and progressive disclosure. The content quality is high but the structure needs reorganization.

Suggestions

- Extract language templates (Python, Node, Go, .NET) into a separate TEMPLATES.md file and reference it from the main skill with a brief summary table
- Move the detailed report output formats (review report, scan report, optimize output) into a REPORT-FORMATS.md reference file, keeping only a brief description of each in the main skill
- Trim the mode overview table and procedures to be more concise; the current level of detail for each mode could be reduced by ~30% without losing clarity
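Taken together, the restructuring suggested above might produce a layout like the following. The file names other than SKILL.md are the review's proposed examples, not files that exist in the skill today.

```text
agent-ops-docker-review/
├── SKILL.md            # overview, mode table, links to reference files
├── TEMPLATES.md        # Python, Node, Go, .NET language templates
├── REPORT-FORMATS.md   # review, scan, and optimize output formats
└── SCAN-REPORT.md      # full scan report format and procedures
```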

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is quite long (~300 lines) with four full language templates that are largely repetitive patterns. The templates could be referenced from a separate file. However, it avoids explaining basic Docker concepts and stays focused on actionable content, so it's not egregiously verbose. | 2 / 3 |
| Actionability | Excellent actionability throughout: concrete executable Dockerfiles, specific bash commands for scanning tools, complete language-specific templates with copy-paste ready code, and specific rule IDs with clear descriptions. The before/after optimization example is particularly strong. | 3 / 3 |
| Workflow Clarity | Each mode has a clearly numbered procedure with explicit steps. The Review mode has a clear locate→analyze→report flow, Optimize builds on Review then generates a comparison, Build mode uses an interview pattern with sequential questions, and Scan mode checks prerequisites before running. The forbidden behaviors section adds important safety constraints. Validation is present (e.g., showing a diff before modifying, requiring user confirmation before docker build). | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of content with no references to external files for detailed content. The four language templates, the full scan report format, and the complete optimization output format all live inline. These should be split into separate reference files (e.g., TEMPLATES.md, SCAN-REPORT.md) with the SKILL.md serving as an overview with links. | 1 / 3 |

Total: 9 / 12 (Passed)
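The Scan mode's prerequisite checks and the "specific bash commands for scanning tools" praised above can be sketched as below. The review does not name the skill's actual tool choices, so hadolint and trivy are assumed here purely for illustration, and the image tag is hypothetical.

```shell
# Check prerequisites before scanning, as the Scan mode procedure requires
for tool in hadolint trivy; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; exit 1; }
done

# Lint the Dockerfile for best-practice violations (reported as rule IDs)
hadolint Dockerfile

# Build only after explicit user confirmation, per the skill's constraints
docker build -t myimage:dev .

# Scan the built image for known vulnerabilities
trivy image myimage:dev
```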

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |

Total: 10 / 11 (Passed)

Repository: majiayu000/claude-skill-registry-data (Reviewed)

