Comprehensive container image security scanning and remediation. Analyzes Docker images for OS package vulnerabilities, application dependencies, and Dockerfile best practices. Use when:

- User asks to scan a Docker image or container
- User mentions "container security" or "image vulnerabilities"
- User wants to secure a Dockerfile
- User asks about base image security
- Agent is working with Docker, Kubernetes, or container deployments
Score: 76

- Quality: 70% (does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Advisory: suggest reviewing before use

Optimize this skill with Tessl: `npx tessl skill review --optimize ./command_directives/synchronous_remediation/skills/container-security/SKILL.md`

Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly communicates what the skill does (container image security scanning and remediation across multiple dimensions) and when to use it (with five explicit trigger scenarios). It uses appropriate third-person voice, includes natural trigger terms users would actually say, and occupies a clear, distinct niche.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'Analyzes Docker images for OS package vulnerabilities, application dependencies, and Dockerfile best practices.' Also mentions remediation. These are concrete, actionable capabilities. | 3 / 3 |
| Completeness | Clearly answers both 'what' (container image security scanning and remediation, analyzing for OS package vulnerabilities, application dependencies, Dockerfile best practices) and 'when' with an explicit 'Use when:' clause listing five specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'Docker image', 'container', 'container security', 'image vulnerabilities', 'Dockerfile', 'base image security', 'Kubernetes', 'container deployments'. These are terms users would naturally use when needing this skill. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly occupies a distinct niche around container/Docker image security scanning. The specific triggers around Docker images, container security, and Dockerfile best practices are unlikely to conflict with general security or general Docker skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation: 39%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill has a well-structured workflow with clear phases and a strong verification loop, but it is far too verbose for its purpose. It inlines extensive reference material, template outputs, and multiple scenarios that should be split into separate files. Many sections explain concepts Claude already understands (Docker basics, what OS packages are, how to categorize vulnerabilities), wasting significant token budget.
Suggestions

- Cut the content by at least 50%: remove explanations of concepts Claude knows (what base images are, what app dependencies are, the categorization table in Step 3.1) and trim the Quick Start to actual tool invocations rather than abstract numbered steps.
- Extract the Base Image Quick Reference table, Common Scenarios, and Dockerfile Best Practices into separate referenced files (e.g., BASE_IMAGES.md, SCENARIOS.md, DOCKERFILE_PRACTICES.md) with one-line descriptions linking to each.
- Consolidate the remediation examples (Steps 4.1-4.4) into a single concise section with one representative example, rather than four separate template blocks with placeholder values.
- Replace the Phase 1 'Image Identification' section entirely; Claude doesn't need instructions on how to parse an image name from user input or ask clarifying questions about scan scope.
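The suggested extraction could look like the following SKILL.md fragment. The file names match the review's examples; the link descriptions are illustrative, not taken from the skill itself:

```markdown
## References

- [BASE_IMAGES.md](BASE_IMAGES.md): quick reference for choosing and pinning secure base images
- [SCENARIOS.md](SCENARIOS.md): common scanning scenarios and how to handle each
- [DOCKERFILE_PRACTICES.md](DOCKERFILE_PRACTICES.md): Dockerfile hardening best practices
```

This keeps the main document to the core workflow while the agent loads reference material only when a scenario calls for it.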
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at ~250+ lines. It over-explains concepts Claude already knows (e.g., what base images are, what OS packages vs app dependencies are, how Docker builds work). The phased structure with numbered sub-steps (1.1, 1.2, 2.1, etc.) adds significant overhead. Many sections like 'Parse User Input' and 'Determine Scan Scope' describe obvious reasoning steps Claude would naturally perform. The common scenarios section largely repeats the main workflow. | 1 / 3 |
| Actionability | The skill provides concrete tool invocations (mcp_snyk_snyk_container_scan with specific parameters) and executable Dockerfile snippets, which is good. However, much of the content is template/placeholder text (e.g., 'CVE-2024-XXXX', summary tables with X/Y/Z placeholders) rather than truly executable guidance. The Quick Start is pseudocode-like numbered steps rather than actual commands. | 2 / 3 |
| Workflow Clarity | The workflow is clearly sequenced across 5 phases with explicit validation steps (Phase 5 includes rebuild, re-scan, and comparison). There's a clear feedback loop: scan → analyze → fix → rebuild → re-scan → verify. The end-to-end example reinforces the workflow concretely. | 3 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. All content (quick reference tables, common scenarios, error handling, detailed remediation examples, Dockerfile best practices) is inlined in a single massive document. Much of this (e.g., base image quick reference, Dockerfile best practices, common scenarios) should be split into separate referenced files. | 1 / 3 |
| Total | | 7 / 12 Passed |
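The feedback loop praised under Workflow Clarity hinges on the final comparison: did the rebuild actually reduce the vulnerability count? A minimal sketch of that verification step in shell, assuming severity counts have already been extracted from the scanner's output; the `docker`/`snyk` invocations in the comments are illustrative, and `compare_scans` is a hypothetical helper, not part of the skill:

```shell
#!/bin/sh
# Sketch of the verify step in: scan -> analyze -> fix -> rebuild -> re-scan -> verify.
# In a real run the counts would come from scanner output, e.g.:
#   docker build -t myapp:patched .
#   snyk container test myapp:patched --json > after.json
# Here we only model the comparison on already-extracted critical counts.

compare_scans() {
  # $1 = critical vulnerabilities before the fix, $2 = after the rebuild
  before=$1
  after=$2
  if [ "$after" -eq 0 ]; then
    echo "PASS: all critical vulnerabilities remediated ($before -> 0)"
  elif [ "$after" -lt "$before" ]; then
    echo "PARTIAL: reduced critical vulnerabilities ($before -> $after)"
  else
    echo "FAIL: no improvement ($before -> $after)"
  fi
}
```

A PARTIAL result would send the agent back to the analyze/fix phase rather than ending the loop, which is exactly the re-scan behavior the review credits to Phase 5.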
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed; validation of the skill structure reported no warnings or errors.
Revision: `adb5a9a`