Complete Dockerfile toolkit with generation and validation capabilities
Generates production-ready Dockerfiles with security, optimization, and best practices built-in: multi-stage builds, security-hardened configurations, optimized layer structures, automatic validation, and iterative error fixing.
Use for creating, generating, or optimizing Dockerfiles and containerizing applications. Do not use for validating existing Dockerfiles (use devops-skills:dockerfile-validator), building/running containers, debugging running containers, or managing image registries.
Follow this workflow when generating Dockerfiles. Adapt based on user needs:
Objective: Understand what needs to be containerized and gather all necessary information.
Information to Collect (use AskUserQuestion if missing or unclear):
Objective: Research framework-specific containerization patterns and best practices.
When to Perform This Stage:
Research Process:
Try context7 MCP first (preferred):
Use mcp__context7__resolve-library-id with the framework name
Examples:
- "next.js" for Next.js applications
- "django" for Django applications
- "fastapi" for FastAPI applications
- "spring-boot" for Spring Boot applications
- "express" for Express.js applications
Then use mcp__context7__get-library-docs with:
- context7CompatibleLibraryID from resolve step
- topic: "docker deployment production build"
- page: 1 (fetch additional pages if needed)

Fallback to WebSearch if context7 fails:
Search query pattern:
"<framework>" "<version>" dockerfile best practices production 2025
Examples:
- "Next.js 14 dockerfile best practices production 2025"
- "FastAPI dockerfile best practices production 2025"
- "Spring Boot 3 dockerfile best practices production 2025"

Extract key information:
Objective: Create a production-ready, multi-stage Dockerfile following best practices.
Core Principles:
Multi-Stage Builds (REQUIRED for compiled languages, RECOMMENDED for all):
Security Hardening (REQUIRED):
Layer Optimization (REQUIRED):
Production Readiness (REQUIRED):
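The principles above can be sketched as a minimal multi-stage Dockerfile. This is an illustrative example for a hypothetical Node.js app; the file names (`dist/server.js`), port 3000, and `/health` endpoint are assumptions, not part of this skill's templates:

```dockerfile
# syntax=docker/dockerfile:1

# --- Build stage: full toolchain, all dependencies ---
FROM node:20-alpine AS build
WORKDIR /app
# Copy dependency manifests first so this layer is cached until they change
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Runtime stage: production dependencies only, non-root user ---
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev \
    && addgroup -S appgroup && adduser -S appuser -G appgroup \
    && chown -R appuser:appgroup /app
COPY --from=build /app/dist ./dist
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
    CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/server.js"]
```

The build stage carries the compiler and dev dependencies; the runtime stage copies only the built artifacts, which is where the size reduction comes from.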
Language-Specific Templates:
For detailed templates and examples, see:
- references/language_specific_guides.md - Node.js section
- references/language_specific_guides.md - Python section
- references/language_specific_guides.md - Go section
- references/language_specific_guides.md - Java section

Quick Template Selection:
Always Include:
# syntax=docker/dockerfile:1

Objective: Create comprehensive .dockerignore to reduce build context and prevent secret leaks.
Always create .dockerignore with generated Dockerfile.
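Mechanically, pairing a base template with language-specific entries can be sketched as follows. This is a hypothetical helper, not part of the skill; the template below is the authoritative content, and the extras mirror the "customize based on language" list:

```python
# Hypothetical helper: merge a base .dockerignore template with
# language-specific entries, skipping lines already present.
LANGUAGE_EXTRAS = {
    "node": ["node_modules/", "npm-debug.log", "yarn-error.log"],
    "python": ["__pycache__/", "*.pyc", ".venv/", ".pytest_cache/"],
    "go": ["vendor/", "*.exe", "*.test"],
    "java": ["target/", "*.class", "*.jar"],
}

def build_dockerignore(base_lines, language):
    extras = LANGUAGE_EXTRAS.get(language, [])
    return base_lines + [e for e in extras if e not in base_lines]

print(build_dockerignore([".git", "*.log", "node_modules/"], "node"))
# ['.git', '*.log', 'node_modules/', 'npm-debug.log', 'yarn-error.log']
```
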
Standard .dockerignore Template:
# Git
.git
.gitignore
.gitattributes
# CI/CD
.github
.gitlab-ci.yml
.travis.yml
.circleci
# Documentation
README.md
CHANGELOG.md
CONTRIBUTING.md
LICENSE
*.md
docs/
# Docker
Dockerfile*
docker-compose*.yml
.dockerignore
# Environment
.env
.env.*
*.local
# Logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Dependencies (language-specific - add as needed)
node_modules/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
venv/
.venv/
target/
*.class
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
# Testing
coverage/
.coverage
*.cover
.pytest_cache/
.tox/
test-results/
# Build artifacts
dist/
build/
*.egg-info/

Customize based on language:
- Node.js: node_modules/, npm-debug.log, yarn-error.log
- Python: __pycache__/, *.pyc, .venv/, .pytest_cache/
- Go: vendor/, *.exe, *.test
- Java: target/, *.class, *.jar (except final artifact)

Objective: Ensure generated Dockerfile follows best practices and has no issues.
REQUIRED: Always validate after generation.
Validation Process:
Invoke devops-skills:dockerfile-validator skill:
Use the Skill tool to invoke devops-skills:dockerfile-validator
This will run:
- hadolint (syntax and best practices)
- Checkov (security scanning)
- Custom validation (layer optimization, etc.)

Parse validation results:
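As a hedged sketch of parsing those results: hadolint, run with `--format json`, emits a JSON array of findings with `code`, `level`, and `message` fields (severity levels are error, warning, info, style). The sample findings below are illustrative, not real hadolint output:

```python
import json

def summarize_findings(report_json):
    """Count hadolint findings per severity level."""
    counts = {"error": 0, "warning": 0, "info": 0, "style": 0}
    for finding in json.loads(report_json):
        level = finding.get("level", "info")
        counts[level] = counts.get(level, 0) + 1
    return counts

# Illustrative report shaped like `hadolint --format json Dockerfile` output
sample_report = json.dumps([
    {"code": "DL3006", "level": "error",
     "message": "Always tag the version of an image explicitly"},
    {"code": "DL3059", "level": "warning",
     "message": "Multiple consecutive RUN instructions"},
])
print(summarize_findings(sample_report))
# {'error': 1, 'warning': 1, 'info': 0, 'style': 0}
```

A non-zero error count means the next stage (iteration) is mandatory; warnings alone are advisory.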
Expected validation output:
[1/4] Syntax Validation (hadolint)
[2/4] Security Scan (Checkov)
[3/4] Best Practices Validation
[4/4] Optimization Analysis

Objective: Fix any validation errors and re-validate.
REQUIRED: Iterate at least ONCE if validation finds errors.
Iteration Process:
If validation finds errors:
If validation finds warnings:
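As a hedged sketch of this validate-fix-revalidate loop in Python: `run_validator` and `apply_fixes` are hypothetical stand-ins for invoking devops-skills:dockerfile-validator and editing the file, not the skill's actual control flow:

```python
def iterate_until_clean(dockerfile, run_validator, apply_fixes, max_rounds=3):
    """Validate, fix reported errors, and re-validate, up to max_rounds times."""
    findings = run_validator(dockerfile)
    for round_no in range(1, max_rounds + 1):
        errors = [f for f in findings if f["level"] == "error"]
        if not errors:
            # Clean: warnings may remain, but nothing blocks using the Dockerfile
            return round_no, dockerfile
        dockerfile = apply_fixes(dockerfile, errors)
        findings = run_validator(dockerfile)
    raise RuntimeError(f"Still failing after {max_rounds} iterations")

# Toy demonstration mirroring the example iteration below:
# one DL3006-style error, fixed by pinning the base image tag.
def fake_validator(text):
    return ([{"code": "DL3006", "level": "error"}]
            if "node:alpine" in text else [])

def fake_fixes(text, errors):
    return text.replace("node:alpine", "node:20-alpine")

rounds, fixed = iterate_until_clean("FROM node:alpine", fake_validator, fake_fixes)
print(rounds, fixed)  # 2 FROM node:20-alpine
```
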
Common fixes:
Example iteration:
Iteration 1:
- Error: DL3006 - Missing version tag
- Fix: Change FROM node:alpine to FROM node:20-alpine
- Re-validate
Iteration 2:
- Warning: DL3059 - Multiple consecutive RUN commands
- Fix: Combine RUN commands with &&
- Re-validate
Iteration 3:
- All checks passed ✓

Objective: Provide comprehensive summary and next steps.
Deliverables:
Generated Files:
Validation Summary:
Usage Instructions:
# Build the image
docker build -t myapp:1.0 .
# Run the container
docker run -p 3000:3000 myapp:1.0
# Test health check (if applicable)
curl http://localhost:3000/health

Optimization Metrics (REQUIRED - provide explicit estimates):
Always include a summary like this:
## Optimization Metrics
| Metric | Estimate |
|--------|----------|
| Image Size | ~150MB (vs ~500MB without multi-stage, 70% reduction) |
| Build Cache | Layer caching enabled for dependencies |
| Security | Non-root user, minimal base image, no secrets |

Language-specific size estimates:
Next Steps (REQUIRED - always include as bulleted list):
Always provide explicit next steps:
## Next Steps
- [ ] Test the build locally: `docker build -t myapp:1.0 .`
- [ ] Run and verify the container works as expected
- [ ] Update CI/CD pipeline to use the new Dockerfile
- [ ] Consider BuildKit cache mounts for faster builds (see Modern Docker Features)
- [ ] Set up automated vulnerability scanning with `docker scout` or `trivy`
- [ ] Add to container registry and deploy

The scripts/ directory contains standalone bash scripts for manual Dockerfile generation outside of this skill:
- generate_nodejs.sh - CLI tool for Node.js Dockerfiles
- generate_python.sh - CLI tool for Python Dockerfiles
- generate_golang.sh - CLI tool for Go Dockerfiles
- generate_java.sh - CLI tool for Java Dockerfiles
- generate_dockerignore.sh - CLI tool for .dockerignore generation

Purpose: These scripts are reference implementations and manual tools for users who want to generate Dockerfiles via the command line without using Claude Code. They demonstrate the same best practices embedded in this skill.
When using this skill: Claude generates Dockerfiles directly using the templates and patterns documented in this skill.md, rather than invoking these scripts. The templates in this document are the authoritative source.
Script usage example:
# Manual Dockerfile generation
cd .claude/skills/dockerfile-generator/scripts
./generate_nodejs.sh --version 20 --port 3000 --output Dockerfile

For detailed best practices, see the reference documentation:
- references/security_best_practices.md - Non-root users, minimal images, secrets management
- references/optimization_patterns.md - Layer caching, multi-stage builds, BuildKit features
- references/multistage_builds.md - Comprehensive multi-stage patterns

Quick Security Checklist:
- Pin exact base image versions (e.g. node:20-alpine, not node:latest)

Quick Optimization Checklist:
For complete pattern examples, see references/language_specific_guides.md:
For BuildX multi-platform builds, SBOM generation, and BuildKit cache mounts, see references/modern_docker_features.md.
Missing dependency files:
Unknown framework:
Validation failures:
This skill works well in combination with:
Common anti-patterns and how to fix them:

- Using `latest` as the base image tag. `FROM node:latest` resolves to a different image on each build, producing non-reproducible images and silently pulling in breaking changes or vulnerabilities.
  - Bad: `FROM node:latest`
  - Good: `FROM node:20.18-alpine3.20` (specific version and variant)
- Splitting related commands across separate `RUN` layers. Each `RUN` creates a new layer; separate `apt-get update` and `apt-get install` layers can leak stale package lists and cache files into the image, increasing size unnecessarily.
  - Bad: `RUN apt-get update`, `RUN apt-get install -y curl`, and `RUN rm -rf /var/lib/apt/lists/*` as three separate layers
  - Good: `RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*`
- Missing `USER` instruction (defaults to root).
  - Fix: `RUN addgroup -S appgroup && adduser -S appuser -G appgroup` followed by `USER appuser`
- Copying all source code before installing dependencies. `npm install` or `pip install` busts the layer cache on every code change, making builds slower than necessary.
  - Bad: `COPY . .` followed by `RUN npm ci`
  - Good: `COPY package*.json ./`, then `RUN npm ci`, then `COPY . .` (the dependency layer stays cached until package.json changes)
- Using `ADD` when `COPY` is sufficient. `ADD` has implicit behaviors (auto-extracting tarballs, fetching URLs) that make Dockerfiles harder to reason about; `COPY` is explicit and predictable.
  - Bad: `ADD app.tar.gz /app/`
  - Good: `COPY app.tar.gz /app/` and explicitly extract if needed: `RUN tar -xzf /app/app.tar.gz -C /app/`
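Putting the fixes together, a snippet avoiding all five anti-patterns might look like the following. This is an illustrative sketch for a hypothetical Debian-based Node.js image; the package (`curl`) and entrypoint (`server.js`) are assumptions:

```dockerfile
# syntax=docker/dockerfile:1
# Pinned tag, not :latest
FROM node:20-bookworm-slim

# One consolidated RUN: update, install, and clean up in a single layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# Dependency manifests before source, so the npm ci layer stays cached
COPY package*.json ./
RUN npm ci --omit=dev
# COPY, not ADD, for local files
COPY . .

# Drop root before running the app (Debian uses groupadd/useradd,
# unlike the Alpine addgroup/adduser shown above)
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
USER appuser
CMD ["node", "server.js"]
```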