Complete terraform toolkit with generation and validation capabilities
This skill generates production-ready Terraform configurations that follow best practices and current standards. It automatically integrates validation and documentation lookup for custom providers and modules.
Work through these steps in order. Do not skip any step.

## Step 1: Analyze the Request

Analyze the user's request to determine:
- **Standard providers** (no lookup needed): e.g., `aws`, `azurerm`, `google`, `kubernetes`
- **Custom/third-party providers/modules** (require documentation lookup): e.g., `datadog`, community registry modules
## Step 2: Documentation Lookup (Custom Providers Only)

When custom providers/modules are detected:
Use WebSearch to find version-specific documentation:

- Search query format: `"[provider/module name] terraform [version] documentation [specific resource]"`
- Example: `"datadog terraform provider v3.30 monitor resource documentation"`
- Example: `"terraform-aws-modules vpc version 5.0 documentation"`

Focus searches on official documentation (registry.terraform.io, provider websites): required/optional arguments, attribute references, example usage, version compatibility notes.
If Context7 MCP is available and the provider/module is supported, use it as an alternative: `mcp__context7__resolve-library-id` → `mcp__context7__get-library-docs`

## Step 3: Generate Configuration

Before generating configuration, read the relevant reference files:

```
Read(file_path: ".claude/skills/terraform-generator/references/terraform_best_practices.md")
Read(file_path: ".claude/skills/terraform-generator/references/provider_examples.md")
```

When to consult each reference:
| Reference | Read When |
|---|---|
| `terraform_best_practices.md` | Always - contains required patterns |
| `common_patterns.md` | Multi-environment, workspace, or complex setups |
| `provider_examples.md` | Generating AWS, Azure, GCP, or K8s resources |
| `modern_features.md` | Using Terraform 1.8+ features (ephemeral resources, write-only args, actions, etc.) |
File Organization:

```
terraform-project/
├── main.tf            # Primary resource definitions
├── variables.tf       # Input variable declarations
├── outputs.tf         # Output value declarations
├── versions.tf        # Provider version constraints
├── terraform.tfvars   # Variable values (optional, for examples)
└── backend.tf         # Backend configuration (optional)
```

Best Practices to Follow:
Provider Configuration:

```hcl
terraform {
  required_version = ">= 1.10, < 2.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0" # Latest: v6.23.0 (Dec 2025)
    }
  }
}

provider "aws" {
  region = var.aws_region
}
```

Resource Naming: Use descriptive snake_case names; include the resource type in the name when helpful.
```hcl
resource "aws_instance" "web_server" { ... }
```

Variable Declarations:
```hcl
variable "instance_type" {
  description = "EC2 instance type for web servers"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small", "t3.medium"], var.instance_type)
    error_message = "Instance type must be t3.micro, t3.small, or t3.medium."
  }
}
```

Output Values:
```hcl
output "instance_public_ip" {
  description = "Public IP address of the web server"
  value       = aws_instance.web_server.public_ip
}
```

Use Data Sources for References:
```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}
```

Module Usage:
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.0.0"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24"]
}
```

Use locals for Computed Values:
```hcl
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Project     = var.project_name
  }
}
```

Lifecycle Rules When Appropriate:
```hcl
resource "aws_instance" "example" {
  lifecycle {
    create_before_destroy = true
    prevent_destroy       = true
    ignore_changes        = [tags]
  }
}
```

Dynamic Blocks for Repeated Configuration:
```hcl
resource "aws_security_group" "example" {
  dynamic "ingress" {
    for_each = var.ingress_rules

    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```

Comments and Documentation: Add comments explaining complex logic, document why certain values are used, and include examples in variable descriptions.
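For instance, a well-documented variable might look like this (a minimal sketch; the variable name and values are illustrative):

```hcl
variable "allowed_cidr_blocks" {
  description = <<-EOT
    CIDR blocks allowed to reach the load balancer.
    Example: ["10.0.0.0/16", "192.168.1.0/24"]
  EOT
  type        = list(string)

  # Default to the internal network range; override per environment.
  default = ["10.0.0.0/16"]
}
```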
Security Best Practices:

Always use data sources for dynamic infrastructure values instead of hardcoding:

| Use Case | Data Source | Example |
|---|---|---|
| Current region | `data "aws_region" "current" {}` | `data.aws_region.current.name` |
| Current account | `data "aws_caller_identity" "current" {}` | `data.aws_caller_identity.current.account_id` |
| Available AZs | `data "aws_availability_zones" "available" {}` | `data.aws_availability_zones.available.names` |
| Latest AMI | `data "aws_ami" "..."` | With `most_recent = true` and filters |
| Existing VPC | `data "aws_vpc" "..."` | Reference existing infrastructure |
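In combination, these data sources let generated configuration avoid hardcoded regions and account IDs entirely. A short sketch (the log-group path is illustrative):

```hcl
data "aws_region" "current" {}
data "aws_caller_identity" "current" {}

# Build an ARN without hardcoding the region or account ID.
locals {
  log_group_arn = "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:/app/*"
}
```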
Add `prevent_destroy = true` on resources that could cause data loss or service disruption if accidentally destroyed:

```hcl
# KMS Keys - protect from deletion
resource "aws_kms_key" "encryption" {
  lifecycle { prevent_destroy = true }
}

# Databases - protect from deletion
resource "aws_db_instance" "main" {
  lifecycle { prevent_destroy = true }
}

# S3 Buckets with data - protect from deletion
resource "aws_s3_bucket" "data" {
  lifecycle { prevent_destroy = true }
}
```

Resources that must have `prevent_destroy = true`:

- KMS keys (`aws_kms_key`)
- Databases (`aws_db_instance`, `aws_rds_cluster`)
- S3 buckets containing data (`aws_s3_bucket`)

When creating S3 buckets with lifecycle configurations, always include a rule to abort incomplete multipart uploads:
```hcl
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  # Abort incomplete multipart uploads to prevent storage costs (Checkov CKV_AWS_300)
  rule {
    id     = "abort-incomplete-uploads"
    status = "Enabled"
    filter {}

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }

  rule {
    id     = "transition-to-ia"
    status = "Enabled"

    filter {
      prefix = ""
    }

    transition {
      days          = 90
      storage_class = "STANDARD_IA"
    }

    noncurrent_version_transition {
      noncurrent_days = 30
      storage_class   = "STANDARD_IA"
    }

    noncurrent_version_expiration {
      noncurrent_days = 365
    }
  }
}
```

## Step 4: Validate

After generating Terraform files, validate them using the devops-skills:terraform-validator skill:
Invoke: `Skill(devops-skills:terraform-validator)`

The devops-skills:terraform-validator skill will run:

- `terraform fmt -check`
- `terraform init`
- `terraform validate`
- `terraform plan`

If validation fails, fix the issues and re-run the validator. Do NOT proceed to Step 5 until all checks pass.
```
┌─────────────────────────────────────────────────────────┐
│                   VALIDATION FAILED?                    │
│                                                         │
│  ┌─────────┐    ┌─────────┐    ┌─────────────────────┐  │
│  │  Fix    │───▶│ Re-run  │───▶│ All checks pass?    │  │
│  │  Issue  │    │ Skill   │    │ YES → Step 5        │  │
│  └─────────┘    └─────────┘    │ NO  → Loop back     │  │
│       ▲                        └──────────┬──────────┘  │
│       │                                   │             │
│       └───────────────────────────────────┘             │
└─────────────────────────────────────────────────────────┘
```

Common validation failures to fix:
| Check | Issue | Fix |
|---|---|---|
| CKV_AWS_300 | Missing abort multipart upload | Add `abort_incomplete_multipart_upload` rule |
| CKV_AWS_24 | SSH open to 0.0.0.0/0 | Restrict to specific CIDR |
| CKV_AWS_16 | RDS encryption disabled | Add `storage_encrypted = true` |
| `terraform validate` | Invalid resource argument | Check provider documentation |
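For example, the CKV_AWS_16 fix is typically a one-line change (a sketch; the other required arguments are elided):

```hcl
resource "aws_db_instance" "main" {
  # ... engine, instance_class, allocated_storage, etc.

  storage_encrypted = true # Satisfies Checkov CKV_AWS_16
}
```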
If custom providers are detected during validation, return to Step 2 and look up their documentation before fixing the reported issues.
## Step 5: Deliver Results

After successful generation and validation with all checks passing, provide the user with:
## Generated Files
| File | Description |
|------|-------------|
| `path/to/main.tf` | Main resource definitions |
| `path/to/variables.tf` | Input variables |
| `path/to/outputs.tf` | Output values |
| `path/to/versions.tf` | Provider version constraints |
## Next Steps
1. Review and customize `terraform.tfvars` with your values
2. Initialize Terraform: `terraform init`
3. Review the execution plan: `terraform plan`
4. Apply the configuration: `terraform apply`
## Customization Notes
- [ ] Update `variable_name` in terraform.tfvars
- [ ] Configure backend in backend.tf for remote state
- [ ] Adjust resource names/tags as needed
## Security Reminders
⚠️ Before applying:
- Review IAM policies and permissions
- Ensure sensitive values are NOT committed to version control
- Configure state backend with encryption enabled
- Set up state locking for team collaboration

## Examples

User request: "Create an AWS S3 bucket with versioning"
Generated files:
- `main.tf` - S3 bucket resource with versioning enabled
- `variables.tf` - Bucket name and tags variables
- `outputs.tf` - Bucket ARN and name outputs
- `versions.tf` - AWS provider version constraints

User request: "Set up a VPC using the official AWS VPC module"
Actions:
User request: "Create infrastructure across AWS and Datadog"
Actions:
User request: "Create an ECS cluster with ALB and auto-scaling"
Generated structure:
Terraform-specific gotchas:
Provider Not Found: Verify source address format (namespace/name) and version constraint syntax.
Circular Dependencies: Use explicit depends_on or break into separate modules.
Validation Failures: Run devops-skills:terraform-validator for detailed errors, fix iteratively.
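A sketch of breaking an ordering problem with explicit `depends_on` (resource names are illustrative, and the fragment assumes the referenced cluster, task definition, and policy document are defined elsewhere):

```hcl
resource "aws_iam_role" "task" {
  name               = "app-task-role"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn

  # Make the ordering explicit when Terraform cannot infer it
  # from attribute references alone.
  depends_on = [aws_iam_role.task]
}
```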
Always consider version compatibility:
Terraform Version:
- Use a `required_version` constraint with both lower and upper bounds
- `>= 1.10, < 2.0` for modern features (ephemeral resources, write-only)
- `>= 1.14, < 2.0` for latest features (actions, query command)

Provider Versions (as of December 2025):
- AWS: `~> 6.0` (latest: v6.23.0)
- AzureRM: `~> 4.0` (latest: v4.54.0)
- Google: `~> 7.0` (latest: v7.12.0) - 7.0 includes ephemeral resources & write-only attributes
- Kubernetes: `~> 2.23`
- Use `~>` for minor version flexibility; pin major versions

Module Versions: Always pin module versions, review module documentation for compatibility, and test updates in non-production first.
Version feature matrix and modern Terraform features (1.8+): See `references/modern_features.md` for the full feature matrix, ephemeral resources, write-only arguments, actions blocks, import blocks with `for_each`, list resources, and query command examples.
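A minimal sketch of the ephemeral-resource pattern, assuming Terraform >= 1.11 and a provider that supports write-only arguments; the `password_wo`/`password_wo_version` names follow the AWS provider's convention but should be verified against the provider documentation for your version:

```hcl
# The generated password exists only during the run; it is never
# written to state or plan files.
ephemeral "random_password" "db" {
  length = 20
}

resource "aws_db_instance" "main" {
  # ... engine, instance_class, etc.

  # Write-only argument: accepted on apply, never stored in state.
  password_wo         = ephemeral.random_password.db.result
  password_wo_version = 1 # Bump to rotate the password
}
```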
Common mistakes to avoid:

**Hardcoded defaults in `variables.tf`:**
- Avoid: `default = "us-east-1"` on a region variable in a shared module.
- Instead: omit `default` on environment-specific variables; pass values via `.tfvars` files or environment variables (`TF_VAR_region`).

**Using `count` to manage resources that differ only by configuration:**
- Why: `count` creates indexed resources (e.g., `aws_instance.web[0]`) that break when items are removed from the middle of a list, forcing unwanted destroys and recreates.
- Avoid: `count = length(var.instance_names)` combined with `var.instance_names[count.index]` for named resources.
- Instead: use `for_each` with a map or set to create named resources (e.g., `for_each = toset(var.instance_names)`) that survive insertions and deletions.

**Missing `backend {}` block** (defaults to local state in `terraform.tfstate`):
- Instead: use `backend "s3" {}` (or equivalent) with a `dynamodb_table` argument for state locking.

**Hardcoded credentials in `.tf` files:**
- Avoid: `provider "aws" { region = "us-east-1" access_key = "AKIA..." secret_key = "..." }`.
- Instead: use environment variables (`AWS_REGION`, `AWS_ACCESS_KEY_ID`) or IAM instance/execution roles; set region via `var.aws_region`.

**Running `terraform apply` without a saved plan in CI/CD:**
- Why: `apply` without a saved plan allows Terraform to pick up state changes that occurred between plan and apply, producing a different result than reviewed.
- Avoid: `terraform apply -auto-approve` as a single pipeline step.
- Instead: `terraform plan -out=tfplan` then `terraform apply tfplan` as two separate, sequential pipeline steps.

Reference files:

- `terraform_best_practices.md` - Comprehensive best practices guide
- `common_patterns.md` - Common Terraform patterns and examples
- `provider_examples.md` - Example configurations for popular providers
- `modern_features.md` - Terraform 1.8+ features: ephemeral resources, write-only args, actions, import `for_each`, version feature matrix

To load a reference, use the Read tool:
```
Read(file_path: ".claude/skills/terraform-generator/references/[filename].md")
```

Template files for common setups:
- `minimal-project/` - Minimal Terraform project template
- `aws-web-app/` - AWS web application infrastructure template
- `multi-env/` - Multi-environment configuration template