Comprehensive toolkit for validating, linting, testing, and automating Terraform configurations and HCL files. Use this skill when working with Terraform files (.tf, .tfvars), validating infrastructure-as-code, debugging Terraform configurations, performing dry-run testing with terraform plan, or working with custom providers and modules.
Coding standards and best practices for writing maintainable, scalable, and reliable Terraform configurations.
```
terraform/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   ├── terraform.tfvars
│   │   └── backend.tf
│   ├── staging/
│   └── production/
├── modules/
│   ├── networking/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── README.md
│   ├── compute/
│   └── database/
├── global/
│   ├── iam/
│   └── route53/
└── README.md
```

**Standard Files:**
- `main.tf` - Primary resource definitions
- `variables.tf` - Input variable declarations
- `outputs.tf` - Output value declarations
- `versions.tf` - Terraform and provider version constraints
- `backend.tf` - Backend configuration
- `locals.tf` - Local value definitions (if many)
- `data.tf` - Data source definitions (if many)
- `terraform.tfvars` - Variable values (not committed for secrets)

**When to Split Files:**
When a single file grows unwieldy, split resources into purpose-specific files (e.g. `networking.tf`, `compute.tf`).

**Pattern:** `<resource-type>_<descriptive-name>`
```hcl
# Good
resource "aws_instance" "web_server" {}
resource "aws_s3_bucket" "application_logs" {}
resource "aws_security_group" "database_access" {}

# Avoid
resource "aws_instance" "instance1" {}
resource "aws_s3_bucket" "bucket" {}
```

**Pattern:** snake_case with descriptive names
```hcl
# Good
variable "vpc_cidr_block" {}
variable "instance_type" {}
variable "environment_name" {}

# Avoid
variable "VPCCIDR" {}
variable "type" {}
variable "env" {}
```

**Pattern:** kebab-case for directories, snake_case for module calls
```hcl
# Directory: modules/vpc-networking/
module "vpc_networking" {
  source = "./modules/vpc-networking"
}
```

**Consistent Tagging Strategy:**
```hcl
locals {
  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
    Project     = var.project_name
    Owner       = var.owner_email
    CostCenter  = var.cost_center
  }
}

resource "aws_instance" "web" {
  # ... other config ...

  tags = merge(local.common_tags, {
    Name = "${var.environment}-web-server"
    Role = "webserver"
  })
}
```

**Always Include:**
```hcl
variable "instance_type" {
  description = "EC2 instance type for web servers"
  type        = string
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small", "t3.medium"], var.instance_type)
    error_message = "Instance type must be t3.micro, t3.small, or t3.medium."
  }
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string

  validation {
    condition     = can(cidrhost(var.vpc_cidr, 0))
    error_message = "VPC CIDR must be a valid IPv4 CIDR block."
  }
}

variable "db_password" {
  description = "Database master password"
  type        = string
  sensitive   = true # Prevents display in logs
}
```

**Use Specific Types:**
```hcl
# Primitive types
variable "instance_count" {
  type = number
}

variable "enable_monitoring" {
  type = bool
}

# Collection types
variable "availability_zones" {
  type = list(string)
}

variable "tags" {
  type = map(string)
}

# Object types
variable "database_config" {
  type = object({
    engine            = string
    engine_version    = string
    instance_class    = string
    allocated_storage = number
  })
}
```

**Use `.tfvars` Files:**
```hcl
# environments/dev/terraform.tfvars
environment    = "dev"
instance_type  = "t3.micro"
instance_count = 1
enable_backup  = false

# environments/production/terraform.tfvars
environment    = "production"
instance_type  = "t3.large"
instance_count = 3
enable_backup  = true
```

**Single Responsibility:** Each module should have one clear purpose.
```hcl
# Good: Focused module
module "vpc" {
  source = "./modules/vpc"
  # VPC-specific config
}

# Avoid: Kitchen-sink module
module "infrastructure" {
  source = "./modules/everything"
  # VPC, databases, compute, monitoring, etc.
}
```

**Required vs Optional Variables:**
```hcl
# modules/database/variables.tf

# Required - no default
variable "database_name" {
  description = "Name of the database"
  type        = string
}

# Optional - has sensible default
variable "backup_retention_days" {
  description = "Number of days to retain backups"
  type        = number
  default     = 7
}
```

**Output Everything Useful:**
```hcl
# modules/vpc/outputs.tf
output "vpc_id" {
  description = "ID of the VPC"
  value       = aws_vpc.main.id
}

output "private_subnet_ids" {
  description = "List of private subnet IDs"
  value       = aws_subnet.private[*].id
}

output "public_subnet_ids" {
  description = "List of public subnet IDs"
  value       = aws_subnet.public[*].id
}
```

**README.md Template:**
# VPC Module
Creates a VPC with public and private subnets across multiple availability zones.
## Usage
```hcl
module "vpc" {
  source = "./modules/vpc"

  vpc_cidr           = "10.0.0.0/16"
  availability_zones = ["us-east-1a", "us-east-1b"]
  environment        = "production"
}
```

## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.0 |
| aws | >= 5.0 |

## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| vpc_cidr | CIDR block for VPC | string | n/a | yes |
| availability_zones | List of AZs | list(string) | n/a | yes |

## Outputs

| Name | Description |
|---|---|
| vpc_id | ID of the VPC |
| private_subnet_ids | List of private subnet IDs |
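
Tables like those above are usually generated rather than hand-written. A possible `.terraform-docs.yml` for that (a sketch assuming terraform-docs v0.16+; adjust keys to your version):

```yaml
# .terraform-docs.yml - run `terraform-docs .` in the module directory
formatter: "markdown table"

output:
  file: "README.md"
  mode: inject # rewrites only the section between the BEGIN/END markers
```

With `mode: inject`, terraform-docs updates the content between `<!-- BEGIN_TF_DOCS -->` and `<!-- END_TF_DOCS -->` markers in the README, so hand-written prose around the tables survives regeneration.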
## State Management
### Remote State
**Always Use Remote State for Teams:**
```hcl
terraform {
  backend "s3" {
    bucket         = "company-terraform-state"
    key            = "production/vpc/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"

    # Workspace-specific state
    workspace_key_prefix = "workspaces"
  }
}
```

**DynamoDB Table for S3 Backend:**
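Note: on newer Terraform releases (1.10+), the S3 backend can lock state natively, which makes the DynamoDB table below optional. A minimal sketch, assuming Terraform >= 1.10 and its `use_lockfile` backend setting:

```hcl
terraform {
  backend "s3" {
    bucket       = "company-terraform-state"
    key          = "production/vpc/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true # S3-native locking; no DynamoDB table required
  }
}
```

For Terraform versions before 1.10, keep the DynamoDB-based locking shown next.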
```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name      = "Terraform State Locks"
    ManagedBy = "Terraform"
  }
}
```

**Separate State Files by Environment and Component:**
```
s3://terraform-state/
├── production/
│   ├── vpc/terraform.tfstate
│   ├── database/terraform.tfstate
│   └── compute/terraform.tfstate
├── staging/
│   ├── vpc/terraform.tfstate
│   └── compute/terraform.tfstate
└── dev/
    └── all/terraform.tfstate
```

```hcl
# Instead of hardcoding
resource "aws_instance" "web" {
  subnet_id = "subnet-12345" # Avoid
}

# Use data sources
data "aws_subnet" "private" {
  filter {
    name   = "tag:Name"
    values = ["${var.environment}-private-subnet"]
  }
}

resource "aws_instance" "web" {
  subnet_id = data.aws_subnet.private.id
}
```

**Implicit Dependencies (Preferred):**
```hcl
resource "aws_instance" "web" {
  subnet_id              = aws_subnet.private.id # Implicit dependency
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

**Explicit Dependencies (When Needed):**
```hcl
resource "aws_iam_role_policy" "example" {
  # ... config ...

  # Ensure role exists before attaching policy
  depends_on = [aws_iam_role.example]
}
```

**Use `for_each` for Map-Like Resources:**
```hcl
# Good: for_each with maps
locals {
  subnets = {
    public_a  = { cidr = "10.0.1.0/24", az = "us-east-1a" }
    public_b  = { cidr = "10.0.2.0/24", az = "us-east-1b" }
    private_a = { cidr = "10.0.3.0/24", az = "us-east-1a" }
    private_b = { cidr = "10.0.4.0/24", az = "us-east-1b" }
  }
}

resource "aws_subnet" "main" {
  for_each = local.subnets

  vpc_id            = aws_vpc.main.id
  cidr_block        = each.value.cidr
  availability_zone = each.value.az

  tags = {
    Name = each.key
  }
}
```

**Use `count` for Simple Conditionals:**
```hcl
resource "aws_cloudwatch_log_group" "app" {
  count = var.enable_logging ? 1 : 0
  name  = "/aws/app/logs"
}
```

**Terraform Version:**

```hcl
terraform {
  required_version = ">= 1.0, < 2.0"
}
```

**Provider Versions:**

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # Any 5.x release; locks the major version
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5"
    }
  }
}
```

**Version Constraint Operators:**
- `=` - Exact version
- `!=` - Exclude a specific version
- `>`, `>=`, `<`, `<=` - Comparison
- `~>` - Pessimistic constraint: allows only the rightmost specified component to increment (`~> 1.2` permits >= 1.2.0, < 2.0.0; `~> 1.2.3` permits >= 1.2.3, < 1.3.0)

Terraform 1.1+ introduced declarative blocks for managing state without manual `terraform state` commands.
The `import` block (Terraform 1.5+) allows config-driven import of existing resources into Terraform state.
**Basic Usage:**
```hcl
# Import an existing VPC
import {
  to = aws_vpc.main
  id = "vpc-0123456789abcdef0"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main-vpc"
  }
}
```

**Dynamic Import (Terraform 1.6+):**
```hcl
# Import with expressions
variable "vpc_id" {
  type = string
}

import {
  to = aws_vpc.main
  id = var.vpc_id
}

# Import with string interpolation
import {
  to = aws_s3_bucket.logs
  id = "${var.environment}-logs-bucket"
}
```

**Generate Configuration:**
```shell
# Generate config for imported resources
terraform plan -generate-config-out=generated.tf
```

**Workflow:**
1. Add an `import` block with the target resource address and ID
2. Run `terraform plan` to see what will be imported
3. Run `terraform apply` to perform the import
4. Remove the `import` block after a successful import

The `moved` block enables refactoring without manual state manipulation.
**Rename a Resource:**
```hcl
# Old: aws_instance.web
# New: aws_instance.web_server
moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}

resource "aws_instance" "web_server" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"
}
```

**Move to a Module:**
```hcl
# Move resource into a module
moved {
  from = aws_vpc.main
  to   = module.networking.aws_vpc.main
}

module "networking" {
  source = "./modules/networking"
}
```

**Move from `count` to `for_each`:**
```hcl
# Old: aws_instance.web[0], aws_instance.web[1]
# New: aws_instance.web["web-1"], aws_instance.web["web-2"]
moved {
  from = aws_instance.web[0]
  to   = aws_instance.web["web-1"]
}

moved {
  from = aws_instance.web[1]
  to   = aws_instance.web["web-2"]
}

resource "aws_instance" "web" {
  for_each = toset(["web-1", "web-2"])

  ami           = "ami-12345678"
  instance_type = "t3.micro"

  tags = {
    Name = each.key
  }
}
```

**Rename a Module:**
```hcl
moved {
  from = module.old_name
  to   = module.new_name
}

module "new_name" {
  source = "./modules/compute"
}
```

**Best Practices for `moved`:**
- Keep `moved` blocks until all team members have applied the changes
- Remove `moved` blocks only after state migration is complete across all environments

The `removed` block allows declarative removal of resources from Terraform management.
**Remove Without Destroying:**
```hcl
# Stop managing resource but keep it in cloud
removed {
  from = aws_instance.legacy_server

  lifecycle {
    destroy = false
  }
}
```

**Remove and Destroy:**
```hcl
# Remove from state and destroy the resource
removed {
  from = aws_s3_bucket.old_logs

  lifecycle {
    destroy = true
  }
}
```

**Remove Module:**
```hcl
# Remove entire module from management
removed {
  from = module.deprecated_service

  lifecycle {
    destroy = false
  }
}
```

**Use Cases:**
| Block | Version | Purpose | Use Case |
|---|---|---|---|
| `import` | 1.5+ | Bring existing resources into Terraform | Adopting existing infrastructure |
| `moved` | 1.1+ | Refactor without state surgery | Renaming, restructuring modules |
| `removed` | 1.7+ | Stop managing resources declaratively | Ownership transfer, cleanup |
**Old Way (CLI):**
```shell
# Import
terraform import aws_vpc.main vpc-12345

# Move
terraform state mv aws_instance.web aws_instance.web_server

# Remove
terraform state rm aws_instance.legacy
```

**New Way (Config-Driven):**
```hcl
# All operations are declarative and version-controlled
import {
  to = aws_vpc.main
  id = "vpc-12345"
}

moved {
  from = aws_instance.web
  to   = aws_instance.web_server
}

removed {
  from = aws_instance.legacy

  lifecycle {
    destroy = false
  }
}
```

**Benefits of Config-Driven Approach:**

- Operations are version-controlled and reviewable in pull requests
- `terraform plan` previews each state operation before it runs
- No error-prone manual state surgery on the CLI

**Local Values:**
```hcl
locals {
  name_prefix = "${var.environment}-${var.project}"

  common_tags = {
    Environment = var.environment
    ManagedBy   = "Terraform"
  }

  # Computed values
  is_production = var.environment == "production"
  instance_type = local.is_production ? "t3.large" : "t3.micro"
}
```

**Dynamic Blocks - Use Sparingly and Only When Necessary:**
```hcl
resource "aws_security_group" "example" {
  name = "example"

  dynamic "ingress" {
    for_each = var.ingress_rules

    content {
      from_port   = ingress.value.from_port
      to_port     = ingress.value.to_port
      protocol    = ingress.value.protocol
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```

```hcl
# Use count for conditional creation
resource "aws_kms_key" "encryption" {
  count       = var.enable_encryption ? 1 : 0
  description = "Encryption key"
}

# Reference with [0] and handle with try()
resource "aws_s3_bucket" "example" {
  # ...
  kms_master_key_id = try(aws_kms_key.encryption[0].arn, null)
}
```

```shell
# Format check
terraform fmt -check -recursive

# Validation
terraform validate

# Plan review
terraform plan

# Compliance testing
terraform-compliance -p terraform.plan -f compliance/
```

Create `.pre-commit-config.yaml`:
```yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.83.0
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_docs
      - id: terraform_tflint
```

**Performance:**

- Target specific modules while iterating: `terraform plan -target=module.vpc`
- Tune concurrency with the `-parallelism` flag: `terraform apply -parallelism=20`

```hcl
# Cache data source results in locals
data "aws_ami" "ubuntu" {
  most_recent = true
  # ... filters ...
}

locals {
  ami_id = data.aws_ami.ubuntu.id
}

# Reuse local value
resource "aws_instance" "web" {
  count         = 10
  ami           = local.ami_id # Don't repeat data source
  instance_type = var.instance_type
}
```

```hcl
# Create VPC with DNS support enabled for private hosted zones
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true # Required for Route53 private zones
  enable_dns_support   = true

  tags = merge(local.common_tags, {
    Name = "${var.environment}-vpc"
  })
}
```

Use `terraform-docs` to auto-generate documentation:
```shell
terraform-docs markdown table . > README.md
```

Never commit:

- `.tfstate` files
- `.tfvars` files with secrets

`.gitignore`:
```
.terraform/
*.tfstate
*.tfstate.backup
*.tfvars
```

Do commit `.terraform.lock.hcl`, so provider versions stay consistent across machines and CI.

- Use `sensitive = true` for sensitive variables and outputs

Before committing:

- Run `terraform fmt`
- Run `terraform validate`
- Run `terraform plan` and review the output

```yaml
# .github/workflows/terraform.yml
name: Terraform

on: [pull_request]

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2

      - name: Terraform Format
        run: terraform fmt -check -recursive

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan
```

**Install with Tessl CLI**
```shell
npx tessl i pantheon-ai/terraform-validator
```