Every senior DevOps engineer has experienced the specific panic induced by a Terraform plan reading 1 to add, 1 to destroy on a stateful resource. You simply wanted to rename a resource label for consistency or move a spaghetti-code main.tf resource into a reusable child module.
However, Terraform interprets this change not as a migration, but as the deletion of the old resource and the provisioning of a new one. For stateless EC2 instances in an Auto Scaling Group, this is an inconvenience. For an RDS instance, an S3 bucket, or a production Load Balancer with hardcoded DNS pointers, this is catastrophic.
Historically, the solution was the imperative terraform state mv command. This approach is brittle, manual, effectively invisible to code review, and breaks CI/CD automation. Since Terraform 1.1, the declarative moved block is the standard for solving this problem safely.
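For reference, the imperative equivalent of the refactor walked through below would be a one-off command like this (addresses match the example that follows):

```shell
# One-off state surgery: runs on a single machine and
# leaves no trace in version control or code review.
terraform state mv \
  'aws_dynamodb_table.users' \
  'module.user_db.aws_dynamodb_table.this'
```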
The Root Cause: State Identity vs. Configuration Identity
To understand why Terraform defaults to destruction, we must look at how resource addresses in the configuration's DAG (Directed Acyclic Graph) map to entries in the terraform.tfstate file.
Terraform identifies resources by their address in the configuration (e.g., aws_dynamodb_table.users). The state file maps this specific string address to a provider-specific ID (e.g., arn:aws:dynamodb:us-east-1:123456789012:table/users).
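Sketching that mapping as it appears inside terraform.tfstate (heavily trimmed; the real file carries version metadata and many more attributes):

```json
{
  "resources": [
    {
      "mode": "managed",
      "type": "aws_dynamodb_table",
      "name": "users",
      "instances": [
        {
          "attributes": {
            "arn": "arn:aws:dynamodb:us-east-1:123456789012:table/users",
            "id": "users"
          }
        }
      ]
    }
  ]
}
```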
When you refactor code:
- Old Configuration: Terraform looks for aws_dynamodb_table.users. It is gone. Conclusion: the user deleted the resource.
- New Configuration: Terraform sees module.user_db.aws_dynamodb_table.this. It has no mapping in the state file. Conclusion: the user wants a new resource.
Terraform does not inherently infer that the new block of HCL represents the same physical infrastructure as the old block of HCL. We must explicitly bridge this gap.
The Fix: Declarative Refactoring with moved
Let's walk through a common scenario: refactoring a root-level DynamoDB table into a dedicated child module.
Phase 1: The Initial State (Pre-Refactor)
Currently, your main.tf defines the table directly. This is deployed and holds production data.
# main.tf
resource "aws_dynamodb_table" "users" {
  name         = "production-users-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "UserId"

  attribute {
    name = "UserId"
    type = "S"
  }

  tags = {
    Environment = "Production"
  }
}
Phase 2: The Refactor
We want to move this into a module located at ./modules/dynamodb.
1. Create the Module Move the logic to modules/dynamodb/main.tf and generalize the resource name.
# modules/dynamodb/main.tf
resource "aws_dynamodb_table" "this" {
  name         = var.table_name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = var.hash_key

  attribute {
    name = var.hash_key
    type = "S"
  }

  tags = var.tags
}
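The generalized module references three inputs that need declaring as well; a minimal sketch of modules/dynamodb/variables.tf (descriptions and the tags default are illustrative):

```hcl
# modules/dynamodb/variables.tf
variable "table_name" {
  description = "Name of the DynamoDB table"
  type        = string
}

variable "hash_key" {
  description = "Attribute used as the partition (hash) key"
  type        = string
}

variable "tags" {
  description = "Tags to apply to the table"
  type        = map(string)
  default     = {}
}
```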
2. Update Root main.tf Replace the resource with the module call.
# main.tf
module "user_db" {
  source     = "./modules/dynamodb"
  table_name = "production-users-table"
  hash_key   = "UserId"

  tags = {
    Environment = "Production"
  }
}
Phase 3: The moved Block
If you run terraform plan now, Terraform will attempt to delete the table. To prevent this, add the moved block to your root main.tf.
# main.tf
module "user_db" {
  source     = "./modules/dynamodb"
  table_name = "production-users-table"
  # ... vars
}

# The Critical Fix
moved {
  from = aws_dynamodb_table.users
  to   = module.user_db.aws_dynamodb_table.this
}
Phase 4: Validation
Run terraform plan. Terraform reads the moved block, locates the from address in the current state file, and updates its internal pointer to the to address before calculating the graph delta.
Output:
Terraform will perform the following actions:

  # aws_dynamodb_table.users has moved to module.user_db.aws_dynamodb_table.this
    resource "aws_dynamodb_table" "this" {
        id   = "production-users-table"
        name = "production-users-table"
        # ... (attributes unchanged)
    }

Plan: 0 to add, 0 to change, 0 to destroy.
Why This Approach is Superior
The moved block is vastly superior to CLI state manipulation for three reasons:
1. Version Control and Code Review
terraform state mv runs on an engineer's laptop. It is an ephemeral command that leaves no trace in git. A moved block, by contrast, is committed code: a reviewer can see exactly what is being refactored and verify that from and to align correctly.
2. CI/CD Pipeline Continuity
If you rely on CLI commands, you have to break your deployment pipeline to manually intervene and manipulate the state file before the next run. With moved blocks, your standard terraform plan and terraform apply workflow remains uninterrupted. The state migration happens automatically during the apply phase.
3. Chained Moves
Terraform supports chained moves. If you rename A to B in commit 1, and B to C in commit 2, Terraform can resolve the history A -> B -> C as long as the previous moved blocks remain in the code.
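As a sketch with hypothetical queue names, the accumulated history would read:

```hcl
# Commit 1 renamed the resource "a" to "b".
moved {
  from = aws_sqs_queue.a
  to   = aws_sqs_queue.b
}

# Commit 2 renamed "b" to "c". Terraform chains both records,
# so a workspace whose state still says "a" resolves a -> b -> c.
moved {
  from = aws_sqs_queue.b
  to   = aws_sqs_queue.c
}
```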
Handling Complex Modules (Double Refactoring)
A frequent edge case occurs when you are refactoring both the module call name and the resource name inside the module simultaneously.
Scenario:
- Old: module.db.aws_db_instance.master
- New: module.database.aws_db_instance.primary
You can perform this in a single moved block within the root module:
moved {
  from = module.db.aws_db_instance.master
  to   = module.database.aws_db_instance.primary
}
However, if you are a module author distributing a module where internal resource names are changing, you should place the moved block inside the child module itself.
Inside modules/database/main.tf:
# This lets consumers of your module upgrade versions
# without their state breaking, even though they are
# unaware that you renamed internal resources.
moved {
  from = aws_db_instance.master
  to   = aws_db_instance.primary
}
Conclusion
Infrastructure as Code requires us to treat state management with the same rigor as database schema migrations. The moved block transforms state migration from a risky, manual operation into a declarative, version-controlled code artifact.
When refactoring, always write the moved block immediately after changing the HCL. Run a speculative plan to verify the move is recognized (0 to destroy), merge, and apply. You can remove the moved block in a subsequent PR once the state has settled, though leaving it for a period acts as valuable documentation of the infrastructure's evolution.
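One way to script that speculative-plan check in CI (-detailed-exitcode is standard Terraform CLI behavior):

```shell
# Exit status 0: no changes -- the move was recognized cleanly.
# Exit status 2: changes pending -- stop and inspect the plan.
# Exit status 1: an error occurred.
terraform plan -detailed-exitcode
```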