S3 Lifecycle rules with Terraform
Typically, when working with Terraform, there are multiple ways to achieve the same result, which can be particularly useful with more complex AWS configurations. In this blog post, I will show different approaches to defining S3 bucket lifecycle rules with Terraform.
S3 lifecycle policies are a great example: they mix required and optional nested arguments, making them a perfect use case for comparing the different Terraform constructs that can be used:
- dynamic block -> for handling nested arguments conditionally
- for_each and for expressions -> for iterating over rules
- try() -> for dealing with optional attributes gracefully
- locals -> for preprocessing and keeping the resource block clean
Lifecycle Rules for S3 Buckets
Lifecycle rules in S3 are useful for:
- Automatically deleting old objects after a certain number of days
- Cleaning up failed multipart uploads to avoid unnecessary storage costs
- Applying rules only to objects with specific prefixes
Terraform provides the resource aws_s3_bucket_lifecycle_configuration to manage these. Here is an example of a hardcoded lifecycle configuration with three rules:
- cleaning up incomplete multipart uploads
- expiring logs after 30 days
- deleting temporary files after 7 days
resource "aws_s3_bucket_lifecycle_configuration" "example" {
bucket = aws_s3_bucket.bucket.id
# Rule 1: Cleanup failed multipart uploads
rule {
id = "cleanup-multipart"
status = "Enabled"
filter {
prefix = ""
}
abort_incomplete_multipart_upload {
days_after_initiation = 7
}
}
# Rule 2: Expire logs after 30 days
rule {
id = "expire-logs"
status = "Enabled"
filter {
prefix = "logs/"
}
expiration {
days = 30
}
}
# Rule 3: Expire temporary files after 7 days
rule {
id = "expire-tmp"
status = "Enabled"
filter {
prefix = "tmp/"
}
expiration {
days = 7
}
}
}
This is fine if you want fixed rules, but if your solution has multiple S3 buckets, it quickly becomes repetitive and hard to maintain as the number of rules and buckets grows. That is why we need a more generic approach in Terraform, so that we can define rules as variables and let Terraform dynamically generate the right configuration.
Using dynamic Blocks
This approach relies heavily on dynamic blocks. This is powerful if your rules have optional nested attributes (filter, expiration, abort, etc.) and you want to handle them conditionally.
resource "aws_s3_bucket_lifecycle_configuration" "default" {
bucket = aws_s3_bucket.this.bucket
dynamic "rule" {
for_each = var.additional_lifecycle_rules
content {
id = rule.value.id
status = rule.value.status
dynamic "filter" {
for_each = lookup(rule.value, "filter", null) != null ? [rule.value.filter] : []
content {
prefix = filter.value.prefix
}
}
dynamic "expiration" {
for_each = lookup(rule.value, "expiration", null) != null ? [rule.value.expiration] : []
content {
days = expiration.value.days
}
}
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload", null) != null ? [rule.value.abort_incomplete_multipart_upload] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}
variable "additional_lifecycle_rules" {
type = list(any)
default = []
}

Then your Terraform module call could look something like:
module "test_bucket" {
source = "../modules/s3"
bucket_name = var.bucket_name
kms_arn = aws_kms_key.this.arn
additional_lifecycle_rules = [
{
id = "cleanup-multipart"
status = "Enabled"
filter = { prefix = "" }
abort_incomplete_multipart_upload = {
days_after_initiation = 7
}
},
{
id = "expire-logs"
status = "Enabled"
filter = { prefix = "logs/" }
expiration = { days = 30 }
},
{
id = "expire-tmp"
status = "Enabled"
filter = { prefix = "tmp/" }
expiration = { days = 7 }
}
]
}

The for_each iterates over var.additional_lifecycle_rules. For each rule, Terraform "renders" a rule { … } block, and inside it the content { … } block describes the actual attributes for each rule. As you can see, some attributes are static, such as:
id = rule.value.id
status = rule.value.status

and some others are dynamic:
dynamic "expiration" {
for_each = lookup(rule.value, "expiration", null) != null ? [rule.value.expiration] : []
content {
days = expiration.value.days
}
}
That is because not every attribute behaves the same way:
- id and status -> always required and single values
- filter, expiration, and abort_incomplete_multipart_upload -> those are optional and nested blocks, so they may or may not exist in the given rule when we call the S3 TF module. That is why they are wrapped in their own dynamic block, each with a conditional for_each that only renders the block if the value is present.
You might have noticed that the variable additional_lifecycle_rules is of type list(any): each lifecycle rule is represented as a map with potentially different nested keys, and we therefore want as much flexibility as possible.
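If you prefer stricter validation than list(any), Terraform 1.3+ can express the same shape with optional() object attributes. Here is a sketch of what a typed equivalent might look like (the attribute names mirror the rules used above):

```hcl
variable "additional_lifecycle_rules" {
  type = list(object({
    id     = string
    status = string
    filter = optional(object({
      prefix = string
    }))
    expiration = optional(object({
      days = number
    }))
    abort_incomplete_multipart_upload = optional(object({
      days_after_initiation = number
    }))
  }))
  default = []
}
```

With this typing, any attribute the caller omits is set to null, so conditional checks such as rule.value.filter != null keep working; the trade-off is that every new rule attribute requires a change to the variable's type.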
Additionally, with this lookup construct:
lookup(rule.value, "expiration", null) != null ? [rule.value.expiration] : []

we are telling Terraform: "render the expiration block only if the rule includes one, using the provided days value; if the rule doesn't include expiration, produce an empty list so the block is not rendered at all".
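The same condition can also be written with try() instead of lookup — a functionally equivalent sketch (note the behavior differs slightly if a rule explicitly sets expiration = null):

```hcl
dynamic "expiration" {
  # try() returns the one-element list when the rule has an
  # expiration entry, and falls back to [] when the attribute
  # lookup fails, so the block is simply skipped.
  for_each = try([rule.value.expiration], [])
  content {
    days = expiration.value.days
  }
}
```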
A special case could be if we want all S3 buckets created by this S3 Terraform module to automatically get a default rule, while still allowing engineers to pass in additional rules through the variable additional_lifecycle_rules. This guarantees one policy is always applied but keeps the configuration flexible for different use cases.
resource "aws_s3_bucket_lifecycle_configuration" "default" {
bucket = aws_s3_bucket.this.bucket
dynamic "rule" {
for_each = concat(
[
{
id = "cleanup-failed-multipart-uploads"
status = "Enabled"
filter = {
prefix = ""
}
abort_incomplete_multipart_upload = {
days_after_initiation = 7
}
}
],
var.additional_lifecycle_rules
)
content {
id = rule.value.id
status = rule.value.status
dynamic "filter" {
for_each = lookup(rule.value, "filter", null) != null ? [rule.value.filter] : []
content {
prefix = filter.value.prefix
}
}
dynamic "expiration" {
for_each = lookup(rule.value, "expiration", null) != null ? [rule.value.expiration] : []
content {
days = expiration.value.days
}
}
dynamic "abort_incomplete_multipart_upload" {
for_each = lookup(rule.value, "abort_incomplete_multipart_upload", null) != null ? [rule.value.abort_incomplete_multipart_upload] : []
content {
days_after_initiation = abort_incomplete_multipart_upload.value.days_after_initiation
}
}
}
}
}

In this case, for_each iterates over the list produced by concat, so the rule block is rendered once for the default rule and once for each additionally provided rule. Based on that, if we call the S3 TF module without any rules:
module "test_bucket" {
source = "../modules/s3"
bucket_name = var.bucket_name
kms_arn = aws_kms_key.this.arn
}

it would only apply the default lifecycle rule from concat, and nothing else.
Using for Expressions with try()
Two Terraform constructs, for expressions and try(), let you write cleaner and more compact configurations.
resource "aws_s3_bucket_lifecycle_configuration" "default" {
bucket = aws_s3_bucket.this.bucket
rule = [
for r in var.lifecycle_rules : {
id = r.id
status = r.status
filter = try(r.filter, [])
expiration = try(r.expiration, [])
abort_incomplete_multipart_upload = try(r.abort_incomplete_multipart_upload, [])
}
]
}
variable "lifecycle_rules" {
type = list(any)
}
The S3 TF module call would look like:
module "test_bucket" {
source = "../modules/s3"
bucket_name = var.bucket_name
kms_arn = aws_kms_key.this.arn
lifecycle_rules = [
{
id = "cleanup-multipart"
status = "Enabled"
filter = { prefix = "" }
abort_incomplete_multipart_upload = {
days_after_initiation = 7
}
},
{
id = "expire-logs"
status = "Enabled"
filter = { prefix = "logs/" }
expiration = { days = 30 }
},
{
id = "expire-tmp"
status = "Enabled"
filter = { prefix = "tmp/" }
expiration = { days = 7 }
}
]
}
As you noticed, the module call is very similar to the one in the dynamic block approach, but instead of the variable additional_lifecycle_rules, you pass a single variable lifecycle_rules. The for expression inside the resource simply loops over everything in the variable, with no need for concat since we are not enforcing any default rules. Nevertheless, you can also give the variable a default value that all buckets will inherit; but if you override it in an env.tfvars file without repeating the default value, the default rule won't be applied.
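For example, the default could be baked into the variable declaration itself — a sketch (keep in mind that any override in a tfvars file replaces the whole list, default rule included):

```hcl
variable "lifecycle_rules" {
  type = list(any)
  default = [
    {
      id     = "cleanup-multipart"
      status = "Enabled"
      filter = { prefix = "" }
      abort_incomplete_multipart_upload = {
        days_after_initiation = 7
      }
    }
  ]
}
```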
This approach has much simpler syntax, but it is less adaptable to irregular rule structures, and applying default rules can be more prone to errors.
Using locals for Preprocessing
Another neat trick is to preprocess your rules in locals before passing them into the resource. This helps keep your resource blocks clean and pushes logic to a separate section.
locals {
lifecycle_rules = concat(
[
{
id = "cleanup-failed-multipart-uploads"
status = "Enabled"
filter = { prefix = "" }
abort_incomplete_multipart_upload = {
days_after_initiation = 7
}
}
],
var.additional_lifecycle_rules
)
}
resource "aws_s3_bucket_lifecycle_configuration" "default" {
bucket = aws_s3_bucket.this.bucket
rule = [
for r in local.lifecycle_rules : {
id = r.id
status = r.status
filter = try(r.filter, [])
expiration = try(r.expiration, [])
abort_incomplete_multipart_upload = try(r.abort_incomplete_multipart_upload, [])
}
]
}
Important
This approach is very similar to the for + try() method, but it uses locals to merge a default rule with any additional rules that are provided. The main benefit is that your resource block stays clean and simple. However, as you add more buckets, your locals.tf file can grow quickly and you may end up defining rules for each bucket separately, or managing one large map of rules, which makes the module harder to track and maintain.
Another drawback is that defaults defined in locals can become “hidden.” It’s not always obvious which default rules apply to which bucket, unlike the dynamic block approach where everything is inline and visible in one place.
There are a few trade-offs to consider:
- Provider schema validation: with the for + try() pattern inside locals, you need to handle optional repeatable blocks carefully. Using try(…, []) ensures Terraform always receives an empty list when a block is missing, which avoids schema validation errors in some provider versions. By contrast, dynamic blocks naturally skip rendering if no value is provided, so they don’t need this workaround.
- Flexibility: locals combined with for + try() work well for simple, predictable rules. But if you need multiple nested transition blocks (e.g., move to Standard-IA after 30 days and then to Glacier after 90 days), a plain for + try() cannot express that. In those cases, a dynamic block is required because it can generate multiple nested structures conditionally.
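For illustration, such a multi-step transition could be handled with a nested dynamic block — a sketch assuming each rule optionally carries a hypothetical transitions list:

```hcl
dynamic "transition" {
  # renders one transition block per list entry;
  # an empty list skips the block entirely
  for_each = try(rule.value.transitions, [])
  content {
    days          = transition.value.days
    storage_class = transition.value.storage_class
  }
}
```

A rule would then pass something like transitions = [{ days = 30, storage_class = "STANDARD_IA" }, { days = 90, storage_class = "GLACIER" }], and Terraform renders one transition block per entry.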
The locals-based approach keeps your Terraform configuration compact and clean, but it introduces trade-offs in maintainability and flexibility. It’s best suited for straightforward, consistent rules across buckets. For more complex or irregular structures, the dynamic block method is usually the safer and more scalable choice.
Conclusion
S3 lifecycle rules show how important it is to design Terraform modules with flexibility in mind. Since rules often mix required arguments, optional arguments, and repeatable nested blocks, dynamic blocks let you model this complexity in a clean and reusable way, making the configuration less error-prone while engineers can skip rules entirely or extend them for different buckets. In very simple use cases, the other approaches might also be a perfect fit: they are easier to write and more readable for new colleagues, especially if you have just a few buckets (and therefore few lifecycle rules).