How to deploy new AWS Lambda & Lambda Layer versions with Terraform

Delivering new versions of AWS Lambda functions and Lambda layers can get tricky, especially when you have native dependencies such as psycopg2 and multiple functions. This blog shows how, with a minimal setup, you can create a production-ready Terraform configuration to deploy new AWS Lambda versions quickly and reliably. Please note that I will use a Python Lambda setup as an example, but the principles apply to all runtimes.

The most important steps are:

  • zip your Lambda handler(s)
  • build a compatible Lambda Layer in Docker
  • connect both into AWS with clean versioning and repeatable applies

Directory structure

repo/
├─ lambda/
│  ├─ create-user-lambda.py      
│  ├─ delete-user-lambda.py      
│  └─ requirements.txt          
├─ Dockerfile                   
└─ terraform/
   ├─ lambda.tf
   ├─ variables.tf               
   └─ outputs.tf    

This setup is not written in stone, so you can follow it as-is or adapt it a bit to your personal style. Your function zip should include only create-user-lambda.py and any other pure Python helper files.

Also, your layer zip should place all dependencies under a top-level python/ directory; Lambda extracts layers to /opt, so they end up in /opt/python, where the runtime finds them automatically.
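As a sanity check, you can verify this layout programmatically. The helper below is hypothetical (not part of the setup itself), but it captures the one rule that matters: every entry in the layer zip must sit under python/.

```python
import zipfile

def layer_zip_is_valid(zip_path):
    """Check that every entry in a layer zip sits under a top-level python/
    directory, so Lambda extracts it to /opt/python where the runtime looks
    for packages."""
    with zipfile.ZipFile(zip_path) as zf:
        return all(name.startswith("python/") for name in zf.namelist())
```

Running `layer_zip_is_valid("lambda/python_helper.zip")` against the zip built later in this post should return True.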

Here is an example of a Lambda function:

/lambda/create-user-lambda.py

import os
import json
import psycopg2

def lambda_handler(event, context):
    try:
        # fallbacks if missing
        user_id = event.get("user_id", "default_id")
        username = event.get("username", "guest")

        conn = psycopg2.connect(
            host=os.environ.get("DB_HOST", "localhost"),
            database=os.environ.get("DB_NAME", "exampledb"),
            user=os.environ.get("DB_USER", "postgres"),
            password=os.environ.get("DB_PASS", "postgres")
        )
        cur = conn.cursor()

        cur.execute(
            "INSERT INTO users (id, username) VALUES (%s, %s) ON CONFLICT (id) DO NOTHING",
            (user_id, username)
        )
        conn.commit()

        cur.close()
        conn.close()

        return {
            "statusCode": 200,
            "body": json.dumps({"ok": True, "created_user": {"id": user_id, "username": username}})
        }

    except Exception as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }

/lambda/requirements.txt

requests==2.31.0
urllib3<1.27,>=1.25.4
psycopg2-binary==2.9.10
boto3==1.39.4

The requirements.txt contains all third-party libraries.

Dockerfile

FROM public.ecr.aws/lambda/python:3.12

WORKDIR /tmp
COPY lambda/requirements.txt .

RUN python3.12 -m pip install --upgrade pip \
 && python3.12 -m pip install -r requirements.txt -t /opt/python \
 && rm -rf /root/.cache/pip

The base image should match the Lambda Python 3.12 runtime used in this example. If you deploy this Terraform from your local machine rather than from a pipeline, you might run into errors such as:

No module named 'psycopg2._psycopg'

because libraries built on macOS, for example, won't just load on Amazon Linux. Building inside this Docker image ensures the .so artifacts are compatible.
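You can spot this problem before deploying: CPython embeds the build platform in the filename of every compiled extension module (e.g. `_psycopg.cpython-312-darwin.so` on macOS vs `_psycopg.cpython-312-x86_64-linux-gnu.so` on Amazon Linux). A quick sketch that scans a layer build directory (the lambda/build folder from this setup is assumed) for extensions built on the wrong OS:

```python
from pathlib import Path

def non_linux_extensions(layer_dir):
    """Find compiled extension modules (.so files) under layer_dir whose
    filename platform tag is not linux, i.e. built on the wrong OS."""
    return sorted(
        str(p) for p in Path(layer_dir).rglob("*.so")
        if "linux" not in p.name
    )
```

If this returns anything for your layer build directory, the layer was not built inside the Docker image.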

Terraform with minimal setup

In a locals.tf file (or at the top of lambda.tf) you can have:

locals {
  lambda_sources = {
    # setup as many as you need
    create_user_lambda = "../lambda/create-user-lambda.py"
    delete_user_lambda = "../lambda/delete-user-lambda.py"
  }
}

Then in lambda.tf:

# Zip the Lambda (function code only)
data "archive_file" "lambda_zip" {
  for_each    = local.lambda_sources
  type        = "zip"
  source_file = "${path.module}/${each.value}"
  output_path = "${path.module}/../lambda/${each.key}.zip"
}

Before Terraform can deploy a Lambda function, the source code must be packaged into a .zip file. The archive_file data source automates this step: instead of zipping manually every time you change your code, Terraform creates the .zip during plan/apply. We also use for_each to loop over the lambda_sources from locals, so each .py file (each Lambda function) gets its own zip inside the lambda folder.
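For a single source_file, what archive_file produces is roughly equivalent to this sketch (shown only to illustrate the output, not something you need to run):

```python
import os
import zipfile

def zip_single_file(source_file, output_path):
    """Roughly what the archive_file data source does for a single
    source_file: write a zip containing one entry named after the
    file's basename."""
    with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(source_file, arcname=os.path.basename(source_file))
```

The entry is named after the basename, which is why the handler string later refers to create-user-lambda.lambda_handler.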

Now we build the Lambda Layer with Docker (it will be rebuilt whenever requirements.txt or the Dockerfile changes):

resource "null_resource" "build_layer" {
  triggers = {
    requirements_md5 = filemd5("${path.module}/../lambda/requirements.txt")
    dockerfile_md5   = filemd5("${path.module}/../Dockerfile")
  }

  provisioner "local-exec" {
    # pipefail is a bash feature, so run the script with bash explicitly
    interpreter = ["bash", "-c"]
    command     = <<EOT
      set -euo pipefail
      docker build --platform=linux/amd64 -t lambda-layer -f ${path.module}/../Dockerfile ${path.module}/..
      id=$(docker create lambda-layer)
      rm -rf ${path.module}/../lambda/build && mkdir -p ${path.module}/../lambda/build
      docker cp "$id":/opt/python ${path.module}/../lambda/build/
      docker rm "$id"
      (cd ${path.module}/../lambda/build && zip -qr ../python_helper.zip python)
    EOT
  }
}

Sometimes a Lambda function needs libraries that are not included by default; instead of bundling them into every function, you can put them in a Lambda Layer and share it across functions.

This null_resource automates building that layer: Docker installs the dependencies into the /opt/python directory (where the Lambda runtime expects them), and the script then zips them into python_helper.zip.

The triggers section makes sure the layer rebuilds only when requirements.txt or Dockerfile changes. Without this automation you would need to maintain bash scripts or rebuild the layer manually every time dependencies change, which is easy to forget and harder to keep consistent across environments.
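A rough Python equivalent of what Terraform's filemd5 computes for each trigger, to make the mechanism concrete:

```python
import hashlib

def filemd5(path):
    """Equivalent of Terraform's filemd5(): hex-encoded MD5 of the file's
    bytes. The null_resource re-runs only when a trigger value changes,
    i.e. only when the underlying file's content changes."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()
```

Two identical files always produce the same trigger value, so re-running terraform apply without touching requirements.txt or the Dockerfile will not rebuild the layer.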

We need to publish a new Layer version from the zip we just created in the previous step:

resource "aws_lambda_layer_version" "python_helper" {
  filename            = "${path.module}/../lambda/python_helper.zip"
  layer_name          = "python-helper"
  compatible_runtimes = ["python3.12"]
  # Force new version when file changes
  source_code_hash    = filebase64sha256("${path.module}/../lambda/python_helper.zip")
  depends_on          = [null_resource.build_layer]
}

Once the dependencies are packaged into the zip file, Terraform needs to publish it as a Lambda Layer version, which is exactly what this resource does. It points to the generated zip file; the important part is source_code_hash, which forces Terraform to create a new version any time the zip file changes, so Lambda always gets the latest dependencies. filebase64sha256 calculates a Base64-encoded SHA-256 hash of the file so Terraform can track its exact content: if the file changes, the hash changes, which tells Terraform to redeploy it.

The depends_on ensures that the layer is rebuilt by the null_resource before being uploaded.
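For completeness, here is a rough Python equivalent of filebase64sha256, the counterpart of the filemd5 trigger above:

```python
import base64
import hashlib

def filebase64sha256(path):
    """Equivalent of Terraform's filebase64sha256(): Base64-encoded SHA-256
    digest of the file's raw bytes, the format that source_code_hash
    expects when deciding whether to publish a new version."""
    with open(path, "rb") as f:
        return base64.b64encode(hashlib.sha256(f.read()).digest()).decode()
```

Note that it encodes the raw digest, not its hex representation, which is why you cannot simply paste a sha256sum output into source_code_hash.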

Now we can set up the Terraform resource for the Lambda functions:

resource "aws_lambda_function" "create_user_lambda" {
  function_name    = var.create_user_lambda.name
  handler          = "create-user-lambda.lambda_handler"
  runtime          = var.create_user_lambda.runtime
  role             = aws_iam_role.lambda_exec.arn
  filename         = data.archive_file.lambda_zip["create_user_lambda"].output_path
  source_code_hash = data.archive_file.lambda_zip["create_user_lambda"].output_base64sha256
  memory_size      = var.create_user_lambda.memory_size
  timeout          = var.create_user_lambda.timeout
  layers           = [aws_lambda_layer_version.python_helper.arn]
  environment {
    variables = {
      EXAMPLE = "test"
    }
  }
  vpc_config {
    subnet_ids         = [...]
    security_group_ids = [...]
  }
  tags = local.common_tags
}

In variables.tf:

variable "create_user_lambda" {
  description = "Creates new user in database."
  type = object({
    name        = string
    runtime     = string
    memory_size = number
    timeout     = number
  })
  default = {
    name        = "create-user"
    runtime     = "python3.12"
    memory_size = 128
    timeout     = 60
  }
}

Additionally, the Lambda function needs permissions to run. An IAM role must be attached so that the function knows what it is allowed to do. The customer-managed policy gives it permission to write logs to CloudWatch, and the AWS-managed policy allows the function to connect to resources inside the VPC, such as RDS or Redis.

# IAM Role for Lambda
resource "aws_iam_role" "lambda_exec" {
  name = "lamdorfbau-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action    = "sts:AssumeRole",
      Principal = { Service = "lambda.amazonaws.com" },
      Effect    = "Allow"
    }]
  })

  tags = local.common_tags
}

# Customer-managed policy for CloudWatch Logs and other required Lambda permissions
resource "aws_iam_policy" "lambda_combined_policy" {
  name = "lambda-combined-policy"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "*"
      }
    ]
  })

  tags = local.common_tags
}

# Attach the CloudWatch Logs policy to the Lambda role
resource "aws_iam_role_policy_attachment" "lambda_combined_attach" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = aws_iam_policy.lambda_combined_policy.arn
}

# Attach AWS managed policy for VPC networking (required for Lambda in VPC)
resource "aws_iam_role_policy_attachment" "lambda_vpc_execution" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}

Highlights

Here are the highlights of the setup:

  • Using locals and for_each keeps the configuration DRY, so you can easily add or remove Lambda functions without duplicating larger blocks of the Terraform code
  • Packaging Lambda code and dependencies into zips and layers ensures consistent, repeatable deployments across environments
  • Docker-based layer builds guarantee native dependencies are compiled for the correct runtime
  • Versioning with source_code_hash and filebase64sha256 ensures every change creates a new version

Conclusion

In this post, we walked through a simple yet production-ready workflow to package functions, build a shared layer with Docker, and deploy them using Terraform. By automating zipping, hashing, and versioning, you avoid manual steps and ensure that every change is tracked and reproducible. This approach shows how to deploy new AWS Lambda versions with Terraform, along with Lambda Layers, in a way that is reliable, repeatable, and easy to maintain across environments.

If this blog saved you time, support me with a coffee!

Thanks to everyone who’s supported!