Stream CloudWatch Metrics to third party with Terraform
Why stream metrics to a third party if you are already using CloudWatch? Large companies often standardize on one place to monitor everything, and every team is obligated to integrate with that specific tool. Also, tools designed purely for monitoring often offer more features and smarter alerting and analytics. The same goes for dashboards and integrations, since those tools usually integrate well with Slack, Jira, PagerDuty, etc.
In this blog post, I will show you how to stream CloudWatch metrics to a third party with Terraform by building a pipeline that streams to Dynatrace using CloudWatch Metric Streams, Kinesis Firehose, and an S3 bucket.
What components we need for this?
To stream AWS CloudWatch metrics to an external platform like Dynatrace, we need a few AWS services working together:
CloudWatch Metric Stream – service used to continuously push selected metrics in near real-time.
Kinesis Firehose Delivery Stream – acts as a pipeline that takes metrics from CloudWatch and delivers them to an external destination, which in this case is an HTTP endpoint (Dynatrace or anything else). It also handles the retry and backup logic.
S3 bucket – used to store failed deliveries from the Firehose stream so that no data is lost.
IAM Roles & Policies – as always used to control which services can talk to each other:
- CloudWatch needs permission to push to Firehose Stream
- Firehose Stream needs permission to call Dynatrace API and write to S3
Metrics flow
The flow is simple: the CloudWatch Metric Stream pushes selected namespaces (e.g., AWS/ApiGateway, AWS/Timestream) to Kinesis Firehose. Firehose then forwards the data to a Dynatrace HTTP endpoint, while backing up failed deliveries to an encrypted S3 bucket.
CloudWatch Metric Stream
resource "aws_cloudwatch_metric_stream" "main" {
  name          = var.dynatrace_metric_stream_name
  role_arn      = aws_iam_role.metric_stream_to_firehose_role.arn
  firehose_arn  = aws_kinesis_firehose_delivery_stream.dynatrace_metric_stream_firehose.arn
  output_format = "opentelemetry1.0"

  dynamic "include_filter" {
    for_each = var.metric_namespaces
    content {
      namespace = include_filter.value
    }
  }
}
variable "metric_namespaces" {
  type    = list(string)
  default = ["AWS/Timestream", "AWS/ApiGateway", "AWS/Kafka"]
}

This resource expects the ARN of the Kinesis Firehose to which it should stream the data, as well as the ARN of an IAM role that allows it to call PutRecord.
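The remaining inputs referenced by the stream resource can be declared the same way; a minimal sketch, with a placeholder default value:

```hcl
variable "dynatrace_metric_stream_name" {
  type    = string
  default = "dynatrace-metric-stream" # placeholder default, pick your own name
}
```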
resource "aws_iam_role" "metric_stream_to_firehose_role" {
  name               = "MetricStreamToFirehoseRole"
  description        = "IAM role to allow putting records to Kinesis Firehose"
  assume_role_policy = data.aws_iam_policy_document.streams_assume_role.json
}

This IAM role is assumed by the CloudWatch Metric Stream service:
data "aws_iam_policy_document" "streams_assume_role" {
  statement {
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["streams.metrics.cloudwatch.amazonaws.com"]
    }
    actions = ["sts:AssumeRole"]
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [data.aws_caller_identity.current.account_id]
    }
  }
}

This defines the trust relationship that allows the CloudWatch Metric Stream to assume the above role.
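The trust policy references data.aws_caller_identity.current, which is not shown elsewhere in the snippets; it is the standard data source that resolves the current AWS account ID:

```hcl
# Resolves the account ID used as the ExternalId condition in the trust policy.
data "aws_caller_identity" "current" {}
```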
Now we have the IAM policy:
data "aws_iam_policy_document" "metric_stream_to_firehose" {
  statement {
    effect = "Allow"
    actions = [
      "firehose:PutRecord",
      "firehose:PutRecordBatch"
    ]
    resources = [aws_kinesis_firehose_delivery_stream.dynatrace_metric_stream_firehose.arn]
  }
}

This defines the permissions allowing the role to write to the Firehose stream.
Finally, we attach the above policy to the role:
resource "aws_iam_role_policy" "metric_stream_to_firehose" {
  name   = "DynatraceMetricStreamToFirehosePolicy"
  role   = aws_iam_role.metric_stream_to_firehose_role.id
  policy = data.aws_iam_policy_document.metric_stream_to_firehose.json
}

Kinesis Firehose Delivery Stream
This resource sets up a Kinesis Firehose that sends metrics directly to the 3rd party HTTP API (Dynatrace in this case).
resource "aws_kinesis_firehose_delivery_stream" "dynatrace_metric_stream_firehose" {
  name        = var.dynatrace_metric_stream_firehose
  destination = "http_endpoint"

  http_endpoint_configuration {
    url                = var.dynatrace_url
    name               = "Dynatrace"
    access_key         = local.dynatrace_api_token
    buffering_size     = var.http_endpoint_buffering_size
    buffering_interval = var.http_endpoint_buffering_interval
    role_arn           = aws_iam_role.firehose_http_role.arn
    s3_backup_mode     = "FailedDataOnly"
    retry_duration     = var.http_endpoint_retry_duration

    s3_configuration {
      role_arn           = aws_iam_role.firehose_to_s3.arn
      bucket_arn         = module.metric_stream_s3.bucket_arn
      buffering_size     = var.s3_buffering_size
      buffering_interval = var.s3_buffering_interval
    }

    request_configuration {
      content_encoding = "GZIP"
      common_attributes {
        name  = "dt-url"
        value = var.dynatrace_api_url
      }
    }
  }
}

As you can see, it delivers metrics using the Dynatrace API URL and access key. Data is buffered before sending, and you can tune the buffer size and interval depending on your needs. The S3 configuration handles backup: if delivery fails, the data is backed up in S3. The request configuration ensures that data is compressed (GZIP) and tagged with the Dynatrace URL. Note the new IAM role, firehose_to_s3; here is the code sample:
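The firehose_http_role and local.dynatrace_api_token referenced above are not shown in the snippets. A minimal sketch of how they could look, assuming the token lives in AWS Secrets Manager (the role name and secret name are placeholders):

```hcl
# Trust policy letting the Firehose service assume roles.
# (This same document is also used by the firehose_to_s3 role below.)
data "aws_iam_policy_document" "firehose_assume_role" {
  statement {
    effect = "Allow"
    principals {
      type        = "Service"
      identifiers = ["firehose.amazonaws.com"]
    }
    actions = ["sts:AssumeRole"]
  }
}

resource "aws_iam_role" "firehose_http_role" {
  name               = "MetricStreamFirehoseHttpRole" # assumed name
  assume_role_policy = data.aws_iam_policy_document.firehose_assume_role.json
}

# Hypothetical: read the Dynatrace API token from Secrets Manager
# instead of hardcoding it in the configuration.
data "aws_secretsmanager_secret_version" "dynatrace_token" {
  secret_id = "dynatrace/api-token" # placeholder secret name
}

locals {
  dynatrace_api_token = data.aws_secretsmanager_secret_version.dynatrace_token.secret_string
}
```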
resource "aws_iam_role" "firehose_to_s3" {
  name               = "MetricStreamFirehoseToS3Role"
  description        = "IAM Role for Kinesis Firehose to S3"
  assume_role_policy = data.aws_iam_policy_document.firehose_assume_role.json
}
data "aws_iam_policy_document" "firehose_to_s3_access" {
  statement {
    effect = "Allow"
    actions = [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject",
    ]
    resources = [
      module.metric_stream_s3.bucket_arn,
      "${module.metric_stream_s3.bucket_arn}/*",
    ]
  }

  statement {
    effect = "Allow"
    actions = [
      "kms:Decrypt",
      "kms:GenerateDataKey"
    ]
    resources = [
      aws_kms_key.this.arn
    ]
  }
}
resource "aws_iam_role_policy" "firehose_to_s3" {
  name   = "DynatraceMetricStreamFirehoseToS3Policy"
  role   = aws_iam_role.firehose_to_s3.id
  policy = data.aws_iam_policy_document.firehose_to_s3_access.json
}

This IAM role and policy are needed because Kinesis Firehose requires permission to store data in the S3 bucket. The role is assumed by Firehose, so once again we have a trust relationship. We allow Firehose specific actions on the S3 bucket (created by a Terraform module in this case) and on all objects inside the bucket. The policy also includes a KMS statement because the S3 bucket uses a KMS key for encryption.
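The aws_kms_key.this referenced in the KMS statement is not shown above; a minimal sketch (the description and rotation setting are assumptions, not from the original setup):

```hcl
# Customer-managed key used to encrypt the Firehose backup bucket.
resource "aws_kms_key" "this" {
  description         = "Encrypts the metric stream backup bucket" # assumed
  enable_key_rotation = true                                       # assumed
}
```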
Conclusion
Streaming CloudWatch metrics with Terraform enables a robust, automated observability pipeline. By integrating CloudWatch Metric Streams, Kinesis Firehose, IAM roles, and S3, you can securely forward metrics to third-party tools like Dynatrace. With proper buffering and fallback strategies, this setup ensures both reliability and data retention in case of delivery failures.
If this blog saved you time, support me with a coffee!
Thanks to everyone who’s supported!