

Cloud cost control with proactive monitoring

CASE STUDY
    How automatic AWS and Snowflake budget alerts help trigger timely remedial actions

    If you run large marketing campaigns, you probably know the problem: cloud costs often show up as a surprise only when the invoice arrives.

    In this case study, we show how an automatic budget warning triggers as soon as monthly spend exceeds 130 percent of the planned budget. The threshold is deliberate: signals that come too early are often ignored. The system runs in DEV and PROD and frees teams from daily dashboard routines, so the focus stays on control and marketing impact.

    The result is more predictable budgets, lower risk, and faster action in day-to-day campaign work.

    01 The Challenge


    Cloud costs are often an unpredictable item in the marketing budget. They can fluctuate from month to month, especially when your pipelines pull data from multiple sources and require extensive transformations.

    For companies running large-scale marketing campaigns, this volatility can be risky. Data pipelines can scale quickly, and if spending is not closely monitored, invoices for AWS (Amazon Web Services) and Snowflake can spike without warning. Relying exclusively on manual checks is time-consuming and carries the risk of overlooking these costs until it is too late.

    An e-commerce company came to us with exactly this challenge. The goal was to introduce automated cost monitoring to regain control over the cloud budget.

    02 Solution


    To overcome this challenge, we built an automated cost alerting system and deployed it with Terraform.

    Here’s what it does:

    • Tracks monthly spending in AWS and Snowflake.
    • Sends email alerts if spending climbs above 130% of the budget.

    This way, teams don’t need to log in to dashboards daily. Instead, they get notified the moment something unusual happens, so they can act before costs spiral.

    A cautious approach would alert at 70% of the budget, but for this use case we chose a more pragmatic strategy. Repeated 70% notifications over several months tend to be ignored under the assumption that “everything is under control,” so we implemented a higher threshold (130%).

    03 How it works: Step by Step

    In the next section we walk through the process step by step. You can also have a look at the complete code in the GitHub repository.

    From setting up DEV and PROD to defining budgets and configuring email notifications, everything is explained so non-technical readers can follow. The goal is a reliable setup that makes costs transparent and alerts in time.


    1. ENVIRONMENT SETUP

    This pipeline is designed to operate across two environments — DEV and PROD.

    • DEV (Development) is used for testing and validating configuration changes, ensuring that updates to Terraform code or alerting logic work correctly before they impact production systems.
    • PROD (Production) is the live environment that monitors and enforces cost alerts on actual cloud resources.

    Each environment corresponds to a separate AWS account, ensuring clear separation of workloads, cost visibility, and infrastructure management.

    In each AWS account (DEV and PROD), we manually created an S3 bucket to store the terraform.tfstate file.

    This ensures that Terraform can maintain a consistent and secure record of your deployed infrastructure.

    Once created, the S3 bucket name, key, and region were hard-coded in:

    • backend/dev.tfbackend
    • backend/prod.tfbackend

    Example backend/dev.tfbackend file:

    bucket = "terraform-s3-state-4637483747"
    key = "terraform.tfstate"
    region = "eu-central-1"

    2. DEFINE BUDGET

    Budgets aren’t static. They’re typically calculated based on past spending, so alerts adjust as usage changes over time.

    • For AWS:
      In the AWS Management Console → Billing and Cost Management, you can review spending by service and total monthly costs.
    • For Snowflake:
      In Snowflake → Admin → Cost Management, you can view credit and currency usage over a given time frame.

    Once we had reviewed past expenses and determined the budget limits, we entered those values into the locals.tf file.

    This ensured that the Terraform automation uses the correct budget limits when monitoring and triggering alerts.

    locals {
      aws_limits = {
        dev  = 10  # insert your AWS budget limit for DEV (integer or float)
        prod = 20  # insert your AWS budget limit for PROD (integer or float)
      }
    
      snowflake_limits = {
        dev  = 100  # insert your Snowflake budget limit for DEV (integer or float)
        prod = 200  # insert your Snowflake budget limit for PROD (integer or float)
      }
    
      aws_limit        = lookup(local.aws_limits, var.environment)
      snowflake_limit  = lookup(local.snowflake_limits, var.environment)
    }

    3. SET UP NOTIFICATION CHANNELS

    Notifications are handled via AWS SNS (Simple Notification Service).

    This ensures that budget alerts are promptly delivered to the relevant team members.

    • Multiple team members can subscribe to the SNS topic.
    • As soon as spending crosses the defined threshold, an email alert is automatically sent to all configured recipients.

    The Terraform configuration for these notification resources is defined in:

    • aws_alarm.tf → handles AWS budget alerts
    • snowflake_alarm.tf → handles Snowflake budget alerts

    Terraform Code Example:

    resource "aws_sns_topic" "aws_budget_alerts" {
      # AWS SNS topic for AWS budget notifications
      ...
    }
    
    resource "aws_sns_topic_subscription" "aws_budget_emails" {
      # Email subscriptions for AWS budget alerts
      ...
    }
    
    resource "aws_sns_topic" "snowflake_budget_alerts" {
      # AWS SNS topic for Snowflake budget notifications
      ...
    }
    
    resource "aws_sns_topic_subscription" "snowflake_budget_emails" {
      # Email subscriptions for Snowflake budget alerts
      ...
    }

    Upon deployment of the configuration, each subscriber received an email invitation to confirm their subscription.

    To define who receives the budget alerts, we added their email addresses to the variables.tf file under the budget_alert_emails variable:

    variable "budget_alert_emails" {
      type    = list(string)
      default = [
        "your-first-email-address",
        "your-second-email-address",
      ]
    }

    It is possible to include as many recipients as needed.
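One way such a list can drive the subscriptions is a `for_each` loop that creates one email subscription per address. This is an illustrative sketch, not the repository's exact code: the topic name and resource wiring are assumptions.

```hcl
# Sketch only: assumes var.budget_alert_emails and an SNS topic as described above.
resource "aws_sns_topic" "aws_budget_alerts" {
  name = "aws-budget-alerts-${var.environment}"
}

resource "aws_sns_topic_subscription" "aws_budget_emails" {
  # One email subscription per address in var.budget_alert_emails
  for_each  = toset(var.budget_alert_emails)
  topic_arn = aws_sns_topic.aws_budget_alerts.arn
  protocol  = "email"
  endpoint  = each.value
}
```

With `for_each`, adding or removing a recipient only requires editing the variable; Terraform creates or destroys the corresponding subscription on the next apply.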

    When Terraform is applied, all listed recipients will receive confirmation emails from AWS SNS and must accept the invitations to start receiving alerts.


    4. CONFIGURE AWS BUDGET

    AWS Budgets continuously tracks monthly AWS costs and compares them to the budget.

    We set the alert threshold at 130% of the defined budget limit. This buffer helps catch unusual cost jumps while minimizing false alarms.

    resource "aws_budgets_budget" "aws_monthly_cost_budget" {
      ...

      notification {
        comparison_operator = "GREATER_THAN"
        threshold           = 130
        threshold_type      = "PERCENTAGE"
        notification_type   = "ACTUAL"

        subscriber_sns_topic_arns = [
          aws_sns_topic.aws_budget_alerts.arn
        ]
      }
    }

    Whenever costs exceed 130% of the defined budget limit, AWS Budgets triggers a notification and then resets for the next month. With the PROD limit of 20 defined in locals.tf, for example, the alert fires once actual monthly spend passes 26.


    5. MONITOR SNOWFLAKE COSTS

    To automatically monitor Snowflake costs, a Lambda function queries Snowflake usage data daily and sends an alert if costs exceed the alert threshold.

    5.1. Create a Snowflake User and Role

    First, we created a dedicated Snowflake user and role so that AWS Lambda could securely connect and query cost data.

    To do it, we ran the following SQL commands directly in Snowflake:

    CREATE ROLE IF NOT EXISTS BILLING_MONITOR;
    
    GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE BILLING_MONITOR;
    GRANT USAGE ON WAREHOUSE COMPUTE_WH TO ROLE BILLING_MONITOR;
    
    CREATE USER IF NOT EXISTS LAMBDA_SNOWFLAKE_COST
      PASSWORD = ''
      DEFAULT_ROLE = BILLING_MONITOR
      DEFAULT_WAREHOUSE = COMPUTE_WH
      MUST_CHANGE_PASSWORD = FALSE;
    
    GRANT ROLE BILLING_MONITOR TO USER LAMBDA_SNOWFLAKE_COST;
    
    ALTER USER LAMBDA_SNOWFLAKE_COST SET RSA_PUBLIC_KEY='ABCD';

    • The public key is stored in Snowflake (RSA_PUBLIC_KEY).

    • The private key is kept securely in AWS Systems Manager Parameter Store (see next step).

    5.2 Store Credentials in AWS Parameter Store

    Next, we created an entry in AWS Systems Manager Parameter Store called snowflake_cost_alarm.

    This parameter securely stores the Snowflake connection credentials (user, account, role, warehouse, and private key).

    Example structure (values omitted for security):

    {
      "user": "lambda_snowflake_cost",
      "account": "your-snowflake-account",
      "warehouse": "default_wh",
      "role": "billing_monitor",
      "private_key": "-----BEGIN PRIVATE KEY-----\nABCDE..."
    }

    This allows the Lambda function to retrieve the connection credentials dynamically, without hard-coding sensitive information in the code.
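As a sketch of how the Lambda side might consume this parameter (the helper name and sample values here are our own illustration, not the repository's exact code): the raw SSM value is a JSON string, so a small function can validate it and turn it into a connection dict. The boto3 retrieval is shown as a comment so the parsing logic stands alone.

```python
import json

def parse_snowflake_credentials(raw_value: str) -> dict:
    """Parse the JSON stored in the snowflake_cost_alarm SSM parameter."""
    creds = json.loads(raw_value)
    # Fail fast if any expected connection field is missing.
    missing = {"user", "account", "warehouse", "role", "private_key"} - creds.keys()
    if missing:
        raise ValueError(f"Parameter is missing keys: {sorted(missing)}")
    return creds

# In the Lambda itself, the raw value would come from SSM, roughly:
#   ssm = boto3.client("ssm")
#   raw = ssm.get_parameter(Name="snowflake_cost_alarm",
#                           WithDecryption=True)["Parameter"]["Value"]

example = (
    '{"user": "lambda_snowflake_cost", "account": "your-snowflake-account", '
    '"warehouse": "default_wh", "role": "billing_monitor", '
    '"private_key": "-----BEGIN PRIVATE KEY-----..."}'
)
creds = parse_snowflake_credentials(example)
print(creds["user"])  # → lambda_snowflake_cost
```

Validating the parameter up front turns a misconfigured SSM entry into a clear error instead of a cryptic connection failure later in the run.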

    5.3 Lambda Function to Query Snowflake Usage

    A Lambda function runs daily to check Snowflake usage.

    It queries the SNOWFLAKE.ACCOUNT_USAGE.METERING_HISTORY view to calculate the total credit consumption for the current month.

    Python code snippet:

    first_day_of_current_month = today.replace(day=1)
    
    cur.execute(f"""
        SELECT SUM(CREDITS_USED) AS total_credits
        FROM SNOWFLAKE.ACCOUNT_USAGE.METERING_HISTORY
        WHERE DATE_TRUNC('month', START_TIME)::DATE = '{first_day_of_current_month}'
    """)
    result = cur.fetchone()
    credits_used = result[0] or 0
    
    logging.info(f"Credits used are {credits_used} for the month starting {first_day_of_current_month}")
    
    cost_usd = float(credits_used) * 2.60   # convert credits to USD at the contracted per-credit price
    threshold = float(snowflake_limit) * 1.30  # alert threshold: 130% of the budget limit
    
    # Compare against the threshold and send an SNS alert if exceeded
    if cost_usd > threshold:
        sns = boto3.client("sns")
        sns.publish(
            TopicArn=topic_arn,
            Subject=f"Snowflake cost alert for {environment} environment",
            Message=f"Snowflake cost for {first_day_of_current_month:%Y-%m} is ${cost_usd:.2f}, exceeding threshold ${threshold:.2f}."
        )

    The Lambda function automatically retrieves credentials from the AWS Systems Manager Parameter Store, connects to Snowflake using key-based authentication, and publishes alerts to the configured SNS topic if usage exceeds the alert threshold, which is calculated as 130% of the budget limit defined in locals.tf.
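The core alert decision reduces to a few lines of arithmetic. A self-contained sketch (the 2.60 USD credit price and the 130% factor are taken from the snippet above; the function name is our own):

```python
CREDIT_PRICE_USD = 2.60  # per-credit price used in the Lambda above
ALERT_FACTOR = 1.30      # alert at 130% of the budget limit

def should_alert(credits_used: float, budget_limit_usd: float) -> bool:
    """Return True when month-to-date Snowflake spend exceeds 130% of the budget."""
    cost_usd = credits_used * CREDIT_PRICE_USD
    return cost_usd > budget_limit_usd * ALERT_FACTOR

# With the PROD limit of 200 from locals.tf:
print(should_alert(80, 200))   # → False (80 credits ≈ $208, under the $260 threshold)
print(should_alert(110, 200))  # → True (110 credits ≈ $286, over the $260 threshold)
```

Keeping the decision in a pure function like this also makes the alerting logic trivially unit-testable, separate from the Snowflake query and SNS publishing.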

    5.4 Schedule Daily Execution via EventBridge

    The Lambda function is triggered by an Amazon EventBridge rule at 10:00 UTC each day.

    # ---------- EventBridge Rule ----------
    resource "aws_cloudwatch_event_rule" "daily" {
      name                = "snowflake-cost-checker-daily"
      description         = "Triggers Snowflake cost checker Lambda every day at 10:00 UTC"
      schedule_expression = "cron(0 10 * * ? *)"
    }
    
    # ---------- EventBridge Target ----------
    resource "aws_cloudwatch_event_target" "lambda_trigger" {
      rule      = aws_cloudwatch_event_rule.daily.name
      target_id = "SnowflakeCostCheckerLambda"
      arn       = aws_lambda_function.snowflake_cost_checker.arn
    }
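One detail the snippets above leave out: an EventBridge target alone does not authorize the invocation. The Lambda function also needs a resource-based permission allowing EventBridge to call it. A sketch, assuming the resource names from the snippet above:

```hcl
# ---------- Allow EventBridge to invoke the Lambda ----------
# Sketch only: resource names assume the EventBridge snippet above.
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.snowflake_cost_checker.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.daily.arn
}
```

Scoping the permission to the rule's ARN via source_arn ensures that only this specific schedule, and not any EventBridge rule, can trigger the function.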

    5.5 Permissions

    The Lambda execution role includes the required permissions to:

    • Retrieve parameters from AWS Systems Manager Parameter Store (SSM Parameter Store)
    • Publish messages to AWS SNS topics
    • Interact with Amazon EventBridge for scheduled triggers

    These IAM permissions (Identity and Access Management) ensure the secure, automated operation of the monitoring workflow.


    6. DEPLOYING THE INFRASTRUCTURE WITH TERRAFORM

    Once the Terraform code, budgets, SNS topics, and Lambda functions were configured, we could deploy the monitoring pipeline.

    The deployment workflow was identical for the DEV and PROD environments; the only differences were the backend configuration and the environment variable.

    6.1 Install Terraform

    If you would like to try it out yourself, please ensure that Terraform is installed on your machine. You can download it from the official Terraform website and follow the installation instructions for your operating system.

    Verify the installation:

    terraform version

    6.2 Initialize Terraform

    Initialize Terraform in your working directory. This downloads all required modules and providers and sets up the remote backend.

    For the DEV environment:

    terraform init -backend-config="backend/dev.tfbackend" -migrate-state

    For the PROD environment, simply change the backend file:

    terraform init -backend-config="backend/prod.tfbackend" -migrate-state

    The -migrate-state flag ensures that the state is properly migrated to the configured S3 bucket if needed.

    6.3 Review Planned Changes

    Before applying, check which resources will be created, modified, or destroyed:

    terraform plan -var environment="dev"

    For PROD:

    terraform plan -var environment="prod"

    Carefully review the plan output to ensure that the changes align with your expectations.

    6.4 Apply Changes

    Once the plan looks correct, deploy the changes:

    terraform apply -var environment="dev"

    Confirm the apply action when prompted.

    For PROD, run:

    terraform apply -var environment="prod"

    04 Key Takeaways


    Proactive monitoring of cloud costs isn’t just about technology; it’s about financial control.

    By automating alerts for AWS and Snowflake, companies can:

    • Avoid unexpected budget overruns

    • Save engineering time previously spent on manual checks

    • Make cloud costs more predictable, enabling better planning and resource allocation

    In short: monitor, save money, and stay in control.

    05 Results

    • AWS costs are continuously monitored
    • Snowflake costs are checked daily
    • Real-time notifications for anomalies
    • Fewer manual dashboard checks
    • More predictable budgets
    • Lower operational effort
    • Transparency and financial control

    06 The right email at the right time


    “It is important to us that teams can maintain an overview with our help. Notifications should only be sent when they are really necessary. That’s why we set the threshold at 130 percent. Warnings at 70 percent often lead to a habit of ignoring them. A precise email at the right moment prevents escalation and enables timely countermeasures to be taken before the bill comes as a surprise.”

    Federico Erroi, Senior Data Engineering Specialist
    Hopmann Marketing Analytics

    Curious how this approach could work for your team? Reach out — we’d love to share more.
