If you need monitoring, just shout!

Let’s explore how you can send metrics from Lambda to CloudWatch via CloudWatch Logs.
On the mic

AWS provides a lot of low-level monitoring for Lambda functions out-of-the-box: invocations, duration, errors, throttles — you name it. But if you want to monitor aspects of your business domain in CloudWatch, you have to do this yourself.


Posting metrics directly from Lambda

Let’s say you’re running a service that allows users to upload files directly to S3. You have a Lambda function in place to process those uploads (e.g. for resizing or copying uploaded files). To do that, you set up a bucket notification, so your Lambda function is invoked with this event on every upload (incomplete payload for brevity):

{
  "Records": [
    {
      "eventTime": "1970-01-01T00:00:00.000Z",
      "s3": {
        "object": {
          "eTag": "0123456789abcdef0123456789abcdef",
          "key": "HappyFace.jpg",
          "size": 1024
        },
        "bucket": {
          "arn": "arn:aws:s3:::mybucket",
          "name": "sourcebucket"
        }
      },
      "eventName": "ObjectCreated:Put"
    }
  ]
}

Notice the file size as part of the input (Records[0].s3.object.size)? Let’s assume you’d like to monitor that for each upload to see if file uploads are growing on average over time.

Assuming you opted for Node.js when developing your Lambda function, you might come up with this code:

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch();

exports.handler = async (event) => {

  const params = {
    MetricData: [{
      MetricName: 'upload_size',
      StorageResolution: 60,
      Timestamp: event.Records[0].eventTime,
      Unit: 'Bytes',
      Value: event.Records[0].s3.object.size
    }],
    Namespace: 'uploader'
  };

  try {
    await cloudwatch.putMetricData(params).promise();
  } catch (err) {
    console.log(err, err.stack);
  }
};

This is pretty straightforward: construct the service interface object, assemble the metric payload, and send it to the service.

This solution is fine if your Lambda function is small and posts very few metrics to CloudWatch. But what if your Lambda function is already quite big, or you want to avoid using monitoring-specific parameters like Unit and StorageResolution in the code?

Separating business logic from monitoring

If you want to keep business logic and monitoring nicely separated, consider logging your monitoring to stdout and filtering the logs into CloudWatch metrics using Metric Filters.

That way, monitoring is not yet another piece of code that relies on a network connection where you have to think about control flow or even retries. It’s essentially a single call to console.log that outputs JSON and gets picked up by your infrastructure and turned into metrics.

Revisiting the Lambda function

Here’s the new version of the above function:

exports.handler = async (event) => {

  const record = event.Records[0];
  console.log(JSON.stringify({
    created_at: record.eventTime,
    key: record.s3.object.key,
    metric_name: 'upload_size',
    name: 'uploader:file_uploaded',
    size: record.s3.object.size
  }));

};

Note that we can remove the dependency on aws-sdk and skip the error handling for the network call. We also got rid of all CloudWatch-specific parameters.

You’re now shouting out your monitoring needs, hoping somebody hears you.
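A nice side effect of this separation is that the function becomes trivial to exercise locally, without any AWS credentials. Here’s a minimal sketch that invokes a handler like the one above with an abbreviated, made-up S3 event and captures the single JSON line it writes to stdout:

```javascript
// The same handler as above, defined locally for this sketch.
const handler = async (event) => {
  const record = event.Records[0];
  console.log(JSON.stringify({
    created_at: record.eventTime,
    key: record.s3.object.key,
    metric_name: 'upload_size',
    name: 'uploader:file_uploaded',
    size: record.s3.object.size
  }));
};

// Abbreviated sample event; values are made up for illustration.
const sampleEvent = {
  Records: [{
    eventTime: '1970-01-01T00:00:00.000Z',
    s3: { object: { key: 'HappyFace.jpg', size: 1024 } }
  }]
};

// Capture the log line instead of printing it, just for this sketch.
let logged;
const originalLog = console.log;
console.log = (line) => { logged = line; };
handler(sampleEvent); // the body runs synchronously up to console.log
console.log = originalLog;
```

Since the handler’s only side effect is a log line, there is nothing to mock beyond `console.log`.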

Turning this JSON output into a CloudWatch metric

What’s missing is the glue between the JSON output and CloudWatch. Effectively, we want AWS to filter the logs for a specific pattern and extract the metrics we need.

The syntax for those patterns is quite powerful:

You can also use conditional operators and wildcards to create exact matches.

– Filter and Pattern Syntax

If you’re using Terraform, the following configuration will create a Metric Filter that turns the JSON output into the same CloudWatch metric that the original Lambda function created (cf. aws_cloudwatch_log_metric_filter resource documentation):

resource "aws_cloudwatch_log_metric_filter" "lambda-uploader-file-size" {
  name = "lambda-uploader-file-size"

  log_group_name = "lambda-notify-uploaded"
  pattern        = "{ $.name = \"uploader:file_uploaded\" && $.metric_name = \"upload_size\" }"

  metric_transformation {
    namespace = "uploader"
    name      = "upload_size"
    value     = "$.size"
  }
}

The given pattern instructs the filter to:

  1. look for JSON
  2. …with the name attribute being “uploader:file_uploaded”
  3. …and the metric_name attribute being “upload_size”

This first part just describes how we want to find what we’re looking for. Once we find it, we describe how we want this to be transformed into a metric. In this case, the size attribute will be taken as the value for CloudWatch Metrics.
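Conceptually, the filter behaves like the following sketch. To be clear, the real matching happens inside CloudWatch Logs; this is only an illustration of the pattern’s semantics, not code you would deploy:

```javascript
// Conceptual model of the Metric Filter: does a log line match the
// pattern, and if so, which value becomes the metric datapoint?
const matchesFilter = (logLine) => {
  let doc;
  try {
    doc = JSON.parse(logLine);
  } catch (e) {
    return false; // non-JSON lines never match a JSON pattern
  }
  return doc.name === 'uploader:file_uploaded' &&
         doc.metric_name === 'upload_size';
};

// "$.size" in the metric_transformation selects the size attribute.
const extractValue = (logLine) =>
  matchesFilter(logLine) ? JSON.parse(logLine).size : null;
```

Every log line the function emits that matches the pattern becomes one datapoint in the `uploader`/`upload_size` metric.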

Beyond Terraform

If you’re not using Terraform, you can also create the same Metric Filter using the AWS CLI. If you prefer using the AWS Console instead, keep reading.
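With the AWS CLI, the equivalent command looks roughly like this (using the same names as the Terraform example; it needs valid AWS credentials and an existing log group, so treat it as a sketch rather than a copy-paste recipe):

```shell
# Create the same Metric Filter via the AWS CLI.
aws logs put-metric-filter \
  --log-group-name "lambda-notify-uploaded" \
  --filter-name "lambda-uploader-file-size" \
  --filter-pattern '{ $.name = "uploader:file_uploaded" && $.metric_name = "upload_size" }' \
  --metric-transformations \
      metricName=upload_size,metricNamespace=uploader,metricValue='$.size'
```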

You can access Metric Filters from your Lambda function’s Log Group:

Screenshot of AWS Console

And from there, you can add Metric Filters and connect them to CloudWatch Alarms:

Screenshot of AWS Console

Wrap-up

You can simplify the control flow in your Lambda function by synchronously outputting your monitoring data as JSON. From there, you can let your AWS infrastructure take care of transforming this JSON into a metric.

This approach might help to make your Lambda functions easier to understand: Your code is not cluttered with monitoring-specific details. It also makes them easier to change (for instance, when the interface between your code and your monitoring is JSON instead of network interaction, it is easier to rewrite in another language).

Please be aware that using Metric Filters is an indirection, and it comes at a price: this solution adds architectural complexity. You end up with more moving parts and have to carefully consider whether it’s worth it.

Last but not least: be sure to explore CloudWatch Metric Filters further; they are a powerful tool.

Photo by Vidar Nordli-Mathisen on Unsplash
