
Splunking CloudWatch Metric Streams

AWS CloudWatch metrics provide a very useful means of building out a monitoring solution across your AWS cloud resources. For years now, the Splunk Add-on for Amazon Web Services has provided the ability to ingest these CloudWatch metrics, by polling the AWS API. In this article we will look at a new way of ingesting AWS CloudWatch metrics, namely CloudWatch Metric Streams.

API polling of AWS CloudWatch metrics has always had some issues, most notably the lag between a metric being recorded and it being available in Splunk, and the cost and rate limits that come with polling the CloudWatch API at scale.

Earlier this year AWS launched CloudWatch Metric Streams, enabling continuous delivery of CloudWatch metrics to a Kinesis Data Firehose. This results in faster delivery of the metric data and is a more scalable solution than API polling. The feature was created with delivery to partner systems in mind: alongside Splunk, delivery of data to New Relic, Datadog, Dynatrace and Sumo Logic is also supported. In the case of Splunk, this new metric ingestion method is supported on Splunk Enterprise as well as Splunk Infrastructure Monitoring (previously SignalFx). This AWS Blog Post from March 2021 provides an overview.
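Each record delivered by a metric stream using the JSON output format is a newline-delimited JSON document. As a rough illustration, here is a sample record (field values invented; field names follow the JSON output format) and how it might be parsed:

```python
import json

# One newline-delimited record as delivered by a CloudWatch metric stream
# using the JSON output format (all values here are illustrative only).
record = (
    '{"metric_stream_name": "demo-stream", "account_id": "123456789012", '
    '"region": "eu-west-1", "namespace": "AWS/EC2", "metric_name": "CPUUtilization", '
    '"dimensions": {"InstanceId": "i-0abcd1234"}, "timestamp": 1620000000, '
    '"value": {"max": 9.7, "min": 1.2, "sum": 21.4, "count": 5.0}, "unit": "Percent"}'
)

metric = json.loads(record)
print(metric["namespace"], metric["metric_name"], metric["value"]["max"])
# → AWS/EC2 CPUUtilization 9.7
```

Note that the statistics (max/min/sum/count) arrive pre-aggregated per period, rather than as individual datapoints.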

Stream CloudWatch Metrics To Splunk

The setup is quite straightforward; however, there are some key points to note.

Pre-requisites

You’ll need to have the following in place before setting up CloudWatch metric streams:

  * A Splunk environment with the HTTP Event Collector (HEC) enabled and reachable from AWS
  * A HEC token and a target index for the incoming data
  * AWS permissions to create Kinesis Data Firehose delivery streams and CloudWatch metric streams

Steps to implement

There are essentially two steps to complete on the AWS side:

  1. Set up a Kinesis Data Firehose to send data to Splunk
  2. Publish your CloudWatch metrics to the Kinesis Data Firehose
Setup Kinesis Data Firehose

From the AWS Console, search for “Kinesis”. Move your cursor over the “Kinesis” result and the “Kinesis Data Firehose” option will appear; click on it.

Accessing the Kinesis Data Firehose section

On the “Delivery Streams” page click on “Create Delivery Stream”:

Create a Kinesis Data Firehose by clicking on Create Delivery Stream
Choose Source and Destination

Now we start to configure the Kinesis Data Firehose. The first settings are essentially two simple questions:

  * Where is the data coming from (source)?
  * Where is it going (destination)?

For the source we choose “Direct PUT”, i.e. something will put data directly onto the delivery stream. For the destination we choose Splunk:

Source and Destination Options for Kinesis Data Firehose
Transform Records

This optional setting allows for a lambda function to process the data prior to sending it to the destination. For now we will skip this section but in part two of this article we will show how it can be used.
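To give a feel for the mechanism ahead of part two: a Firehose transformation Lambda receives a batch of base64-encoded records and must return each record with its recordId, a result status, and re-encoded data. A minimal identity-transform sketch in Python (the event/response shape is the standard Firehose Lambda contract; the actual reshaping is left as a placeholder):

```python
import base64
import json

def lambda_handler(event, context):
    """Minimal Kinesis Data Firehose transformation: decode each record,
    (optionally) reshape it, then return it re-encoded with a status."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # ... reshape `payload` here (part two will cover converting it
        # to the Splunk event/metric formats) ...
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}

# Local smoke test with a fake Firehose event
fake = {"records": [{"recordId": "1",
                     "data": base64.b64encode(b'{"metric_name": "CPUUtilization"}').decode()}]}
print(lambda_handler(fake, None)["records"][0]["result"])
# → Ok
```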

Destination Settings

The following settings need to be completed:

  * Splunk cluster endpoint – the URL of your HEC endpoint (e.g. https://your-splunk-host:8088)
  * Splunk endpoint type – “Raw endpoint” or “Event endpoint”
  * Authentication token – the HEC token value
  * Retry duration – how long Firehose should retry delivery before writing the data to the backup S3 bucket

Kinesis Data Firehose – Splunk Destination Settings
Backup Settings

Data that fails to be delivered to Splunk will be saved to an S3 bucket. You can choose an existing bucket (Browse) or create a new one (Create).

Advanced Settings

The advanced settings (server-side encryption, error logging, tags, permissions) can generally be left at their defaults; note that CloudWatch error logging is enabled by default and is used in the troubleshooting section below.

Finally…

Click “Create delivery stream”.

Publish CloudWatch Metric Streams to Firehose

From “CloudWatch Metrics” in the AWS Console, choose the “Streams” option from the left-hand menu, then click “Create metric stream”. There are a small number of settings that need to be completed:

  * Metrics to be streamed – all metrics, or only selected namespaces
  * Configuration – select the existing Firehose delivery stream created above
  * Change output format – JSON (used in this article) or OpenTelemetry
  * Custom metric stream name – a name for the stream

Checking Output

Search your index in Splunk and you should now have events coming through. To parse the incoming data, the btool command output below shows the props settings that were added:

/opt/splunk/bin/splunk cmd btool props list aws:firehose:json --debug

/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf [aws:firehose:json]
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf LINE_BREAKER = ([\n\r]+){"metric_stream_name"
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 10
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf SHOULD_LINEMERGE = false
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf TIME_FORMAT = %s
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf TIME_PREFIX = "timestamp":
/opt/splunk/etc/apps/cloudwatch_firehose/default/props.conf TRUNCATE = 200000

This results in events that are well formed JSON and have the timestamp correctly extracted:
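As a quick sanity check of those settings, the timestamp extraction they describe can be mimicked in a few lines of Python (the sample event is invented; the regex stands in for TIME_PREFIX plus MAX_TIMESTAMP_LOOKAHEAD):

```python
import re
from datetime import datetime, timezone

# An invented event in the shape the metric stream delivers
event = '{"metric_stream_name":"demo","timestamp":1620000000,"metric_name":"CPUUtilization"}'

# Mimic the props.conf above: find the epoch value after "timestamp":
# (TIME_PREFIX) and read at most 10 digits (MAX_TIMESTAMP_LOOKAHEAD) as %s.
match = re.search(r'"timestamp":\s*(\d{1,10})', event)
ts = datetime.fromtimestamp(int(match.group(1)), tz=timezone.utc)
print(ts.isoformat())
# → 2021-05-03T00:00:00+00:00
```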

CloudWatch CPU Metric via CloudWatch Streaming and Firehose

This is perfectly usable but leaves us with a couple of potential problems:

  1. The format of the events and their metadata differs from that produced by the Splunk Add-on for Amazon Web Services, so the data won’t work with the Splunk App for AWS or any existing searches, dashboards or ITSI KPIs you have defined
  2. The data is delivered as events, not as Splunk metrics
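To make the second point concrete, here is a sketch of the kind of reshaping a transformation function needs to do: turning one stream record into a payload for Splunk’s HEC metrics input (the multiple-metric JSON shape; the metric naming convention used here is purely illustrative):

```python
import json

# A raw metric-stream record (illustrative values)
raw = {
    "metric_stream_name": "demo-stream",
    "namespace": "AWS/EC2",
    "metric_name": "CPUUtilization",
    "dimensions": {"InstanceId": "i-0abcd1234"},
    "timestamp": 1620000000,
    "value": {"max": 9.7, "min": 1.2, "sum": 21.4, "count": 5.0},
    "unit": "Percent",
}

def to_hec_metric(record):
    """Reshape one stream record into a Splunk HEC metrics payload,
    emitting one metric per statistic (max/min/sum/count)."""
    name = f'{record["namespace"].replace("/", ".")}.{record["metric_name"]}'
    fields = {f"metric_name:{name}.{stat}": val
              for stat, val in record["value"].items()}
    fields.update(record["dimensions"])  # dimensions become metric dimensions
    return {"time": record["timestamp"], "event": "metric",
            "source": record["metric_stream_name"], "fields": fields}

print(json.dumps(to_hec_metric(raw), indent=2))
```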

In part two we will look at addressing this through use of a lambda transformation function, plugged in to the Kinesis Data Firehose.

Troubleshooting

You can use the CloudWatch Logs written by the Kinesis Data Firehose to investigate any issues sending the data to Splunk.

Use CloudWatch Log Insights to check for errors
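If error logging was left enabled on the delivery stream, Firehose writes delivery errors to a log group named /aws/kinesisfirehose/&lt;your-delivery-stream-name&gt; (the exact name depends on what you called your stream). A simple Logs Insights query to surface recent errors might look like:

```
fields @timestamp, @message
| filter @message like /error/
| sort @timestamp desc
| limit 20
```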

Need some additional resource to help deliver your AWS Monitoring on Splunk? Click here to get in touch


For 2021 we’ve committed to posting a new Splunk tip every week!

If you want to keep up to date on tips like the one above then sign up below:

Subscribe to our newsletter to receive regular updates from iDelta, including news and updates, information on upcoming events, and Splunk tips and tricks from our team of experts. You can also find us on Twitter and LinkedIn.
