Logs from other AWS Services: CloudWatch -> Firehose -> S3

Using CloudWatch and Firehose to send logs to S3

Not all AWS services support direct exports to S3, but most support collecting logs in CloudWatch Log Groups. We can set up a CloudWatch -> Firehose -> S3 pipeline to automatically collect, buffer and batch log events into S3 files.

What is CloudWatch?

CloudWatch is an AWS service for monitoring your AWS resources. In this guide, we will focus on CloudWatch Log Groups, which collect log events from other AWS services.

The primary benefit of using CloudWatch is the ease of setup, given that it is the default logs destination for most AWS services.

What is Firehose?

Firehose is an AWS service that buffers and delivers data to a specified destination. In our use case, we will be collecting log events, either directly or from CloudWatch Log Groups, into S3 files.

The primary benefit of using Firehose is that it buffers and batches log events, so you are not writing a ton of tiny S3 files from every service you are running.

Should I use CloudWatch or Firehose?

  • If the service supports direct export to S3 (e.g. CloudTrail, VPC flow logs, WAF), and you don't need to buffer and batch them, you should export to S3 directly.

  • If the service supports export to either Firehose or CloudWatch (e.g. ElastiCache, ECS), you should export to Firehose directly.

  • If the service supports export only to CloudWatch (e.g. Lambda, SNS), you will need both for a [your service] -> CloudWatch -> Firehose -> S3 pipeline.

  • If the service does not offer out-of-the-box export to S3/Firehose/CloudWatch (e.g. EC2), it will require additional setup/configuration to collect those logs, e.g. via a CloudWatch agent.

  • This guide shows the S3/Firehose/CloudWatch support for each AWS service.

How to send logs to S3 via CloudWatch and Firehose?

Follow the steps below to set up your CloudWatch -> Firehose -> S3 pipeline:

A. Create an S3 bucket
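
If you prefer to script the setup, the sketches in this guide use boto3 (Python). Here is a minimal sketch for this step; the bucket name and region are placeholders, not values from the guide:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names are globally unique -- pick your own. Outside us-east-1 you
# must also pass CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket="my-log-archive-bucket")
```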

B. Create a Firehose Stream with an S3 destination

  1. AWS Console -> Amazon Data Firehose -> Create Firehose stream

  2. Firehose Stream configuration:

(Use default settings if not mentioned)

  • Choose source and destination

    • Source: Direct PUT

    • Destination: Amazon S3

  • Firehose stream name

    • Firehose stream name: e.g. ecs-logs-stream

  • Transform and convert records

    • Decompress source records from Amazon CloudWatch Logs: turn this on if the stream will receive logs through a CloudWatch subscription filter (CloudWatch delivers them gzip-compressed)

    • No need to turn on message extraction.

  • Destination settings

    • S3 bucket: <your bucket from step A>

    • Newline delimiter: Enabled

    • S3 bucket prefix: e.g. ecs/, lambda/, ec2/ if you plan to point multiple streams to the same bucket

    • Buffer hints, compression, file extension and encryption

      • Buffer interval: 60 seconds

      • Compression for data records: GZIP

      • File extension format: .json.gz (if applicable)

  3. Note that you might need to set up IAM roles/policies for services to send logs to Firehose, and for Firehose to write to the specified S3 bucket.
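
If you are scripting this step instead, the console settings above map onto Firehose's CreateDeliveryStream API. A minimal boto3 sketch, with placeholder names/ARNs and the IAM role from the note above assumed to exist:

```python
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Placeholder ARNs -- substitute your bucket and an IAM role that
# allows Firehose to write to it.
firehose.create_delivery_stream(
    DeliveryStreamName="ecs-logs-stream",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-role",
        "BucketARN": "arn:aws:s3:::my-log-archive-bucket",
        "Prefix": "ecs/",
        # Mirrors "Buffer interval: 60 seconds" above.
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        # Mirrors "Compression for data records: GZIP".
        "CompressionFormat": "GZIP",
    },
)
```

The console's newline-delimiter and CloudWatch-decompression toggles correspond to a ProcessingConfiguration on the stream, omitted here for brevity.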

C. If your AWS service can send logs to Firehose directly (AWS service -> Firehose -> S3), you are all set. If not, continue below to create a CloudWatch Log Group.

D. Create a CloudWatch Log Group

  1. Many AWS services will create a new CloudWatch Log Group by default (e.g. when creating a Lambda function).

  2. If not, create one in AWS Console -> CloudWatch -> Log Groups -> Create Log Group. Depending on the service, it might allow you to choose the log group when creating the resource.

  3. CloudWatch Log Group configuration:

  • Retention setting: 7 days (logs are archived to S3, so a short CloudWatch retention keeps costs down)

  • Log class: Standard (the Infrequent Access log class does not support subscription filters)

  4. Note that you might need to set up IAM roles/policies for services to send logs to CloudWatch.
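
The equivalent boto3 sketch for this step (the log group name is a placeholder; skip create_log_group if your service already created one):

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholder name -- many services create their own log group instead.
# Newer boto3 versions also accept logGroupClass="STANDARD" here.
logs.create_log_group(logGroupName="/ecs/my-service")

# Mirrors "Retention setting: 7 days" above.
logs.put_retention_policy(logGroupName="/ecs/my-service", retentionInDays=7)
```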

E. Create Subscription Filter to send CloudWatch logs to Firehose

  1. AWS Console -> CloudWatch -> Log Groups -> [your log group]

  2. Subscription filters -> Create -> Create Amazon Data Firehose subscription filter

  3. CloudWatch Log Group Subscription Filter configuration:

  • Choose destination

    • Amazon Data Firehose Stream: <your stream>

  • Grant permission

    • Select an existing role: <a role that CloudWatch Logs can assume to put records into your Firehose stream>

  • Configure log format and filters

    • Log Format: JSON (for most cases)

    • Subscription filter name: e.g. ecs-log-group-subscription-filter

  4. Click Start streaming. It will show an error message if, for example, the IAM role does not have the right permissions.
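
The same step as a boto3 sketch; the ARNs are placeholders, and the role is one that CloudWatch Logs can assume to put records into the stream:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

logs.put_subscription_filter(
    logGroupName="/ecs/my-service",
    filterName="ecs-log-group-subscription-filter",
    filterPattern="",  # an empty pattern forwards every log event
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/ecs-logs-stream",
    roleArn="arn:aws:iam::123456789012:role/cwlogs-to-firehose-role",
)
```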

F. If you set up your pipeline (AWS service -> CloudWatch -> Firehose -> S3) properly, you should start seeing your logs in the S3 bucket soon.
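
One way to sanity-check the pipeline is to list the bucket after a few minutes (reusing the placeholder names from the sketches above):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# By default, Firehose writes objects under <prefix>YYYY/MM/DD/HH/ (UTC).
resp = s3.list_objects_v2(Bucket="my-log-archive-bucket", Prefix="ecs/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```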

How to send logs from my AWS service to CloudWatch or Firehose?

We will add more guides in our documentation for specific services soon. Stay tuned!

How to start indexing the log files in S3?

Once you have your logs in S3, you can set up Scanner to start indexing those files following the guide here.
