AWS EKS

Scanner supports AWS EKS container logs and control plane logs. These logs come from several sources, including the containers running your applications and the Kubernetes control plane components of your cluster (e.g. api, audit, authenticator, controllerManager, and scheduler).

For Scanner to see your EKS logs, configure EKS to publish logs to CloudWatch, configure the CloudWatch log groups to forward logs to a Kinesis Data Firehose, and have the Firehose write the logs into an S3 bucket that Scanner is linked to.

Step 1: Configure EKS to publish logs to CloudWatch

You can follow AWS documentation to publish your EKS logs to one or more CloudWatch log groups.

For container logs (e.g. your application logs), see: Send logs to CloudWatch Logs.

For control plane logs, see: Send control plane logs to CloudWatch Logs.
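Control plane log types can also be enabled from the AWS CLI instead of the console. A sketch using `aws eks update-cluster-config`; the cluster name and region are placeholders for your own values:

```shell
# Enable all five EKS control plane log types on a cluster.
# "my-cluster" and "us-west-2" are placeholders.
aws eks update-cluster-config \
  --region us-west-2 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```

The logs land in a log group named like `/aws/eks/my-cluster/cluster` once the update completes.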

Step 2: Set up CloudWatch to push to Kinesis Data Firehose

You can follow the AWS documentation to configure your CloudWatch log groups to push their logs to a Kinesis Data Firehose. See: Send CloudWatch Logs to Firehose.
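One way to wire this up from the CLI is a subscription filter on the log group. A sketch with placeholder names and ARNs; the IAM role must allow CloudWatch Logs to put records into the Firehose stream:

```shell
# Forward everything in the EKS control plane log group to a Firehose stream.
# The log group name, stream ARN, and role ARN below are all placeholders.
aws logs put-subscription-filter \
  --log-group-name "/aws/eks/my-cluster/cluster" \
  --filter-name "eks-to-firehose" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:us-west-2:123456789012:deliverystream/eks-logs-to-s3" \
  --role-arn "arn:aws:iam::123456789012:role/CWLtoFirehoseRole"
```

An empty `--filter-pattern` matches every log event; repeat the command for each log group you want to forward.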

Step 3: Configure the Kinesis Data Firehose to write logs to S3

A Kinesis Data Firehose can push logs to various destinations. We want to push to an S3 bucket that Scanner is linked to. You can follow the AWS documentation to configure the Firehose to write to an S3 bucket. See: Understand data delivery in Amazon Data Firehose.
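If you create the Firehose stream from the CLI rather than the console, the S3 destination is set at creation time. A sketch with placeholder role, bucket, and prefix values; note that CloudWatch Logs subscription records arrive already gzip-compressed, so no additional Firehose compression is needed:

```shell
# Create a Firehose stream that delivers to an S3 bucket linked to Scanner.
# Role ARN, bucket ARN, and prefix are placeholders; the prefix should match
# the S3 Key Prefix you enter in the Scanner import rule later.
aws firehose create-delivery-stream \
  --delivery-stream-name eks-logs-to-s3 \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration '{
    "RoleARN": "arn:aws:iam::123456789012:role/FirehoseToS3Role",
    "BucketARN": "arn:aws:s3:::my-scanner-logs-bucket",
    "Prefix": "eks-logs/",
    "CompressionFormat": "UNCOMPRESSED",
    "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300}
  }'
```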

Step 4: Link the S3 bucket to Scanner

If you haven't done so already, link the S3 bucket containing your logs to Scanner using the Linking AWS Accounts guide.

Step 5: Set up an S3 Import Rule in Scanner

  1. Within Scanner, navigate to Settings > S3 Import Rules.

  2. Click Create Rule.

  3. For Rule name, type a name like my_team_name_aws_eks_logs.

  4. For Destination Index, choose the index where you want these logs to be searchable in Scanner.

  5. For Status, set to Active if you want to start indexing the data immediately.

  6. For Source Type, we recommend the names below. You are free to choose other names, but Scanner's out-of-the-box detection rules expect these.

    1. For application logs: aws:eks

    2. For control plane logs: aws:eks_control

  7. For AWS Account, choose the account that contains the S3 bucket containing your EKS logs.

  8. For S3 Bucket, choose the S3 bucket containing your EKS logs.

  9. For S3 Key Prefix, type the prefix (i.e. directory path) of the S3 objects that your Firehose is writing.

  10. For File type, choose CloudWatchLogStream with Gzip compression.

  11. For Timestamp extractors, under Column name, type timestamp. This is the field in each log event that contains the timestamp information.

  12. Click Preview rule to try it out. Check that the S3 keys you expect are appearing, and check that the log events inside are being parsed properly with the timestamp detected properly.

  13. When you're ready, click Create.
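For a sanity check alongside the rule preview, it can help to know the shape of the objects the Firehose writes: each is gzip-compressed JSON in the CloudWatch Logs subscription format, where every `logEvents` entry carries an epoch-millisecond `timestamp` field, the Column name used above. A sketch that builds a synthetic record locally and decodes it the same way you would decode a real object pulled from S3:

```shell
# Simulate the payload shape CloudWatch Logs delivers through Firehose.
# The log group, stream, and message below are synthetic sample values.
printf '%s' '{"messageType":"DATA_MESSAGE","logGroup":"/aws/eks/my-cluster/cluster","logStream":"kube-apiserver-1","logEvents":[{"id":"1","timestamp":1700000000000,"message":"sample api log line"}]}' \
  | gzip > sample-cwl-record.gz

# Decompress it; each logEvents entry contains the "timestamp" field that
# the Scanner timestamp extractor reads.
gzip -dc sample-cwl-record.gz
```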
