Other Datasets

Since Scanner supports logs in JSON format, it supports all Cloudflare log datasets. For Scanner to see these logs, configure Cloudflare to publish them to S3.

Step 1: Configure Logpush to write to S3

Follow the Cloudflare documentation to configure Logpush to write the logs of your choice to S3. See: Enable Amazon S3.
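If you prefer to script this step, Logpush jobs can also be created through the Cloudflare API. Below is a minimal sketch for an http_requests job; the zone ID, API token, bucket, path, and field list are placeholder assumptions, so check the Logpush API reference for the exact options your dataset supports.

```python
import requests

# Placeholder credentials and identifiers -- substitute your own.
ZONE_ID = "YOUR_ZONE_ID"
API_TOKEN = "YOUR_API_TOKEN"

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "scanner-http-requests",
        "dataset": "http_requests",
        # Include the timestamp field (EdgeStartTimestamp for http_requests);
        # see the next section for why this matters.
        "logpull_options": "fields=ClientIP,ClientRequestHost,EdgeStartTimestamp&timestamps=rfc3339",
        # The S3 bucket and path that Scanner will later read from.
        "destination_conf": "s3://my-cloudflare-logs-bucket/my_team/my_cloudflare_logs?region=us-east-1",
        "enabled": True,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```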

Make sure the timestamp field is exported

When you configure Cloudflare to export a log dataset, make sure its timestamp field is included. Here are the timestamp fields for some common Cloudflare datasets:

  • Audit Logs: When

  • Firewall Events: Datetime

  • HTTP Requests: EdgeStartTimestamp

  • DNS Logs: Timestamp

  • Gateway HTTP: Datetime

If the timestamp field is not exported, Scanner cannot determine when a log event actually occurred. It will fall back to the time the event was indexed by Scanner, which is not the correct time.

You can read the Cloudflare documentation for more information about the fields present in different log datasets. See: Log fields.
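Before moving on, it can be worth pulling down one object Cloudflare has written and confirming the timestamp field is present. Here is a minimal sketch using boto3, assuming the dns_logs dataset (timestamp field Timestamp) and placeholder bucket and key names.

```python
import gzip
import json

import boto3

# Placeholder bucket and key -- substitute an object Cloudflare actually wrote.
BUCKET = "my-cloudflare-logs-bucket"
KEY = "my_team/my_cloudflare_logs/dns_logs/20240101/example.log.gz"

# Timestamp field for the dns_logs dataset; see the list above for others.
TIMESTAMP_FIELD = "Timestamp"

s3 = boto3.client("s3")
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()

# Logpush writes gzipped JSON Lines: one JSON object per line.
for line in gzip.decompress(body).splitlines():
    event = json.loads(line)
    if TIMESTAMP_FIELD not in event:
        raise ValueError(f"{TIMESTAMP_FIELD} missing from event: {event}")

print("All events include the timestamp field.")
```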

Step 2: Link the S3 bucket to Scanner

If you haven't done so already, link the S3 bucket containing your Cloudflare logs to Scanner using the Linking AWS Accounts guide.
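The Linking AWS Accounts guide covers the exact permissions Scanner needs. Purely as an illustration of the kind of cross-account read grant involved, here is a sketch that attaches a bucket policy with boto3; the principal ARN, bucket name, and actions are placeholder assumptions, not Scanner's real requirements.

```python
import json

import boto3

# All values below are illustrative placeholders; use the principal and
# permissions that the Linking AWS Accounts guide specifies.
BUCKET = "my-cloudflare-logs-bucket"
READER_PRINCIPAL = "arn:aws:iam::111122223333:role/example-log-reader"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLogReaderListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": READER_PRINCIPAL},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AllowLogReaderGetObjects",
            "Effect": "Allow",
            "Principal": {"AWS": READER_PRINCIPAL},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```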

Step 3: Set up an S3 Import Rule in Scanner

  1. Within Scanner, navigate to Settings > S3 Import Rules.

  2. Click Create Rule.

  3. For Rule name, type a name like my_team_name_cloudflare_<dataset_name>.

  4. For Destination Index, choose the index where you want these logs to be searchable in Scanner.

  5. For Status, set to Active if you want to start indexing the data immediately.

  6. For Source Type, we recommend cloudflare:<dataset_name>. You are free to choose any name, but out-of-the-box detection rules expect cloudflare:<dataset_name>.

  7. For AWS Account, choose the AWS account that contains the S3 bucket with your Cloudflare logs.

  8. For S3 Bucket, choose the S3 bucket containing the Cloudflare logs.

  9. For S3 Key Prefix, type the prefix (i.e. directory path) where Cloudflare is writing logs. This will be the path you configured in Cloudflare but with /<dataset_name>/ appended to the end.

    1. For example, if you configured Cloudflare to write logs to the S3 path my_team/my_cloudflare_logs/, then the S3 Key Prefix in your S3 Import Rule in Scanner should be my_team/my_cloudflare_logs/<dataset_name>/.

  10. For File type, choose JsonLines with Gzip compression.

  11. For Timestamp extractors, under Column name, type the name of the timestamp field for your dataset. For example, for the dns_logs dataset, this field is Timestamp.

  12. Click Preview rule to try it out. Check that the S3 keys you expect are appearing and that the log events inside are being parsed with the timestamp detected correctly. (The sketch after this list shows one way to spot-check the prefix if no keys appear.)

  13. When you're ready, click Create.
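If the preview in step 12 shows no keys, the S3 Key Prefix is the usual culprit. Here is a minimal sketch that spot-checks the prefix with boto3, using placeholder bucket and prefix values matching the example in step 9.

```python
import boto3

# Placeholders matching the example in step 9; substitute your own values.
BUCKET = "my-cloudflare-logs-bucket"
PREFIX = "my_team/my_cloudflare_logs/dns_logs/"

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)

# A sample of keys; these should match what Scanner's preview shows.
for obj in resp.get("Contents", []):
    print(obj["Key"])
```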
