Other Datasets
Since Scanner supports logs in JSON format, it can ingest all Cloudflare log datasets. For Scanner to see these logs, configure Cloudflare to publish them to S3.
Step 1: Configure Logpush to write to S3
You can follow the Cloudflare documentation to configure Cloudflare to write the logs of your choice to S3. See: Enable Amazon S3.
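As a rough illustration of what such a Logpush job looks like, the sketch below builds a request body for Cloudflare's Logpush job API (POST /client/v4/zones/{zone_id}/logpush/jobs). The bucket, prefix, and field names are placeholders, and option names such as output_options may vary by API version, so treat this as an assumption-laden sketch and confirm the exact shape against Cloudflare's API documentation.

```python
import json

# Hypothetical helper: builds the body for a Cloudflare Logpush job that
# writes newline-delimited JSON to S3. Names below (bucket, prefix, fields)
# are placeholders; verify option names against the Cloudflare API docs.
def build_logpush_job(dataset, bucket, prefix, region, fields):
    # Cloudflare substitutes {DATE} in the destination path at delivery time.
    destination = f"s3://{bucket}/{prefix}/{{DATE}}?region={region}"
    return {
        "name": f"scanner-{dataset}",
        "dataset": dataset,
        "destination_conf": destination,
        "enabled": True,
        "output_options": {
            "output_type": "ndjson",
            # Include the dataset's timestamp field so Scanner can order events.
            "field_names": fields,
        },
    }

job = build_logpush_job(
    dataset="http_requests",
    bucket="my-log-bucket",
    prefix="my_team/my_cloudflare_logs",
    region="us-east-1",
    fields=["EdgeStartTimestamp", "ClientIP", "ClientRequestURI"],
)
print(json.dumps(job, indent=2))
```

Whatever mechanism you use to create the job, the key point for the next section is that the dataset's timestamp field appears in the exported field list.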
Make sure the time field is exported
You must make sure the timestamp field is included when you configure Cloudflare to export your log dataset. Here are the timestamp fields for different Cloudflare datasets.
Audit Logs: When
Firewall Events: Datetime
HTTP Requests: EdgeStartTimestamp
DNS Logs: Timestamp
Gateway HTTP: Datetime
and so on for the other datasets.
If the timestamp field is not exported, Scanner will not be able to determine the time when the log event actually occurred. Scanner will fall back to using the time the log event was indexed by Scanner, which will not be the correct time.
You can read the Cloudflare documentation for more information about the fields that are present in different log datasets. See: Log fields.
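A simple way to sanity-check an export configuration is to compare it against the timestamp fields listed above. The mapping below is a partial sketch covering only the datasets named in this guide; the helper function is hypothetical, not part of Scanner or Cloudflare.

```python
# Partial mapping of Cloudflare dataset -> timestamp field, taken from the
# list above. See Cloudflare's "Log fields" docs for the full set of datasets.
TIMESTAMP_FIELDS = {
    "audit_logs": "When",
    "firewall_events": "Datetime",
    "http_requests": "EdgeStartTimestamp",
    "dns_logs": "Timestamp",
    "gateway_http": "Datetime",
}

def has_timestamp_field(dataset, exported_fields):
    """Return True if the dataset's timestamp field is in the export list."""
    return TIMESTAMP_FIELDS[dataset] in exported_fields

print(has_timestamp_field("http_requests", ["ClientIP", "EdgeStartTimestamp"]))  # → True
print(has_timestamp_field("http_requests", ["ClientIP"]))  # → False
```

If the check fails for your dataset, update the Logpush job's field list before continuing, since Scanner cannot backfill correct timestamps for events indexed without one.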
Step 2: Link the S3 bucket to Scanner
If you haven't done so already, link the S3 bucket containing your Cloudflare logs to Scanner using the Linking AWS Accounts guide.
Step 3: Set up an S3 Import Rule in Scanner
Within Scanner, navigate to Settings > S3 Import Rules.
Click Create Rule.
For Rule name, type a name like my_team_name_cloudflare_<dataset_name>.
For Destination Index, choose the index where you want these logs to be searchable in Scanner.
For Status, set to Active if you want to start indexing the data immediately.
For Source Type, we recommend cloudflare:<dataset_name>, but you are free to choose any name. However, out-of-the-box detection rules will expect cloudflare:<dataset_name>.
For AWS Account, choose the account that contains the S3 bucket with your Cloudflare logs.
For S3 Bucket, choose the S3 bucket containing the Cloudflare logs.
For S3 Key Prefix, type the prefix (i.e. directory path) where Cloudflare is writing logs. This will be the path you configured in Cloudflare with <dataset_name>/ appended to the end. For example, if you configured Cloudflare to write logs to the S3 path my_team/my_cloudflare_logs/, then the S3 Key Prefix in your S3 Import Rule in Scanner should be my_team/my_cloudflare_logs/<dataset_name>/.
For File type, choose JsonLines with Gzip compression.
For Timestamp extractors, under Column name, type the name of the time-related field for your dataset. For example, for the dns_logs dataset, this field is Timestamp.
Click Preview rule to try it out. Check that the S3 keys you expect are appearing, and that the log events inside are parsed correctly, with the timestamp detected.
When you're ready, click Create.
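The preview check in the steps above can be approximated locally: decompress a JsonLines-with-Gzip object and confirm each event carries a parseable timestamp in the configured column. The sketch below assumes RFC 3339 timestamps and uses a fabricated one-line DNS sample; the timestamp column name (Timestamp for dns_logs) is the one from this guide.

```python
import gzip
import io
import json
from datetime import datetime

# Rough local approximation of Scanner's rule preview: read a gzipped
# JSON-lines payload and parse the configured timestamp column per event.
def check_events(gz_bytes, timestamp_column):
    events = []
    with gzip.open(io.BytesIO(gz_bytes), "rt") as f:
        for line in f:
            event = json.loads(line)
            # RFC 3339 timestamps like "2024-05-01T12:00:00Z"
            ts = datetime.fromisoformat(event[timestamp_column].replace("Z", "+00:00"))
            events.append((ts, event))
    return events

# Fabricated sample event for illustration only.
sample = b'{"Timestamp": "2024-05-01T12:00:00Z", "QueryName": "example.com"}\n'
for ts, event in check_events(gzip.compress(sample), "Timestamp"):
    print(ts.isoformat(), event["QueryName"])
```

If this kind of check fails on your real objects, the usual culprits are a missing timestamp field in the Logpush export or a wrong column name in the import rule.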