Lacework

Scanner supports Lacework logs, which contain monitoring, detection, and response data from across your cloud environments. For Scanner to see these logs, configure Lacework to publish them to S3.

Step 1: Configure Lacework to export logs to S3

Use the Lacework S3 Data Exporter to write logs to S3, following the Lacework documentation to configure the export. See: S3 Data Exporter.

Step 2: Link the S3 bucket to Scanner

If you haven't done so already, link the S3 bucket containing your Lacework logs to Scanner using the Linking AWS Accounts guide.

Step 3: Set up one S3 Import Rule in Scanner for every desired data source

There are many different Lacework data sources available in the exported files. You can configure Scanner to index any number of them.

Lacework exports files in this directory structure:

agent/YYYY-MM-DD/YYYY-MM-DD-HH-00/<data_source>.json.gz
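
For example, the Cloud_compliance data for a given hour would land at a key like the following (the date and hour shown here are illustrative):

agent/2024-05-01/2024-05-01-13-00/Cloud_compliance.json.gz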

Here are all the possible data source file names (as of this writing):

  • Agent_management.json.gz

  • Alert_details.json.gz

  • All_files.json.gz

  • Applications.json.gz

  • Change_files.json.gz

  • Cloud_compliance.json.gz

  • Cloud_configuration.json.gz

  • Cmdline.json.gz

  • Connections.json.gz

  • Container_summary.json.gz

  • Container_vuln_details.json.gz

  • Dns_query.json.gz

  • Host_vuln_details.json.gz

  • Image.json.gz

  • Interfaces.json.gz

  • Internal_ipa.json.gz

  • Machine_details.json.gz

  • Machine_summary.json.gz

  • New_hashes.json.gz

  • Package.json.gz

  • Pod_summary.json.gz

  • Process_summary.json.gz

  • User_details.json.gz

  • User_login.json.gz

For each data source you want Scanner to read, you need to create a separate S3 Import Rule in Scanner. Using the Duplicate button helps streamline this.

Here is how to create an S3 Import Rule for the cloud_compliance data source type, i.e. the data in the files named Cloud_compliance.json.gz.

  1. Within Scanner, navigate to Settings > S3 Import Rules.

  2. Click Create Rule.

  3. For Rule name, type a name like my_team_name_lacework_cloud_compliance_logs.

  4. For Destination Index, choose the index where you want these logs to be searchable in Scanner.

  5. For Status, set to Active if you want to start indexing the data immediately.

  6. For Source Type, we recommend lacework:cloud_compliance, but you are free to choose any name. However, out-of-the-box detection rules will expect lacework:cloud_compliance.

    1. In general, use lacework:<data_set_name> where <data_set_name> is just the lower-cased version of the filename without the .json.gz file extension.

    2. For example, if we are indexing Cloud_compliance.json.gz files, then the source type should be lacework:cloud_compliance.

  7. For AWS Account, choose the account that contains the S3 bucket with your Lacework logs.

  8. For S3 Bucket, choose the S3 bucket containing Lacework logs.

  9. For S3 Key Prefix, type the prefix (i.e. directory path) where Lacework is writing logs.

  10. Important: Click + Additional Regex, and type: .*/Cloud_compliance\.json\.gz

    1. This will ensure that we only index files named Cloud_compliance.json.gz. (A quick way to sanity-check this regex and the timestamp field is sketched after this list.)

  11. For File type, choose JsonLines with Gzip compression.

  12. For Timestamp extractors, under Column name, type created_time. This is the field in each log event that contains the timestamp information.

  13. Click Preview rule to try it out. Check that the S3 keys you expect are appearing, and that the log events inside are being parsed correctly with the timestamp detected.

  14. When you're ready, click Create.
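
Before clicking Create, it can help to check the Additional Regex and the created_time timestamp field against sample data. Below is a minimal Python sketch of that check; the S3 keys and the event fields are hypothetical, and the matching logic only approximates Scanner's filter behavior.

import gzip
import json
import re

# The Additional Regex from step 10; it should match only Cloud_compliance
# files within each hourly directory. (search() here only approximates
# Scanner's filter semantics.)
pattern = re.compile(r".*/Cloud_compliance\.json\.gz")

# Hypothetical S3 keys following the Lacework export layout.
keys = [
    "agent/2024-05-01/2024-05-01-13-00/Cloud_compliance.json.gz",
    "agent/2024-05-01/2024-05-01-13-00/Machine_summary.json.gz",
]
for key in keys:
    print(key, "->", "indexed" if pattern.search(key) else "skipped")

# Each exported file is gzipped JSON Lines (step 11). The fields below are
# illustrative; the point is that created_time is the column the timestamp
# extractor reads (step 12).
sample = gzip.compress(
    b'{"created_time": "2024-05-01T13:05:22Z", "status": "NonCompliant"}\n'
)
for line in gzip.decompress(sample).splitlines():
    event = json.loads(line)
    print("timestamp:", event["created_time"])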

For every data source type you want Scanner to cover, you can duplicate this S3 Import Rule and find/replace Cloud_compliance (mixed-case) and cloud_compliance (lower-case) with the new data source names. For example, to handle files named Machine_summary.json.gz, duplicate the S3 Import Rule, and then find/replace Cloud_compliance with Machine_summary and cloud_compliance with machine_summary.
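
If you are covering many data sources, it can also save time to generate each rule's Source Type and Additional Regex values up front. The following is a minimal Python sketch of that mapping; the file list is just an example subset, and the rules themselves are still created (or duplicated) in the Scanner UI.

import re

# Example subset of the exported file names you want Scanner to index.
data_source_files = [
    "Cloud_compliance.json.gz",
    "Machine_summary.json.gz",
    "User_login.json.gz",
]

for file_name in data_source_files:
    # Source Type: lacework:<data_set_name>, lower-cased, extension dropped.
    data_set_name = file_name[: -len(".json.gz")].lower()
    source_type = f"lacework:{data_set_name}"
    # Additional Regex: matches only this data source's files.
    additional_regex = ".*/" + re.escape(file_name)
    print(f"{source_type:32} {additional_regex}")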
