OSSEC

Scanner supports OSSEC logs, which contain a wide range of security-related information gathered from various sources on a system for host-based intrusion detection. In order for Scanner to see them, you need to configure a workflow that pushes logs to an S3 bucket that Scanner is linked to.

In this guide, we will show how to use Wazuh to write OSSEC logs to a local file, and then use Fluentd to push these logs to S3.

Step 1: Set up logs to flow from Wazuh to Fluentd to S3

Wazuh has a module called fluent-forward that will publish OSSEC logs to Fluentd. You can then use Fluentd to write logs to an S3 bucket.

You can follow the Wazuh documentation to configure the fluent-forward module. Make sure to set the log format to json. See the Wazuh documentation on the fluent-forward module for more information.
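As a rough sketch, the fluent-forward module is enabled in the Wazuh ossec.conf file. The tag, socket path, address, and port below are illustrative placeholders; use the values from your own Fluentd deployment and consult the Wazuh documentation for the full set of options:

```xml
<ossec_config>
  <!-- Forward events to a local Fluentd instance over the forward protocol.
       All values here are examples; adjust to match your environment. -->
  <fluent-forward>
    <enabled>yes</enabled>
    <tag>ossec</tag>
    <socket_path>/var/run/fluent-socket</socket_path>
    <address>localhost</address>
    <port>24224</port>
  </fluent-forward>
</ossec_config>
```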

You can follow the Fluentd documentation to configure Fluentd to write logs to S3. Make sure to configure the output format to be JSON. See: Fluentd s3 output module
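A minimal sketch of a Fluentd s3 output using the fluent-plugin-s3 plugin might look like the following. The bucket name, region, path prefix, tag pattern, and buffer settings are illustrative assumptions, not values from Scanner or Wazuh:

```
# Match events forwarded from Wazuh (tag is an example) and write them
# to S3 as gzip-compressed JSON lines.
<match ossec.**>
  @type s3
  s3_bucket my-ossec-logs-bucket   # example bucket name
  s3_region us-west-2              # example region
  path ossec/                      # becomes the S3 key prefix
  store_as gzip
  <format>
    @type json
  </format>
  <buffer tag,time>
    @type file
    path /var/log/fluent/s3-buffer
    timekey 300                    # flush a new object every 5 minutes
    timekey_wait 60s
  </buffer>
</match>
```

The path value here is what you will later enter as the S3 Key Prefix when creating the import rule in Scanner.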

Step 2: Link the S3 bucket to Scanner

If you haven't done so already, link the S3 bucket containing your OSSEC logs to Scanner using the Linking AWS Accounts guide.

Step 3: Set up an S3 Import Rule in Scanner

  1. Within Scanner, navigate to Settings > S3 Import Rules.

  2. Click Create Rule.

  3. For Rule name, type a name like my_team_name_ossec_logs.

  4. For Destination Index, choose the index where you want these logs to be searchable in Scanner.

  5. For Status, set to Active if you want to start indexing the data immediately.

  6. For Source Type, we recommend ossec. You are free to choose any name, but out-of-the-box detection rules expect the source type to be ossec.

  7. For AWS Account, choose the account that contains the S3 bucket containing OSSEC logs.

  8. For S3 Bucket, choose the S3 bucket containing OSSEC logs.

  9. For S3 Key Prefix, type the prefix (i.e. directory path) of the S3 objects that Fluentd is writing.

  10. For File type, choose JsonLines with Gzip compression.

  11. For Timestamp extractors, under Column name, type timestamp. This is the field in each log event that contains the timestamp information.

  12. Click Preview rule to try it out. Check that the S3 keys you expect are appearing, and that the log events inside are parsed correctly with timestamps detected.

  13. When you're ready, click Create.
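Before creating the rule, it can help to sanity-check that the objects Fluentd writes match what the rule expects: gzip-compressed JSON Lines where each event carries a timestamp field. The sketch below generates and verifies a sample object locally; the file name and event fields other than timestamp are illustrative, not a real OSSEC schema:

```python
import gzip
import json

# Write a tiny sample object in the shape Scanner's import rule expects:
# JSON Lines, one event per line, gzip-compressed, with a "timestamp" field.
# Field names besides "timestamp" are illustrative placeholders.
sample_events = [
    {"timestamp": "2024-05-01T12:34:56.789+0000", "rule": {"level": 5}},
    {"timestamp": "2024-05-01T12:35:01.123+0000", "rule": {"level": 3}},
]
with gzip.open("sample-ossec-object.gz", "wt") as f:
    for event in sample_events:
        f.write(json.dumps(event) + "\n")

# Verify each line parses as JSON and carries the timestamp field that
# the Timestamp extractor configured above will read.
with gzip.open("sample-ossec-object.gz", "rt") as f:
    for line in f:
        assert "timestamp" in json.loads(line)
```

If a real object downloaded from your bucket fails a check like this, revisit the Fluentd format and store_as settings before debugging the import rule itself.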
