Crowdstrike

Scanner supports Crowdstrike log events exported to S3 by Falcon Data Replicator. These logs contain endpoint, cloud workload, and identity data from the Crowdstrike product ecosystem. For Scanner to see these logs, configure Crowdstrike Falcon Data Replicator to publish them to an S3 bucket in your AWS account.

Step 1: Configure Falcon Data Replicator to push to your S3 bucket

Within Crowdstrike Falcon, navigate to Support and resources and select Falcon Data Replicator. If this option is not available, you may need to contact your Crowdstrike support team to enable it.

This is a two-part setup. First, configure Falcon Data Replicator to push logs to a new S3 bucket hosted in Crowdstrike's AWS account. Second, replicate the data from Crowdstrike's S3 bucket to an S3 bucket in your own AWS account.

You can follow Crowdstrike's documentation about Falcon Data Replicator to accomplish this. You may also want to use Crowdstrike's FDR project on GitHub to replicate the logs to your own S3 bucket.
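
If you want to see roughly how the replication step works, here is a minimal sketch of a consumer that polls the SQS queue Crowdstrike provides when Falcon Data Replicator is enabled and copies each new object into your own bucket. The queue URL, credentials, bucket names, and the bucket/files/path message fields are assumptions based on Crowdstrike's published FDR setup details, not values from this page; the GitHub project linked above is the more complete reference.

```python
# Minimal sketch: poll the Crowdstrike-provided FDR SQS queue and copy each
# new object from Crowdstrike's bucket into your own bucket.
# All placeholder values (queue URL, keys, bucket names) are assumptions;
# use the values from your own FDR activation and AWS account.
import json
import boto3

FDR_QUEUE_URL = "https://sqs.us-west-1.amazonaws.com/123456789012/your-fdr-queue"  # from Crowdstrike
DEST_BUCKET = "my-crowdstrike-fdr-logs"   # your bucket, linked to Scanner in Step 2
DEST_PREFIX = "crowdstrike/fdr/"          # should match the S3 Key Prefix in Step 3

# Credentials issued by Crowdstrike for the FDR queue and source bucket.
fdr_session = boto3.Session(
    aws_access_key_id="FDR_AWS_KEY",
    aws_secret_access_key="FDR_AWS_SECRET",
    region_name="us-west-1",
)
sqs = fdr_session.client("sqs")
fdr_s3 = fdr_session.client("s3")

# Your own account's credentials for the destination bucket.
dest_s3 = boto3.client("s3")

while True:
    resp = sqs.receive_message(
        QueueUrl=FDR_QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for f in body.get("files", []):
            # Download from Crowdstrike's bucket, then upload to yours under the same path.
            obj = fdr_s3.get_object(Bucket=body["bucket"], Key=f["path"])
            dest_s3.put_object(
                Bucket=DEST_BUCKET,
                Key=DEST_PREFIX + f["path"],
                Body=obj["Body"].read(),
            )
        sqs.delete_message(QueueUrl=FDR_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```

In production you would likely run this continuously (for example on a small container or Lambda-style worker) and add retries, but the shape of the workflow is the same: receive a notification, copy the files, delete the message.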

Step 2: Link your S3 bucket to Scanner

If you haven't done so already, link the S3 bucket containing your Crowdstrike logs to Scanner using the Linking AWS Accounts guide. This S3 bucket must be in your AWS account, not in Crowdstrike's AWS account.
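
Before linking, it can help to confirm that replicated FDR objects are actually landing in your bucket. A quick sketch, assuming the placeholder bucket and prefix names used above:

```python
# Quick check that FDR objects are arriving in your own bucket.
# Bucket name and prefix are placeholders; use the values from Step 1.
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="my-crowdstrike-fdr-logs", Prefix="crowdstrike/fdr/", MaxKeys=10
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```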

Step 3: Set up an S3 Import Rule in Scanner

  1. Within Scanner, navigate to Settings > S3 Import Rules.

  2. Click Create Rule.

  3. For Rule name, type a name like my_team_name_crowdstrike_fdr_logs.

  4. For Destination Index, choose the index where you want these logs to be searchable in Scanner.

  5. For Status, set to Active if you want to start indexing the data immediately.

  6. For Source Type, we recommend crowdstrike:fdr. You are free to choose any name, but out-of-the-box detection rules expect crowdstrike:fdr.

  7. For AWS Account, choose the AWS account that contains the S3 bucket with your Crowdstrike FDR logs.

  8. For S3 Bucket, choose the S3 bucket containing Crowdstrike FDR logs.

  9. For S3 Key Prefix, type the prefix (i.e. directory path) where Falcon Data Replicator is writing logs.

  10. For File type, choose JsonLines with Gzip compression.

  11. For Timestamp extractors, under Column name, type timestamp. This is the field in each log event that contains the timestamp information.

  12. Click Preview rule to try it out. Check that the S3 keys you expect are appearing and that the log events inside are parsed correctly, with the timestamp detected as expected. (See the sketch after this list for a way to spot-check a file locally.)

  13. When you're ready, click Create.
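
If the preview shows unexpected results, you can inspect a single object locally. The sketch below assumes the file type configured above (JSON lines with gzip compression) and a timestamp column named timestamp; the bucket and key are placeholders, so substitute a real key from your bucket.

```python
# Download one FDR object and confirm it parses as gzipped JSON lines with a
# "timestamp" field, mirroring what the Scanner rule preview checks.
# Bucket and key are placeholders; pick a real key from your bucket.
import gzip
import io
import json
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="my-crowdstrike-fdr-logs", Key="crowdstrike/fdr/data/part-00000.gz"
)
raw = io.BytesIO(obj["Body"].read())

with gzip.open(raw, mode="rt") as lines:
    for i, line in enumerate(lines):
        event = json.loads(line)
        # Print the timestamp plus a few field names from each event.
        print(event.get("timestamp"), list(event)[:5])
        if i >= 4:
            break
```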
