Getting Started

1. Ensure that you have logs in an S3 bucket

Using a log pipeline tool such as Vector.dev or Cribl, you can route your logs to an S3 bucket instead of sending them directly to a SIEM like Splunk.
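
For example, a minimal Vector configuration that archives logs to S3 might look like the following. This is a sketch: the source path, bucket name, and region are placeholders to adjust for your environment.

# vector.yaml -- minimal sketch; paths, bucket, and region are placeholders
sources:
  app_logs:
    type: file
    include:
      - /var/log/app/*.log

sinks:
  s3_archive:
    type: aws_s3
    inputs:
      - app_logs
    bucket: my-log-archive
    region: us-east-1
    key_prefix: logs/%Y/%m/%d/
    compression: gzip
    encoding:
      codec: json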

Many tools, such as CrowdStrike Falcon Data Replicator and the GitHub audit log system, can also write logs directly to your S3 buckets.

Once you have logs in your S3 buckets, you can start indexing them with Scanner. Scanner supports JSON, CSV, Parquet, and plaintext log files, and there is no need to transform them first: just point Scanner at your raw log files.

2. Configure Scanner to index these logs in your S3 bucket

Follow the S3 integration guide to configure Scanner to index the logs in your bucket. Indexing is what lets search queries execute at high speed even as data volumes reach hundreds of terabytes or petabytes.

3. Install the Scanner plugin

Install on Grafana Cloud

  1. In your Grafana instance, navigate to Connections > Add new connection.

  2. Search for Scanner.

  3. Click on the Add new data source button.

Install on local Grafana

Use the grafana-cli tool to install Scanner from the command line:

grafana-cli plugins install scannerdev-scanner-datasource

Or, go to https://grafana.com/grafana/plugins/scannerdev-scanner-datasource/ to download the .zip file and unpack it into your Grafana plugins directory.
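
On a typical Linux package install, unpacking the downloaded plugin and reloading Grafana might look like this; the plugin directory shown is the common default, so adjust it if your installation uses a different path.

unzip scannerdev-scanner-datasource.zip -d /var/lib/grafana/plugins/
sudo systemctl restart grafana-server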

4. Configure the Scanner plugin

After installing the Scanner plugin, configure it to send queries to your Scanner instance by providing your instance's API URL and an API key.

You can find your API URL and API keys by visiting Scanner and navigating to Settings > API.

Click on Save & test to validate and save your configuration.
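
If you manage Grafana through provisioning files rather than the UI, a data source definition along the following lines should work. The file format is standard Grafana provisioning; the field names under jsonData and secureJsonData are assumptions about this plugin's schema, so confirm them against the plugin's configuration page.

# provisioning/datasources/scanner.yaml -- sketch; jsonData/secureJsonData field names are assumptions
apiVersion: 1
datasources:
  - name: Scanner
    type: scannerdev-scanner-datasource
    access: proxy
    jsonData:
      apiUrl: https://your-scanner-instance.example.com   # placeholder; copy the real URL from Settings > API
    secureJsonData:
      apiKey: $SCANNER_API_KEY   # interpolated from the environment at startup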

5. Explore data or build a dashboard

From the configuration page, you can go to Explore data to start executing search queries.

You can also go to Build a dashboard to execute search queries and visualize data.

See Query Syntax for more information on how to write Scanner log search queries. You can use Aggregation Functions to produce numerical or tabular results for dashboards.
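
As a hypothetical sketch of the shape of a dashboard query, you might filter events and pipe the matches into one of the documented aggregation functions; the field names here are placeholders, and the exact grammar is defined in Query Syntax.

eventName: "ConsoleLogin" | groupbycount(sourceIPAddress)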
