Getting Started
1. Ensure that you have logs in an S3 bucket
Using a log pipeline tool like Vector.dev or Cribl, you can store your logs in S3 instead of sending them directly to Splunk.
Many tools, like CrowdStrike Falcon Data Replicator and GitHub audit log streaming, can write logs directly to your S3 buckets.
Once you have logs in your S3 buckets, you can start to index them with Scanner. We support JSON, CSV, Parquet, and plaintext log files. No need to transform them first. Just point Scanner at your raw log files.
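As an example of the pipeline approach, a minimal Vector.dev configuration that writes logs to an S3 bucket might look like the sketch below. The source ID, bucket name, region, and key prefix are placeholders; see Vector's aws_s3 sink documentation for the full set of options.

```toml
# Sketch of a Vector.dev sink that archives logs to S3.
# "my_source", the bucket name, and the region are placeholders.
[sinks.s3_log_archive]
type = "aws_s3"
inputs = ["my_source"]
bucket = "my-log-archive-bucket"
region = "us-east-1"
key_prefix = "logs/%Y/%m/%d/"
compression = "gzip"

[sinks.s3_log_archive.encoding]
codec = "json"
```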
2. Configure Scanner to index these logs in your S3 bucket
Following the S3 integration guide, configure Scanner to index these logs in S3. This allows search queries to execute at high speed even as data volumes reach hundreds of terabytes or petabytes.
3. Install the Scanner plugin
Install on Grafana Cloud
In your Grafana instance, navigate to Connections > Add new connection.
Search for Scanner.
Click on the Add new data source button.
Install on local Grafana
Use the grafana-cli tool to install Scanner from the command line:
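```sh
# Install the Scanner data source plugin; the plugin ID matches the
# catalog entry linked below.
grafana-cli plugins install scannerdev-scanner-datasource

# Restart Grafana so it picks up the new plugin (this command assumes a
# systemd-based install).
sudo systemctl restart grafana-server
```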
Or, go to https://grafana.com/grafana/plugins/scannerdev-scanner-datasource/ to download the .zip file and unpack it into your Grafana plugins directory.
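If you install manually, the unpack step might look like the following. The archive filename is a placeholder, and /var/lib/grafana/plugins is the default plugins directory for Linux package installs; adjust the path for your setup.

```sh
# Unpack the downloaded plugin archive into the Grafana plugins directory,
# then restart Grafana as shown above.
unzip scannerdev-scanner-datasource-<version>.zip -d /var/lib/grafana/plugins/
```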
4. Configure the Scanner plugin
After installing the Scanner plugin, configure it to send queries to your Scanner instance by providing the instance's API URL and an API key.
You can find your API URL and API keys by visiting Scanner and navigating to Settings > API.
Click on Save & test to validate and save your configuration.
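Alternatively, if you manage Grafana with provisioning files, the data source can be configured declaratively. The sketch below is only an assumption: the type matches the plugin ID, but the apiUrl and apiKey field names under jsonData and secureJsonData are hypothetical, so check the plugin's documentation for the exact keys.

```yaml
# Hypothetical provisioning sketch for the Scanner data source.
# The jsonData/secureJsonData keys (apiUrl, apiKey) are assumptions.
apiVersion: 1
datasources:
  - name: Scanner
    type: scannerdev-scanner-datasource
    access: proxy
    jsonData:
      apiUrl: https://your-scanner-instance.example.com  # placeholder API URL
    secureJsonData:
      apiKey: your-api-key-here  # placeholder; keep the real key out of source control
```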
5. Explore data or build a dashboard
From the configuration page, you can go to Explore data to start executing search queries.
You can also go to Build a dashboard to execute search queries and visualize data.
See Query Syntax for more information on how to write Scanner log search queries. You can use Aggregations to produce numerical or tabular results for dashboards.