Snowflake
This guide walks through how to set up Snowflake as a log source in Scanner Collect, using direct API integration to query login history, query history, and session history tables. Scanner will authenticate using key-pair authentication and pull logs from your Snowflake account usage tables.
We'll assume that you want Scanner to both store the logs in S3 and index them for search and detection.
Prerequisites
Before you begin, make sure you have:
Permissions to create users and roles in your Snowflake instance
Permissions to grant access to the SNOWFLAKE.ACCOUNT_USAGE schema
Scanner admin access to create the integration
Your Snowflake account identifier in the organization_name-account_name format
A warehouse name where Scanner will execute queries
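Before starting the setup, you may want to confirm that your admin role can actually read the account usage views. This is an optional sanity check, not part of the official setup; it queries one of the three views used below:

```sql
-- Sanity check: confirm your current role can read SNOWFLAKE.ACCOUNT_USAGE.
-- If this fails with a privilege error, you will need IMPORTED PRIVILEGES
-- on the SNOWFLAKE database (granted to Scanner's role in Step 5).
SELECT COUNT(*)
  FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
 WHERE EVENT_TIMESTAMP >= DATEADD('day', -1, CURRENT_TIMESTAMP());
```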
Step 1: Create a New Source
In the Scanner UI, go to the Collect tab.
Click Create New Source.
Click Select a Source Type.
Choose Snowflake.
Select the Log Type:
Snowflake: Login History - captures authentication events from SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
Snowflake: Query History - captures query execution events from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
Snowflake: Session History - captures session events from SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
For Ingest Method, select API Pull.
For Destination, select Scanner.
Click Next.
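If you're unsure which log type you need, you can preview the underlying views directly in Snowflake. These are illustrative queries using a few well-known columns from each view:

```sql
-- Recent login events (authentication)
SELECT EVENT_TIMESTAMP, USER_NAME, CLIENT_IP, IS_SUCCESS
  FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
 ORDER BY EVENT_TIMESTAMP DESC LIMIT 10;

-- Recent query executions
SELECT START_TIME, USER_NAME, EXECUTION_STATUS, QUERY_TEXT
  FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
 ORDER BY START_TIME DESC LIMIT 10;

-- Recent sessions
SELECT CREATED_ON, USER_NAME, AUTHENTICATION_METHOD
  FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
 ORDER BY CREATED_ON DESC LIMIT 10;
```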
Step 2: Configure the Source
Set a Display Name, such as my-org-snowflake-login-history.
Click Next.
Step 3: Authenticate with Snowflake
If you've previously created a Snowflake connection, select it from the list and skip to Step 6.
Otherwise, select New Snowflake Connection and continue to Step 4.
Step 4: Create a Snowflake Integration
Enter a Connection Name for this integration.
Enter your Account Identifier in the organization_name-account_name format. You can retrieve this by running the following SQL in your Snowflake instance:

SELECT CURRENT_ORGANIZATION_NAME() || '-' || CURRENT_ACCOUNT_NAME();

Enter a Username for the Scanner service account (we suggest scanner_service_account). You'll create this user in Snowflake in the next step.
Click Connect.
After creating the integration, Scanner will display:
The Username you chose
The Public Key that Scanner generated for this integration
Copy the public key—you'll need it in the next step.
Step 5: Create the Service Account in Snowflake
In your Snowflake instance, run the following script as an admin.
Before running the script, replace the placeholders:
scanner_service_account → the username you chose in Step 4
scanner_service_account_role → a role name of your choice for Scanner
COMPUTE_WH → the warehouse where Scanner will run queries
PUBLIC_KEY → the public RSA key from Step 4
-- Set variables
SET scanner_user = 'scanner_service_account'; -- REPLACE with your chosen username
SET scanner_role = 'scanner_service_account_role'; -- REPLACE with a role name of your choice
SET warehouse_name = 'COMPUTE_WH'; -- REPLACE with the warehouse of your choice
-- 1. Create the role
CREATE ROLE IF NOT EXISTS IDENTIFIER($scanner_role);
-- 2. Grant usage on the SNOWFLAKE database and ACCOUNT_USAGE schema
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE IDENTIFIER($scanner_role);
-- 3. Grant usage on a specific warehouse
GRANT USAGE ON WAREHOUSE IDENTIFIER($warehouse_name) TO ROLE IDENTIFIER($scanner_role);
-- 4. Create the service account user with RSA public key
CREATE USER IF NOT EXISTS IDENTIFIER($scanner_user)
DEFAULT_ROLE = $scanner_role
DEFAULT_WAREHOUSE = $warehouse_name
RSA_PUBLIC_KEY = 'PUBLIC_KEY'; -- REPLACE with the public key from Step 4
-- 5. Grant the role to the user
GRANT ROLE IDENTIFIER($scanner_role) TO USER IDENTIFIER($scanner_user);

Once you've completed the Snowflake setup, close the setup dialog and click Next.
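After running the script, you can verify the setup from the same admin session. These checks use standard Snowflake commands; adjust the names if you changed the defaults:

```sql
-- Confirm the role received both grants
-- (IMPORTED PRIVILEGES on SNOWFLAKE, USAGE on the warehouse)
SHOW GRANTS TO ROLE scanner_service_account_role;

-- Confirm the user exists and has the RSA public key set
-- (look for RSA_PUBLIC_KEY_FP, the key fingerprint, in the output)
DESCRIBE USER scanner_service_account;

-- Confirm the role was granted to the user
SHOW GRANTS TO USER scanner_service_account;
```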
Step 6: Configure the Destination
Choose the S3 Bucket where the raw Snowflake logs should be stored.
(Optional) Enter a Key Prefix to organize the data path in your bucket.
Choose the Scanner Index where logs will be made searchable.
Leave the Source Label as the default: snowflake
Click Next.
Step 7: Transform and Enrich
Keep the default transformation: Normalize to ECS - Snowflake.
This maps log fields to the Elastic Common Schema (ECS), making it easier to write cross-source queries and detection rules.
(Optional) Add additional transformation or enrichment steps if needed.
Click Next.
Step 8: Timestamp Extraction
Leave the default timestamp extraction settings. Scanner will automatically extract timestamps from the appropriate field based on the log type you selected.
Click Next.
Step 9: Review and Create
Review all configuration settings.
Click Create Source.
What Happens Next
Once created:
Scanner will poll the selected Snowflake account usage table every 2 minutes.
New events will be written to your S3 bucket, under the specified key prefix.
Logs will then be indexed for search and detections using your selected Scanner index.
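Conceptually, each poll behaves like an incremental query against the view you selected. The exact query Scanner runs isn't documented here, so the following is only an illustration of the shape of such a poll:

```sql
-- Illustrative only: roughly the shape of an incremental poll against
-- LOGIN_HISTORY, fetching events newer than the last checkpoint time.
SELECT *
  FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
 WHERE EVENT_TIMESTAMP > '2024-01-01 00:00:00'::TIMESTAMP_LTZ  -- last checkpoint
 ORDER BY EVENT_TIMESTAMP;
```

Note that Snowflake's ACCOUNT_USAGE views have built-in latency (typically up to a couple of hours, depending on the view), so events may appear in Scanner some time after they occur in Snowflake.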