_usage
Overview
The _usage index tracks collection, indexing, and query usage. It is useful for answering questions like:
How much data are we ingesting from different log sources?
Did one of our log sources stop receiving data?
Are our users or AI agents running inefficient or unnecessary queries?
All _usage logs have a record_type field, whose value is one of collect_record, indexing_record, or query_record. Each record type has its own schema, as described below.
Collect Records
Each collect record represents one file collected into your bucket based on a Collect Rule. Note that Scanner batches logs before writing them to your buckets, so one file could contain log events from multiple HTTP pushes or API pulls.
Fields
- collect_rule.id: The ID of the Collect Rule.
- collect_rule.name: The current name of the Collect Rule. Note that names are not required to be unique and can be changed by editing.
- destination_file.bucket_name: The name of the bucket to which the file was written.
- destination_file.bucket_type: The type of the bucket to which the file was written. Must be aws_s3.
- destination_file.e_tag: The entity tag (ETag) of the collected file.
- destination_file.file_key: The key/path of the collected file.
- destination_file.size_bytes: The physical size of the collected file.
- num_bytes_collected: The billable volume of the collected file.
- num_log_events_collected: The number of log events in the collected file.
- record_type: The type of this usage record. Must be collect_record.
- timestamp: The time the file was written to your bucket.
Example Queries
Get the total number of files, billable volume and number of log events, grouped by Collect Rule:
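A sketch of such a query, using an assumed Splunk-style pipeline syntax (the exact filter and stats operators may differ; consult Scanner's query syntax reference):

```
record_type: collect_record
| stats count() as num_files,
        sum(num_bytes_collected) as total_bytes,
        sum(num_log_events_collected) as total_events
  by collect_rule.id, collect_rule.name
```

Grouping by both collect_rule.id and collect_rule.name keeps results stable even when a rule is renamed, since IDs are unique but names are not.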
Example Detection Queries
Triggers an alert if the specified Collect Rule saw less than 1GB of data within the specified time range:
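An illustrative detection query sketch, assuming the same pipeline syntax as above (the placeholder rule ID and the 1 GB threshold expressed in bytes are assumptions; verify operator names against the query syntax docs):

```
record_type: collect_record collect_rule.id: "<your-collect-rule-id>"
| stats sum(num_bytes_collected) as total_bytes
| where total_bytes < 1000000000
```

Configured as a detection rule over the desired time range, this would fire whenever the rule's collected volume drops below the threshold.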
Indexing Records
Each indexing record represents one file ingested from your bucket into a Scanner Index based on one Index Rule. If you set up N Index Rules on the same source bucket, there will be N indexing records for the same file.
Note that Scanner will not index the same file twice for the same Index Rule, even if its contents (and hence its ETag) change. The indexing record represents the first successful indexing attempt.
Fields
- destination_index.id: The ID of the Scanner Index from which the logs are now queryable.
- destination_index.name: The name of the Scanner Index from which the logs are now queryable. Index names must be unique, but they can be changed by editing.
- index_rule.id: The ID of the Index Rule.
- index_rule.name: The current name of the Index Rule. Note that names are not required to be unique and can be changed.
- max_log_event_timestamp: The timestamp of the most recent log event parsed from the source file.
- min_log_event_timestamp: The timestamp of the least recent log event parsed from the source file.
- num_bytes_indexed: The billable volume of the indexed log events after applying all transformations.
- num_bytes_parsed: The volume of log events parsed from the source file before transformations. For reference only and not used for billing.
- num_log_events_indexed: The number of indexed log events after applying all transformations.
- num_log_events_parsed: The number of log events parsed from the source file before transformations. For reference only.
- record_type: The type of this usage record. Must be indexing_record.
- source_file.bucket_name: The name of the bucket where the source file lives.
- source_file.bucket_type: The type of the bucket where the source file lives. Must be aws_s3.
- source_file.e_tag: The entity tag (ETag) of the source file at the time of indexing.
- source_file.file_key: The key/path of the source file.
- source_file.last_modified: The last-modified timestamp of the source file at the time of indexing.
- source_file.size_bytes: The physical size of the source file, which may be compressed.
- timestamp: The time the source file was successfully indexed.
Example Queries
Get the total number of files, billable volume and number of log events, grouped by Index Rule:
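An illustrative sketch in an assumed pipeline syntax (aggregation names are illustrative; check the query syntax reference for exact forms):

```
record_type: indexing_record
| stats count() as num_files,
        sum(num_bytes_indexed) as total_bytes,
        sum(num_log_events_indexed) as total_events
  by index_rule.id, index_rule.name
```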
Example Detection Queries
Triggers an alert if the specified Index Rule saw less than 1GB of data within the specified time range:
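A sketch of such a detection query, under the same syntax assumptions as the earlier examples (the rule ID placeholder and byte threshold are assumptions):

```
record_type: indexing_record index_rule.id: "<your-index-rule-id>"
| stats sum(num_bytes_indexed) as total_bytes
| where total_bytes < 1000000000
```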
Query Records
Each query record represents one executed query. Note that cancelled queries could still incur billable usage and are recorded here.
Fields
- action_source: Where this query was executed from. Could be one of api, app, and mcp.
- ended_at: The time the query ended.
- is_cancelled: Whether the query was cancelled/aborted before it completed.
- num_log_events_scanned: The total number of log events scanned for this query.
- num_bytes_scanned: The total billable volume scanned for this query.
- query_leid_range_end: The end of the queried log event ID range. Typically derived from the queried time range if not explicitly specified via the API.
- query_leid_range_start: The start of the queried log event ID range. Typically derived from the queried time range if not explicitly specified via the API.
- query_request_id: The unique ID of the query.
- query_text: The query text.
- query_time_range_end: The end of the queried time range. Not to be confused with query end time (ended_at).
- query_time_range_start: The start of the queried time range. Not to be confused with query start time (started_at).
- rbac_actor.display_name: The display name of the RBAC actor. For api_key, it is the name of the key (can be changed). For user, it is the user's email.
- rbac_actor.id: The unique ID of the RBAC actor.
- rbac_actor.type: The type of the RBAC actor. Could be one of api_key or user.
- record_type: The type of this usage record. Must be query_record.
- started_at: The time the query started.
Example Queries
Get the total number of executed queries, billable volume and min/max time range, grouped by RBAC actor:
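A sketch of this aggregation, again in an assumed pipeline syntax (operator names are illustrative):

```
record_type: query_record
| stats count() as num_queries,
        sum(num_bytes_scanned) as total_bytes,
        min(query_time_range_start) as earliest_range_start,
        max(query_time_range_end) as latest_range_end
  by rbac_actor.id, rbac_actor.display_name
```

Grouping by rbac_actor.id as well as the display name avoids conflating actors, since display names for api_key actors can be changed.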
Example Detection Queries
Triggers an alert if the total query usage via MCP exceeded 1TB within the specified time range:
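An illustrative detection query sketch, with the same syntax caveats as above (the byte threshold for 1 TB is an assumption):

```
record_type: query_record action_source: mcp
| stats sum(num_bytes_scanned) as total_bytes
| where total_bytes > 1000000000000
```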