Ingestion metrics reference for Looker and BigQuery

The Ingestion metrics Explore interface provides a variety of measure fields that you can use to create new dashboards. Dimensions and measures are the fundamental components of a dashboard. A dimension is a field that can be used to filter query results by grouping data. A measure is a field that calculates a value using a SQL aggregate function, such as COUNT, SUM, AVG, MIN, or MAX. Any field derived from other measure values is also considered a measure.
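In BigQuery terms, a dimension typically maps to a GROUP BY column and a measure to an aggregate expression computed over each group. The following sketch is illustrative only; it assumes the `chronicle-catfood.datalake.ingestion_metrics` sample table that is queried later on this page:

SELECT
  log_type,                          -- dimension: groups and filters the rows
  COUNT(*) AS row_count,             -- measure: COUNT aggregate per group
  SUM(log_volume) AS total_volume    -- measure: SUM aggregate per group
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
GROUP BY
  log_type;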

For information about the dimension fields and ingestion metrics schemas, see Ingestion metrics schema.

Ingestion metrics fields

The following list describes the additional fields that you can use as dimensions, filters, and measures:

  • timestamp: The Unix epoch time that represents the start time of the aggregated time interval associated with the metric.
  • total_entry_number: The number of logs ingested through the Ingestion API component (that is, component = 'Ingestion API').
  • total_entry_number_in_million: The number of logs ingested through the Ingestion API component, in millions.
  • total_entry_number_in_million_for_drill: The number of logs ingested through the Ingestion API component, in millions, rounded to 0 decimal places.
  • total_size_bytes: The log volume ingested through the Ingestion API component, in bytes.
  • total_size_bytes_GB: The log volume ingested through the Ingestion API component, in GB (gigabytes), rounded to 2 decimal places. A GB is 10^9 bytes.
  • total_size_bytes_GB_for_drill: Same as total_size_bytes_GB.
  • total_size_bytes_GiB: The log volume ingested through the Ingestion API component, in GiB (gibibytes), rounded to 2 decimal places. A GiB is 2^30 bytes.
  • total_events: The count of validated events during normalization (successfully ingested events).
  • total_error_events: The count of events that failed validation or parsing during normalization.
  • total_error_count_in_million: The count of validation and parsing errors, in millions, rounded to 0 decimal places.
  • total_normalized_events: The count of events that passed validation during normalization (successfully parsed events).
  • total_validation_error_events: The count of events that failed validation during normalization.
  • total_parsing_error_events: The count of events that failed to parse during normalization.
  • period: The reporting period, as selected by the Period Filter. Values include This Period and Previous Period.
  • period_filter: The reporting period before or after the specified date.
  • log_type_for_drill: Only populated for non-null log types.
  • valid_log_type: Same as log_type_for_drill.
  • offered_gcp_log_type: The count of Google Cloud log types offered by Google Security Operations (43).
  • gcp_log_types_used: The percentage of the available Google Cloud log types that the customer ingests.
  • gcp_log_type: Only populated for non-null Google Cloud log types.
  • total_log_volume_mb_per_hour: The total volume of logs across all components, in MB per hour, rounded to 2 decimal places.
  • max_quota_limit_mb_per_second: The maximum quota limit, in MB per second.
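To make the GB and GiB conventions concrete, the following sketch derives both figures from raw byte counts. It assumes, for illustration, that the log_volume column in the sample table described below stores bytes:

SELECT
  ROUND(SUM(log_volume) / POW(10, 9), 2) AS volume_gb,    -- a GB is 10^9 bytes
  ROUND(SUM(log_volume) / POW(2, 30), 2) AS volume_gib    -- a GiB is 2^30 bytes
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_volume IS NOT NULL
  AND component = 'Ingestion API';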

Use case: Sample query

The following table shows which columns are populated for each measure in a sample query:

  • Ingested log count: collector_id, log_type, log_count
  • Ingested volume: collector_id, log_type, log_volume
  • Normalized events: collector_id, log_type, event_count
  • Forwarder CPU usage: collector_id, log_type, cpu_used
Note: All times shown in the table use UTC.

Note: The table stores different measures as separate rows. Depending on the measure, only the relevant columns are populated, while the non-relevant columns remain null. The Component, start_time, and end_time columns are populated for all measures.

The table has four components:

  1. Forwarder
  2. Ingestion API
  3. Normalizer
  4. Out Of Band (OOB)

Logs can be ingested into Google SecOps through OOB, the Forwarder, direct customer calls to the Ingestion API, or internal service calls to the Ingestion API (for example, ETD, HTTPS push webhooks, or the Azure Event Hubs integration).

All logs ingested into Google Security Operations flow through the Ingestion API. After that, the logs are normalized by the Normalizer component.
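One rough way to see this flow in the data is to compare per-component totals. This sketch assumes that log_count is populated on Ingestion API rows and event_count on Normalizer rows, as the measure table above suggests:

SELECT
  component,
  SUM(log_count) AS ingested_logs,        -- populated for 'Ingestion API' rows
  SUM(event_count) AS normalized_events   -- populated for 'Normalizer' rows
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component IN ('Ingestion API', 'Normalizer')
GROUP BY
  component;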

Log count

  • Number of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
Note: Count the logs only after applying the component filter and specifying Ingestion API.
  • Volume of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_volume IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
Note: Count the logs only after applying the component filter and specifying Ingestion API.
  1. Apply the logtype or collectorID filter, and in the WHERE clause, add log_type = <LOGTYPE> or collector_id = <COLLECTOR_ID>.
  2. Add GROUP BY in the query with the appropriate field to perform a grouped query, as in the sketch below.
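Putting both steps together, the following sketch counts ingested logs per log type; replace the <LOGTYPE> and <COLLECTOR_ID> placeholders with your own values:

SELECT
  log_type,
  SUM(log_count) AS ingested_logs
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
  AND log_type = '<LOGTYPE>'   -- or: collector_id = '<COLLECTOR_ID>'
GROUP BY
  log_type;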

The Normalizer component handles parsing errors, which occur when events are generated. These errors are recorded in the drop_reason_code and state columns.
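For example, the following sketch tallies parsing errors by drop reason; it assumes that error rows carry component = 'Normalizer' and a non-null drop_reason_code:

SELECT
  drop_reason_code,
  state,
  COUNT(*) AS error_rows
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component = 'Normalizer'
  AND drop_reason_code IS NOT NULL
GROUP BY
  drop_reason_code,
  state
ORDER BY
  error_rows DESC;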
