Ingestion metrics reference for Looker and BigQuery
The Ingestion metrics Explore interface provides a variety of measure fields that you can use to create new dashboards. Dimensions and measures are the fundamental components of a dashboard. A dimension is a field that can be used to filter query results by grouping data. A measure is a field that calculates a value using a SQL aggregate function, such as COUNT, SUM, AVG, MIN, or MAX. Any field derived from other measure values is also considered a measure.
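To make the distinction concrete, here is a minimal BigQuery SQL sketch (using the sample `ingestion_metrics` table referenced later on this page): `log_type` plays the role of a dimension, while the aggregates play the role of measures.

```sql
-- Sketch: log_type is a dimension (groups rows); COUNT and SUM are measures.
SELECT
  log_type,                         -- dimension
  COUNT(*) AS row_count,            -- measure
  SUM(log_volume) AS total_volume   -- measure
FROM `chronicle-catfood.datalake.ingestion_metrics`
GROUP BY log_type;
```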
For information about the dimension fields and ingestion metrics schemas, see Ingestion metrics schema.
Ingestion metrics fields
The following table describes the additional fields that you can use as dimensions, filters, and measures:
| Field | Description |
|---|---|
| timestamp | The Unix epoch time that represents the start time of the aggregated time interval associated with the metric. |
| total_entry_number | The number of logs ingested through the Ingestion API component (that is, where component = 'Ingestion API'). |
| total_entry_number_in_million | The number of logs ingested through the Ingestion API component, in millions. |
| total_entry_number_in_million_for_drill | The number of logs ingested through the Ingestion API component, in millions rounded to 0 decimal places. |
| total_size_bytes | The log volume ingested through the Ingestion API component, in bytes. |
| total_size_bytes_GB | The log volume ingested through the Ingestion API component, in GB (gigabytes) rounded to 2 decimals. A GB is 10^9 bytes. |
| total_size_bytes_GB_for_drill | Same as total_size_bytes_GB. |
| total_size_bytes_GiB | The log volume ingested through the Ingestion API component, in GiB (gibibytes) rounded to 2 decimals. A GiB is 2^30 bytes. |
| total_events | The count of validated events during normalization (successfully ingested events). |
| total_error_events | The count of events that failed validation or failed parsing during normalization. |
| total_error_count_in_million | The count of failed validation and failed parsing errors, in millions rounded to 0 decimals. |
| total_normalized_events | The count of events that passed validation during normalization (successfully parsed events). |
| total_validation_error_events | The count of events that failed during normalization. |
| total_parsing_error_events | The count of events that failed to parse during normalization. |
| period | The reporting period as selected by the Period Filter. Values include This Period and Previous Period. |
| period_filter | The reporting period before the specified date or after the specified date. |
| log_type_for_drill | Only populated for non-null log types. |
| valid_log_type | Same as log_type_for_drill. |
| offered_gcp_log_type | The count of Google Cloud log types offered by Google Security Operations. |
| gcp_log_types_used | Percentage of the available Google Cloud log types that the customer ingests. |
| gcp_log_type | Only populated for non-null Google Cloud log types. |
| total_log_volume_mb_per_hour | The total volume of logs (in all components), in MB per hour rounded to 2 decimals. |
| max_quota_limit_mb_per_second | The maximum quota limit, in MB per second. |
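As a rough illustration of how a measure such as total_size_bytes_GB can relate to the raw columns, the following is a hypothetical sketch; the actual definitions behind these Looker fields are not shown in this reference, and the mapping of log_volume to this measure is an assumption.

```sql
-- Hypothetical derivation of total_size_bytes_GB: sum the raw byte volume
-- and convert to GB (10^9 bytes), rounded to 2 decimals.
SELECT
  ROUND(SUM(log_volume) / POW(10, 9), 2) AS total_size_bytes_GB
FROM `chronicle-catfood.datalake.ingestion_metrics`
WHERE component = 'Ingestion API';
```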
Use case: Sample query
The following table contains values for a sample query:
| Measure | Rows populated |
|---|---|
| Ingested log count | collector_id, log_type, log_count |
| Ingested volume | collector_id, log_type, log_volume |
| Normalized events | collector_id, log_type, event_count |
| Forwarder CPU usage | collector_id, log_type, cpu_used |
The component, start_time, and end_time columns are populated for all measures. The table has four components (see the query sketch after this list):
- Forwarder
- Ingestion API
- Normalizer
- Out Of Band (OOB)
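To confirm which component values are present in your copy of the table, a quick check like the following works (a sketch against the sample table used later on this page):

```sql
-- List the distinct components recorded in the ingestion metrics table.
SELECT DISTINCT component
FROM `chronicle-catfood.datalake.ingestion_metrics`;
```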
Logs can be ingested into Google SecOps through OOB, the Forwarder, direct customer calls to the Ingestion API, or internal service calls to the Ingestion API (for example, ETD, HTTPS push webhooks, or the Azure Event Hubs integration).
All logs ingested into Google Security Operations flow through the Ingestion API. After that, the logs are normalized by the Normalizer component.
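One way to see this flow in the data is to compare ingested logs against normalized events per log type. This is a hedged sketch, assuming log_count and event_count are populated for their respective components as described above:

```sql
-- Compare logs counted at the Ingestion API with events produced by the
-- Normalizer, per log type.
SELECT
  log_type,
  SUM(IF(component = 'Ingestion API', IFNULL(log_count, 0), 0)) AS ingested_logs,
  SUM(IF(component = 'Normalizer', IFNULL(event_count, 0), 0)) AS normalized_events
FROM `chronicle-catfood.datalake.ingestion_metrics`
GROUP BY log_type;
```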
Log count
- Number of ingested logs:
- Number of ingested logs. Use the component filter and specify Ingestion API:

```sql
SELECT *
FROM `chronicle-catfood.datalake.ingestion_metrics`
WHERE log_count IS NOT NULL
  AND component = 'Ingestion API'
LIMIT 2;
```

- Volume of ingested logs. Use the component filter and specify Ingestion API:

```sql
SELECT *
FROM `chronicle-catfood.datalake.ingestion_metrics`
WHERE log_volume IS NOT NULL
  AND component = 'Ingestion API'
LIMIT 2;
```

- To apply the log type or collector ID filter, add `log_type = <LOGTYPE>` or `collector_id = <COLLECTOR_ID>` to the WHERE clause.
- Add GROUP BY with the appropriate field to perform a group query, as shown in the sketch after this list.
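For example, a grouped variant of the log count query might look like this (a sketch; adjust the filters to your needs):

```sql
-- Ingested log count per log type, highest first.
SELECT
  log_type,
  SUM(log_count) AS total_logs
FROM `chronicle-catfood.datalake.ingestion_metrics`
WHERE log_count IS NOT NULL
  AND component = 'Ingestion API'
GROUP BY log_type
ORDER BY total_logs DESC;
```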
The Normalizer component handles parsing errors, which can occur when events are generated. These errors are recorded in the drop_reason_code and state columns.
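A hedged sketch for inspecting those errors, assuming error rows carry an event_count and a non-null drop_reason_code:

```sql
-- Count normalization errors by drop reason.
SELECT
  drop_reason_code,
  SUM(IFNULL(event_count, 0)) AS error_events
FROM `chronicle-catfood.datalake.ingestion_metrics`
WHERE component = 'Normalizer'
  AND drop_reason_code IS NOT NULL
GROUP BY drop_reason_code
ORDER BY error_events DESC;
```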