BigQuery audit logs overview
Cloud Audit Logs are a collection of logs provided by Google Cloud that provide insight into operational concerns related to your use of Google Cloud services. This page provides details about BigQuery-specific log information, and it demonstrates how to use BigQuery to analyze logged activity. For more information, see Introduction to audit logs in BigQuery.
Versions
The audit log message system relies on structured logs, and the BigQuery service provides several types of messages:
- AuditData: The old version of logs, which reports API invocations.
- BigQueryAuditMetadata: The new version of logs, which reports resource interactions, such as which tables were read from and written to by a given query job and which tables expired because they had an expiration time configured.
- AuditLog: The log format used when reporting requests.
Limitation
Log messages have a size limit of 100K bytes. For more information, see Truncated log entry.
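If you post-process exported log entries, you can flag payloads that are likely to have been truncated against this limit. A minimal sketch; the helper name and the 90% warning margin are illustrative, not part of any Google API:

```python
# Sketch: check whether a log entry's metadata payload is close to the
# 100K-byte message size limit described above. The helper name and the
# 90% margin are illustrative choices, not part of any Google API.
LOG_SIZE_LIMIT_BYTES = 100_000

def is_near_size_limit(metadata_json: str, margin: float = 0.9) -> bool:
    """Return True when the UTF-8 size reaches `margin` of the limit."""
    return len(metadata_json.encode("utf-8")) >= LOG_SIZE_LIMIT_BYTES * margin

print(is_near_size_limit('{"tableDataRead": {}}'))  # False for small payloads
```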
Message formats
AuditData format
The AuditData messages are communicated within the protoPayload.serviceData submessage within the Cloud Logging LogEntry message. The AuditData payload returns resource.type set to bigquery_resource, not bigquery_dataset.
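When sorting through mixed log entries, the older format can be recognized by the field paths just described. A minimal sketch over a parsed LogEntry dict; the sample entry is hypothetical:

```python
# Sketch: classify a parsed Cloud Logging entry (a dict) as the older
# AuditData format, using the field paths described above. The sample
# entry below is hypothetical.
def is_audit_data_entry(entry: dict) -> bool:
    has_service_data = "serviceData" in entry.get("protoPayload", {})
    resource_type = entry.get("resource", {}).get("type")
    return has_service_data and resource_type == "bigquery_resource"

sample = {
    "resource": {"type": "bigquery_resource"},
    "protoPayload": {"serviceData": {"jobInsertRequest": {}}},
}
print(is_audit_data_entry(sample))  # True
```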
BigQueryAuditMetadata format
You can find BigQueryAuditMetadata details in the protoPayload.metadata submessage that is in the Cloud Logging LogEntry message.
In these Cloud Logging logs, the protoPayload.serviceData information is not set or used. BigQueryAuditMetadata messages include more information:
resource.type is set to one of the following values:

- bigquery_dataset for operations on datasets, such as google.cloud.bigquery.v2.DatasetService.*. resource.labels.dataset_id contains the encapsulating dataset.
- bigquery_project for all other called methods, such as jobs. resource.labels.location contains the location of the job.
protoPayload.methodName is set to one of the following values:

- google.cloud.bigquery.v2.TableService.InsertTable
- google.cloud.bigquery.v2.TableService.UpdateTable
- google.cloud.bigquery.v2.TableService.PatchTable
- google.cloud.bigquery.v2.TableService.DeleteTable
- google.cloud.bigquery.v2.DatasetService.InsertDataset
- google.cloud.bigquery.v2.DatasetService.UpdateDataset
- google.cloud.bigquery.v2.DatasetService.PatchDataset
- google.cloud.bigquery.v2.DatasetService.DeleteDataset
- google.cloud.bigquery.v2.TableDataService.List
- google.cloud.bigquery.v2.JobService.InsertJob
- google.cloud.bigquery.v2.JobService.Query
- google.cloud.bigquery.v2.JobService.GetQueryResults
- InternalTableExpired
- protoPayload.resourceName now contains the URI for the referenced resource. For example, a table created by using an insert job reports the resource URI of the table. The earlier format reported the API resource (the job identifier).
- protoPayload.authorizationInfo only includes information relevant to the specific event. With earlier AuditData messages, multiple records could be merged, such as when the source and destination tables of a query job were in the same dataset.
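Pulling these fields out of a parsed entry is straightforward. A minimal sketch over a LogEntry dict; the sample entry is hypothetical but mirrors the field paths described above:

```python
# Sketch: extract the fields discussed above from a parsed LogEntry dict
# for the newer BigQueryAuditMetadata format. The sample entry is
# hypothetical but follows the documented field paths.
def summarize_entry(entry: dict) -> dict:
    payload = entry.get("protoPayload", {})
    return {
        "method": payload.get("methodName"),
        "resource": payload.get("resourceName"),
        "resource_type": entry.get("resource", {}).get("type"),
    }

sample = {
    "resource": {"type": "bigquery_project", "labels": {"location": "US"}},
    "protoPayload": {
        "methodName": "google.cloud.bigquery.v2.JobService.InsertJob",
        "resourceName": "projects/my-project-id/jobs/job_123",
        "metadata": {
            "@type": "type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"
        },
    },
}
print(summarize_entry(sample)["method"])
```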
AuditLog format
The BigQuery Storage API uses the AuditLog format when reporting requests. Logs contain information such as the following:
resource.type is set to one of the following values:

- bigquery_dataset for CreateReadSession.
- bigquery_table for ReadRows, SplitReadStream, and AppendRows.
protoPayload.methodName is set to one of the following values:

- google.cloud.bigquery.storage.v1.BigQueryRead.CreateReadSession
- google.cloud.bigquery.storage.v1beta1.BigQueryStorage.CreateReadSession
- google.cloud.bigquery.storage.v1beta2.BigQueryRead.CreateReadSession
- google.cloud.bigquery.storage.v1.BigQueryRead.ReadRows
- google.cloud.bigquery.storage.v1.BigQueryRead.SplitReadStream
- google.cloud.bigquery.storage.v1.BigQueryWrite.AppendRows
Mapping audit entries to log streams
Audit logs are organized into the following three streams. For more information about the streams, see the Cloud Audit Logs documentation.
- Data access
- System event
- Admin activity
Data access (data_access)
The data_access stream contains entries about jobs, by using the JobInsertion and JobChange events, and about table data modifications, by using the TableDataChange and TableDataRead events. TableDataChange and TableDataRead events have a resource.type value of bigquery_dataset.
For example, when a load job appends data to a table, the data_access stream adds a TableDataChange event. A TableDataRead event indicates when a consumer reads a table.
Note: BigQuery does not emit data access log entries in the following scenarios:

- If a job fails before or during execution, TableDataChange and TableDataRead events are not logged.
- Data appended to a table using the legacy streaming API or the Storage Write API does not generate TableDataChange log entries.
- Recursive dataset deletions, such as removing a dataset and its contents in a single API call, don't yield deletion entries for each resource contained in the dataset. The dataset removal is present in the activity log.
- Partitioned tables don't generate TableDataChange entries for partition expirations.
- Wildcard table access generates a single TableDataRead entry and doesn't write a separate entry for each queried table.
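For quick triage of exported data_access entries, the event type can be read from which key the metadata payload carries. A minimal sketch; the tableDataRead and tableDataChange keys match the example queries later on this page, while jobInsertion and jobChange are assumed camelCase spellings of the documented event names:

```python
# Sketch: bucket a parsed metadataJson payload by which event it carries.
# tableDataRead/tableDataChange match the example queries on this page;
# jobInsertion/jobChange are assumed camelCase forms of the event names.
def classify_event(metadata: dict) -> str:
    for key in ("jobInsertion", "jobChange", "tableDataChange", "tableDataRead"):
        if key in metadata:
            return key
    return "other"

print(classify_event({"tableDataChange": {"reason": "JOB"}}))  # tableDataChange
```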
System event (system_event)
You can set an expiration time on tables to remove them at a specified time. The system_event stream reports a TableDeletion event when the table expires and is removed.
Admin activity (activity)
The main activity stream reports all remaining activities and events, such as table and dataset creation.
Visibility and access control
BigQuery audit logs can include information that users might consider sensitive, such as SQL text, schema definitions, and identifiers for resources such as tables and datasets. For information about managing access to this information, see the Cloud Logging access control documentation.
Caller identities and resource names
Audit logging doesn't redact the caller's identity and IP addresses for any access that succeeds or for any write operation.
For read-only operations that fail with a "permission denied" error, Audit logging performs the following tests:
- Is the caller in the same organization as the resource being logged?
- Is the caller a service account?
If the response to any test is true, then Audit logging doesn't redact the caller's identity and IP addresses. If the response to all tests is false, then Audit logging redacts the identity and IP addresses.
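The two tests combine into a simple decision. A minimal sketch of that logic under stated assumptions; the real checks run inside Audit logging, and the function and parameter names here are illustrative:

```python
# Sketch of the redaction decision described above for read-only
# operations that fail with "permission denied". The function and
# parameter names are illustrative; the actual checks run inside
# Audit logging.
def redact_caller(same_org_as_resource: bool, is_service_account: bool) -> bool:
    """Return True when the caller identity and IP should be redacted."""
    # If any test passes, the identity is kept (not redacted).
    return not (same_org_as_resource or is_service_account)

print(redact_caller(False, False))  # True: both tests fail, so redact
```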
For cross-project access, additional rules apply:

- The billing project must be the project that sends the request, and the data project must be the project whose resources are accessed during the job. For example, a query job in a billing project reads some table data from the data project.
- The billing project resource ID is redacted from the data project log unless the projects have the same associated domain or are in the same organization.
- Identities and caller IP addresses are not redacted from the data project log if either of the preceding conditions applies, or if the billing project and the data project are in the same organization and the billing project log already includes the identity and caller IP address.
Cloud Logging exports
BigQuery automatically sends audit logs to Cloud Logging. Cloud Logging lets users filter and route messages to other services, including Pub/Sub, Cloud Storage, and BigQuery.
With long-term log retention and log exports to BigQuery, you can do aggregated analysis on log data. Cloud Logging documents how messages are transformed when exported to BigQuery.
Filtering exports
To filter relevant BigQuery audit messages, you can express filters as part of the export.
For example, the following advanced filter represents an export that only includes the newer BigQueryAuditMetadata format:
protoPayload.metadata."@type"="type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"
You can express additional filters based on the fields within the log messages. For more information about crafting advanced filters, see the advanced log filter documentation.
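For example, you can combine the type match shown above with a methodName restriction. A minimal sketch that builds such a filter string; the chosen method is illustrative:

```python
# Sketch: compose an advanced log filter that narrows the export to
# BigQueryAuditMetadata messages for a single method. The chosen
# method below is illustrative.
BASE_FILTER = (
    'protoPayload.metadata."@type"='
    '"type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"'
)

def filter_for_method(method: str) -> str:
    return f'{BASE_FILTER} AND protoPayload.methodName="{method}"'

print(filter_for_method("google.cloud.bigquery.v2.JobService.InsertJob"))
```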
Defining a BigQuery log sink using gcloud
The following example command shows how you can use the Google Cloud CLI to create a logging sink in a dataset named auditlog_dataset that only includes BigQueryAuditMetadata messages. The dataset must already exist before you create the logging sink.
gcloud logging sinks create my-example-sink \
    bigquery.googleapis.com/projects/my-project-id/datasets/auditlog_dataset \
    --log-filter='protoPayload.metadata."@type"="type.googleapis.com/google.cloud.audit.BigQueryAuditMetadata"'
After the sink is created, give the service account created by the previous command access to the dataset.
Querying exported logs
BigQueryAuditMetadata examples
The following examples show how you can use BigQueryAuditMetadata messages to analyze BigQuery usage. Because of the schema conversion done during the export from Cloud Logging into BigQuery, the message bodies are presented in semi-structured form. The protopayload_auditlog.metadataJson column is a STRING field that contains the JSON representation of the message. You can use JSON functions in GoogleSQL to analyze this content.
Note: Change the FROM clause in each of these examples to the corresponding exported tables in your project.
Example: Report expired tables
BigQueryAuditMetadata messages log when a table is deleted because its expiration time was reached. The following sample query shows when these messages occur and includes a URI that references the table resource that was removed.
#standardSQL
SELECT
  protopayload_auditlog.resourceName AS resourceName,
  receiveTimestamp AS logTime
FROM
  `my-project-id.auditlog_dataset.cloudaudit_googleapis_com_system_event_*`
WHERE
  protopayload_auditlog.methodName = 'InternalTableExpired'
ORDER BY
  resourceName
Example: Most popular datasets
This query shows coarse, per-dataset statistics about table reads and table modifications. Before you run this example, define a log sink with an existing dataset.
#standardSQL
SELECT
  REGEXP_EXTRACT(protopayload_auditlog.resourceName,
    '^projects/[^/]+/datasets/([^/]+)/tables') AS datasetRef,
  COUNT(DISTINCT REGEXP_EXTRACT(protopayload_auditlog.resourceName,
    '^projects/[^/]+/datasets/[^/]+/tables/(.*)$')) AS active_tables,
  COUNTIF(JSON_QUERY(protopayload_auditlog.metadataJson,
    "$.tableDataRead") IS NOT NULL) AS dataReadEvents,
  COUNTIF(JSON_QUERY(protopayload_auditlog.metadataJson,
    "$.tableDataChange") IS NOT NULL) AS dataChangeEvents
FROM
  `my-project-id.auditlog_dataset.cloudaudit_googleapis_com_data_access_*`
WHERE
  JSON_QUERY(protopayload_auditlog.metadataJson, "$.tableDataRead") IS NOT NULL
  OR JSON_QUERY(protopayload_auditlog.metadataJson, "$.tableDataChange") IS NOT NULL
GROUP BY
  datasetRef
ORDER BY
  datasetRef
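The dataset extraction that REGEXP_EXTRACT performs in this query can be reproduced offline for spot-checking resourceName values. A minimal sketch; the sample resource name is hypothetical:

```python
# Sketch: the same dataset extraction that the query's REGEXP_EXTRACT
# performs, reproduced in Python for spot-checking exported resourceName
# values. The sample resource name is hypothetical.
import re

DATASET_RE = re.compile(r"^projects/[^/]+/datasets/([^/]+)/tables")

def dataset_ref(resource_name: str):
    """Return the dataset ID embedded in a table resource name, or None."""
    match = DATASET_RE.search(resource_name)
    return match.group(1) if match else None

print(dataset_ref("projects/my-project-id/datasets/logs/tables/events"))  # logs
```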
Troubleshooting
This section shows you how to resolve issues with BigQuery audit logs.
Truncated log entry
The following issue occurs when a log message is larger than the log message size limit:

The protoPayload.metadata submessage in the Cloud Logging LogEntry message is truncated.
To resolve this issue, consider the following strategies:
- Retrieve the full log message by using the BigQuery API jobs.get method.
- Reduce the size of the metadata in the log message, for example, by using wildcards on common path prefixes to reduce the size of the sourceUri list.
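The second strategy amounts to collapsing URIs that share a common prefix into a single wildcard pattern. A minimal sketch of that idea; the URIs and helper name are hypothetical, and real load jobs would need the wildcard to still match only the intended objects:

```python
# Sketch: shrink a sourceUri list by collapsing URIs that share a common
# prefix into one wildcard pattern, as suggested above. The URIs and
# helper name are hypothetical; verify the wildcard still matches only
# the intended objects before using it in a load job.
import os

def collapse_uris(uris: list[str]) -> list[str]:
    if len(uris) < 2:
        return uris
    prefix = os.path.commonprefix(uris)
    return [prefix + "*"] if prefix else uris

uris = [
    "gs://my-bucket/exports/2024/part-000.csv",
    "gs://my-bucket/exports/2024/part-001.csv",
]
print(collapse_uris(uris))  # ['gs://my-bucket/exports/2024/part-00*']
```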
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.