Troubleshoot routing and storing logs
This document explains common routing and storage issues and how to use the Google Cloud console to view and troubleshoot configuration mistakes or unexpected results.
For general information about viewing log data, see View logs in sink destinations.
Troubleshoot log routing
This section describes how to troubleshoot common issues when routing your log entries.
Destination contains unwanted log entries
You are viewing the log entries routed to a destination and determine that the destination contains unwanted log entries.
To resolve this condition, update the exclusion filters for your sinks that route log entries to the destination. Exclusion filters let you exclude selected log entries from being routed to a destination.
For example, assume that you create an aggregated sink to route log entries in an organization to a destination. To exclude the log entries from a specific project from being routed to the destination, add the following exclusion filter to the sink:
logName:projects/PROJECT_ID
You can also exclude log entries from multiple projects by using the logical-OR operator to join logName clauses.
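As a sketch, an exclusion filter that omits log entries from two projects might look like the following (the project IDs are placeholders):

```
logName:projects/PROJECT_ID_1 OR logName:projects/PROJECT_ID_2
```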
Destination is missing log entries
Perhaps the most common sink-related issue is that log entries seem to be missing from the destination of a sink.
Note: If the Google Cloud console is reporting an error message, or if you see a log entry that reports a sink configuration error, then skip this section and view the error log to identify and fix the underlying issue.
In some cases, an error isn't generated but you might notice that log entries are unavailable when you try to access them in your destination. If you suspect that your sink isn't properly routing log entries, then check your sink's system log-based metrics:
exports/byte_count: Number of bytes in log entries that were routed.
exports/log_entry_count: Number of log entries that were routed.
exports/error_count: Number of log entries that failed to be routed.
The metrics have labels that record the counts by sink name and destination name and let you know whether your sink is routing log entries successfully or failing. For details about how to view metrics, see Log-based metrics overview.
If your sink metrics indicate that your sink isn't performing as you expected,here are some possible reasons and what to do about them:
Latency
No matching log entries have been received since you created or updated your sink; only new log entries are routed.
Try waiting an hour and check your destination again.
Matching log entries are late-arriving.
There can be a delay before you can view your log entries in the destination. Late-arriving log entries are especially common for sinks that route to Cloud Storage buckets. Try waiting a few hours and check your destination again.
Viewing scope/filter is incorrect
The scope you're using to view log entries stored in a log bucket is incorrect.
Scope your search to one or more log views as follows:
If you're using the Logs Explorer, then use the Refine scope button.
If you're using the gcloud CLI, then use the gcloud logging read command and add the --view=AllLogs flag.
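For example, a sketch of reading entries through a specific log view might look like the following; the bucket ID and location are placeholders for your own values:

```shell
gcloud logging read 'severity>=ERROR' \
    --bucket=BUCKET_ID --location=LOCATION --view=AllLogs \
    --limit=10
```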
The time range you're using to select and view data in your sink destination is too narrow.
Try broadening the time range that you're using when selecting data in your sink destination.
Error in sink filter
The sink's filter is incorrect and not capturing the log entries you expected to see in your destination.
Edit your sink's filter by using the Log Router in the Google Cloud console. To verify that you entered the correct filter, select Preview logs in the Edit sink panel. This opens the Logs Explorer in a new tab with the filter pre-populated. For instructions about viewing and managing your sinks, see Manage sinks.
View errors
For each of the supported sink destinations, Logging provides error messages for improperly configured sinks.
There are several ways to view these sink-related errors; these methods aredescribed in the following sections:
- View the error logs generated for the sink.
- Receive sink error notifications by email. The sender of this email is logging-noreply@google.com.
Error logs
The recommended method for inspecting your sink-related errors in detail is to view the error log entries generated by the sink. For details about viewing log entries, see View logs by using the Logs Explorer.
You can use the following query in the query-editor pane in the Logs Explorer to review your sink's error logs. The same query works in the Logging API and the gcloud CLI.
Before you copy the query, replace the variable SINK_NAME with the name of the sink you're trying to troubleshoot. You can find your sink's name on the Log Router page in the Google Cloud console.
logName:"logging.googleapis.com%2Fsink_error"
resource.type="logging_sink"
resource.labels.name="SINK_NAME"
For example, if your sink's name is my-sink-123, then the log entry might look similar to the following:
{
  "errorGroups": [
    { "id": "COXu96aNws6BiQE" }
  ],
  "insertId": "170up6jan",
  "labels": {
    "activity_type_name": "LoggingSinkConfigErrorV2",
    "destination": "pubsub.googleapis.com/projects/my-project/topics/my-topic",
    "error_code": "topic_not_found",
    "error_detail": "",
    "sink_id": "my-sink-123"
  },
  "logName": "projects/my-project/logs/logging.googleapis.com%2Fsink_error",
  "receiveTimestamp": "2024-07-11T14:41:42.578823830Z",
  "resource": {
    "labels": {
      "destination": "pubsub.googleapis.com/projects/my-project/topics/my-topic",
      "name": "my-sink-123",
      "project_id": "my-project"
    },
    "type": "logging_sink"
  },
  "severity": "ERROR",
  "textPayload": "Cloud Logging sink configuration error in my-project, sink my-sink-123: topic_not_found ()",
  "timestamp": "2024-07-11T14:41:41.296157014Z"
}
The LogEntry field labels and its nested key-value information help you target the source of your sink's error; it contains the affected resource, the affected sink, and an error code. The labels.error_code field contains a shorthand description of the error, letting you know which component of your sink needs reconfiguring.
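As a sketch, the same error-log query can be run from the gcloud CLI; the project ID and sink name are placeholders:

```shell
gcloud logging read \
    'logName:"logging.googleapis.com%2Fsink_error" AND resource.type="logging_sink" AND resource.labels.name="SINK_NAME"' \
    --project=PROJECT_ID --limit=5
```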
To resolve this failure, edit your sink. For example, you might edit your sink by using the Log Router page:
Note: If you're unable to see permission-related sink errors, then verify that you've enabled Data Access audit logs for the services to which you're sending logs. Data Access audit logs help Google Support troubleshoot issues with your account. Therefore, we recommend enabling Data Access audit logs when possible.
Email notifications
Essential Contacts sends sink configuration error email notifications to contacts assigned to the Technical notification category for a Google Cloud project or its parent resource. If the resource doesn't have a configured contact for Technical notifications, then users granted the IAM Project Owner role (roles/owner) for the resource receive the email notification.
The email message contains the following information:
- Resource ID: The name of the Google Cloud project or other Google Cloud resource where the sink was configured.
- Sink name: The name of the sink that contains the configuration error.
- Sink destination: The full path of the sink's routing destination; for example, pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID.
- Error code: Shorthand description of the error category; for example, topic_not_found.
- Error detail: Detailed information about the error, including recommendations for troubleshooting the underlying error.
The sender of this email is logging-noreply@google.com.
To view and manage your sinks, use the Log Router page:
Any sink configuration errors that apply to the resource appear in the list as a Cloud Logging sink configuration error. Each error contains a link to one of the log entries generated by the faulty sink. To examine the underlying errors in detail, see the section Error logs.
Types of sink errors
The following sections describe broad categories of sink-related errors and how you can troubleshoot them.
Incorrect destination
If you set up a sink but then see a configuration error that the destination couldn't be found when Logging attempted to route log entries, here are some possible reasons:
Your sink's configuration contains a misspelling or other formatting error in the specified sink destination.
You need to update the sink's configuration to properly specify the existing destination.
The specified destination might have been deleted.
You can either change the sink's configuration to use a different, existing destination or recreate the destination with the same name.
To resolve these types of failure, edit your sink. For example, you might edit your sink by using the Log Router page:
Your sink begins routing log entries when the destination is found and new log entries that match your filter are received by Logging.
Managing sinks issues
If you disabled a sink to stop storing log entries in a log bucket but still see log entries being routed, then wait a few minutes for changes to the sink to apply.
Permissions issues
Note: If you're unable to see permission-related sink errors in your log entries, then verify that you've enabled Data Access audit logs for the services to which you're sending log entries. Data Access audit logs help Google Support troubleshoot issues with your account. Therefore, we recommend enabling Data Access audit logs when possible.
When a sink tries to route a log entry but lacks the appropriate IAM permissions for the sink's destination, the sink reports an error, which you can view, and skips the log entry.
When you create a sink, the sink's service account must be granted the appropriate destination permissions. If you create the sink in the Google Cloud console in the same Google Cloud project, then the Google Cloud console typically assigns these permissions automatically. However, if you create the sink in a different Google Cloud project, or by using the gcloud CLI or the Logging API, then you must configure the permissions manually.
If you're seeing permission-related errors for your sink, then add the necessary permissions or update your sink to use a different destination. For instructions on how to update these permissions, see Destination permissions.
There is a slight delay between creating the sink and using the sink's new service account to authorize writing to the destination. Your sink begins routing log entries when any permissions are corrected and new log entries that match your filter are received by Logging.
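As an illustrative sketch of configuring destination permissions manually for a Cloud Storage destination, you might look up the sink's service account and then grant it write access; the sink name, bucket name, and role choice below are placeholders and assumptions, so adapt them to your destination type:

```shell
# Find the sink's service account. The output already includes the
# "serviceAccount:" prefix, for example serviceAccount:my-sink@....
gcloud logging sinks describe SINK_NAME --format='value(writerIdentity)'

# Grant that identity object-creation access on the destination bucket.
gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
    --member='serviceAccount:SERVICE_ACCOUNT_EMAIL' \
    --role='roles/storage.objectCreator'
```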
Organizational policy issues
If you're trying to route a log entry but encounter an organization policy that constrains Logging from writing to the sink's destination, then the sink can't route to the selected destination and reports an error.
If you're seeing errors related to organization policies, then you can do thefollowing:
Update the organization policy for the destination to remove the constraints blocking the sink from routing log entries; this presupposes that you have the appropriate permissions to update the organization policy.
You might examine whether a Resource Location Restriction (constraints/gcp.resourceLocations) exists. This constraint determines the locations where data can be stored. Also, some services support constraints that might affect a log sink. For example, there are several restrictions that might apply when a Pub/Sub destination is selected. For a list of possible constraints, see Organization policy constraints. For instructions, see Creating and editing policies.
If you can't update the organization policy, then update your sink in the Log Router page to use a compliant destination.
Your sink begins routing log entries when the organization policy no longer blocks the sink from writing to the destination and new log entries that match your filter are received by Logging.
Encryption key issues
If you're using encryption keys, whether managed with Cloud Key Management Service or by you, to encrypt the data in the sink's destination, then you might see related errors. Here are some possible issues and ways to fix them:
Billing isn't enabled for the Google Cloud project that contains the Cloud KMS key.
Even if the sink was successfully created with the correct destination, this error message displays if there isn't a valid billing account associated with the Google Cloud project that contains the key.
Make sure there is a valid billing account linked to the Google Cloud project that contains the key. If a billing account isn't linked to the Google Cloud project, enable billing for that Google Cloud project or use a Cloud KMS key contained by a Google Cloud project that has a valid billing account linked to it.
The Cloud KMS key can't be found.
The Google Cloud project that contains the Cloud KMS key configured to encrypt the data isn't found.
Use a valid Cloud KMS key from an existing Google Cloud project.
The location of the Cloud KMS key doesn't match the location of the destination.
If the Google Cloud project that contains the Cloud KMS key is located in a region that differs from the region of the destination, then encryption fails and the sink can't route data to that destination.
Use a Cloud KMS key contained by a Google Cloud project whose region matches the sink's destination.
Encryption key access is denied to the sink's service account.
Even if the sink was successfully created with the correct service account permissions, this error message displays if the sink destination uses an encryption key that doesn't give the service account sufficient permissions to encrypt or decrypt the data.
Grant the Cloud KMS CryptoKey Encrypter/Decrypter role to the service account specified in the sink's writerIdentity field for the key used in the destination. Also verify that the Cloud KMS API is enabled.
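As a sketch, granting that role on a specific key might look like the following; the key, keyring, location, and service account values are placeholders:

```shell
gcloud kms keys add-iam-policy-binding KEY_NAME \
    --keyring=KEYRING_NAME --location=LOCATION \
    --member='serviceAccount:SERVICE_ACCOUNT_EMAIL' \
    --role='roles/cloudkms.cryptoKeyEncrypterDecrypter'
```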
Quota issues
When sinks write log entries, destination-specific quotas apply to the Google Cloud projects in which the sinks were created. If the quotas are exhausted, then the sink stops routing log entries to the destination.
For example, when routing data to BigQuery, you might see an error that tells you your per-table streaming insert quota has been exceeded for a certain table in your dataset. In this case, your sink might be routing too many log entries too quickly. The same concept applies to the other supported sink destinations, for example, to Pub/Sub topics.
To fix the quota exhaustion issues, decrease the amount of log data being routed by updating your sink's filter to match fewer log entries. You might use the sample function in your filter to select a fraction of the total number of log entries.
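For example, a filter that keeps roughly 10% of otherwise-matching log entries could use the sample function; the surrounding resource clause is a placeholder for your own filter:

```
resource.type="gce_instance" AND sample(insertId, 0.10)
```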
When quota is available, your sink routes log entries to the sink's destination.
For details on the limits that might apply when you route log entries,review the appropriate destination's quota information:
In addition to the general sink error types, here are the most common destination-specific error types and how you can fix them.
Errors routing to Cloud Storage
The following are the most common errors when routing log entries to Cloud Storage:
Late-arriving log entries:
Routed log entries are saved to Cloud Storage buckets in hourly batches. It might take from 2 to 3 hours before the first entries begin to appear.
Routed log file shards with the suffix An ("Append") hold log entries that arrived late. If the Cloud Storage destination experiences an outage, then Cloud Logging buffers the data until the outage is over.
Unable to grant correct permissions to the destination:
- Verify that the service account for the log sink has the correct permissions. For more information, see the Permissions issues section of this document.
Errors routing to BigQuery
The following are the most common errors when routing log entries to BigQuery:
Invalid table schema:
Log entries streamed to the table in your BigQuery dataset don't match the current table's schema. Common issues include trying to route log entries with different data types, which causes a schema mismatch. For example, one of the fields in the log entry is an integer, while a corresponding column in the schema has a string type.
Make sure that your log entries match the table's schema. After you fix the source of the error, you can rename your current table and let Logging create the table again.
BigQuery supports loading nested data into its tables. However, when loading data from Logging, the maximum nested depth limit for a column is 13 levels.
When BigQuery identifies a schema mismatch, it creates a table within the corresponding dataset to store the error information. A table's type determines the table name. For date-sharded tables, the naming format is export_errors_YYYYMMDD. For partitioned tables, the naming format is export_errors. For information about the schema of the error tables and about how to prevent future field-type mismatches, see Mismatches in schema.
Log entries are outside of the permitted time boundaries:
Log entries streamed to the partitioned BigQuery table are outside the permitted time boundaries. BigQuery doesn't accept log entries that are too far in the past or future.
You can update your sink to route those log entries to Cloud Storage and use a BigQuery load job. See the BigQuery documentation for further instructions.
Dataset doesn't allow the service account associated with the log sink to write to it:
Even if the sink was successfully created with the correct service account permissions, this error message displays if there isn't a valid billing account associated with the Google Cloud project that contains the sink destination.
Make sure there is a billing account linked to your Google Cloud project. If a billing account isn't linked to the sink destination Google Cloud project, enable billing for that Google Cloud project or update the sink destination so that it's located in a Google Cloud project that has a valid billing account linked to it.
Dataset contains duplicate log entries:
Duplicate log entries can occur when there are failures in streaming log entries to BigQuery, including due to retries or misconfigurations. Cloud Logging deduplicates log entries with the same timestamp and insertId at query time. BigQuery doesn't eliminate duplicate log entries.
To ignore duplicate log entries in BigQuery, include the SELECT DISTINCT clause in your query. For example:
SELECT DISTINCT insertId, timestamp FROM TABLE_NAME
Log entries are backfilled after a Cloud Logging incident:
Logging automatically generates tables with a backfill_ prefix as part of a backfill operation that occurs when a Cloud Logging incident prevents routing of log data to BigQuery.
Tables with a backfill_ prefix contain all log entries that were to be routed to BigQuery during the time range of the incident. These tables might contain some log entries that were successfully routed to the table specified by the sink. To prevent duplicate data, we recommend merging data from backfill tables into the original tables and then deleting the backfill tables.
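As a hedged sketch of that merge step, the following BigQuery SQL copies only the rows that aren't already present and then drops the backfill table. The project, dataset, and table names are hypothetical; adapt them to your own dataset and schema:

```sql
-- Copy rows from the backfill table that are missing from the original table,
-- matching on insertId and timestamp to avoid duplicates.
INSERT INTO `my-project.my_dataset.syslog_20240711`
SELECT b.*
FROM `my-project.my_dataset.backfill_syslog_20240711` AS b
WHERE NOT EXISTS (
  SELECT 1
  FROM `my-project.my_dataset.syslog_20240711` AS t
  WHERE t.insertId = b.insertId AND t.timestamp = b.timestamp
);

-- Remove the backfill table after the merge succeeds.
DROP TABLE `my-project.my_dataset.backfill_syslog_20240711`;
```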
Errors routing to Cloud Logging buckets
You might encounter a situation where you can see log entries in the Logs Explorer that you excluded with your sink. You can still see these log entries if any of the following conditions are true:
You're running your query in the Google Cloud project that generated thelog entries.
To fix this, verify that you're running your query in the correct Google Cloud project.
The excluded log entries were sent to multiple log buckets; you're seeing acopy of the same log you meant to exclude.
To fix this, check your sinks in the Log Router page to verify that you aren't including the log entries in other sinks' filters.
You have access to views in the log bucket where the log entries were sent. In this case, you can see those log entries by default.
To avoid seeing these log entries in the Logs Explorer, you can refine the scope of your search to your source Google Cloud project or bucket.
Troubleshoot storing logs
Why can't I delete this bucket?
If you're trying to delete a bucket, do the following:
Verify that you have the correct permissions to delete the bucket. For the list of the permissions that you need, see Access control with IAM.
Determine whether the bucket is locked by listing the bucket's attributes. If the bucket is locked, check the bucket's retention period. You can't delete a locked bucket until all of the logs in the bucket have fulfilled the bucket's retention period.
Verify that the log bucket doesn't have a linked BigQuery dataset. You can't delete a log bucket with a linked dataset.
The following error is shown in response to a delete command on a log bucket that has a linked dataset:
FAILED_PRECONDITION: This bucket is used for advanced analytics and has an active link. The link must be deleted first before deleting the bucket
To list the links associated with a log bucket, run the gcloud logging links list command or use the projects.locations.buckets.links.list API method.
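For example, a sketch of listing a bucket's links from the gcloud CLI; the bucket ID and location are placeholders:

```shell
gcloud logging links list --bucket=BUCKET_ID --location=LOCATION
```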
Which service accounts are routing logs to my bucket?
To determine if any service accounts have IAM permissions toroute logs to your bucket, do the following:
In the Google Cloud console, go to the IAM page:
If you use the search bar to find this page, then select the result whose subheading is IAM & Admin.
From the Permissions tab, view by Roles. You see a table with all the IAM roles and principals associated with your Google Cloud project.
In the table's Filter text box, enter Logs Bucket Writer.
You see any principals with the Logs Bucket Writer role. If a principal is a service account, its ID contains the string gserviceaccount.com.
Optional: If you want to remove a service account from being able to route logs to your Google Cloud project, select the checkbox for the service account and click Remove.
Why do I see logs for a Google Cloud project even though I excluded them from my _Default sink?
You might be viewing logs in a log bucket in a centralized Google Cloud project, which aggregates logs from across your organization.
If you're using the Logs Explorer to access these logs and see logs that you excluded from the _Default sink, then your view might be set to the Google Cloud project level.
To fix this issue, select Log view in the Refine scope menu and then select the log view associated with the _Default bucket in your Google Cloud project. You shouldn't see the excluded logs anymore.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-11-07 UTC.