Collate and route organization- and folder-level logs to supported destinations
This document describes how to create aggregated sinks. Aggregated sinks let you combine and route logs that are generated by the Google Cloud resources in your organization or folder to a centralized location.
Note: If your data is managed through an Assured Workloads environment, then this feature might be impacted or restricted. For information, see Restrictions and limitations in Assured Workloads.

Before you begin
Before you create a sink, ensure the following:
You are familiar with the behavior of aggregated sinks. To learn about these sinks, see Aggregated sinks overview.
You have a Google Cloud folder or organization with log entries that you can see in the Logs Explorer.
You have one of the following IAM roles for the Google Cloud organization or folder from which you're routing log entries:
- Owner (roles/owner)
- Logging Admin (roles/logging.admin)
- Logs Configuration Writer (roles/logging.configWriter)

The permissions contained in these roles let you create, delete, or modify sinks. For information about setting IAM roles, see the Logging Access control guide.
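For example, a project owner could grant the Logs Configuration Writer role at the organization level with a command like the following sketch. The organization ID and email address are placeholder values, not values from this document:

```
# Grant the Logs Configuration Writer role on the organization.
# Replace the organization ID and user email with your own values.
gcloud organizations add-iam-policy-binding 123456789012 \
  --member=user:admin@example.com \
  --role=roles/logging.configWriter
```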
The destination of the aggregated sink exists, or you have the ability to create it.
When the destination is a Google Cloud project, the project can be in any organization. All other destinations can be in any project in any organization.
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
gcloud
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

gcloud init
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Create an aggregated sink
To configure an aggregated sink, create the sink and then grant the sink the permissions to write to the destination. This section describes how to create an aggregated sink. For information about granting permissions to the sink, see the section of this page titled Set destination permissions.
You can create up to 200 sinks per folder or organization.
Console
To create an aggregated sink for your folder or organization, do the following:
In the Google Cloud console, go to the Log Router page:
If you use the search bar to find this page, then select the result whose subheading is Logging.
Select an existing folder or organization.
Select Create sink.

In the Sink details panel, enter the following details:

Sink name: Provide an identifier for the sink. After you create the sink, you can't rename it, but you can delete it and create a new sink.

Sink description (optional): Describe the purpose or use case for the sink.

In the Select sink service menu, select the type of destination, and then complete the dialog to specify the destination. You can select an existing destination or create the destination.

For an intercepting sink, select Google Cloud project, and then enter the fully-qualified name of the destination Google Cloud project:
logging.googleapis.com/projects/DESTINATION_PROJECT_ID

For a non-intercepting sink, select the destination, and then enter the fully-qualified name of the destination. The following destinations are supported:
Google Cloud project
logging.googleapis.com/projects/DESTINATION_PROJECT_ID

Cloud Logging bucket
logging.googleapis.com/projects/DESTINATION_PROJECT_ID/locations/LOCATION/buckets/BUCKET_NAME
BigQuery dataset
You must enter the fully-qualified name of a write-enabled dataset. The dataset can contain date-sharded or partitioned tables. Don't enter the name of a linked dataset. Linked datasets are read-only.
bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID
Cloud Storage bucket
storage.googleapis.com/BUCKET_NAME
Pub/Sub topic
pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID
Splunk
Enter the Pub/Sub topic for your Splunk service.
In the Choose logs to include in sink panel, select the resources to include in the sink.

For an intercepting sink, select Intercept logs ingested by this organization and all child resources.

For a non-intercepting sink, select Include logs ingested by this resource and all child resources.

In the Build inclusion filter field, enter a filter expression that matches the log entries you want to include. If you don't set a filter, all log entries from your selected resource are routed to the destination.
For example, you might want to build a filter to route all Data Access audit logs to a single Logging bucket. This filter looks like the following:
LOG_ID("cloudaudit.googleapis.com/data_access") OR LOG_ID("externalaudit.googleapis.com/data_access")

For filter examples, see the Create filters for aggregated sinks section of this page.
Note that the length of a filter can't exceed 20,000 characters.
Optional: To verify that you entered the correct filter, select Preview logs. This opens the Logs Explorer in a new tab with the filter pre-populated.
Optional: In the Choose logs to exclude from the sink panel, do the following:

In the Exclusion filter name field, enter a name.

In the Build an exclusion filter field, enter a filter expression that matches the log entries you want to exclude. You can also use the sample function to select a portion of the log entries to exclude.

Key Point: If you want your exclusion filter to be disabled when the sink is created, then select Disable after you enter your filter expression. You can update the sink later to enable the exclusion filter.

For example, to exclude the log entries from a specific project from being routed to the destination, add the following exclusion filter:
logName:projects/PROJECT_ID
To exclude log entries from multiple projects, use the logical-OR operator to join logName clauses.
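For instance, an exclusion filter that drops log entries from two projects might join the clauses like this (the project IDs are hypothetical):

```
logName:projects/my-project-a OR logName:projects/my-project-b
```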
You can create up to 50 exclusion filters per sink. Note that the length of a filter can't exceed 20,000 characters.
Select Create sink.
To complete the configuration of your aggregated sink, grant the service account for the sink the permission to write log entries to your sink's destination. For more information, see Set destination permissions.
gcloud
To create an aggregated sink, call the gcloud logging sinks create command, and ensure that you include the --include-children option.

Before using the following command, make the following replacements:
- SINK_NAME: The name of the log sink. You can't change the name of a sink after you create it.
- SINK_DESTINATION: The service or project to which you want your log entries routed. For information about the format of these destinations, see Destination path formats.
- INCLUSION_FILTER: The inclusion filter for a sink. For filter examples, see Create filters for aggregated sinks.
- FOLDER_ID: The ID of the folder. If you want to create a sink at the organization level, then replace --folder=FOLDER_ID with --organization=ORGANIZATION_ID.
Execute the gcloud logging sinks create command:

gcloud logging sinks create SINK_NAME SINK_DESTINATION \
  --include-children --folder=FOLDER_ID --log-filter="INCLUSION_FILTER"
You can also provide the following options:
- To create an intercepting sink, include the --intercept-children option.
For example, if you're creating an aggregated sink at the folder level whose destination is a Pub/Sub topic, your command might look like the following:
gcloud logging sinks create SINK_NAME \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/TOPIC_ID \
  --include-children --folder=FOLDER_ID --log-filter="logName:activity"
Grant the service account for the sink permission to write to your sink destination. For more information, see Set destination permissions.
REST
To create an aggregated sink, use the organizations.sinks.create or folders.sinks.create Logging API method. Prepare the arguments to the method as follows:
Set the parent field to the Google Cloud organization or folder in which to create the sink. The parent must be one of the following:

organizations/ORGANIZATION_ID
folders/FOLDER_ID
In the LogSink object in the method request body, do the following:

Set includeChildren to True.

To create an intercepting sink, also set the interceptChildren field to True.
Set the filter field to match the log entries you want to include. For filter examples, see Create filters for aggregated sinks.
The length of a filter can't exceed 20,000 characters.
Set the remaining LogSink fields as you would for any sink. For more information, see Route logs to supported destinations.

Call organizations.sinks.create or folders.sinks.create to create the sink.

Grant the service account for the sink permission to write to your sink destination. For more information, see Set destination permissions.
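As a sketch, a request body for the create method that routes to a Cloud Storage bucket might look like the following. The sink name, bucket name, and filter are placeholder values, not values from this document:

```json
{
  "name": "my-aggregated-sink",
  "destination": "storage.googleapis.com/my-central-logs-bucket",
  "filter": "LOG_ID(\"cloudaudit.googleapis.com/activity\")",
  "includeChildren": true
}
```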
Any changes made to a sink might take a few minutes to apply.
Filters for aggregated sinks
This section provides examples of filters that you might use in an aggregated sink. For more examples, see Sample queries using the Logs Explorer.
Some examples use the following notation:
- : is the substring operator. Don't substitute the = operator.
- ... represents any additional filter comparisons.
- Variables are indicated by placeholder text, such as PROJECT_ID. Replace them with valid values.
The length of a filter is restricted to 20,000 characters.
For more details about the filtering syntax, see Logging query language.
Select the log source
To route log entries from all child resources, don't specify a project, folder, or organization in your sink's inclusion and exclusion filters. For example, suppose you configure an aggregated sink for an organization with the following filter:
resource.type="gce_instance"
With the previous filter, log entries with a resource type of Compute Engine instance that are written to any child of that organization are routed by the aggregated sink to the destination.
However, there might be situations where you want to use an aggregated sink to route log entries only from specific child resources. For example, for compliance reasons you might want to store audit logs from specific folders or projects in their own Cloud Storage bucket. In these situations, configure your inclusion filter to specify each child resource whose log entries you want routed. If you want to route log entries from a folder and all projects within that folder, then the filter must list the folder and each of the projects contained by that folder, and join the statements with an OR clause.
The following filters restrict log entries to specific Google Cloud projects, folders, or organizations:
logName:"projects/PROJECT_ID/logs/" AND ...

logName:("projects/PROJECT_A_ID/logs/" OR "projects/PROJECT_B_ID/logs/") AND ...

logName:"folders/FOLDER_ID/logs/" AND ...

logName:"organizations/ORGANIZATION_ID/logs/" AND ...
For example, to route only the log entries that were written by Compute Engine instances in the folder my-folder, use the following filter:
logName:"folders/my-folder/logs/" AND resource.type="gce_instance"
With the previous filter, log entries written to any resource other than my-folder, including log entries written to Google Cloud projects that are children of my-folder, aren't routed to the destination.
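Combining the patterns above, a filter that routes Admin Activity audit log entries from both a folder and one of its projects might join the sources like this (the folder and project IDs are hypothetical):

```
logName:("folders/my-folder/logs/" OR "projects/my-project/logs/") AND LOG_ID("cloudaudit.googleapis.com/activity")
```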
Select the monitored resource
To route log entries from only a specific monitored resource in a Google Cloud project, use multiple comparisons to specify the resource exactly:
logName:"projects/PROJECT_ID/logs" AND resource.type=RESOURCE_TYPE AND resource.labels.instance_id=INSTANCE_ID
For a list of resource types, see Monitored resource types.
Select a sample of log entries
To route a random sample of log entries, add the sample built-in function. For example, to route only ten percent of the log entries matching your current filter, use this addition:
sample(insertId, 0.10) AND ...

For more information, see the sample function.
For more information about Cloud Logging filters, see Logging query language.
Set destination permissions
This section describes how to grant Logging the Identity and Access Management permissions to write log entries to your sink's destination. For the full list of Logging roles and permissions, see Access control.
Note: To set the destination permissions, you must have Owner access on the Google Cloud project that contains the destination.

When you create or update a sink that routes log entries to any destination other than a log bucket in the current project, a service account for that sink is required. Logging automatically creates and manages the service account for you:
- As of May 22, 2023, when you create a sink and no service account for the underlying resource exists, Logging creates the service account. Logging uses the same service account for all sinks in the underlying resource. Resources can be a Google Cloud project, an organization, a folder, or a billing account.
- Before May 22, 2023, Logging created a service account for each sink. As of May 22, 2023, Logging uses a shared service account for all sinks in the underlying resource.
The writer identity of a sink is the identifier of the service account associated with that sink. All sinks have a writer identity unless they write to a log bucket in the current Google Cloud project. The email address in the writer identity identifies the principal that must have access to write data to the destination.
To route log entries to a resource protected by a service perimeter, you must add the service account for that sink to an access level and then assign it to the destination service perimeter. This isn't necessary for non-aggregated sinks. For details, see VPC Service Controls: Cloud Logging.
To set permissions for your sink to route to its destination, do the following:
Console
Note: If you created your sink in the Google Cloud console and you have Owner access to the destination, then Cloud Logging should have set up the necessary permissions on your behalf. If it did so, you're done. If not, complete the following steps.

To get information about the service account for your sink, do the following:
In the Google Cloud console, go to the Log Router page:
If you use the search bar to find this page, then select the result whose subheading is Logging.
Select more_vert Menu and then select View sink details. The writer identity appears in the Sink details panel.
If the value of the writerIdentity field contains an email address, then proceed to the next step. When the value is None, you don't need to configure destination permissions.

Copy the sink's writer identity into your clipboard. Writer identities might look different depending on what resource your sink belongs to, but are always prefixed by serviceAccount:. The following is an example of a writer identity:

serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com
Grant the principal specified by the sink's writer identity the permission to write log data to the destination:
In the Google Cloud console, go to the IAM page:
If you use the search bar to find this page, then select the result whose subheading is IAM &amp; Admin.
In the toolbar of the Google Cloud console, select the project that stores the destination of the aggregated sink. When the destination is a project, select that project.
Click Grant access.
Enter the principal specified by the sink's writer identity and then grant an IAM role:
- For all destinations, grant the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
- Grant one of the following roles based on the destination:
  - Log bucket: Grant the Logs Bucket Writer role (roles/logging.bucketWriter).
  - Cloud Storage bucket: Grant the Storage Object Creator role (roles/storage.objectCreator).
  - BigQuery dataset: Grant the BigQuery Data Editor role (roles/bigquery.dataEditor).
  - Pub/Sub topic, including Splunk: Grant the Pub/Sub Publisher role (roles/pubsub.publisher).
gcloud
Ensure that you have Owner access on the Google Cloud project that contains the destination. If you don't have Owner access to the destination of the sink, then ask a project owner to add the writer identity as a principal.
To get information about the service account for your sink, call the gcloud logging sinks describe command.

Before using the following command, make the following replacements:
- SINK_NAME: The name of the log sink. You can't change the name of a sink after you create it.
Execute the gcloud logging sinks describe command:

gcloud logging sinks describe SINK_NAME

If the sink details contain a field labeled writerIdentity, then proceed to the next step. When the details don't include a writerIdentity field, you don't need to configure destination permissions for the sink.

Copy the sink's writer identity into your clipboard. The following illustrates a writer identity:

serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com

Grant the sink's writer identity the permission to write log data to the destination by calling the gcloud projects add-iam-policy-binding command.

Before using the following command, make the following replacements:
- PROJECT_ID: The identifier of the project. Select the project that stores the destination of the aggregated sink. When the destination is a project, select that project.
- PRINCIPAL: An identifier for the principal that you want to grant the role to. Principal identifiers usually have the following form: PRINCIPAL-TYPE:ID. For example, user:my-user@example.com. For a full list of the formats that PRINCIPAL can have, see Principal identifiers.
- ROLE: An IAM role. Grant the sink's writer identity an IAM role based on the destination of the log sink:
- For all destinations, grant the Logs Writer role (roles/logging.logWriter). Specifically, a principal needs the logging.logEntries.route permission.
- Grant one of the following roles based on the destination:
  - Log bucket: Grant the Logs Bucket Writer role (roles/logging.bucketWriter).
  - Cloud Storage bucket: Grant the Storage Object Creator role (roles/storage.objectCreator).
  - BigQuery dataset: Grant the BigQuery Data Editor role (roles/bigquery.dataEditor).
  - Pub/Sub topic, including Splunk: Grant the Pub/Sub Publisher role (roles/pubsub.publisher).

Execute the gcloud projects add-iam-policy-binding command:

gcloud projects add-iam-policy-binding PROJECT_ID --member=PRINCIPAL --role=ROLE
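For example, to grant the Pub/Sub Publisher role to a sink's writer identity, the command might look like the following sketch. The project ID and service account address are placeholders, not values tied to this document:

```
# Grant the sink's writer identity permission to publish to Pub/Sub topics
# in the destination project. All values shown are placeholders.
gcloud projects add-iam-policy-binding my-destination-project \
  --member=serviceAccount:service-123456789012@gcp-sa-logging.iam.gserviceaccount.com \
  --role=roles/pubsub.publisher
```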
REST
We recommend that you use the Google Cloud console or the Google Cloud CLI to grant a role to the service account.
What's next
Learn how to create log views on a log bucket. Log views let you grant principals read-access to a subset of the log entries stored in a log bucket.
For information about managing existing sinks, see Route logs to supported destinations: Manage sinks.
If you encounter issues as you use sinks to route logs, see Troubleshoot routing and sinks.
To learn how to view your logs in their destinations, as well as how the logs are formatted and organized, see View logs in sink destinations.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.