Profile Azure Blob Storage data
This page describes how to configure Sensitive Data Protection discovery for Azure Blob Storage. This feature is available only to customers who have activated Security Command Center at the Enterprise tier.

Sensitive Data Protection discovery helps you learn about the types of data that you're storing in Blob Storage and the sensitivity levels of your data. When you profile your Blob Storage data, you generate file store data profiles, which provide insights and metadata about your Blob Storage containers. For each Blob Storage container, a file store data profile includes the following information:

- The types of files that you're storing in the container, categorized into file clusters
- The sensitivity level of the data in the container
- A summary about each detected file cluster, including the types of sensitive information found

For a full list of insights and metadata in each file store data profile, see File store data profiles.

For more information about the discovery service, see Data profiles.
Workflow
The high-level workflow for profiling Azure Blob Storage data is as follows:

1. In Security Command Center, create a connector for Microsoft Azure. Make sure that you select Grant permissions for Sensitive Data Protection discovery.
2. Create an inspection template in the global region or the region where you plan to store the discovery scan configuration and all generated data profiles.
3. Create a discovery scan configuration for Azure Blob Storage.
4. Sensitive Data Protection profiles your data according to the schedule that you specify.
Data residency considerations
Consider the following when you plan to profile data from other cloud providers:

- The data profiles are stored alongside the discovery scan configuration. In contrast, when you profile Google Cloud data, the profiles are stored in the same region as the data to be profiled.
- If you store your inspection template in the global region, an in-memory copy of that template is read in the region where you store the discovery scan configuration.
- Your data is not modified. An in-memory copy of your data is read in the region where you store the discovery scan configuration. However, Sensitive Data Protection makes no guarantees about where the data passes through after it reaches the public internet. The data is encrypted with SSL.
Empty files and containers
Discovery doesn't scan empty Blob Storage files and containers and doesn't take them into account when listing the file extensions seen. A container that contains only empty files is also considered empty.
Before you begin
- In Security Command Center, create a connector for Microsoft Azure. For more information, see Connect to Microsoft Azure for configuration and resource data collection in the Security Command Center documentation.
- Confirm that you have the IAM permissions that are required to configure data profiles at the organization level.

  If you don't have the Organization Administrator (roles/resourcemanager.organizationAdmin) or Security Admin (roles/iam.securityAdmin) role, you can still create a scan configuration. However, after you create the scan configuration, someone with either of those roles must grant data profiling access to your service agent.
- Confirm that you have an inspection template in the global region or the region where you plan to store the discovery scan configuration and all generated data profiles.

  This task lets you automatically create an inspection template in the global region only. If organizational policies prevent you from creating an inspection template in the global region, then before you perform this task, you must create an inspection template in the region where you plan to store the discovery scan configuration.
- To send Pub/Sub notifications to a topic when certain events occur, such as when Sensitive Data Protection profiles a new container, create a Pub/Sub topic before performing this task.
- To generate data profiles, you need a service agent container and a service agent within it. This task lets you create them automatically.
Create a scan configuration
Go to the Create scan configuration page.

Go to your organization. On the toolbar, click the project selector and select your organization.

The following sections provide more information about the steps in the Create scan configuration page. At the end of each section, click Continue.
Select a discovery type
Select Azure Blob Storage.
Select scope
Do one of the following:
- To scan all Blob Storage assets that your Azure connector has access to, select Scan all Azure assets available through your connector.
- To scan the Blob Storage data in a single Azure subscription, select Scan one Azure subscription. Enter the subscription ID.
- To scan a single Blob Storage container, select Scan one Azure Blob Storage container. Enter the details of the container that you want to scan.
Manage schedules
If the default profiling frequency suits your needs, you can skip this section of the Create scan configuration page.
Configure this section for the following reasons:
- To make fine-grained adjustments to the profiling frequency of all your data or certain subsets of your data.
- To specify the containers that you don't want to profile.
- To specify the containers that you don't want profiled more than once.
To make fine-grained adjustments to profiling frequency, follow these steps:
Click Add schedule.

Note: If you selected a single container as the scope of this configuration, then you can't add a schedule. You can only edit the catch-all schedule. In addition, you can't specify filters. You can only edit the frequency and conditions for profiling.

In the Filters section, define one or more filters that specify which containers are in the schedule's scope. A container is considered to be in the schedule's scope if it matches at least one of the defined filters.
To configure a filter, specify at least one of the following:
- A subscription ID or a regular expression that specifies one or more subscription IDs
- A container name or a regular expression that specifies one or more containers

Regular expressions must follow RE2 syntax.

For example, if you want all containers in an account to be included in the filter, enter the subscription ID in the Subscription ID field.

To match a filter, a container must meet all the regular expressions specified within that filter.

To add more filters, click Add filter and repeat this step.
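The matching rule described above (a container is in scope if it satisfies at least one filter, and a filter is satisfied only when every regular expression that it specifies matches) can be sketched in Python. This is a hedged illustration, not the actual discovery API: the field names `subscription_id_regex` and `container_name_regex` are placeholders, and Python's `re` module stands in for RE2, which shares most common syntax but is not identical.

```python
import re

def container_in_scope(filters, subscription_id, container_name):
    """Return True if the container matches at least one filter.

    Each filter is a dict with optional 'subscription_id_regex' and
    'container_name_regex' keys (illustrative names). Within a single
    filter, every regex that is specified must match the container.
    """
    for f in filters:
        sub_re = f.get("subscription_id_regex")
        name_re = f.get("container_name_regex")
        if sub_re and not re.fullmatch(sub_re, subscription_id):
            continue  # this filter's subscription criterion failed
        if name_re and not re.fullmatch(name_re, container_name):
            continue  # this filter's container-name criterion failed
        if sub_re or name_re:
            # At least one criterion was specified and all of them matched.
            return True
    return False

# A filter with only a subscription ID matches every container in that subscription.
filters = [
    {"subscription_id_regex": "sub-prod-.*"},
    {"subscription_id_regex": "sub-dev-01", "container_name_regex": "logs-.*"},
]
print(container_in_scope(filters, "sub-prod-42", "invoices"))  # True: first filter matches
print(container_in_scope(filters, "sub-dev-01", "backups"))    # False: name doesn't match
```

Note that the two regexes within the second filter are combined with AND, while the two filters themselves are combined with OR, mirroring the rule stated above.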
Click Frequency.

In the Frequency section, specify whether to profile the containers that you selected and, if so, how often:

If you never want the containers to be profiled, turn off Do profile this data.

If you want the containers to be profiled at least once, leave Do profile this data turned on.

Specify whether to reprofile your data and what events should trigger a reprofile operation. For more information, see Frequency of data profile generation.
- For On a schedule, specify how often you want the containers to be reprofiled. The containers are reprofiled regardless of whether they underwent any changes.
- For When inspect template changes, specify whether you want your data to be reprofiled when the associated inspection template is updated, and if so, how often.

Note: You specify the inspection templates to use in the Select inspection template step on this page.
An inspection template change is detected when either of the following occurs:
- The name of an inspection template changes in your scan configuration.
- The updateTime of an inspection template changes.
For example, if you set an inspection template for the us-west1 region and you update that inspection template, then only data in the us-west1 region is reprofiled.
Optional: Click Conditions.

In the Conditions section, you specify any conditions that the containers selected by your filters must meet before Sensitive Data Protection profiles them.

By default, Sensitive Data Protection scans all objects in a container. If you want to scan only the objects in a particular blob access tier, select those tiers. To include blobs that don't have an access tier, select Not applicable.

Click Done.

Optional: To add more schedules, click Add schedule and repeat the previous steps.
To specify precedence between schedules, reorder them in the list. The order of the schedules specifies how conflicts between schedules are resolved. If a container matches the filters of two different schedules, the schedule that is higher in the schedules list dictates the profiling frequency for that container.
Note: If your discovery pricing mode is subscription mode, the rate at which Sensitive Data Protection profiles your data is affected by how much capacity you purchased. To determine your daily profiling capacity, see Monitoring utilization. If you have under-provisioned capacity, then the profiling frequencies that you set in your schedules might not be followed. If there is a backlog of data to be profiled, the schedule order doesn't dictate the order in which Sensitive Data Protection profiles the data in the backlog. Rather, all data resources in scope get a randomly assigned slot in the queue.

Optional: Edit or turn off the Catch-all schedule.
The last schedule in the list is the catch-all schedule. This schedule covers the containers in your selected scope that don't match any of the schedules that you created. The catch-all schedule follows the system default profiling frequency.

- To adjust the catch-all schedule, click Edit schedule, and then adjust the settings as needed.
- To prevent Sensitive Data Protection from profiling any resource that is covered by the catch-all schedule, turn off Profile the resources that don't match any custom schedule.
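The precedence rule, including the catch-all fallback, can be sketched as follows. The schedule and filter shapes here are hypothetical illustrations, not actual API types:

```python
import re

def schedule_for_container(schedules, catch_all, matches):
    """Pick the profiling schedule for a container.

    `schedules` is ordered by precedence (highest first), and `matches`
    is a predicate that reports whether the container matches a given
    schedule's filters. The catch-all applies only when no custom
    schedule matches.
    """
    for schedule in schedules:
        if matches(schedule):
            return schedule
    return catch_all

# Hypothetical schedules: a container matching both takes the higher one.
schedules = [
    {"name": "monthly", "filter": "finance-.*"},
    {"name": "daily", "filter": ".*"},
]
pick = schedule_for_container(
    schedules,
    {"name": "catch-all"},
    lambda s: re.fullmatch(s["filter"], "finance-reports") is not None,
)
print(pick["name"])  # monthly: the higher schedule wins even though both match
```

As the note above explains, this ordering governs frequency selection only; under subscription mode with a backlog, it does not determine the order in which queued resources are actually profiled.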
Select an inspection template
Depending on how you want to provide an inspection configuration, choose one of the following options. Regardless of which option you choose, Sensitive Data Protection scans your data in the region where that data is stored. That is, your data doesn't leave its region of origin.
Option 1: Create an inspection template
Choose this option if you want to create a new inspection template in the global region.

- Click Create new inspection template.

Optional: To modify the default selection of infoTypes, click Manage infoTypes.

For more information about how to manage built-in and custom infoTypes, see Manage infoTypes through the Google Cloud console.

You must have at least one infoType selected to continue.

Optional: Configure the inspection template further by adding rulesets and setting a confidence threshold. For more information, see Configure detection.

When Sensitive Data Protection creates the scan configuration, it stores this new inspection template in the global region.
Option 2: Use an existing inspection template
Choose this option if you have existing inspection templates that you want to use.

- Click Select existing inspection template.
- Enter the full resource name of the inspection template that you want to use. The Region field is automatically populated with the name of the region where your inspection template is stored.
The inspection template that you enter must be in the same region where you plan to store this discovery scan configuration and all the generated data profiles.
To respect data residency, Sensitive Data Protection doesn't use an inspection template outside the region where that template is stored.
To find the full resource name of an inspection template, follow these steps:
- Go to your inspection templates list. This page opens on a separate tab.
- Select the project that contains the inspection template that you want to use.
- Select Configuration > Templates > Inspect, and then click the template ID of the template that you want to use.
- On the page that opens, copy the full resource name of the template. The full resource name follows this format:
projects/PROJECT_ID/locations/REGION/inspectTemplates/TEMPLATE_ID
- On the Create scan configuration page, in the Template name field, paste the full resource name of the template.
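If you script this step, a small check like the following can validate the resource name format before you paste it. The helper is illustrative, not part of any Sensitive Data Protection client library:

```python
import re

# Matches the format shown above:
# projects/PROJECT_ID/locations/REGION/inspectTemplates/TEMPLATE_ID
TEMPLATE_NAME_RE = re.compile(
    r"projects/(?P<project>[^/]+)/locations/(?P<region>[^/]+)"
    r"/inspectTemplates/(?P<template>[^/]+)"
)

def parse_template_name(full_name):
    """Split a full inspection template resource name into its parts."""
    m = TEMPLATE_NAME_RE.fullmatch(full_name)
    if not m:
        raise ValueError(f"not a full inspection template name: {full_name}")
    return m.groupdict()

parts = parse_template_name(
    "projects/my-project/locations/us-west1/inspectTemplates/my-template"
)
print(parts["region"])  # us-west1
```

A check like this is also a convenient place to confirm that the parsed region matches the region where you plan to store the scan configuration, since the template must live in that same region.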
Add actions
This section describes how to specify actions that you want Sensitive Data Protection to take after profiling a container. These actions are useful if you want to send insights gathered from data profiles to other Google Cloud services.

Note: For information about how other Google Cloud services may charge you for configuring actions, see Pricing for exporting data profiles.

Publish to Google Security Operations
Metrics gathered from data profiles can add context to your Google Security Operations findings. The added context can help you determine the most important security issues to address.

For example, if you're investigating a particular service agent, Google Security Operations can determine what resources the service agent accessed and whether any of those resources have high-sensitivity data.

To send your data profiles to your Google Security Operations instance, turn on Publish to Google Security Operations.

If you don't have a Google Security Operations instance enabled for your organization, either through the standalone product or through Security Command Center Enterprise, turning on this option has no effect.
Publish to Security Command Center
Findings from data profiles provide context when you triage and develop response plans for your vulnerability and threat findings in Security Command Center.

Note: You can also configure Security Command Center to automatically prioritize resources for the attack path simulation feature according to the calculated sensitivity of the data that the resources contain. For more information, see Set resource priority values automatically by data sensitivity.

To send the results of your data profiles to Security Command Center, make sure the Publish to Security Command Center option is turned on.

For more information, see Publish data profiles to Security Command Center.
Save data profile copies to BigQuery
Sensitive Data Protection saves a copy of each generated data profile in a BigQuery table. If you don't provide the details of your preferred table, Sensitive Data Protection creates a dataset and table in the service agent container. By default, the dataset is named sensitive_data_protection_discovery and the table is named discovery_profiles.

Important: The output table uses DataProfileBigQueryRowSchema as its schema. This schema can change as Sensitive Data Protection adds features. Make sure that your workflows can handle schema changes, for example, by ignoring unknown fields.

This action lets you keep a history of all of your generated profiles. This history can be useful for creating audit reports and visualizing data profiles. You can also load this information into other systems.
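One defensive pattern for handling schema changes is to read only the fields that your workflow understands and silently ignore everything else. The field names below are illustrative placeholders, not the actual exported schema columns:

```python
# Fields our hypothetical downstream workflow consumes from each exported row.
KNOWN_FIELDS = {"name", "sensitivity_score", "file_store_path"}

def extract_known(row):
    """Keep only the fields we understand; drop any newly added columns."""
    return {k: v for k, v in row.items() if k in KNOWN_FIELDS}

# A future schema version may add columns; this workflow keeps working.
row = {
    "name": "profile-1",
    "sensitivity_score": "HIGH",
    "file_store_path": "container-1",
    "new_future_field": 123,  # unknown to this workflow; ignored
}
print(extract_known(row))
```

The same approach applies to the sample findings export described later on this page, which carries an identical schema-evolution caveat.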
Also, this option lets you see all of your data profiles in a single view, regardless of which region your data resides in. Although you can also view the data profiles through the Google Cloud console, the console displays the profiles in only one region at a time.
When Sensitive Data Protection fails to profile a container, it periodicallyretries. To minimize noise in the exported data, Sensitive Data Protectionexports only the successfully generated profiles to BigQuery.
Sensitive Data Protection starts exporting profiles from the time you turn onthis option. Profiles that were generated before you turned on exporting aren'tsaved to BigQuery.
Note: Your service agent must have write access on the table where the profile copies will be saved. If you don't have a service agent yet, Sensitive Data Protection lets you create one later in the Create scan configuration page.

For example queries that you can use when analyzing data profiles, see Analyze data profiles.
Save sample discovery findings to BigQuery
Sensitive Data Protection can add sample findings to a BigQuery table of your choice. Sample findings represent a subset of all findings and might not represent all infoTypes that were discovered. Normally, the system generates around 10 sample findings per container, but this number can vary for each discovery run.

Each finding includes the actual string (also called a quote) that was detected and its exact location.

This action is useful if you want to evaluate whether your inspection configuration is correctly matching the type of information that you want to flag as sensitive. Using the exported data profiles and the exported sample findings, you can run queries to get more information about the specific items that were flagged, the infoTypes they matched, their exact locations, their calculated sensitivity levels, and other details.

Important: The output table uses DataProfileFinding as its schema. This schema can change as Sensitive Data Protection adds features. Make sure that your workflows can handle schema changes, for example, by ignoring unknown fields.

Example query: Show sample findings related to file store data profiles

This example requires both Save data profile copies to BigQuery and Save sample discovery findings to BigQuery to be enabled.

The following query uses an INNER JOIN operation on both the table of exported data profiles and the table of exported sample findings. In the resulting table, each record shows the finding's quote, the infoType that it matched, the resource that contains the finding, and the calculated sensitivity level of the resource.
SELECT
  findings_table.quote,
  findings_table.infotype.name,
  findings_table.location.container_name,
  profiles_table.file_store_profile.file_store_path AS bucket_name,
  profiles_table.file_store_profile.sensitivity_score AS bucket_sensitivity_score
FROM `FINDINGS_TABLE_PROJECT_ID.FINDINGS_TABLE_DATASET_ID.FINDINGS_TABLE_ID_latest_v1` AS findings_table
INNER JOIN `PROFILES_TABLE_PROJECT_ID.PROFILES_TABLE_DATASET_ID.PROFILES_TABLE_ID_latest_v1` AS profiles_table
ON findings_table.data_profile_resource_name = profiles_table.file_store_profile.name
To save sample findings to a BigQuery table, follow thesesteps:
Turn on Save sample discovery findings to BigQuery.

Enter the details of the BigQuery table where you want to save the sample findings.

The table that you specify for this action must be different from the table used for the Save data profile copies to BigQuery action.

For Project ID, enter the ID of an existing project where you want to export the findings.

For Dataset ID, enter the name of an existing dataset in the project.

For Table ID, enter the name of the BigQuery table where you want to save the findings. If this table doesn't exist, Sensitive Data Protection automatically creates it for you using the name that you provide.

For information about the contents of each finding that is saved in the BigQuery table, see DataProfileFinding.
Publish to Pub/Sub
Turning on Publish to Pub/Sub lets you take programmatic actions based on profiling results. You can use Pub/Sub notifications to develop a workflow for catching and remediating findings with significant data risk or sensitivity.
To send notifications to a Pub/Sub topic, follow these steps:
Turn on Publish to Pub/Sub.

A list of options appears. Each option describes an event that causes Sensitive Data Protection to send a notification to Pub/Sub.

Select the events that should trigger a Pub/Sub notification.

If you select Send a Pub/Sub notification each time a profile is updated, Sensitive Data Protection sends a notification when there's a change in the sensitivity level, data risk level, detected infoTypes, public access, and other important metrics in the profile.
For each event you select, follow these steps:
Enter the name of the topic. The name must be in the following format:
projects/PROJECT_ID/topics/TOPIC_ID

Replace the following:

- PROJECT_ID: the ID of the project associated with the Pub/Sub topic.
- TOPIC_ID: the ID of the Pub/Sub topic.
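If you build topic names programmatically, a light validation step can catch malformed names before you enter them. The regular expression below is a simplified approximation of Google Cloud project ID and Pub/Sub topic naming rules, not the authoritative specification:

```python
import re

# Rough shape: a 6-30 character lowercase project ID, then a topic ID
# that starts with a letter. This is a simplified sanity check only.
TOPIC_RE = re.compile(
    r"projects/[a-z][a-z0-9-]{4,28}[a-z0-9]/topics/[A-Za-z][\w.~+%-]+"
)

def topic_name(project_id, topic_id):
    """Build a fully qualified Pub/Sub topic name and validate its shape."""
    name = f"projects/{project_id}/topics/{topic_id}"
    if not TOPIC_RE.fullmatch(name):
        raise ValueError(f"invalid topic name: {name}")
    return name

print(topic_name("my-project", "dlp-profile-events"))
# projects/my-project/topics/dlp-profile-events
```

The topic must already exist before notifications can be delivered to it, as noted in the Before you begin section.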
Specify whether to include the full container profile in the notification, or just the full resource name of the container that was profiled.

Set the minimum data risk and sensitivity levels that must be met for Sensitive Data Protection to send a notification.
Specify whether only one or both of the data risk and sensitivity conditions must be met. For example, if you choose AND, then both the data risk and the sensitivity conditions must be met before Sensitive Data Protection sends a notification.
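The AND/OR behavior can be sketched as follows. The level names and profile shape are illustrative placeholders rather than the actual API enums:

```python
# Hypothetical ordering of levels, lowest to highest.
LEVELS = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def should_notify(profile, min_risk, min_sensitivity, mode):
    """Decide whether to send a notification, given minimum levels.

    `mode` is "AND" (both conditions must hold) or "OR" (either suffices).
    """
    risk_ok = LEVELS[profile["data_risk"]] >= LEVELS[min_risk]
    sens_ok = LEVELS[profile["sensitivity"]] >= LEVELS[min_sensitivity]
    return (risk_ok and sens_ok) if mode == "AND" else (risk_ok or sens_ok)

profile = {"data_risk": "HIGH", "sensitivity": "MODERATE"}
print(should_notify(profile, "HIGH", "HIGH", "AND"))  # False: sensitivity below minimum
print(should_notify(profile, "HIGH", "HIGH", "OR"))   # True: risk meets its minimum
```

Choosing OR is useful when either signal alone should page someone; AND reduces noise by requiring both signals to be elevated.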
Note: Your service agent must have publishing access to the Pub/Sub topic (roles/pubsub.publisher). If you don't have a service agent yet, Sensitive Data Protection lets you create one later in the Create scan configuration page. If there are configuration or permission issues with the Pub/Sub topic, Sensitive Data Protection retries sending the Pub/Sub notification for up to two weeks. After two weeks, the notification is discarded.

Manage service agent container and billing
In this section, you specify the project to use as a service agent container. You can have Sensitive Data Protection automatically create a new project, or you can choose an existing project.
Regardless of whether you're using a newly created service agent or reusing anexisting one, make sure it has read access to the data to be profiled.
Automatically create a project
If you don't have the permissions needed to create a project in the organization, you need to select an existing project instead or obtain the required permissions. For information about the required permissions, see Roles required to work with data profiles at the organization or folder level.
To automatically create a project to use as your service agent container,follow these steps:
- In the Service agent container field, review the suggested project ID and edit it as needed.
- Click Create.
- Optional: Update the default project name.
- Select the account to bill for all billable operations related to this new project, including operations that aren't related to discovery.

Note: If you already have an organization-level discovery subscription, this billing account is still required to create the project. However, for all discovery operations, you are billed through the project associated with your subscription.

- Click Create.

Sensitive Data Protection creates the new project. The service agent within this project will be used to authenticate to Sensitive Data Protection and other APIs.
Select an existing project
To select an existing project as your service agent container, click the Service agent container field and select the project.
Set the location to store the configuration
Click the Resource location list, and select the region where you want to store this scan configuration. All scan configurations that you later create will also be stored in this location.

Where you choose to store your scan configuration doesn't affect the data to be scanned. Your data is scanned in the same region where that data is stored. For more information, see Data residency considerations.

Note: If you already have an existing scan configuration, you can't change the value set in this field. All scan configurations are stored in the same location. If you want to change the location of all your scan configurations, you must delete them, recreate them, and store them in the new location.

Review and create the configuration
- If you want to make sure that profiling doesn't start automatically after you create the scan configuration, select Create scan in paused mode.
This option is useful in the following cases:
- Your Google Cloud administrator still needs to grant data profiling access to the service agent.
- You want to create multiple scan configurations and you want some configurations to override others.
- You opted to save data profiles to BigQuery and you want to make sure the service agent has write access to the BigQuery table where the data profile copies will be saved.
- You opted to save sample discovery findings to BigQuery and you want to make sure that the service agent has write access to the BigQuery table where the sample findings will be saved.
- You configured Pub/Sub notifications and you want to grant publishing access to the service agent.
- Review your settings and click Create.
Sensitive Data Protection creates the scan configuration and adds it to the discovery scan configurations list.
To view or manage your scan configurations, see Manage scan configurations.

Note: We regularly improve our detection algorithm. If we find that your organization or project would benefit from a new improvement that we implement, we might automatically regenerate your data profiles and redo the actions in your scan configuration. You won't incur Sensitive Data Protection charges for this operation. However, because we will redo the actions, you might incur charges for your use of other Google Cloud services. For example, if you configured Sensitive Data Protection to save the data profiles to BigQuery, you might incur BigQuery charges.
What's next
- If you don't have the Organization Administrator (roles/resourcemanager.organizationAdmin) or Security Admin (roles/iam.securityAdmin) role, someone with one of those roles must grant data profiling access to your service agent.
- Learn how to manage data profiles.
- Learn how to manage scan configurations.
- Learn how to receive and parse Pub/Sub messages published by the data profiler.
- Learn how to troubleshoot issues with data profiles.
- Look through the data profiling limits.
- Look through the file clusters that sensitive data discovery can scan.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.