Disable soft delete
This page describes how to disable the soft delete feature on new and existing buckets across your organization.
Soft delete is enabled on new buckets by default to prevent data loss. If needed, you can disable soft delete for existing buckets by modifying the soft delete policy, and you can disable soft delete by default for new buckets by setting an organization-wide default tag. Note that once you disable soft delete, your deleted data cannot be recovered, including accidental or malicious deletions.
Required roles
To get the permissions that you need to disable soft delete, ask your administrator to grant you the following IAM roles at the organization level:
- Storage Admin (roles/storage.admin)
- Tag Administrator (roles/resourcemanager.tagAdmin)
- Organization Viewer (roles/resourcemanager.organizationViewer)
These predefined roles contain the permissions required to disable soft delete. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to disable soft delete:
- storage.buckets.get
- storage.buckets.update
- storage.buckets.list (this permission is only required if you plan to use the Google Cloud console to perform the instructions on this page)

For required permissions that are included as part of the Tag Administrator (roles/resourcemanager.tagAdmin) role, see Required permissions for administering tags.
For information about granting roles, see Set and manage IAM policies on buckets or Manage access to projects.
Disable soft delete for a specific bucket
Before you begin, consider the following:
- If you disable the soft delete policy on a bucket that contains soft-deleted objects at the time of disablement, the existing soft-deleted objects are retained until the previously applied retention duration expires.

- After you disable a soft delete policy on your bucket, Cloud Storage doesn't retain newly deleted objects.

- When you disable a soft delete policy on your bucket, the change isn't instantaneous across Cloud Storage due to metadata caching. Therefore, we recommend waiting at least thirty seconds after disabling a soft delete policy before initiating any other delete operations, such as a bulk delete. This ensures that your data is deleted permanently rather than soft-deleted. For more information about consistency in Cloud Storage operations, see Cloud Storage consistency.
Use the following instructions to disable soft delete for a specific bucket:
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- In the list of buckets, click the name of the bucket whose soft delete policy you want to disable.
- Click the Protection tab.
- In the Soft delete policy section, click Disable to disable the soft delete policy.
- Click Confirm.
To learn how to get detailed error information about failed Cloud Storage operations in the Google Cloud console, see Troubleshooting.
Command line
Run the gcloud storage buckets update command with the --clear-soft-delete flag:
gcloud storage buckets update --clear-soft-delete gs://BUCKET_NAME
Where:

- BUCKET_NAME is the name of the bucket. For example, my-bucket.
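If you'd rather make the change programmatically, the following is a minimal sketch using the Python client library. It assumes a version of the google-cloud-storage package that supports soft_delete_policy (roughly 2.16 or later) and Application Default Credentials; my-bucket is a placeholder.

from google.cloud import storage


def disable_soft_delete(bucket_name: str) -> None:
    """Sets the bucket's soft delete retention duration to zero."""
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    # A retention duration of 0 seconds disables soft delete for the bucket.
    bucket.soft_delete_policy.retention_duration_seconds = 0
    bucket.patch()
    print(f"Soft delete disabled for {bucket.name}")


disable_soft_delete("my-bucket")  # Replace with your bucket name.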
REST APIs
JSON API
1. Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

2. Create a JSON file that contains the following information:

{
  "softDeletePolicy": {
    "retentionDurationSeconds": "0"
  }
}

3. Use cURL to call the JSON API with a PATCH Bucket request:

curl -X PATCH --data-binary @JSON_FILE_NAME \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME"

Where:

- JSON_FILE_NAME is the path of the JSON file that you created in Step 2.
- BUCKET_NAME is the name of the relevant bucket. For example, my-bucket.
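For reference, here is the same PATCH request issued from Python rather than cURL; a minimal sketch assuming the google-auth and requests packages and Application Default Credentials (for example, set up with gcloud auth application-default login). The bucket name is a placeholder.

import google.auth
import google.auth.transport.requests
import requests

BUCKET_NAME = "my-bucket"  # Replace with your bucket name.

# Obtain an OAuth 2.0 access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/devstorage.full_control"]
)
credentials.refresh(google.auth.transport.requests.Request())

# Setting retentionDurationSeconds to "0" disables soft delete, exactly as
# in the JSON file used by the cURL example above.
response = requests.patch(
    f"https://storage.googleapis.com/storage/v1/b/{BUCKET_NAME}",
    headers={"Authorization": f"Bearer {credentials.token}"},
    json={"softDeletePolicy": {"retentionDurationSeconds": "0"}},
)
response.raise_for_status()
print(response.json().get("softDeletePolicy"))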
Disable soft delete for the 100 largest buckets in a project
Using the Google Cloud console, you can disable soft delete for up to 100 buckets at a time, with buckets sorted by the most soft-deleted bytes or the highest ratio of soft-deleted bytes to live bytes, allowing you to manage the buckets with the greatest impact on your soft delete costs.
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- In the Cloud Storage page, click Settings.
- Click the Soft delete tab.
- From the Top buckets by deleted bytes list, select the buckets you want to disable soft delete for.
- Click Turn off soft delete.

Soft delete is disabled on the buckets you selected.
Disable soft delete for multiple or all buckets within a project
Using the Google Cloud CLI, run the gcloud storage buckets update command with the --project flag and the * wildcard to bulk disable soft delete for multiple or all buckets within a project:
gcloud storage buckets update --project=PROJECT_ID --clear-soft-delete gs://*
Where:

- PROJECT_ID is the ID of the project. For example, my-project.
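The equivalent sweep with the Python client library is sketched below, assuming a version of google-cloud-storage with soft_delete_policy support; my-project is a placeholder.

from google.cloud import storage

PROJECT_ID = "my-project"  # Replace with your project ID.

client = storage.Client(project=PROJECT_ID)
for bucket in client.list_buckets():
    # A retention duration of 0 seconds disables soft delete for the bucket.
    bucket.soft_delete_policy.retention_duration_seconds = 0
    bucket.patch()
    print(f"Soft delete disabled for {bucket.name}")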
Disable soft delete across all buckets within a folder
Using the Google Cloud CLI, run the gcloud projects list and gcloud storage buckets update commands to list all the projects under a specified folder and then disable soft delete for every bucket in those projects:
gcloud projects list --filter="parent.id:FOLDER_ID" --format="value(projectId)" | while read project
do
  gcloud storage buckets update --project=$project --clear-soft-delete gs://*
done
Where:

- FOLDER_ID is the ID of the folder. For example, 123456.
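If you prefer a client-library approach over the shell loop, the following sketch walks the projects directly under a folder and clears each bucket's policy. It assumes the google-cloud-resource-manager and google-cloud-storage packages; like the gcloud filter above, list_projects only returns the folder's direct children, not projects in nested subfolders.

from google.cloud import resourcemanager_v3, storage

FOLDER_ID = "123456"  # Replace with your folder ID.

projects_client = resourcemanager_v3.ProjectsClient()
for project in projects_client.list_projects(parent=f"folders/{FOLDER_ID}"):
    storage_client = storage.Client(project=project.project_id)
    for bucket in storage_client.list_buckets():
        # A retention duration of 0 seconds disables soft delete.
        bucket.soft_delete_policy.retention_duration_seconds = 0
        bucket.patch()
        print(f"Disabled soft delete for {bucket.name} in {project.project_id}")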
Disable soft delete at the organization level
Using the Google Cloud CLI, run the gcloud projects list and gcloud storage buckets update commands with the --clear-soft-delete flag and the * wildcard to disable soft delete for all buckets within your organization:
gcloud projects list --format="value(projectId)" | while read project
do
  gcloud storage buckets update --project=$project --clear-soft-delete gs://*
done
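As a client-library alternative to the shell loop above, this sketch sweeps every project your credentials can see. It assumes the google-cloud-resource-manager and google-cloud-storage packages; search_projects returns the active projects visible to the caller across the organization.

from google.cloud import resourcemanager_v3, storage

projects_client = resourcemanager_v3.ProjectsClient()
for project in projects_client.search_projects(query="state:ACTIVE"):
    storage_client = storage.Client(project=project.project_id)
    for bucket in storage_client.list_buckets():
        # A retention duration of 0 seconds disables soft delete.
        bucket.soft_delete_policy.retention_duration_seconds = 0
        bucket.patch()
        print(f"Disabled soft delete for {bucket.name} in {project.project_id}")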
Cloud Storage disables soft delete on existing buckets. Objects that have already been soft deleted remain in the buckets until their soft delete retention duration completes, after which they are permanently deleted.
Note: Some shells such as bash and zsh can attempt to expand wildcards before passing the arguments to the Google Cloud CLI. To avoid this, we recommend surrounding the argument with single quotes (on Linux) or double quotes (on Windows). For more information about URI wildcards, see URI wildcards behavior considerations.

Disable soft delete for new buckets
While soft delete is enabled by default on new buckets, you can prevent soft delete from being enabled by default using tags. Tags use the storage.defaultSoftDeletePolicy key to apply a 0d (zero days) soft delete policy at the organization level, which disables the feature and prevents future retention of deleted data.
Use the following instructions to disable soft delete by default when you create new buckets. Note that the following instructions aren't equivalent to setting an organization policy that mandates a particular soft delete policy, meaning you can still enable soft delete on specific buckets by specifying a policy if needed.
Using the Google Cloud CLI, create the storage.defaultSoftDeletePolicy tag, which is used to change the default soft delete retention duration on new buckets. Note that only the storage.defaultSoftDeletePolicy tag name updates the default soft delete retention duration.

1. Create a tag key using the gcloud resource-manager tags keys create command:

gcloud resource-manager tags keys create storage.defaultSoftDeletePolicy \
  --parent=organizations/ORGANIZATION_ID \
  --description="Configures the default softDeletePolicy for new Storage buckets."

Where:

- ORGANIZATION_ID is the numeric ID of the organization you want to set a default soft delete retention duration for. For example, 12345678901. To learn how to find the organization ID, see Getting your organization resource ID.
2. Create a tag value for 0d (zero days) to disable the soft delete retention period by default on new buckets using the gcloud resource-manager tags values create command:

gcloud resource-manager tags values create 0d \
  --parent=ORGANIZATION_ID/storage.defaultSoftDeletePolicy \
  --description="Disables soft delete for new Storage buckets."

Where:

- ORGANIZATION_ID is the numeric ID of the organization you want to set the default soft delete retention duration for. For example, 12345678901.
Note: If your organization already has a storage.defaultSoftDeletePolicy tag, to create a tag value for 0d and disable soft delete, you'll need to update the existing storage.defaultSoftDeletePolicy tag to use the 0d tag value. For more information about updating tags, see Update existing tags.

3. Attach the tag to your resource using the gcloud resource-manager tags bindings create command:

gcloud resource-manager tags bindings create \
  --tag-value=ORGANIZATION_ID/storage.defaultSoftDeletePolicy/0d \
  --parent=RESOURCE_ID

Where:

- ORGANIZATION_ID is the numeric ID of the organization under which the tag was created. For example, 12345678901.
- RESOURCE_ID is the full name of the organization you want to create the tag binding for. For example, to attach a tag to organizations/7890123456, enter //cloudresourcemanager.googleapis.com/organizations/7890123456.
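The same tag setup can be scripted. Here is a minimal sketch using the google-cloud-resource-manager package (an assumption, not shown on this page) that mirrors the three gcloud commands above; note the API refers to keys and values by generated IDs (tagKeys/..., tagValues/...) rather than the namespaced names that gcloud accepts.

from google.cloud import resourcemanager_v3

ORGANIZATION_ID = "12345678901"  # Replace with your organization's numeric ID.

# 1. Create the tag key that controls the default soft delete policy.
tag_key = resourcemanager_v3.TagKeysClient().create_tag_key(
    tag_key=resourcemanager_v3.TagKey(
        parent=f"organizations/{ORGANIZATION_ID}",
        short_name="storage.defaultSoftDeletePolicy",
        description="Configures the default softDeletePolicy for new Storage buckets.",
    )
).result()

# 2. Create the 0d value that disables soft delete on new buckets.
tag_value = resourcemanager_v3.TagValuesClient().create_tag_value(
    tag_value=resourcemanager_v3.TagValue(
        parent=tag_key.name,  # For example, tagKeys/123456789.
        short_name="0d",
        description="Disables soft delete for new Storage buckets.",
    )
).result()

# 3. Bind the tag value to the organization.
resourcemanager_v3.TagBindingsClient().create_tag_binding(
    tag_binding=resourcemanager_v3.TagBinding(
        parent=f"//cloudresourcemanager.googleapis.com/organizations/{ORGANIZATION_ID}",
        tag_value=tag_value.name,  # For example, tagValues/987654321.
    )
).result()
print("Default soft delete policy tag attached.")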
Disable soft delete for buckets that exceed a specified cost threshold
Using the Cloud Client Libraries for Python, you can disable soft delete for buckets that exceed a specified relative cost threshold with a Python client library sample. The sample does the following:
- Calculates the relative cost of storage for each storage class.

- Assesses the soft delete cost accumulated by your buckets.

- Sets a cost threshold for soft delete usage, lists the buckets that exceed that threshold, and lets you disable soft delete for those buckets.
To learn more about setting up the Python client library and using the sample, see the Cloud Storage soft delete cost analyzer README.md page.
The following sample disables soft delete for buckets that exceed a specified cost threshold:
from __future__ import annotations

import argparse
import json

import google.cloud.monitoring_v3 as monitoring_client


def get_relative_cost(storage_class: str) -> float:
    """Retrieves the relative cost for a given storage class and location.

    Args:
        storage_class: The storage class (e.g., 'standard', 'nearline').

    Returns:
        The price per GB from https://cloud.google.com/storage/pricing,
        divided by the price of the standard storage class.
    """
    relative_cost = {
        "STANDARD": 0.023 / 0.023,
        "NEARLINE": 0.013 / 0.023,
        "COLDLINE": 0.007 / 0.023,
        "ARCHIVE": 0.0025 / 0.023,
    }

    return relative_cost.get(storage_class, 1.0)


def get_soft_delete_cost(
    project_name: str,
    soft_delete_window: float,
    agg_days: int,
    lookback_days: int,
) -> dict[str, list[dict[str, float]]]:
    """Calculates soft delete costs for buckets in a Google Cloud project.

    Args:
        project_name: The name of the Google Cloud project.
        soft_delete_window: The time window in seconds for considering
            soft-deleted objects (default is 7 days).
        agg_days: Aggregate results over a time period, defaults to 30-day period.
        lookback_days: Look back up to this many days, defaults to 360 days.

    Returns:
        A dictionary with bucket names as keys and cost data for each bucket,
        broken down by storage class.
    """
    query_client = monitoring_client.QueryServiceClient()

    # Step 1: Get storage class ratios for each bucket.
    storage_ratios_by_bucket = get_storage_class_ratio(
        project_name, query_client, agg_days, lookback_days
    )

    # Step 2: Fetch soft-deleted bytes and calculate costs using Monitoring API.
    soft_deleted_costs = calculate_soft_delete_costs(
        project_name,
        query_client,
        soft_delete_window,
        storage_ratios_by_bucket,
        agg_days,
        lookback_days,
    )

    return soft_deleted_costs


def calculate_soft_delete_costs(
    project_name: str,
    query_client: monitoring_client.QueryServiceClient,
    soft_delete_window: float,
    storage_ratios_by_bucket: dict[str, float],
    agg_days: int,
    lookback_days: int,
) -> dict[str, list[dict[str, float]]]:
    """Calculates the relative cost of enabling soft delete for each bucket
    in a project for a certain time frame in seconds.

    Args:
        project_name: The name of the Google Cloud project.
        query_client: A Monitoring API query client.
        soft_delete_window: The time window in seconds for considering
            soft-deleted objects (default is 7 days).
        storage_ratios_by_bucket: A dictionary of storage class ratios per bucket.
        agg_days: Aggregate results over a time period, defaults to 30-day period.
        lookback_days: Look back up to this many days, defaults to 360 days.

    Returns:
        A dictionary with bucket names as keys and a list of cost data
        dictionaries for each bucket, broken down by storage class.
    """
    soft_deleted_bytes_time = query_client.query_time_series(
        monitoring_client.QueryTimeSeriesRequest(
            name=f"projects/{project_name}",
            query=f"""
            {{  # Fetch 1: Soft-deleted (bytes seconds)
                fetch gcs_bucket :: storage.googleapis.com/storage/v2/deleted_bytes
                | value val(0) * {soft_delete_window}'s'  # Multiply by soft delete window
                | group_by [resource.bucket_name, metric.storage_class], window(), .sum;

                # Fetch 2: Total byte-seconds (active objects)
                fetch gcs_bucket :: storage.googleapis.com/storage/v2/total_byte_seconds
                | filter metric.type != 'soft-deleted-object'
                | group_by [resource.bucket_name, metric.storage_class], window(1d), .mean  # Daily average
                | group_by [resource.bucket_name, metric.storage_class], window(), .sum  # Total over window
            }}  # End query definition
            | every {agg_days}d  # Aggregate over larger time intervals
            | within {lookback_days}d  # Limit data range for analysis
            | ratio  # Calculate ratio (soft-deleted (bytes seconds) / total (bytes seconds))
            """,
        )
    )

    buckets: dict[str, list[dict[str, float]]] = {}
    missing_distribution_storage_class = []
    for data_point in soft_deleted_bytes_time.time_series_data:
        bucket_name = data_point.label_values[0].string_value
        storage_class = data_point.label_values[1].string_value
        # To include location-based cost analysis:
        # 1. Uncomment the line below:
        # location = data_point.label_values[2].string_value
        # 2. Update how you calculate 'relative_storage_class_cost' to factor in location
        soft_delete_ratio = data_point.point_data[0].values[0].double_value
        distribution_storage_class = bucket_name + " - " + storage_class
        storage_class_ratio = storage_ratios_by_bucket.get(distribution_storage_class)
        if storage_class_ratio is None:
            missing_distribution_storage_class.append(distribution_storage_class)
        buckets.setdefault(bucket_name, []).append(
            {
                # Include storage class and location data for additional plotting dimensions.
                # "storage_class": storage_class,
                # 'location': location,
                "soft_delete_ratio": soft_delete_ratio,
                "storage_class_ratio": storage_class_ratio,
                "relative_storage_class_cost": get_relative_cost(storage_class),
            }
        )

    if missing_distribution_storage_class:
        print(
            "Missing storage class for following buckets:",
            missing_distribution_storage_class,
        )
        raise ValueError("Cannot proceed with missing storage class ratios.")

    return buckets


def get_storage_class_ratio(
    project_name: str,
    query_client: monitoring_client.QueryServiceClient,
    agg_days: int,
    lookback_days: int,
) -> dict[str, float]:
    """Calculates storage class ratios for each bucket in a project.

    This information helps determine the relative cost contribution of each
    storage class to the overall soft-delete cost.

    Args:
        project_name: The Google Cloud project name.
        query_client: Google Cloud's Monitoring Client's QueryServiceClient.
        agg_days: Aggregate results over a time period, defaults to 30-day period.
        lookback_days: Look back up to this many days, defaults to 360 days.

    Returns:
        Ratio of storage classes within a bucket.
    """
    request = monitoring_client.QueryTimeSeriesRequest(
        name=f"projects/{project_name}",
        query=f"""
        {{
            # Fetch total byte-seconds for each bucket and storage class
            fetch gcs_bucket :: storage.googleapis.com/storage/v2/total_byte_seconds
            | group_by [resource.bucket_name, metric.storage_class], window(), .sum;
            # Fetch total byte-seconds for each bucket (regardless of class)
            fetch gcs_bucket :: storage.googleapis.com/storage/v2/total_byte_seconds
            | group_by [resource.bucket_name], window(), .sum
        }}
        | ratio  # Calculate ratios of storage class size to total size
        | every {agg_days}d
        | within {lookback_days}d
        """,
    )

    storage_class_ratio = query_client.query_time_series(request)

    storage_ratios_by_bucket = {}
    for time_series in storage_class_ratio.time_series_data:
        bucket_name = time_series.label_values[0].string_value
        storage_class = time_series.label_values[1].string_value
        ratio = time_series.point_data[0].values[0].double_value

        # Create a descriptive key for the dictionary
        key = f"{bucket_name} - {storage_class}"
        storage_ratios_by_bucket[key] = ratio

    return storage_ratios_by_bucket


def soft_delete_relative_cost_analyzer(
    project_name: str,
    cost_threshold: float = 0.0,
    soft_delete_window: float = 604800,
    agg_days: int = 30,
    lookback_days: int = 360,
    list_buckets: bool = False,
) -> str | dict[str, float]:  # Note potential string output
    """Identifies buckets exceeding the relative cost threshold for enabling soft delete.

    Args:
        project_name: The Google Cloud project name.
        cost_threshold: Threshold above which to consider removing soft delete.
        soft_delete_window: Time window for calculating soft-delete costs (in seconds).
        agg_days: Aggregate results over this time period (in days).
        lookback_days: Look back up to this many days.
        list_buckets: Return a list of bucket names (True) or JSON (False, default).

    Returns:
        JSON formatted results of buckets exceeding the threshold and costs
        *or* a space-separated string of bucket names.
    """
    buckets: dict[str, float] = {}
    for bucket_name, storage_sources in get_soft_delete_cost(
        project_name, soft_delete_window, agg_days, lookback_days
    ).items():
        bucket_cost = 0.0
        for storage_source in storage_sources:
            bucket_cost += (
                storage_source["soft_delete_ratio"]
                * storage_source["storage_class_ratio"]
                * storage_source["relative_storage_class_cost"]
            )
        if bucket_cost > cost_threshold:
            buckets[bucket_name] = round(bucket_cost, 4)

    if list_buckets:
        return " ".join(buckets.keys())  # Space-separated bucket names
    else:
        return json.dumps(buckets, indent=2)  # JSON output


def soft_delete_relative_cost_analyzer_main() -> None:
    # Sample run: python storage_soft_delete_relative_cost_analyzer.py <Project Name>
    parser = argparse.ArgumentParser(
        description="Analyze and manage Google Cloud Storage soft-delete costs."
    )
    parser.add_argument(
        "project_name", help="The name of the Google Cloud project to analyze."
    )
    parser.add_argument(
        "--cost_threshold",
        type=float,
        default=0.0,
        help="Relative cost threshold.",
    )
    parser.add_argument(
        "--soft_delete_window",
        type=float,
        default=604800.0,
        help="Time window (in seconds) for considering soft-deleted objects.",
    )
    parser.add_argument(
        "--agg_days",
        type=int,
        default=30,
        help=(
            "Time window (in days) for aggregating results over a time period,"
            " defaults to 30-day period"
        ),
    )
    parser.add_argument(
        "--lookback_days",
        type=int,
        default=360,
        help="Time window (in days) for how far back to look, defaults to 360 days.",
    )
    parser.add_argument(
        "--list",
        action="store_true",  # A boolean flag; argparse's type=bool would treat any value as True.
        default=False,
        help="Return the list of bucket names separated by spaces.",
    )

    args = parser.parse_args()

    response = soft_delete_relative_cost_analyzer(
        args.project_name,
        args.cost_threshold,
        args.soft_delete_window,
        args.agg_days,
        args.lookback_days,
        args.list,
    )
    if not args.list:
        print(
            "To remove soft-delete policy from the listed buckets run:\n"  # Capture output
            "python storage_soft_delete_relative_cost_analyzer.py"
            " [your-project-name] --[OTHER_OPTIONS] --list > list_of_buckets.txt\n"
            "cat list_of_buckets.txt | gcloud storage buckets update -I "
            "--clear-soft-delete",
            response,
        )
        return
    print(response)


if __name__ == "__main__":
    soft_delete_relative_cost_analyzer_main()

What's next
- Review considerations before re-enabling soft delete.
- Learn about how soft delete interacts with other Cloud Storage features.