Transfer between Cloud Storage buckets
Storage Transfer Service can be used to transfer large amounts of data between Cloud Storage buckets, either within the same Google Cloud project, or between different projects.
Bucket migrations are useful in a number of scenarios. They can be used to consolidate data from separate projects, to move data into a backup location, or to change the location of your data.
When to use Storage Transfer Service
Google Cloud offers multiple options to transfer data between Cloud Storage buckets. We recommend the following guidelines:
- Transferring less than 1 TB: Use gcloud. For instructions, refer to Move and rename buckets.
- Transferring more than 1 TB: Use Storage Transfer Service. Storage Transfer Service is a managed transfer option that provides out-of-the-box security, reliability, and performance. It eliminates the need to optimize and maintain scripts, and to handle retries.
This guide discusses best practices when transferring data between Cloud Storage buckets using Storage Transfer Service.
Define a transfer strategy
What your transfer strategy looks like depends on the complexity of your situation. Make sure to include the following considerations in your plan.
Choose a bucket name
To move your data to a storage bucket with a different location, choose one of the following approaches:
- New bucket name. Update your applications to point to a storage bucket with a different name.
- Keep bucket name. Replace your storage bucket to keep the current name, meaning you don't need to update your applications.
In both cases you should plan for downtime, and give your users suitable notice that downtime is coming. Review the following explanations to understand which choice is best for you.
New bucket name
With a new bucket name, you need to update all code and services that use your current bucket. How you do this depends on how your applications are built and deployed.
For certain setups this approach might have less downtime, but requires more work to ensure a smooth transition. It involves the following steps:
- Copying your data to a new storage bucket.
- Starting your downtime.
- Updating your applications to point to the new bucket.
- Verifying that everything works as expected, and that all relevant systems and accounts have access to the bucket.
- Deleting the original bucket.
- Ending your downtime.
Keep bucket name
Use this approach if you prefer not to change your code to point to a new bucket name. It involves the following steps:
- Copying your data to a temporary storage bucket.
- Starting your downtime.
- Deleting your original bucket.
- Creating a new bucket with the same name as your original bucket.
- Copying the data to your new bucket from the temporary bucket.
- Deleting the temporary bucket.
- Verifying that everything works as expected, and that all relevant systems and accounts have access to the bucket.
- Ending your downtime.
Minimize downtime
Storage Transfer Service does not lock reads or writes on the source or destination buckets during a transfer.
If you choose to manually lock reads/writes on your bucket, you can minimize downtime by transferring your data in two steps: seed and sync.
- Seed transfer: Perform a bulk transfer without locking reads/writes on the source.
- Sync transfer: After the first run is complete, lock reads/writes on the source bucket and perform another transfer. Storage Transfer Service transfers are incremental by default, so this second transfer only transfers data that changed during the seed transfer.
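The following is a minimal sketch of the seed-and-sync pattern using the Storage Transfer Service Python client library. The project and bucket names are placeholders, and it assumes a job created without a schedule only runs when explicitly invoked with `run_transfer_job`:

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

# Create the job without a schedule, so it only runs when invoked.
job = client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        transfer_job={
            "project_id": "my-project-id",  # placeholder
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                "gcs_data_source": {"bucket_name": "source-bucket"},
                "gcs_data_sink": {"bucket_name": "destination-bucket"},
            },
        }
    )
)

# Seed: bulk transfer while the source is still accepting writes.
seed = client.run_transfer_job(
    {"job_name": job.name, "project_id": "my-project-id"}
)
seed.result()  # wait for the seed run to finish

# ...lock reads/writes on the source bucket, then...

# Sync: transfers are incremental by default, so this run only copies
# objects that changed during the seed transfer.
sync = client.run_transfer_job(
    {"job_name": job.name, "project_id": "my-project-id"}
)
sync.result()
```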
Optimize the transfer speed
When estimating how long a transfer job takes, consider the possible bottlenecks. For example, if the source has billions of small files, then your transfer speed is going to be QPS-bound. If object sizes are large, bandwidth might be the bottleneck.
Bandwidth limits are set at the region level and are fairly allocated across all projects. If sufficient bandwidth is available, Storage Transfer Service can complete around 1000 tasks per transfer job per second. You can accelerate a transfer in this case by splitting your job into multiple small transfer jobs, for example by using include and exclude prefixes to transfer certain files.
In cases where the location, storage class, and encryption key are the same, Storage Transfer Service does not create a new copy of the bytes; it instead creates a new metadata entry that points to the source blob. As a result, same-location, same-class copies of a large corpus are completed very quickly and are only QPS-bound.
Deletes are also metadata-only operations. For these transfers, parallelizing the transfer by splitting it into multiple small jobs can increase the speed.
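The following sketch shows one way to split a transfer by prefix with the Python client library. The prefixes, project, and bucket names are placeholders; in practice you would pick prefixes that divide your corpus roughly evenly:

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

# Placeholder shards; each prefix becomes its own transfer job so the
# jobs can progress in parallel.
prefixes = ["logs/", "images/", "videos/", "archives/"]

for prefix in prefixes:
    client.create_transfer_job(
        storage_transfer.CreateTransferJobRequest(
            transfer_job={
                "project_id": "my-project-id",  # placeholder
                "description": f"Shard for prefix {prefix}",
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "transfer_spec": {
                    "gcs_data_source": {"bucket_name": "source-bucket"},
                    "gcs_data_sink": {"bucket_name": "destination-bucket"},
                    # Limit this job to objects under one prefix.
                    "object_conditions": {"include_prefixes": [prefix]},
                },
            }
        )
    )
```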
Preserve metadata
The following object metadata is preserved when transferring data between Cloud Storage buckets with Storage Transfer Service:
- User-created custom metadata.
- Cloud Storage fixed-key metadata fields, such as Cache-Control, Content-Disposition, Content-Type, and Custom-Time.
- Object size.
- Generation number, preserved as a custom metadata field with the key `x-goog-reserved-source-generation`, which you can edit later or remove.
The following metadata fields can optionally be preserved when transferring using the API:
- ACLs (`acl`)
- Storage class (`storageClass`)
- CMEK (`kmsKey`)
- Temporary hold (`temporaryHold`)
- Object creation time (`customTime`)
Refer to the TransferSpec API reference for more details.
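As an illustration, the sketch below opts in to these fields through the metadata options of the transfer options in the Python client library. The field and enum names follow the `google-cloud-storage-transfer` package as we understand it; treat them as assumptions and confirm against the TransferSpec API reference:

```python
from google.cloud import storage_transfer

MetadataOptions = storage_transfer.MetadataOptions

# A TransferSpec fragment that preserves the optional fields listed above.
# Bucket names are placeholders.
transfer_spec = {
    "gcs_data_source": {"bucket_name": "source-bucket"},
    "gcs_data_sink": {"bucket_name": "destination-bucket"},
    "transfer_options": {
        "metadata_options": {
            "acl": MetadataOptions.Acl.ACL_PRESERVE,
            "storage_class": MetadataOptions.StorageClass.STORAGE_CLASS_PRESERVE,
            "kms_key": MetadataOptions.KmsKey.KMS_KEY_PRESERVE,
            "temporary_hold": MetadataOptions.TemporaryHold.TEMPORARY_HOLD_PRESERVE,
            # Object creation time is carried over as the customTime field.
            "time_created": MetadataOptions.TimeCreated.TIME_CREATED_PRESERVE_AS_CUSTOM_TIME,
        }
    },
}
```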
The following metadata fields aren't preserved:
- Last updated time (`updated`)
- `etag`
- `componentCount`
If preserved, object creation time is stored as a custom field, `customTime`. The object's `updated` time is reset upon transfer, so the object's time spent in its storage class is also reset. This means an object in Coldline Storage, post-transfer, has to exist again for 90 days at the destination to avoid early deletion charges.
You can apply your createTime-based lifecycle policies using `customTime`. Existing `customTime` values are overwritten.
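For example, assuming the `google-cloud-storage` client library and a placeholder bucket name, a delete-after-N-days rule keyed on customTime might look like the following sketch:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("destination-bucket")  # placeholder name

# Delete objects 365 days after their customTime, which after the transfer
# carries the objects' original creation time.
bucket.add_lifecycle_delete_rule(days_since_custom_time=365)
bucket.patch()
```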
For more details on what is and isn't preserved, refer to Metadata preservation.
Handle versioned objects
If you want to transfer all versions of your storage objects and not just the latest, you need to use either the gcloud CLI or REST API to transfer your data, combined with Storage Transfer Service's manifest feature.
To transfer all object versions:
- List the bucket objects and copy them into a JSON file:

```
gcloud storage ls --all-versions --recursive --json [SOURCE_BUCKET] > object-listing.json
```

This command typically lists around 1k objects per second.
- Split the JSON file into two CSV files, one file with non-current versions, and another with the live versions:

```
jq -r '.[] | select(.type=="cloud_object" and (.metadata | has("timeDeleted") | not)) | [.metadata.name, .metadata.generation] | @csv' object-listing.json > live-object-manifest.csv
jq -r '.[] | select(.type=="cloud_object" and (.metadata | has("timeDeleted"))) | [.metadata.name, .metadata.generation] | @csv' object-listing.json > non-current-object-manifest.csv
```

- Enable object versioning on the destination bucket.
- Transfer the non-current versions first by passing the `non-current-object-manifest.csv` manifest file as the value of the `transferManifest` field (see the sketch after this list).
- Then, transfer the live versions in the same way, specifying `live-object-manifest.csv` as the manifest file.
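The following sketch shows the equivalent API call with the Python client library, assuming the manifest CSVs generated above have been uploaded to a bucket you control (all names are placeholders):

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        transfer_job={
            "project_id": "my-project-id",  # placeholder
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                "gcs_data_source": {"bucket_name": "source-bucket"},
                "gcs_data_sink": {"bucket_name": "destination-bucket"},
                # Only the object/generation pairs in the manifest are
                # transferred. Run once with the non-current manifest,
                # then again with the live manifest.
                "transfer_manifest": {
                    "location": "gs://manifest-bucket/non-current-object-manifest.csv"
                },
            },
        }
    )
)
```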
Configure transfer options
Some of the options available to you when setting up your transfer are as follows:
- Logging: Cloud Logging provides detailed logs of individual objects, allowing you to verify transfer status and to perform additional data integrity checks.
- Filtering: You can use include and exclude prefixes to limit which objects Storage Transfer Service operates on. This option can be used to split a transfer into multiple transfer jobs so that they can run in parallel. See Optimize the transfer speed for more information.
- Transfer options: You can configure your transfer to overwrite existing items in the destination bucket; to delete objects in the destination that don't exist in the transfer set; or to delete transferred objects from the source (see the sketch after this list).
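As a sketch, these three toggles map onto the transfer options of a transfer spec as follows (Python client library; bucket names are placeholders):

```python
from google.cloud import storage_transfer

# A TransferSpec fragment showing the three toggles described above.
transfer_spec = {
    "gcs_data_source": {"bucket_name": "source-bucket"},
    "gcs_data_sink": {"bucket_name": "destination-bucket"},
    "transfer_options": {
        # Overwrite objects that already exist in the destination.
        "overwrite_objects_already_existing_in_sink": True,
        # Delete destination objects that aren't in the transfer set.
        "delete_objects_unique_in_sink": False,
        # Delete source objects once they've been transferred.
        "delete_objects_from_source_after_transfer": True,
    },
}
```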
Transfer your data
After you've defined your transfer strategy, you can perform the transfer itself.
Create a new bucket
Before beginning the transfer, create a storage bucket. See location considerations for help choosing an appropriate bucket location.
You might wish to copy over some of the bucket metadata when you create the new bucket. See Get bucket metadata to learn how to display the source bucket's metadata, so that you can apply the same settings to your new bucket.
Copy objects to the new bucket
You can copy objects from the source bucket to a new bucket using the Google Cloud console, the gcloud CLI, the REST API, or client libraries. Which approach you choose depends on your transfer strategy.
The following instructions are for the basic use case of transferring objects from one bucket to another, and should be modified to fit your needs.
Don't include sensitive information such as personally identifiable information (PII) or security data in your transfer job name. Resource names may be propagated to the names of other Google Cloud resources and may be exposed to Google-internal systems outside of your project.
Google Cloud console
Use the Cloud Storage Transfer Service from within the Google Cloud console:
Open the Transfer page in the Google Cloud console.
- Click Create transfer job.
Follow the step-by-step walkthrough, clicking Next step as you complete each step:
- Get started: Use Google Cloud Storage as both your Source Type and Destination Type.
- Choose a source: Either enter the name of the bucket you want directly, or click Browse to find and select it.
- Choose a destination: Either enter the name of the bucket you want directly, or click Browse to find and select it.
- Choose settings: Select the option Delete files from source after they're transferred.
- Scheduling options: You can ignore this section.
After you complete the step-by-step walkthrough, click Create.
This begins the process of copying objects from your old bucket into your new one. This process may take some time; however, after you click Create, you can navigate away from the Google Cloud console.
To view the transfer's progress:
Open the Transfer page in the Google Cloud console.
To learn how to get detailed error information about failed Storage Transfer Service operations in the Google Cloud console, see Troubleshooting.
After the transfer completes, you don't need to do anything to delete the objects from your old bucket if you selected the Delete source objects after the transfer completes checkbox during setup. You may, however, also want to delete your old bucket, which you must do separately.
gcloud CLI
Install the gcloud CLI
If you haven't already, install the gcloud command-line tool.
Then, call `gcloud init` to initialize the tool and to specify your project ID and user account. See Initializing Cloud SDK for more details.
```
gcloud init
```
Add the service account to your destination bucket
You must add the Storage Transfer Service service account to your destination bucket before creating a transfer. To do so, use `gcloud storage buckets add-iam-policy-binding`:

```
gcloud storage buckets add-iam-policy-binding gs://bucket_name \
  --member=serviceAccount:project-12345678@storage-transfer-service.iam.gserviceaccount.com \
  --role=roles/storage.admin
```
For instructions using the Google Cloud console or API, refer to Use IAM permissions in the Cloud Storage documentation.
Create the transfer job
To create a new transfer job, use the `gcloud transfer jobs create` command. Creating a new job initiates the specified transfer, unless a schedule or `--do-not-run` is specified.

```
gcloud transfer jobs create SOURCE DESTINATION
```

Replace the following:
- SOURCE is the data source for this transfer, in the format `gs://BUCKET_NAME`.
- DESTINATION is your new bucket, in the format `gs://BUCKET_NAME`.
Additional options include:
- Job information: You can specify `--name` and `--description`.
- Schedule: Specify `--schedule-starts`, `--schedule-repeats-every`, and `--schedule-repeats-until`, or `--do-not-run`.
- Object conditions: Use conditions to determine which objects are transferred. These include `--include-prefixes` and `--exclude-prefixes`, and the time-based conditions in `--include-modified-[before | after]-[absolute | relative]`.
- Transfer options: Specify whether to overwrite destination files (`--overwrite-when=different` or `always`) and whether to delete certain files during or after the transfer (`--delete-from=destination-if-unique` or `source-after-transfer`); specify which metadata values to preserve (`--preserve-metadata`); and optionally set a storage class on transferred objects (`--custom-storage-class`).
- Notifications: Configure Pub/Sub notifications for transfers with `--notification-pubsub-topic`, `--notification-event-types`, and `--notification-payload-format`.
To view all options, run `gcloud transfer jobs create --help`.
For example, to transfer all objects with the prefix `folder1`:

```
gcloud transfer jobs create gs://old-bucket gs://new-bucket \
  --include-prefixes="folder1/"
```

REST
In this example, you'll learn how to move files from one Cloud Storage bucket to another. For example, you can move data to a bucket in another location.
Note: The process is the same if the bucket is located in a different project.
Request using `transferJobs create`:

```
POST https://storagetransfer.googleapis.com/v1/transferJobs
{
  "description": "YOUR DESCRIPTION",
  "status": "ENABLED",
  "projectId": "PROJECT_ID",
  "schedule": {
    "scheduleStartDate": {
      "day": 1,
      "month": 1,
      "year": 2025
    },
    "startTimeOfDay": {
      "hours": 1,
      "minutes": 1
    },
    "scheduleEndDate": {
      "day": 1,
      "month": 1,
      "year": 2025
    }
  },
  "transferSpec": {
    "gcsDataSource": {
      "bucketName": "GCS_SOURCE_NAME"
    },
    "gcsDataSink": {
      "bucketName": "GCS_SINK_NAME"
    },
    "transferOptions": {
      "deleteObjectsFromSourceAfterTransfer": true
    }
  }
}
```
Response:
```
200 OK
{
  "transferJob": [
    {
      "creationTime": "2015-01-01T01:01:00.000000000Z",
      "description": "YOUR DESCRIPTION",
      "name": "transferJobs/JOB_ID",
      "status": "ENABLED",
      "lastModificationTime": "2015-01-01T01:01:00.000000000Z",
      "projectId": "PROJECT_ID",
      "schedule": {
        "scheduleStartDate": {
          "day": 1,
          "month": 1,
          "year": 2015
        },
        "startTimeOfDay": {
          "hours": 1,
          "minutes": 1
        }
      },
      "transferSpec": {
        "gcsDataSource": {
          "bucketName": "GCS_SOURCE_NAME"
        },
        "gcsDataSink": {
          "bucketName": "GCS_NEARLINE_SINK_NAME"
        },
        "objectConditions": {
          "minTimeElapsedSinceLastModification": "2592000.000s"
        },
        "transferOptions": {
          "deleteObjectsFromSourceAfterTransfer": true
        }
      }
    }
  ]
}
```
Client libraries
In this example, you'll learn how to move files from one Cloud Storage bucket to another. For example, you can replicate data to a bucket in another location.
Note: The process is the same if the bucket is located in a different project.
For more information about the Storage Transfer Service client libraries, see Getting started with Storage Transfer Service client libraries.
Java
Looking for older samples? See the Storage Transfer Service Migration Guide.
```java
import com.google.protobuf.Duration;
import com.google.storagetransfer.v1.proto.StorageTransferServiceClient;
import com.google.storagetransfer.v1.proto.TransferProto.CreateTransferJobRequest;
import com.google.storagetransfer.v1.proto.TransferTypes.GcsData;
import com.google.storagetransfer.v1.proto.TransferTypes.ObjectConditions;
import com.google.storagetransfer.v1.proto.TransferTypes.Schedule;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferJob.Status;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferOptions;
import com.google.storagetransfer.v1.proto.TransferTypes.TransferSpec;
import com.google.type.Date;
import com.google.type.TimeOfDay;
import java.io.IOException;
import java.util.Calendar;

public class TransferToNearline {
  /**
   * Creates a one-off transfer job that transfers objects in a standard GCS bucket that are more
   * than 30 days old to a Nearline GCS bucket.
   */
  public static void transferToNearline(
      String projectId,
      String jobDescription,
      String gcsSourceBucket,
      String gcsNearlineSinkBucket,
      long startDateTime)
      throws IOException {
    // Your Google Cloud Project ID
    // String projectId = "your-project-id";

    // A short description of this job
    // String jobDescription = "Sample transfer job of old objects to a Nearline GCS bucket.";

    // The name of the source GCS bucket to transfer data from
    // String gcsSourceBucket = "your-gcs-source-bucket";

    // The name of the Nearline GCS bucket to transfer old objects to
    // String gcsSinkBucket = "your-nearline-gcs-bucket";

    // What day and time in UTC to start the transfer, expressed as an epoch date timestamp.
    // If this is in the past relative to when the job is created, it will run the next day.
    // long startDateTime =
    //     new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse("2000-01-01 00:00:00").getTime();

    // Parse epoch timestamp into the model classes
    Calendar startCalendar = Calendar.getInstance();
    startCalendar.setTimeInMillis(startDateTime);
    // Note that this is a Date from the model class package, not a java.util.Date
    Date date =
        Date.newBuilder()
            .setYear(startCalendar.get(Calendar.YEAR))
            .setMonth(startCalendar.get(Calendar.MONTH) + 1)
            .setDay(startCalendar.get(Calendar.DAY_OF_MONTH))
            .build();
    TimeOfDay time =
        TimeOfDay.newBuilder()
            .setHours(startCalendar.get(Calendar.HOUR_OF_DAY))
            .setMinutes(startCalendar.get(Calendar.MINUTE))
            .setSeconds(startCalendar.get(Calendar.SECOND))
            .build();

    TransferJob transferJob =
        TransferJob.newBuilder()
            .setDescription(jobDescription)
            .setProjectId(projectId)
            .setTransferSpec(
                TransferSpec.newBuilder()
                    .setGcsDataSource(GcsData.newBuilder().setBucketName(gcsSourceBucket))
                    .setGcsDataSink(GcsData.newBuilder().setBucketName(gcsNearlineSinkBucket))
                    .setObjectConditions(
                        ObjectConditions.newBuilder()
                            .setMinTimeElapsedSinceLastModification(
                                Duration.newBuilder().setSeconds(2592000 /* 30 days */)))
                    .setTransferOptions(
                        TransferOptions.newBuilder()
                            .setDeleteObjectsFromSourceAfterTransfer(true)))
            .setSchedule(Schedule.newBuilder().setScheduleStartDate(date).setStartTimeOfDay(time))
            .setStatus(Status.ENABLED)
            .build();

    // Create a Transfer Service client
    StorageTransferServiceClient storageTransfer = StorageTransferServiceClient.create();

    // Create the transfer job
    TransferJob response =
        storageTransfer.createTransferJob(
            CreateTransferJobRequest.newBuilder().setTransferJob(transferJob).build());

    System.out.println("Created transfer job from standard bucket to Nearline bucket:");
    System.out.println(response.toString());
  }
}
```

Python
Looking for older samples? See the Storage Transfer Service Migration Guide.
```python
from datetime import datetime

from google.cloud import storage_transfer
from google.protobuf.duration_pb2 import Duration


def create_daily_nearline_30_day_migration(
    project_id: str,
    description: str,
    source_bucket: str,
    sink_bucket: str,
    start_date: datetime,
):
    """Create a daily migration from a GCS bucket to a Nearline GCS bucket
    for objects untouched for 30 days."""

    client = storage_transfer.StorageTransferServiceClient()

    # The ID of the Google Cloud Platform Project that owns the job
    # project_id = 'my-project-id'

    # A useful description for your transfer job
    # description = 'My transfer job'

    # Google Cloud Storage source bucket name
    # source_bucket = 'my-gcs-source-bucket'

    # Google Cloud Storage destination bucket name
    # sink_bucket = 'my-gcs-destination-bucket'

    transfer_job_request = storage_transfer.CreateTransferJobRequest(
        {
            "transfer_job": {
                "project_id": project_id,
                "description": description,
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "schedule": {
                    "schedule_start_date": {
                        "day": start_date.day,
                        "month": start_date.month,
                        "year": start_date.year,
                    }
                },
                "transfer_spec": {
                    "gcs_data_source": {
                        "bucket_name": source_bucket,
                    },
                    "gcs_data_sink": {
                        "bucket_name": sink_bucket,
                    },
                    "object_conditions": {
                        "min_time_elapsed_since_last_modification": Duration(
                            seconds=2592000  # 30 days
                        )
                    },
                    "transfer_options": {
                        "delete_objects_from_source_after_transfer": True
                    },
                },
            }
        }
    )

    result = client.create_transfer_job(transfer_job_request)
    print(f"Created transferJob: {result.name}")
```

Verify copied objects
After your transfer is complete, we recommend performing additional data integrity checks.
- Validate that the objects were copied correctly by verifying the metadata on the objects, such as checksums and size.
- Verify that the correct version of the objects was copied. Storage Transfer Service offers an out-of-the-box option to verify that objects are copies. If you've enabled logging, view logs to verify whether all the objects were successfully copied, including their corresponding metadata fields.
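For example, a minimal spot-check sketch with the Cloud Storage client library might compare each source object's CRC32C checksum and size against its copy (bucket names are placeholders):

```python
from google.cloud import storage

client = storage.Client()
destination = client.bucket("destination-bucket")  # placeholder

for blob in client.list_blobs("source-bucket"):    # placeholder
    copy = destination.get_blob(blob.name)
    if copy is None:
        print(f"MISSING: {blob.name}")
    elif copy.crc32c != blob.crc32c or copy.size != blob.size:
        print(f"MISMATCH: {blob.name}")
```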
Start using the destination bucket
After the migration is complete and verified, update any existing applications or workloads so that they use the target bucket name. Check data access logs in Cloud Audit Logs to ensure that your operations are correctly modifying and reading objects.
Delete the original bucket
After everything is working well, delete the original bucket.
Storage Transfer Service offers the option of deleting objects after they have been transferred, by specifying `deleteObjectsFromSourceAfterTransfer: true` in the job configuration, or by selecting the option in the Google Cloud console.
Schedule object deletion
To schedule the deletion of your objects at a later date, use a combination of a scheduled transfer job and the `deleteObjectsUniqueInSink = true` option.
The transfer job should be set up to transfer an empty bucket into the bucket containing your objects. This causes Storage Transfer Service to list the objects and begin deleting them. As deletions are metadata-only operations, the transfer job is only QPS-bound. To speed up the process, split the transfer into multiple jobs, each acting on a distinct set of prefixes.
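A sketch of this pattern with the Python client library follows; the empty bucket, target bucket, project ID, and date are all placeholders:

```python
from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

client.create_transfer_job(
    storage_transfer.CreateTransferJobRequest(
        transfer_job={
            "project_id": "my-project-id",  # placeholder
            "status": storage_transfer.TransferJob.Status.ENABLED,
            # Run on the date you want the objects deleted.
            "schedule": {
                "schedule_start_date": {"year": 2025, "month": 12, "day": 31}
            },
            "transfer_spec": {
                # Empty source: nothing is copied, so every object in the
                # sink is "unique in sink" and gets deleted.
                "gcs_data_source": {"bucket_name": "empty-bucket"},
                "gcs_data_sink": {"bucket_name": "bucket-to-purge"},
                "transfer_options": {"delete_objects_unique_in_sink": True},
            },
        }
    )
)
```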
Alternatively, Google Cloud offers a managed cron job scheduler. For more information, see Schedule Google Cloud STS Transfer Job with Cloud Scheduler.