Download objects
This page shows you how to download objects from your buckets in Cloud Storage to persistent storage. You can also download objects into memory.
Note: If you use customer-supplied encryption keys with your objects, see Download objects you've encrypted for downloading instructions.

Required roles
To get the permissions that you need to download objects, ask your administrator to grant you the Storage Object Viewer (roles/storage.objectViewer) role on the bucket. If you plan on using the Google Cloud console, ask your administrator to grant you the Storage Admin (roles/storage.admin) role on the bucket instead.
These roles contain the permissions required to download objects. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
- storage.buckets.list (only required for using the Google Cloud console to perform the tasks on this page)
- storage.objects.get
- storage.objects.list (only required for using the Google Cloud console to perform the tasks on this page)
You might also be able to get these permissions with other predefined roles or custom roles.
For instructions on granting roles on buckets, see Set and manage IAM policies on buckets.
Download an object from a bucket
Complete the following instructions to download an object from a bucket:
Console
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- In the list of buckets, click the name of the bucket that contains the object you want to download. The Bucket details page opens, with the Objects tab selected.
- Navigate to the object, which may be located in a folder.
- Click the Download icon associated with the object. Your browser settings control the download location for the object.
To learn how to get detailed error information about failed Cloud Storage operations in the Google Cloud console, see Troubleshooting.
Command line
Use the gcloud storage cp command:

gcloud storage cp gs://BUCKET_NAME/OBJECT_NAME SAVE_TO_LOCATION
Where:
- BUCKET_NAME is the name of the bucket containing the object you are downloading. For example, my-bucket.
- OBJECT_NAME is the name of the object you are downloading. For example, pets/dog.png.
- SAVE_TO_LOCATION is the local path where you are saving your object. For example, Desktop/Images.
If successful, the response looks like the following example:
Completed files 1/1 | 164.3kiB/164.3kiB
If your download is interrupted prior to completion, run the same cp command to resume the download from where it left off.
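Conceptually, resuming works by checking how many bytes are already on disk and requesting only the missing suffix of the object. The following is a minimal Python sketch of that idea, operating on an in-memory byte string rather than a real network connection (the function name and setup are hypothetical, for illustration only):

```python
def resume_download(remote: bytes, partial: bytes) -> bytes:
    """Return the completed content, fetching only the missing suffix.

    `remote` stands in for the object in the bucket; `partial` is what a
    previous, interrupted download already wrote to disk.
    """
    offset = len(partial)        # equivalent of a "Range: bytes=offset-" request
    remainder = remote[offset:]  # only the bytes not yet downloaded
    return partial + remainder

obj = b"0123456789"
assert resume_download(obj, b"01234") == obj  # resumes from byte 5
assert resume_download(obj, b"") == obj       # nothing downloaded yet
```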
Client libraries
For more information, see the Cloud Storage API reference documentation for your language: C++, C#, Go, Java, Node.js, PHP, Python, or Ruby. To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

For Java, Node.js, Python, and Ruby, the samples below show how to download an individual object, how to download multiple objects using multiple processes, and how to download all objects with a common prefix (or all objects in a bucket) using multiple processes.

C++
namespace gcs = ::google::cloud::storage;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& object_name) {
  gcs::ObjectReadStream stream = client.ReadObject(bucket_name, object_name);

  int count = 0;
  std::string line;
  while (std::getline(stream, line, '\n')) {
    ++count;
  }
  if (stream.bad()) throw google::cloud::Status(stream.status());
  std::cout << "The object has " << count << " lines\n";
}

C#
using Google.Cloud.Storage.V1;
using System;
using System.IO;

public class DownloadFileSample
{
    public void DownloadFile(
        string bucketName = "your-unique-bucket-name",
        string objectName = "my-file-name",
        string localPath = "my-local-path/my-file-name")
    {
        var storage = StorageClient.Create();
        using var outputFile = File.OpenWrite(localPath);
        storage.DownloadObject(bucketName, objectName, outputFile);
        Console.WriteLine($"Downloaded {objectName} to {localPath}.");
    }
}

Go
import (
    "context"
    "fmt"
    "io"
    "os"
    "time"

    "cloud.google.com/go/storage"
)

// downloadFile downloads an object to a file.
func downloadFile(w io.Writer, bucket, object string, destFileName string) error {
    // bucket := "bucket-name"
    // object := "object-name"
    // destFileName := "file.txt"
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        return fmt.Errorf("storage.NewClient: %w", err)
    }
    defer client.Close()

    ctx, cancel := context.WithTimeout(ctx, time.Second*50)
    defer cancel()

    f, err := os.Create(destFileName)
    if err != nil {
        return fmt.Errorf("os.Create: %w", err)
    }

    rc, err := client.Bucket(bucket).Object(object).NewReader(ctx)
    if err != nil {
        return fmt.Errorf("Object(%q).NewReader: %w", object, err)
    }
    defer rc.Close()

    if _, err := io.Copy(f, rc); err != nil {
        return fmt.Errorf("io.Copy: %w", err)
    }

    if err = f.Close(); err != nil {
        return fmt.Errorf("f.Close: %w", err)
    }

    fmt.Fprintf(w, "Blob %v downloaded to local file %v\n", object, destFileName)

    return nil
}

Java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.nio.file.Paths;

public class DownloadObject {
  public static void downloadObject(
      String projectId, String bucketName, String objectName, String destFilePath)
      throws Exception {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The path to which the file should be downloaded
    // String destFilePath = "/local/path/to/file.txt";

    StorageOptions storageOptions = StorageOptions.newBuilder().setProjectId(projectId).build();
    try (Storage storage = storageOptions.getService()) {
      storage.downloadTo(BlobId.of(bucketName, objectName), Paths.get(destFilePath));
      System.out.println(
          "Downloaded object "
              + objectName
              + " from bucket name "
              + bucketName
              + " to "
              + destFilePath);
    }
  }
}

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.transfermanager.DownloadResult;
import com.google.cloud.storage.transfermanager.ParallelDownloadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import java.nio.file.Path;
import java.util.List;

class DownloadMany {
  public static void downloadManyBlobs(
      String bucketName, List<BlobInfo> blobs, Path destinationDirectory) throws Exception {
    try (TransferManager transferManager =
        TransferManagerConfig.newBuilder().build().getService()) {
      ParallelDownloadConfig parallelDownloadConfig =
          ParallelDownloadConfig.newBuilder()
              .setBucketName(bucketName)
              .setDownloadDirectory(destinationDirectory)
              .build();

      List<DownloadResult> results =
          transferManager.downloadBlobs(blobs, parallelDownloadConfig).getDownloadResults();

      for (DownloadResult result : results) {
        System.out.println(
            "Download of "
                + result.getInput().getName()
                + " completed with status "
                + result.getStatus());
      }
    }
  }
}

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.cloud.storage.transfermanager.DownloadResult;
import com.google.cloud.storage.transfermanager.ParallelDownloadConfig;
import com.google.cloud.storage.transfermanager.TransferManager;
import com.google.cloud.storage.transfermanager.TransferManagerConfig;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

class DownloadBucket {
  public static void downloadBucketContents(
      String projectId, String bucketName, Path destinationDirectory) {
    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    List<BlobInfo> blobs =
        storage
            .list(bucketName)
            .streamAll()
            .map(blob -> blob.asBlobInfo())
            .collect(Collectors.toList());
    TransferManager transferManager = TransferManagerConfig.newBuilder().build().getService();
    ParallelDownloadConfig parallelDownloadConfig =
        ParallelDownloadConfig.newBuilder()
            .setBucketName(bucketName)
            .setDownloadDirectory(destinationDirectory)
            .build();

    List<DownloadResult> results =
        transferManager.downloadBlobs(blobs, parallelDownloadConfig).getDownloadResults();

    for (DownloadResult result : results) {
      System.out.println(
          "Download of "
              + result.getInput().getName()
              + " completed with status "
              + result.getStatus());
    }
  }
}

Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of your GCS file
// const fileName = 'your-file-name';

// The path to which the file should be downloaded
// const destFileName = '/local/path/to/file.txt';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function downloadFile() {
  const options = {
    destination: destFileName,
  };

  // Downloads the file
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName}.`
  );
}

downloadFile().catch(console.error);

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the first GCS file to download
// const firstFileName = 'your-first-file-name';

// The ID of the second GCS file to download
// const secondFileName = 'your-second-file-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function downloadManyFilesWithTransferManager() {
  // Downloads the files
  await transferManager.downloadManyFiles([firstFileName, secondFileName]);

  for (const fileName of [firstFileName, secondFileName]) {
    console.log(`gs://${bucketName}/${fileName} downloaded to ${fileName}.`);
  }
}

downloadManyFilesWithTransferManager().catch(console.error);

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of the GCS folder to download. The folder will be downloaded to
// the local path of the executing code.
// const folderName = 'your-folder-name';

// Imports the Google Cloud client library
const {Storage, TransferManager} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Creates a transfer manager client
const transferManager = new TransferManager(storage.bucket(bucketName));

async function downloadFolderWithTransferManager() {
  // Downloads the folder
  await transferManager.downloadManyFiles(folderName);

  console.log(
    `gs://${bucketName}/${folderName} downloaded to ${folderName}.`
  );
}

downloadFolderWithTransferManager().catch(console.error);

PHP
use Google\Cloud\Storage\StorageClient;

/**
 * Download an object from Cloud Storage and save it as a local file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $destination The local destination to save the object.
 *        (e.g. '/path/to/your/file')
 */
function download_object(string $bucketName, string $objectName, string $destination): void
{
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->object($objectName);
    $object->downloadToFile($destination);
    printf(
        'Downloaded gs://%s/%s to %s' . PHP_EOL,
        $bucketName,
        $objectName,
        basename($destination)
    );
}

Python
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The ID of your GCS object
    # source_blob_name = "storage-object-name"

    # The path to which the file should be downloaded
    # destination_file_name = "local/path/to/file"

    storage_client = storage.Client()

    bucket = storage_client.bucket(bucket_name)

    # Construct a client side representation of a blob.
    # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
    # any content from Google Cloud Storage. As we don't need additional data,
    # using `Bucket.blob` is preferred here.
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)

    print(
        "Downloaded storage object {} from bucket {} to local file {}.".format(
            source_blob_name, bucket_name, destination_file_name
        )
    )

def download_many_blobs_with_transfer_manager(
    bucket_name, blob_names, destination_directory="", workers=8
):
    """Download blobs in a list by name, concurrently in a process pool.

    The filename of each blob once downloaded is derived from the blob name
    and the `destination_directory` parameter. For complete control of the
    filename of each blob, use transfer_manager.download_many() instead.

    Directories will be created automatically as needed to accommodate blob
    names that include slashes.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The list of blob names to download. The name of each blob will also
    # be the name of each destination file (use transfer_manager.download_many()
    # instead to control each destination file name). If there is a "/" in the
    # blob name, then corresponding directories will be created on download.
    # blob_names = ["myblob", "myblob2"]

    # The directory on your computer to which to download all of the files. This
    # string is prepended (with os.path.join()) to the name of each blob to form
    # the full path. Relative paths and absolute paths are both accepted. An
    # empty string means "the current working directory". Note that this
    # parameter accepts directory traversal ("../" etc.) and is not
    # intended for unsanitized end user input.
    # destination_directory = ""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    results = transfer_manager.download_many_to_path(
        bucket, blob_names, destination_directory=destination_directory, max_workers=workers
    )

    for name, result in zip(blob_names, results):
        # The results list is either `None` or an exception for each blob in
        # the input list, in order.
        if isinstance(result, Exception):
            print("Failed to download {} due to exception: {}".format(name, result))
        else:
            print("Downloaded {} to {}.".format(name, destination_directory + name))

def download_bucket_with_transfer_manager(
    bucket_name, destination_directory="", workers=8, max_results=1000
):
    """Download all of the blobs in a bucket, concurrently in a process pool.

    The filename of each blob once downloaded is derived from the blob name
    and the `destination_directory` parameter. For complete control of the
    filename of each blob, use transfer_manager.download_many() instead.

    Directories will be created automatically as needed, for instance to
    accommodate blob names that include slashes.
    """

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The directory on your computer to which to download all of the files. This
    # string is prepended (with os.path.join()) to the name of each blob to form
    # the full path. Relative paths and absolute paths are both accepted. An
    # empty string means "the current working directory". Note that this
    # parameter accepts directory traversal ("../" etc.) and is not
    # intended for unsanitized end user input.
    # destination_directory = ""

    # The maximum number of processes to use for the operation. The performance
    # impact of this value depends on the use case, but smaller files usually
    # benefit from a higher number of processes. Each additional process occupies
    # some CPU and memory resources until finished. Threads can be used instead
    # of processes by passing `worker_type=transfer_manager.THREAD`.
    # workers=8

    # The maximum number of results to fetch from bucket.list_blobs(). This
    # sample code fetches all of the blobs up to max_results and queues them all
    # for download at once. Though they will still be executed in batches up to
    # the processes limit, queueing them all at once can be taxing on system
    # memory if buckets are very large. Adjust max_results as needed for your
    # system environment, or set it to None if you are sure the bucket is not
    # too large to hold in memory easily.
    # max_results=1000

    from google.cloud.storage import Client, transfer_manager

    storage_client = Client()
    bucket = storage_client.bucket(bucket_name)

    blob_names = [blob.name for blob in bucket.list_blobs(max_results=max_results)]

    results = transfer_manager.download_many_to_path(
        bucket, blob_names, destination_directory=destination_directory, max_workers=workers
    )

    for name, result in zip(blob_names, results):
        # The results list is either `None` or an exception for each blob in
        # the input list, in order.
        if isinstance(result, Exception):
            print("Failed to download {} due to exception: {}".format(name, result))
        else:
            print("Downloaded {} to {}.".format(name, destination_directory + name))

Ruby
def download_file bucket_name:, file_name:, local_file_path:
  # The ID of your GCS bucket
  # bucket_name = "your-unique-bucket-name"

  # The ID of your GCS object
  # file_name = "your-file-name"

  # The path to which the file should be downloaded
  # local_file_path = "/local/path/to/file.txt"
  require "google/cloud/storage"

  storage = Google::Cloud::Storage.new
  bucket  = storage.bucket bucket_name, skip_lookup: true

  file = bucket.file file_name

  file.download local_file_path

  puts "Downloaded #{file.name} to #{local_file_path}"
end
REST APIs
JSON API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a GET Object request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -o "SAVE_TO_LOCATION" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME?alt=media"
Where:
- SAVE_TO_LOCATION is the path to the location where you want to save your object. For example, Desktop/dog.png.
- BUCKET_NAME is the name of the bucket containing the object you are downloading. For example, my-bucket.
- OBJECT_NAME is the URL-encoded name of the object you are downloading. For example, pets/dog.png, URL-encoded as pets%2Fdog.png.
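Object names that contain slashes must be URL-encoded before they appear in the request path. In Python's standard library, this can be done with urllib.parse.quote, passing safe="" so that "/" is encoded as well (the bucket name and URL below are illustrative):

```python
from urllib.parse import quote

object_name = "pets/dog.png"
encoded = quote(object_name, safe="")  # encode "/" too, not just reserved characters
print(encoded)  # pets%2Fdog.png

# Example of building the request URL with the encoded name:
url = f"https://storage.googleapis.com/storage/v1/b/my-bucket/o/{encoded}?alt=media"
```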
XML API
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the XML API with a GET Object request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -o "SAVE_TO_LOCATION" \
  "https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME"
Where:
- SAVE_TO_LOCATION is the path to the location where you want to save your object. For example, Desktop/dog.png.
- BUCKET_NAME is the name of the bucket containing the object you are downloading. For example, my-bucket.
- OBJECT_NAME is the URL-encoded name of the object you are downloading. For example, pets/dog.png, URL-encoded as pets%2Fdog.png.
To more efficiently download all objects in a bucket or subdirectory, use the gcloud storage cp command or a client library:
gcloud storage cp --recursive gs://BUCKET_NAME/FOLDER_NAME .
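When many objects are downloaded at once, each object name is typically joined to a local destination directory, with slashes in the name becoming subdirectories (this is the mapping that the Python transfer manager samples describe for their destination_directory parameter). A minimal sketch of that path mapping, with a hypothetical helper name:

```python
from pathlib import Path

def local_path_for(destination_directory: str, blob_name: str) -> Path:
    # "pets/dog.png" under "downloads" becomes downloads/pets/dog.png;
    # in a real download, the parent directories would be created first.
    return Path(destination_directory) / Path(blob_name)

p = local_path_for("downloads", "pets/dog.png")
assert p.as_posix() == "downloads/pets/dog.png"
```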
Download a portion of an object
Note: Object checksums apply to an object in its entirety. This means checksums can't be used to validate the integrity of a portion of an object.

If your download gets interrupted, you can resume where you left off by requesting only the portion of the object that's left. Complete the following instructions to download a portion of an object.
Console
The Google Cloud console does not support downloading portions of an object. Use the gcloud CLI instead.
Command line
The Google Cloud CLI automatically attempts to resume interrupted downloads, except when performing streaming downloads. If your download gets interrupted, a partially downloaded temporary file becomes visible in the destination hierarchy. Run the same cp command to resume the download where it left off.
When the download is complete, the temporary file is deleted and replaced with the downloaded contents. Temporary files are stored in a configurable location, which by default is in the user's home directory under .config/gcloud/surface_data/storage/tracker_files. You can view the location where temporary files are stored by running gcloud config get storage/tracker_files_directory, or change it by running gcloud config set storage/tracker_files_directory.
Client libraries
For more information, see the Cloud Storage API reference documentation for your language: C++, C#, Go, Java, Node.js, PHP, Python, or Ruby. To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

C++
namespace gcs = ::google::cloud::storage;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& object_name, std::int64_t start, std::int64_t end) {
  gcs::ObjectReadStream stream =
      client.ReadObject(bucket_name, object_name, gcs::ReadRange(start, end));

  int count = 0;
  std::string line;
  while (std::getline(stream, line, '\n')) {
    std::cout << line << "\n";
    ++count;
  }
  if (stream.bad()) throw google::cloud::Status(stream.status());
  std::cout << "The requested range has " << count << " lines\n";
}

C#
using Google.Apis.Storage.v1;
using Google.Cloud.Storage.V1;
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class DownloadByteRangeAsyncSample
{
    public async Task DownloadByteRangeAsync(
        string bucketName = "your-unique-bucket-name",
        string objectName = "my-file-name",
        long firstByte = 0,
        long lastByte = 20,
        string localPath = "my-local-path/my-file-name")
    {
        var storageClient = StorageClient.Create();

        // Create an HTTP request for the media, for a limited byte range.
        StorageService storage = storageClient.Service;
        var uri = new Uri($"{storage.BaseUri}b/{bucketName}/o/{objectName}?alt=media");
        var request = new HttpRequestMessage { RequestUri = uri };
        request.Headers.Range = new RangeHeaderValue(firstByte, lastByte);

        using var outputFile = File.OpenWrite(localPath);

        // Use the HttpClient in the storage object because it supplies
        // all the authentication headers we need.
        var response = await storage.HttpClient.SendAsync(request);
        await response.Content.CopyToAsync(outputFile, null);
        Console.WriteLine($"Downloaded {objectName} to {localPath}.");
    }
}

Go
import (
    "context"
    "fmt"
    "io"
    "os"
    "time"

    "cloud.google.com/go/storage"
)

// downloadByteRange downloads a specific byte range of an object to a file.
func downloadByteRange(w io.Writer, bucket, object string, startByte int64, endByte int64, destFileName string) error {
    // bucket := "bucket-name"
    // object := "object-name"
    // startByte := 0
    // endByte := 20
    // destFileName := "file.txt"
    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        return fmt.Errorf("storage.NewClient: %w", err)
    }
    defer client.Close()

    ctx, cancel := context.WithTimeout(ctx, time.Second*50)
    defer cancel()

    f, err := os.Create(destFileName)
    if err != nil {
        return fmt.Errorf("os.Create: %w", err)
    }

    length := endByte - startByte

    rc, err := client.Bucket(bucket).Object(object).NewRangeReader(ctx, startByte, length)
    if err != nil {
        return fmt.Errorf("Object(%q).NewReader: %w", object, err)
    }
    defer rc.Close()

    if _, err := io.Copy(f, rc); err != nil {
        return fmt.Errorf("io.Copy: %w", err)
    }

    if err = f.Close(); err != nil {
        return fmt.Errorf("f.Close: %w", err)
    }

    fmt.Fprintf(w, "Bytes %v to %v of blob %v downloaded to local file %v\n", startByte, startByte+length, object, destFileName)

    return nil
}

Java
import com.google.cloud.ReadChannel;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import com.google.common.io.ByteStreams;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DownloadByteRange {

  public static void downloadByteRange(
      String projectId,
      String bucketName,
      String blobName,
      long startByte,
      long endBytes,
      String destFileName)
      throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The name of the blob/file that you wish to download
    // String blobName = "your-blob-name";

    // The starting byte at which to begin the download
    // long startByte = 0;

    // The ending byte at which to end the download
    // long endByte = 20;

    // The path to which the file should be downloaded
    // String destFileName = "/local/path/to/file.txt";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, blobName);
    try (ReadChannel from = storage.reader(blobId);
        FileChannel to = FileChannel.open(Paths.get(destFileName), StandardOpenOption.WRITE)) {
      from.seek(startByte);
      from.limit(endBytes);
      ByteStreams.copy(from, to);
      System.out.printf(
          "%s downloaded to %s from byte %d to byte %d%n",
          blobId.toGsUtilUri(), destFileName, startByte, endBytes);
    }
  }
}

Node.js
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The ID of your GCS file
// const fileName = 'your-file-name';

// The starting byte at which to begin the download
// const startByte = 0;

// The ending byte at which to end the download
// const endByte = 20;

// The path to which the file should be downloaded
// const destFileName = '/local/path/to/file.txt';

// Imports the Google Cloud client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function downloadByteRange() {
  const options = {
    destination: destFileName,
    start: startByte,
    end: endByte,
  };

  // Downloads the file from the starting byte to the ending byte specified in options
  await storage.bucket(bucketName).file(fileName).download(options);

  console.log(
    `gs://${bucketName}/${fileName} downloaded to ${destFileName} from byte ${startByte} to byte ${endByte}.`
  );
}

downloadByteRange();

PHP
use Google\Cloud\Storage\StorageClient;

/**
 * Download a byte range from Cloud Storage and save it as a local file.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param int $startByte The starting byte at which to begin the download.
 *        (e.g. 1)
 * @param int $endByte The ending byte at which to end the download. (e.g. 5)
 * @param string $destination The local destination to save the object.
 *        (e.g. '/path/to/your/file')
 */
function download_byte_range(
    string $bucketName,
    string $objectName,
    int $startByte,
    int $endByte,
    string $destination
): void {
    $storage = new StorageClient();
    $bucket = $storage->bucket($bucketName);
    $object = $bucket->object($objectName);
    $object->downloadToFile($destination, [
        'restOptions' => [
            'headers' => [
                'Range' => "bytes=$startByte-$endByte",
            ],
        ],
    ]);
    printf(
        'Downloaded gs://%s/%s to %s' . PHP_EOL,
        $bucketName,
        $objectName,
        basename($destination)
    );
}

Python
from google.cloud import storage

def download_byte_range(
    bucket_name, source_blob_name, start_byte, end_byte, destination_file_name
):
    """Downloads a blob from the bucket."""
    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The ID of your GCS object
    # source_blob_name = "storage-object-name"

    # The starting byte at which to begin the download
    # start_byte = 0

    # The ending byte at which to end the download
    # end_byte = 20

    # The path to which the file should be downloaded
    # destination_file_name = "local/path/to/file"

    storage_client = storage.Client()

    bucket = storage_client.bucket(bucket_name)

    # Construct a client side representation of a blob.
    # Note `Bucket.blob` differs from `Bucket.get_blob` as it doesn't retrieve
    # any content from Google Cloud Storage. As we don't need additional data,
    # using `Bucket.blob` is preferred here.
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name, start=start_byte, end=end_byte)

    print(
        "Downloaded bytes {} to {} of object {} from bucket {} to local file {}.".format(
            start_byte, end_byte, source_blob_name, bucket_name, destination_file_name
        )
    )

Ruby
# The ID of your GCS bucket
# bucket_name = "your-unique-bucket-name"

# file_name = "Name of a file in the Storage bucket"

# The starting byte at which to begin the download
# start_byte = 0

# The ending byte at which to end the download
# end_byte = 20

# The path to which the file should be downloaded
# local_file_path = "/local/path/to/file.txt"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket  = storage.bucket bucket_name
file    = bucket.file file_name

file.download local_file_path, range: start_byte..end_byte

puts "Downloaded bytes #{start_byte} to #{end_byte} of object #{file_name} from bucket #{bucket_name}" \
     + " to local file #{local_file_path}."
REST APIs
JSON API
Use the Range header in your request to download a portion of an object.
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the JSON API with a GET Object request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Range: bytes=FIRST_BYTE-LAST_BYTE" \
  -o "SAVE_TO_LOCATION" \
  "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/o/OBJECT_NAME?alt=media"
Where:
- FIRST_BYTE is the first byte in the range of bytes you want to download. For example, 1000.
- LAST_BYTE is the last byte in the range of bytes you want to download. For example, 1999.
- SAVE_TO_LOCATION is the path to the location where you want to save your object. For example, Desktop/dog.png.
- BUCKET_NAME is the name of the bucket containing the object you are downloading. For example, my-bucket.
- OBJECT_NAME is the URL-encoded name of the object you are downloading. For example, pets/dog.png, URL-encoded as pets%2Fdog.png.
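Note that HTTP byte ranges are inclusive on both ends, so bytes=1000-1999 returns 1,000 bytes, not 999. A quick Python check of that arithmetic:

```python
data = bytes(range(256)) * 16  # 4096 bytes of sample data

first_byte, last_byte = 1000, 1999
portion = data[first_byte : last_byte + 1]  # HTTP ranges include last_byte

# last_byte - first_byte + 1 bytes are returned
assert len(portion) == last_byte - first_byte + 1 == 1000
```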
Note: In some cases, such as when an object undergoes decompressive transcoding, the Range header is silently ignored and the response instead serves the entire requested object.

XML API
Use the Range header in your request to download a portion of an object.
Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.

Use cURL to call the XML API with a GET Object request:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Range: bytes=FIRST_BYTE-LAST_BYTE" \
  -o "SAVE_TO_LOCATION" \
  "https://storage.googleapis.com/BUCKET_NAME/OBJECT_NAME"
Where:
- FIRST_BYTE is the first byte in the range of bytes you want to download. For example, 1000.
- LAST_BYTE is the last byte in the range of bytes you want to download. For example, 1999.
- SAVE_TO_LOCATION is the path to the location where you want to save your object. For example, $HOME/Desktop/dog.png.
- BUCKET_NAME is the name of the bucket containing the object you are downloading. For example, my-bucket.
- OBJECT_NAME is the URL-encoded name of the object you are downloading. For example, pets/dog.png, URL-encoded as pets%2Fdog.png.
Note: In some cases, such as when an object undergoes decompressive transcoding, the Range header is silently ignored and the response instead serves the entire requested object.

What's next
- Read the conceptual overview for uploading and downloading, including advanced download strategies.
- Transfer data from cloud providers or other online sources, such as URL lists.
- Transfer objects to your Compute Engine instance.
- Learn how you can bill Cloud Storage access charges to requesters.
- Learn how Cloud Storage can serve gzipped files in an uncompressed state.
Last updated 2026-02-19 UTC.