Google Cloud Storage
Google Cloud Storage is an Internet service to store data in Google's cloud. It allows world-wide storage and retrieval of any amount of data at any time, taking advantage of Google's own reliable and fast networking infrastructure to perform data operations in a cost-effective manner.
The goal of google-cloud is to provide an API that is comfortable to Rubyists. Your authentication credentials are detected automatically in Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run. In other environments you can configure authentication easily, either directly in your code or via environment variables. Read more about the options for connecting in the Authentication Guide.
```ruby
require "googleauth"
require "google/cloud/storage"

credentials = ::Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: ::File.open("/path/to/keyfile.json"),
  scope: "https://www.googleapis.com/auth/devstorage.full_control"
)

storage = Google::Cloud::Storage.new(
  project_id: "my-project",
  credentials: credentials
)

bucket = storage.bucket "my-bucket"
file = bucket.file "path/to/my-file.ext"
```
To learn more about Cloud Storage, read the Google Cloud Storage Overview.
Retrieving Buckets
A Bucket instance is a container for your data. There is no limit on the number of buckets that you can create in a project. You can use buckets to organize and control access to your data. For more information, see Working with Buckets.
Each bucket has a globally unique name, which is how it is retrieved: (See Project#bucket)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
```
You can also retrieve all buckets on a project: (See Project#buckets)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
all_buckets = storage.buckets
```
If you have a significant number of buckets, you may need to fetch them in multiple service requests.
Iterating over each bucket, potentially with multiple API calls, by invoking Bucket::List#all with a block:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
buckets = storage.buckets
buckets.all do |bucket|
  puts bucket.name
end
```
Limiting the number of API calls made:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
buckets = storage.buckets
buckets.all(request_limit: 10) do |bucket|
  puts bucket.name
end
```
See Bucket::List for details.
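The effect of request_limit can be pictured without any API calls: each page of results costs one request, and the limit caps how many follow-up requests the iteration may issue after the first page. The sketch below is plain Ruby for illustration only, not the library's implementation; the page contents are invented:

```ruby
# A toy paged list: each inner array is one "page" of results, and
# consuming a page beyond the first counts as one follow-up request.
pages = [%w[bucket-a bucket-b], %w[bucket-c bucket-d], %w[bucket-e]]

def each_with_request_limit(pages, request_limit:)
  follow_ups = 0
  pages.each do |page|
    page.each { |name| yield name }
    break if follow_ups >= request_limit # stop issuing more requests
    follow_ups += 1
  end
end

seen = []
each_with_request_limit(pages, request_limit: 1) { |name| seen << name }
puts seen.inspect #=> ["bucket-a", "bucket-b", "bucket-c", "bucket-d"]
```

With `request_limit: 1`, the first page plus one follow-up page are yielded and the third page is never fetched.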
Creating a Bucket
A unique name is all that is needed to create a new bucket: (See Project#create_bucket)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.create_bucket "my-todo-app-attachments"
```
Retrieving Files
A File instance is an individual data object that you store in Google Cloud Storage. Files contain the data stored as well as metadata describing the data. Files belong to a bucket and cannot be shared among buckets. There is no limit on the number of files that you can create in a bucket. For more information, see Working with Objects.
Files are retrieved by their name, which is the path of the file in the bucket: (See Bucket#file)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"
```
You can also retrieve all files in a bucket: (See Bucket#files)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
all_files = bucket.files
```
Or you can retrieve all files in a specified path:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
avatar_files = bucket.files prefix: "avatars/"
```
If you have a significant number of files, you may need to fetch them in multiple service requests.
Iterating over each file, potentially with multiple API calls, by invoking File::List#all with a block:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
files = bucket.files
files.all do |file|
  puts file.name
end
```
Limiting the number of API calls made:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
files = bucket.files
files.all(request_limit: 10) do |file|
  puts file.name
end
```
See File::List for details.
Creating a File
A new file can be uploaded by specifying the location of a file on the local file system and the name/path at which the file should be stored in the bucket. (See Bucket#create_file)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.create_file "/var/todo-app/avatars/heidi/400x400.png",
                   "avatars/heidi/400x400.png"
```
Files can also be created from an in-memory StringIO object:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.create_file StringIO.new("Hello world!"), "hello-world.txt"
```
Customer-supplied encryption keys
By default, Google Cloud Storage manages server-side encryption keys on your behalf. However, a customer-supplied encryption key can be provided with the encryption_key option. If given, the same key must be provided to subsequently download or copy the file. If you use customer-supplied encryption keys, you must securely manage your keys and ensure that they are not lost. Also, please note that file metadata is not encrypted, with the exception of the CRC32C checksum and MD5 hash. The names of files and buckets are also not encrypted, and you can read or update the metadata of an encrypted file without providing the encryption key.
```ruby
require "openssl"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

# Key generation shown for example purposes only. Write your own.
cipher = OpenSSL::Cipher.new "aes-256-cfb"
cipher.encrypt
key = cipher.random_key

bucket.create_file "/var/todo-app/avatars/heidi/400x400.png",
                   "avatars/heidi/400x400.png",
                   encryption_key: key

# Store your key and hash securely for later use.
file = bucket.file "avatars/heidi/400x400.png",
                   encryption_key: key
```
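Cloud Storage identifies a customer-supplied key by the Base64-encoded SHA-256 hash of the raw key bytes (the value carried in the x-goog-encryption-key-sha256 request header). For a 32-byte key like the one generated above, both encodings can be computed locally with the standard library; the fixed key below replaces `cipher.random_key` purely so the output is reproducible:

```ruby
require "digest"
require "base64"

# A fixed 32-byte key stands in for cipher.random_key so the output
# is reproducible; always generate your own key securely.
key = "\x01".b * 32

# Base64-encode the raw key, and the SHA-256 digest of the raw key.
key_b64  = Base64.strict_encode64(key)
hash_b64 = Base64.strict_encode64(Digest::SHA256.digest(key))

puts key_b64 #=> "AQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQE="
puts hash_b64 # 44-character Base64 of the 32-byte digest
```

Storing the hash alongside your key records lets you later confirm which key a given file was written with.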
Use File#rotate to rotate customer-supplied encryption keys.
```ruby
require "openssl"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

# Old key was stored securely for later use.
old_key = "y\x03\"\x0E\xB6\xD3\x9B\x0E\xAB*\x19\xFAv\xDEY\xBEI..."
file = bucket.file "path/to/my-file.ext",
                   encryption_key: old_key

# Key generation shown for example purposes only. Write your own.
cipher = OpenSSL::Cipher.new "aes-256-cfb"
cipher.encrypt
new_key = cipher.random_key

file.rotate encryption_key: old_key,
            new_encryption_key: new_key
```
Downloading a File
Files can be downloaded to the local file system. (See File#download)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"

file.download "/var/todo-app/avatars/heidi/400x400.png"
```
Files can also be downloaded to an in-memory StringIO object:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "hello-world.txt"

downloaded = file.download
downloaded.rewind
downloaded.read #=> "Hello world!"
```
Download a public file with an anonymous, unauthenticated client. Use skip_lookup to avoid errors retrieving non-public bucket and file metadata.
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.anonymous
bucket = storage.bucket "public-bucket", skip_lookup: true
file = bucket.file "path/to/public-file.ext", skip_lookup: true

downloaded = file.download
downloaded.rewind
downloaded.read #=> "Hello world!"
```
Creating and downloading gzip-encoded files
When uploading a gzip-compressed file, you should pass content_encoding: "gzip" if you want the file to be eligible for decompressive transcoding when it is later downloaded. In addition, giving the gzip-compressed file a name containing the original file extension (for example, .txt) will ensure that the file's Content-Type metadata is set correctly. (You can also set the file's Content-Type metadata explicitly with the content_type option.)
```ruby
require "zlib"
require "google/cloud/storage"

storage = Google::Cloud::Storage.new

gz = StringIO.new ""
z = Zlib::GzipWriter.new gz
z.write "Hello world!"
z.close
data = StringIO.new gz.string

bucket = storage.bucket "my-bucket"
bucket.create_file data, "path/to/gzipped.txt",
                   content_encoding: "gzip"

file = bucket.file "path/to/gzipped.txt"

# The downloaded data is decompressed by default.
file.download "path/to/downloaded/hello.txt"

# The downloaded data remains compressed with skip_decompress.
file.download "path/to/downloaded/gzipped.txt",
              skip_decompress: true
```
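To see locally what decompressive transcoding produces, the same payload can be decompressed with Zlib::GzipReader from the standard library. This is a self-contained round trip with no Cloud Storage calls:

```ruby
require "zlib"
require "stringio"

# Compress a string exactly as in the upload example above.
gz = StringIO.new ""
z = Zlib::GzipWriter.new gz
z.write "Hello world!"
z.close

# Decompressing recovers the original bytes, which is what the
# service does before a default (non-skip_decompress) download.
reader = Zlib::GzipReader.new StringIO.new(gz.string)
original = reader.read
reader.close

puts original #=> "Hello world!"
```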
Using Signed URLs
Access without authentication can be granted to a file for a specified period of time. This URL uses a cryptographic signature of your credentials to access the file. (See File#signed_url)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"

shared_url = file.signed_url method: "GET",
                             expires: 300 # 5 minutes from now
```
Controlling Access to a Bucket
Access to a bucket is controlled with Bucket#acl. A bucket has owners, writers, and readers. Permissions can be granted to an individual user's email address, a group's email address, as well as many predefined lists. See the Access Control guide for more.
Access to a bucket can be granted to a user by prepending "user-" to the email address:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

email = "heidi@example.net"
bucket.acl.add_reader "user-#{email}"
```
Access to a bucket can be granted to a group by prepending "group-" to the email address:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

email = "authors@example.net"
bucket.acl.add_reader "group-#{email}"
```
Access to a bucket can also be granted to a predefined list of permissions:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"

bucket.acl.public!
```
Controlling Access to a File
Access to a file is controlled in two ways: either by setting the default permissions for all files in a bucket with Bucket#default_acl, or by setting permissions on an individual file with File#acl.
Access to a file can be granted to a user by prepending "user-" to the email address:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"

email = "heidi@example.net"
file.acl.add_reader "user-#{email}"
```
Access to a file can be granted to a group by prepending "group-" to the email address:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"

email = "authors@example.net"
file.acl.add_reader "group-#{email}"
```
Access to a file can also be granted to a predefined list of permissions:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-todo-app"
file = bucket.file "avatars/heidi/400x400.png"

file.acl.public!
```
Assigning payment to the requester
The requester pays feature enables the owner of a bucket to indicate that a client accessing the bucket or a file it contains must assume the transit costs related to the access.
Assign transit costs for bucket and file operations to requesting clients with the requester_pays flag:
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "my-bucket"

bucket.requester_pays = true # API call

# Clients must now provide the `user_project` option when calling
# Project#bucket to access this bucket.
```
Once the requester_pays flag is enabled for a bucket, a client attempting to access the bucket and its files must provide the user_project option to Project#bucket. If the argument given is true, transit costs for operations on the requested bucket or a file it contains will be billed to the current project for the client. (See Project#project for the ID of the current project.)
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "other-project-bucket", user_project: true
files = bucket.files # Billed to current project
```
If the argument is a project ID string, and the indicated project is authorized for the currently authenticated service account, transit costs will be billed to the indicated project.
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket = storage.bucket "other-project-bucket", user_project: "my-other-project"
files = bucket.files # Billed to "my-other-project"
```
Configuring Pub/Sub notification subscriptions
You can configure notifications to send Google Cloud Pub/Sub messages about changes to files in your buckets. For example, you can track files that are created and deleted in your bucket. Each notification contains information describing both the event that triggered it and the file that changed.
You can send notifications to any Cloud Pub/Sub topic in any project for which your service account has sufficient permissions. As shown below, you need to explicitly grant permission to your service account to enable Google Cloud Storage to publish on behalf of your account, even if your current project created and owns the topic.
```ruby
require "google/cloud/pubsub"
require "google/cloud/storage"

pubsub = Google::Cloud::PubSub.new
storage = Google::Cloud::Storage.new

topic_admin = pubsub.topic_admin
topic_path = pubsub.topic_path "my-topic"
topic = topic_admin.create_topic name: topic_path

policy = {
  bindings: [
    {
      role: "roles/pubsub.publisher",
      members: ["serviceAccount:#{storage.service_account_email}"]
    }
  ]
}
pubsub.iam.set_iam_policy resource: topic_path, policy: policy

bucket = storage.bucket "my-bucket"
notification = bucket.create_notification topic.name
```
Configuring retries and timeout
You can configure how many times API requests may be automatically retried. When an API request fails, the response will be inspected to see if the request meets criteria indicating that it may succeed on retry, such as 500 and 503 status codes or a specific internal error code such as rateLimitExceeded. If it meets the criteria, the request will be retried after a delay. If another error occurs, the delay will be increased before a subsequent attempt, until the retries limit is reached.
You can also set the request timeout value in seconds.
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new retries: 10, timeout: 120
```
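The retry behaviour described above (retry on transient failures, with a growing delay, up to the configured limit) can be sketched in plain Ruby. This is an illustration of the pattern only, not the library's internal code; the error class and delay values are invented:

```ruby
# Stand-in for a retryable failure such as a 503 response.
class TransientError < StandardError; end

# Retry the block up to `retries` extra times, doubling the delay
# before each subsequent attempt (exponential backoff).
def with_retries(retries: 3, base_delay: 0.01)
  attempt = 0
  begin
    yield
  rescue TransientError
    attempt += 1
    raise if attempt > retries
    sleep base_delay * (2**(attempt - 1)) # delay grows per attempt
    retry
  end
end

calls = 0
result = with_retries(retries: 3) do
  calls += 1
  raise TransientError if calls < 3 # fail twice, then succeed
  "ok"
end

puts result #=> "ok"
puts calls  #=> 3
```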
By default, the library retries API requests that are always idempotent when a "transient" error occurs.
For API requests that are idempotent only if certain conditions are satisfied (for example, when a file has the same "generation"), the library retries only if the condition is specified.
Rather than using this default behaviour, you may choose to disable retries yourself.
You can pass retries as 0 to disable retries for all operations regardless of their idempotency.
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new retries: 0
```
You can also disable retries for a particular operation by passing retries as 0 in the options field.
```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new
service = storage.service
service.get_bucket bucket_name, options: { retries: 0 }
```
For API requests that are never idempotent, the library passes retries: 0 by default, suppressing any retries.
See the Storage status and error codes for a list of error conditions.
Additional information
Google Cloud Storage can be configured to use logging. To learn more, see the Logging guide.
Last updated 2025-11-04 UTC.