Upload objects from memory
This page shows you how to upload objects from memory to your Cloud Storage bucket by using client libraries. Uploading from memory is useful when you want to avoid unnecessary writes from memory to your local file system.

An uploaded object consists of the data you want to store along with any associated metadata. For a conceptual overview, including how to choose the optimal upload method based on your file size, see Uploads and downloads.
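As a rough illustration of size-based method choice, a helper like the following could pick between a single-request upload and a resumable upload. The 8 MiB cutoff is an assumption for this sketch (it mirrors the Python client library's default multipart threshold), not official sizing guidance; see Uploads and downloads for the actual recommendations.

```python
# Illustrative sketch only: the 8 MiB threshold below is an assumption,
# not official Cloud Storage sizing guidance.
EIGHT_MIB = 8 * 1024 * 1024


def choose_upload_method(payload_size: int) -> str:
    """Return a suggested upload strategy for an in-memory payload."""
    if payload_size < EIGHT_MIB:
        # Small payloads fit comfortably in a single request.
        return "single-request"
    # Larger payloads benefit from resumable uploads, which can
    # recover from interrupted transfers.
    return "resumable"


print(choose_upload_method(1024))               # small payload
print(choose_upload_method(64 * 1024 * 1024))   # large payload
```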
Required roles
To get the permissions that you need to upload objects from memory to a bucket, ask your administrator to grant you the Storage Object User (roles/storage.objectUser) IAM role on the bucket. This predefined role contains the permissions required to upload an object to a bucket. The exact permissions that are required are listed in the Required permissions section:
Required permissions
- storage.objects.create
- storage.objects.delete - This permission is only required for uploads that overwrite an existing object.
You can also get these permissions with custom roles.
For information about granting roles on buckets, see Set and manage IAM policies on buckets.
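To sketch what granting this role programmatically might look like, the snippet below builds the IAM policy binding as plain data. The role name comes from this page; the member identity and the commented-out client-library calls are illustrative assumptions, not a complete implementation.

```python
# Sketch: construct an IAM policy binding that grants the Storage Object
# User role on a bucket. The member identity below is a placeholder.
ROLE = "roles/storage.objectUser"


def object_user_binding(member: str) -> dict:
    """Return an IAM binding dict granting Storage Object User to member."""
    return {"role": ROLE, "members": [member]}


binding = object_user_binding("user:uploader@example.com")
print(binding)

# Applying the binding would use the real client library, roughly:
#   policy = bucket.get_iam_policy(requested_policy_version=3)
#   policy.bindings.append(binding)
#   bucket.set_iam_policy(policy)
```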
Upload an object from memory
Client libraries
For more information, see the Cloud Storage API reference documentation for your language: C++, C#, Go, Java, Node.js, PHP, Python, or Ruby. To authenticate to Cloud Storage, set up Application Default Credentials. For more information, see Set up authentication for client libraries.

C++
```cpp
namespace gcs = ::google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string const& bucket_name,
   std::string const& object_name) {
  std::string const text = "Lorem ipsum dolor sit amet";
  // For small uploads where the data is contiguous in memory use
  // `InsertObject()`. For more specific size recommendations see
  // https://cloud.google.com/storage/docs/uploads-downloads#size
  auto metadata = client.InsertObject(bucket_name, object_name, text);
  if (!metadata) throw std::move(metadata).status();
  std::cout << "Successfully wrote to object " << metadata->name()
            << " its size is: " << metadata->size() << "\n";

  // For larger uploads, or uploads where the data is not contiguous in
  // memory, use `WriteObject()`. Consider using `std::ostream::write()` for
  // best performance.
  std::vector<std::string> v(100, text);
  gcs::ObjectWriteStream stream = client.WriteObject(bucket_name, object_name);
  std::copy(v.begin(), v.end(), std::ostream_iterator<std::string>(stream));
  stream.Close();
  metadata = std::move(stream).metadata();
  if (!metadata) throw std::move(metadata).status();
  std::cout << "Successfully wrote to object " << metadata->name()
            << " its size is: " << metadata->size()
            << "\nFull metadata: " << *metadata << "\n";
}
```

C#
```csharp
using Google.Cloud.Storage.V1;
using System;
using System.IO;
using System.Text;

public class UploadObjectFromMemorySample
{
    public void UploadObjectFromMemory(
        string bucketName = "unique-bucket-name",
        string objectName = "file-name",
        string contents = "Hello world!")
    {
        var storage = StorageClient.Create();
        byte[] byteArray = Encoding.UTF8.GetBytes(contents);
        MemoryStream stream = new MemoryStream(byteArray);
        storage.UploadObject(bucketName, objectName, "application/octet-stream", stream);

        Console.WriteLine($"{objectName} uploaded to bucket {bucketName} with contents: {contents}");
    }
}
```

Go
```go
import (
	"bytes"
	"context"
	"fmt"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// streamFileUpload uploads an object via a stream.
func streamFileUpload(w io.Writer, bucket, object string) error {
	// bucket := "bucket-name"
	// object := "object-name"
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("storage.NewClient: %w", err)
	}
	defer client.Close()

	b := []byte("Hello world.")
	buf := bytes.NewBuffer(b)

	ctx, cancel := context.WithTimeout(ctx, time.Second*50)
	defer cancel()

	// Upload an object with storage.Writer.
	wc := client.Bucket(bucket).Object(object).NewWriter(ctx)
	wc.ChunkSize = 0 // note retries are not supported for chunk size 0.

	if _, err = io.Copy(wc, buf); err != nil {
		return fmt.Errorf("io.Copy: %w", err)
	}
	// Data can continue to be added to the file until the writer is closed.
	if err := wc.Close(); err != nil {
		return fmt.Errorf("Writer.Close: %w", err)
	}
	fmt.Fprintf(w, "%v uploaded to %v.\n", object, bucket)
	return nil
}
```

Java
```java
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class UploadObjectFromMemory {
  public static void uploadObjectFromMemory(
      String projectId, String bucketName, String objectName, String contents) throws IOException {
    // The ID of your GCP project
    // String projectId = "your-project-id";

    // The ID of your GCS bucket
    // String bucketName = "your-unique-bucket-name";

    // The ID of your GCS object
    // String objectName = "your-object-name";

    // The string of contents you wish to upload
    // String contents = "Hello world!";

    Storage storage = StorageOptions.newBuilder().setProjectId(projectId).build().getService();
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();
    byte[] content = contents.getBytes(StandardCharsets.UTF_8);

    // Optional: set a generation-match precondition to enable automatic retries and avoid
    // potential race conditions and data corruption. The request returns a 412 error if the
    // preconditions are not met.
    Storage.BlobTargetOption precondition;
    if (storage.get(bucketName, objectName) == null) {
      // For a target object that does not yet exist, set the DoesNotExist precondition.
      // This will cause the request to fail if the object is created before the request runs.
      precondition = Storage.BlobTargetOption.doesNotExist();
    } else {
      // If the destination already exists in your bucket, instead set a generation-match
      // precondition. This will cause the request to fail if the existing object's generation
      // changes before the request runs.
      precondition =
          Storage.BlobTargetOption.generationMatch(
              storage.get(bucketName, objectName).getGeneration());
    }

    storage.create(blobInfo, content, precondition);

    System.out.println(
        "Object " + objectName + " uploaded to bucket " + bucketName + " with contents " + contents);
  }
}
```

Node.js
```javascript
/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// The ID of your GCS bucket
// const bucketName = 'your-unique-bucket-name';

// The contents that you want to upload
// const contents = 'these are my contents';

// The new ID for your GCS file
// const destFileName = 'your-new-file-name';

// Imports the Google Cloud Node.js client library
const {Storage} = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

async function uploadFromMemory() {
  await storage.bucket(bucketName).file(destFileName).save(contents);

  console.log(
    `${destFileName} with contents ${contents} uploaded to ${bucketName}.`
  );
}

uploadFromMemory().catch(console.error);
```

PHP
```php
use Google\Cloud\Storage\StorageClient;

/**
 * Upload an object from memory buffer.
 *
 * @param string $bucketName The name of your Cloud Storage bucket.
 *        (e.g. 'my-bucket')
 * @param string $objectName The name of your Cloud Storage object.
 *        (e.g. 'my-object')
 * @param string $contents The contents to upload to the file.
 *        (e.g. 'these are my contents')
 */
function upload_object_from_memory(
    string $bucketName,
    string $objectName,
    string $contents
): void {
    $storage = new StorageClient();
    if (!$stream = fopen('data://text/plain,' . $contents, 'r')) {
        throw new \InvalidArgumentException('Unable to open file for reading');
    }
    $bucket = $storage->bucket($bucketName);
    $bucket->upload($stream, [
        'name' => $objectName,
    ]);
    printf('Uploaded %s to gs://%s/%s' . PHP_EOL, $contents, $bucketName, $objectName);
}
```

Python
```python
from google.cloud import storage


def upload_blob_from_memory(bucket_name, contents, destination_blob_name):
    """Uploads a file to the bucket."""

    # The ID of your GCS bucket
    # bucket_name = "your-bucket-name"

    # The contents to upload to the file
    # contents = "these are my contents"

    # The ID of your GCS object
    # destination_blob_name = "storage-object-name"

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(destination_blob_name)

    blob.upload_from_string(contents)

    print(
        f"{destination_blob_name} with contents {contents} uploaded to {bucket_name}."
    )
```

Ruby
```ruby
# The ID of your GCS bucket
# bucket_name = "your-unique-bucket-name"

# The ID of your GCS object
# file_name = "your-file-name"

# The contents to upload to your file
# file_content = "Hello, world!"

require "google/cloud/storage"

storage = Google::Cloud::Storage.new
bucket  = storage.bucket bucket_name, skip_lookup: true

file = bucket.create_file StringIO.new(file_content), file_name

puts "Uploaded file #{file.name} to bucket #{bucket_name} with content: #{file_content}"
```
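The precondition logic in the Java sample generalizes to the other clients. As a sketch in Python, the generation value to match can be derived the same way, assuming you have already looked up the destination blob's current generation (this helper and its names are illustrative, not part of the client library; `if_generation_match` is a real parameter of `upload_from_string`, and 0 means the object must not exist yet):

```python
from typing import Optional


def generation_match_for(existing_generation: Optional[int]) -> int:
    """Pick an if_generation_match value, mirroring the Java sample:
    0 makes the upload succeed only if the object does not exist yet;
    otherwise, matching the current generation makes a concurrent
    overwrite fail with a 412 error instead of silently racing."""
    if existing_generation is None:
        return 0
    return existing_generation


# Usage sketch (the upload call itself needs real credentials):
#   blob.upload_from_string(
#       contents, if_generation_match=generation_match_for(gen))
print(generation_match_for(None))       # new object
print(generation_match_for(123456789))  # existing object
```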
What's next
- Learn about naming requirements for objects.
- List the objects that have been successfully uploaded to the bucket.
- Control who has access to your objects and buckets.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-18 UTC.