Class Bucket (2.1.0)
Bucket(client, name=None, user_project=None)

A class representing a Bucket on Cloud Storage.
Parameters

| Name | Description |
|---|---|
| client | Client: A client which holds credentials and project configuration for the bucket (which requires a project). |
| name | str: The name of the bucket. Bucket names must start and end with a number or letter. |
| user_project | str: (Optional) The project ID to be billed for API requests made via this instance. |
Properties
acl
Create our ACL on demand.
client
The client bound to this bucket.
cors
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and https://cloud.google.com/storage/docs/json_api/v1/buckets
Note: The getter for this property returns a list which contains copies of the bucket's CORS policy mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:

>>> policies = bucket.cors
>>> policies.append({'origin': '/foo', ...})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()

| Returns | |
|---|---|
| Type | Description |
| list of dictionaries | A sequence of mappings describing each CORS policy. |
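Each CORS policy entry is a plain dict whose keys follow the JSON API representation. A minimal stdlib-only sketch of building one (the origin, methods, and header values here are illustrative, not defaults):

```python
# Build a CORS policy entry as a plain dict; the key names ("origin",
# "method", "responseHeader", "maxAgeSeconds") follow the JSON API schema.
policy = {
    "origin": ["https://example.org"],      # illustrative allowed origin
    "method": ["GET", "HEAD"],              # illustrative allowed verbs
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600,                  # cache preflight for one hour
}
policies = [policy]  # bucket.cors expects a list of such mappings
```

Assigning `policies` back via `bucket.cors = policies` and calling `bucket.update()` would persist the change, per the note above.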
default_event_based_hold
Whether new objects in this bucket are placed under an event-based hold by default (scalar property getter).
default_kms_key_name
Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:setter: Set default KMS encryption key for items in this bucket.
:getter: Get default KMS encryption key for items in this bucket.
| Returns | |
|---|---|
| Type | Description |
| str | Default KMS encryption key, or None if not set. |
default_object_acl
Create our defaultObjectACL on demand.
etag
Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| str or None | The bucket ETag, or None if the bucket's resource has not been loaded from the server. |
iam_configuration
Retrieve IAM configuration for this bucket.
| Returns | |
|---|---|
| Type | Description |
| IAMConfiguration | An instance for managing the bucket's IAM configuration. |
id
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| str or None | The ID of the bucket, or None if the bucket's resource has not been loaded from the server. |
labels
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
Note: The getter for this property returns a dict which is a copy of the bucket's labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.:

>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()

| Returns | |
|---|---|
| Type | Description |
| dict | Name-value pairs (string->string) labelling the bucket. |
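The copy semantics described in the note can be sketched without any API calls; `server_labels` below is a hypothetical stand-in for the labels stored on the bucket:

```python
# Hypothetical stand-in for the labels stored on the server side.
server_labels = {"env": "prod", "old_key": "stale"}

# The labels getter returns a *copy*, so mutations are local until
# the dict is re-assigned via the setter and update() is called.
labels = dict(server_labels)
labels["new_key"] = "some-label"
del labels["old_key"]

# The "server" view is unchanged by the local mutations above.
```

Only re-assigning via `bucket.labels = labels` followed by `bucket.update()` would push the edited mapping back.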
lifecycle_rules
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
Note: The getter for this property returns a list which contains copies of the bucket's lifecycle rules mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:

>>> rules = bucket.lifecycle_rules
>>> rules.append({'origin': '/foo', ...})
>>> rules[1]['rule']['action']['type'] = 'Delete'
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()

| Returns | |
|---|---|
| Type | Description |
| generator(dict) | A sequence of mappings describing each lifecycle rule. |
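A lifecycle rule is likewise a plain mapping with "action" and "condition" keys. A minimal sketch of two common rule shapes (the 30- and 90-day ages are illustrative):

```python
# A "delete" lifecycle rule: remove objects older than 30 days.
delete_rule = {
    "action": {"type": "Delete"},
    "condition": {"age": 30},  # days since object creation
}

# A "set storage class" rule: move objects to a colder class after 90 days.
set_class_rule = {
    "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
    "condition": {"age": 90},
}

rules = [delete_rule, set_class_rule]
```

Assigning such a list to `bucket.lifecycle_rules` and calling `bucket.update()` would persist it, per the note above.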
location
Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations
Returns None if the property has not been set before creation, or if the bucket's resource has not been loaded from the server.
location_type
Retrieve or set the location type for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:setter: Set the location type for this bucket.
:getter: Get the location type for this bucket.
| Returns | |
|---|---|
| Type | Description |
| str or None | If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE; else None. |
metageneration
Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| int or None | The metageneration of the bucket, or None if the bucket's resource has not been loaded from the server. |
owner
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| dict or None | Mapping of owner's role/ID, or None if the bucket's resource has not been loaded from the server. |
path
The URL path to this bucket.
project_number
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| int or None | The project number that owns the bucket, or None if the bucket's resource has not been loaded from the server. |
requester_pays
Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for details.
:setter: Update whether requester pays for this bucket.
:getter: Query whether requester pays for this bucket.
| Returns | |
|---|---|
| Type | Description |
bool | True if requester pays for API requests for the bucket, else False. |
retention_period
Retrieve or set the retention period for items in the bucket.
| Returns | |
|---|---|
| Type | Description |
| int or None | Number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally. |
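Since the setter takes a raw number of seconds, a small conversion helper keeps the intent readable. This arithmetic sketch is not part of the library; the 30-day value is illustrative:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86400

def days_to_retention_seconds(days):
    """Convert a retention period in days to the seconds value the setter expects."""
    return days * SECONDS_PER_DAY

# e.g. a 30-day retention policy
thirty_days = days_to_retention_seconds(30)
```

The result would be assigned as `bucket.retention_period = thirty_days` before calling `bucket.patch()`.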
retention_policy_effective_time
Retrieve the effective time of the bucket's retention policy.
| Returns | |
|---|---|
| Type | Description |
| datetime.datetime or None | Point-in-time at which the bucket's retention policy is effective, or None if the property is not set locally. |
retention_policy_locked
Retrieve whether the bucket's retention policy is locked.
| Returns | |
|---|---|
| Type | Description |
bool | True if the bucket's policy is locked, or else False if the policy is not locked, or the property is not set locally. |
rpo
Get the RPO (Recovery Point Objective) of this bucket.
See: https://cloud.google.com/storage/docs/managing-turbo-replication
Returns "ASYNC_TURBO" or "DEFAULT".
self_link
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| str or None | The self link for the bucket, or None if the bucket's resource has not been loaded from the server. |
storage_class
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:setter: Set the storage class for this bucket.
:getter: Get the storage class for this bucket.
| Returns | |
|---|---|
| Type | Description |
| str or None | If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, ARCHIVE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS; else None. |
time_created
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
| Returns | |
|---|---|
| Type | Description |
| datetime.datetime or None | Datetime object parsed from RFC3339 valid timestamp, or None if the bucket's resource has not been loaded from the server. |
user_project
Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
A user project is required for all operations on Requester Pays buckets.
See https://cloud.google.com/storage/docs/requester-pays#requirements for details.
versioning_enabled
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
:setter: Update whether versioning is enabled for this bucket.
:getter: Query whether versioning is enabled for this bucket.
| Returns | |
|---|---|
| Type | Description |
bool | True if enabled, else False. |
Methods
Bucket
Bucket(client, name=None, user_project=None)

property name: Get the bucket's name.
add_lifecycle_delete_rule
add_lifecycle_delete_rule(**kw)

Add a "delete" rule to lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py :start-after: [START add_lifecycle_delete_rule] :end-before: [END add_lifecycle_delete_rule] :dedent: 4
add_lifecycle_set_storage_class_rule
add_lifecycle_set_storage_class_rule(storage_class, **kw)

Add a "set storage class" rule to lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py :start-after: [START add_lifecycle_set_storage_class_rule] :end-before: [END add_lifecycle_set_storage_class_rule] :dedent: 4
| Parameter | |
|---|---|
| Name | Description |
| storage_class | str, one of the supported storage classes: new storage class to assign to matching items. |
blob
blob(blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None)

Factory constructor for blob object.

Note: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.

| Parameters | |
|---|---|
| Name | Description |
| blob_name | str: The name of the blob to be instantiated. |
| chunk_size | int: The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification. |
| encryption_key | bytes: (Optional) 32 byte encryption key for customer-supplied encryption. |
| kms_key_name | str: (Optional) Resource name of KMS key used to encrypt blob's content. |
| generation | long: (Optional) If present, selects a specific revision of this object. |
| Returns | |
|---|---|
| Type | Description |
Blob | The blob object created. |
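Because `chunk_size` must be a multiple of 256 KB, a small validation helper can catch bad values before the API rejects them. This check is illustrative, not part of the library:

```python
CHUNK_SIZE_MULTIPLE = 256 * 1024  # 256 KB, per the API specification

def validate_chunk_size(chunk_size):
    """Return chunk_size if it is a positive multiple of 256 KB, else raise."""
    if chunk_size <= 0 or chunk_size % CHUNK_SIZE_MULTIPLE != 0:
        raise ValueError(
            "chunk_size must be a positive multiple of %d bytes" % CHUNK_SIZE_MULTIPLE
        )
    return chunk_size

# 1 MiB (four 256 KB units) is a valid chunk size.
one_mebibyte = validate_chunk_size(4 * CHUNK_SIZE_MULTIPLE)
```

A validated value would then be passed as `bucket.blob("name", chunk_size=one_mebibyte)`.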
clear_lifecyle_rules
clear_lifecyle_rules()

Clear lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
configure_website
configure_website(main_page_suffix=None,not_found_page=None)Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
Note: This (apparently) only works if your bucket name is a domain name (and to do that, you need to get approved somehow...).

If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn't found:

.. literalinclude:: snippets.py :start-after: [START configure_website] :end-before: [END configure_website] :dedent: 4

You probably should also make the whole bucket public:

.. literalinclude:: snippets.py :start-after: [START make_public] :end-before: [END make_public] :dedent: 4

This says: "Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public."
| Parameters | |
|---|---|
| Name | Description |
| main_page_suffix | str: The page to use as the main page of a directory. Typically something like index.html. |
| not_found_page | str: The file to use when a page isn't found. |
copy_blob
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, if_source_generation_match=None, if_source_generation_not_match=None, if_source_metageneration_match=None, if_source_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Copy the given blob to the given bucket, optionally with a new name.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| blob | Blob: The blob to be copied. |
| destination_bucket | Bucket: The bucket into which the blob should be copied. |
| new_name | str: (Optional) The new name for the copied file. |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| preserve_acl | bool: DEPRECATED. This argument is not functional! (Optional) Copies ACL from old blob to new blob. Default: True. |
| source_generation | long: (Optional) The generation of the blob to be copied. |
| if_generation_match | long: (Optional) See :ref:`using-if-generation-match` |
| if_generation_not_match | long: (Optional) See :ref:`using-if-generation-not-match` |
| if_metageneration_match | long: (Optional) See :ref:`using-if-metageneration-match` |
| if_metageneration_not_match | long: (Optional) See :ref:`using-if-metageneration-not-match` |
| if_source_generation_match | long: (Optional) Makes the operation conditional on whether the source object's generation matches the given value. |
| if_source_generation_not_match | long: (Optional) Makes the operation conditional on whether the source object's generation does not match the given value. |
| if_source_metageneration_match | long: (Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value. |
| if_source_metageneration_not_match | long: (Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value. |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Returns | |
|---|---|
| Type | Description |
| Blob | The new Blob. |

Example: Copy a blob including ACL.

>>> from google.cloud import storage
>>> client = storage.Client(project="project")
>>> bucket = client.bucket("bucket")
>>> dst_bucket = client.bucket("destination-bucket")
>>> blob = bucket.blob("file.ext")
>>> new_blob = bucket.copy_blob(blob, dst_bucket)
>>> new_blob.acl.save(blob.acl)
create
create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Creates current bucket.

Note: Direct use of this method is deprecated. Use Client.create_bucket() instead. If the bucket already exists, will raise Conflict. This implements "storage.buckets.insert".
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| project | str: (Optional) The project under which the bucket is to be created. If not passed, uses the project set on the client. |
| location | str: (Optional) The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations |
| predefined_acl | str: (Optional) Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl |
| predefined_default_object_acl | str: (Optional) Name of predefined ACL to apply to bucket's objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Exceptions | |
|---|---|
| Type | Description |
| ValueError | if project is None and client's project is also None. |
delete
delete(force=False, client=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Delete this bucket.

The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).

If the bucket doesn't exist, this will raise NotFound. If the bucket is not empty (and force=False), will raise Conflict.

If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long runtime of this method.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| force | bool: If True, empties the bucket's objects then deletes it. |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
| if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Exceptions | |
|---|---|
| Type | Description |
| ValueError | if force is True and the bucket contains more than 256 objects / blobs. |
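The 256-object ceiling on force=True can be expressed as a pre-check before attempting the delete. The constant mirrors the documented limit; the helper itself is illustrative, not part of the library:

```python
MAX_FORCE_DELETE_OBJECTS = 256  # documented ceiling for delete(force=True)

def can_force_delete(object_count):
    """Whether delete(force=True) would proceed rather than refuse with ValueError."""
    return object_count <= MAX_FORCE_DELETE_OBJECTS
```

A caller expecting a larger bucket would instead empty it in batches (e.g. via delete_blobs) before deleting the bucket.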
delete_blob
delete_blob(blob_name, client=None, generation=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a blob from the current bucket.

If the blob isn't found (backend 404), raises a NotFound.
For example:
.. literalinclude:: snippets.py :start-after: [START delete_blob] :end-before: [END delete_blob] :dedent: 4
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| blob_name | str: A blob name to delete. |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| generation | long: (Optional) If present, permanently deletes a specific revision of this object. |
| if_generation_match | long: (Optional) See :ref:`using-if-generation-match` |
| if_generation_not_match | long: (Optional) See :ref:`using-if-generation-not-match` |
| if_metageneration_match | long: (Optional) See :ref:`using-if-metageneration-match` |
| if_metageneration_not_match | long: (Optional) See :ref:`using-if-metageneration-not-match` |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Exceptions | |
|---|---|
| Type | Description |
| NotFound | To suppress the exception, call delete_blobs, passing a no-op on_error callback, e.g.: .. literalinclude:: snippets.py :start-after: [START delete_blobs] :end-before: [END delete_blobs] :dedent: 4 |
delete_blobs
delete_blobs(blobs, on_error=None, client=None, timeout=60, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)

Deletes a list of blobs from the current bucket.

Uses delete_blob to delete each individual blob.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| blobs | list: A list of Blob-s or blob names to delete. |
| on_error | callable: (Optional) Takes single argument: the blob. Called once for each blob raising NotFound; otherwise, the exception is propagated. |
| client | Client: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| if_generation_match | list of long: (Optional) See :ref:`using-if-generation-match` |
| if_generation_not_match | list of long: (Optional) See :ref:`using-if-generation-not-match` |
| if_metageneration_match | list of long: (Optional) See :ref:`using-if-metageneration-match` |
| if_metageneration_not_match | list of long: (Optional) See :ref:`using-if-metageneration-not-match` |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Exceptions | |
|---|---|
| Type | Description |
| NotFound | (if on_error is not passed). |

Example: Delete blobs using generation match preconditions.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.bucket("bucket-name")
>>> blobs = [bucket.blob("blob-name-1"), bucket.blob("blob-name-2")]
>>> if_generation_match = [None] * len(blobs)
>>> if_generation_match[0] = "123"  # precondition for "blob-name-1"
>>> bucket.delete_blobs(blobs, if_generation_match=if_generation_match)
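The per-blob precondition lists must line up index-for-index with the blobs. This stdlib-only sketch shows the alignment pattern, using hypothetical names and a hypothetical generation number:

```python
# Blob names and their generation preconditions align by index;
# None means "no precondition" for that position.
blob_names = ["blob-name-1", "blob-name-2", "blob-name-3"]
if_generation_match = [None] * len(blob_names)
if_generation_match[0] = 123  # only the first delete is conditional

# Pair them up to see the alignment explicitly.
paired = list(zip(blob_names, if_generation_match))
```

A mismatch in list lengths would mis-associate preconditions with blobs, so building the list from `len(blob_names)` as above keeps them in sync.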
disable_logging
disable_logging()

Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
disable_website
disable_website()

Disable the website configuration for this bucket.

This is really just a shortcut for setting the website-related attributes to None.
enable_logging
enable_logging(bucket_name, object_prefix="")

Enable access logging for this bucket.
| Parameters | |
|---|---|
| Name | Description |
| bucket_name | str: Name of bucket in which to store access logs. |
| object_prefix | str: Prefix for access log filenames. |
exists
exists(client=None, timeout=60, if_etag_match=None, if_etag_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, retry=<google.api_core.retry.Retry object>)

Determines whether or not this bucket exists.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| if_etag_match | Union[str, Set[str]]: (Optional) Make the operation conditional on whether the bucket's current ETag matches the given value. |
| if_etag_not_match | Union[str, Set[str]]: (Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value. |
| if_metageneration_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
| if_metageneration_not_match | long: (Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Returns | |
|---|---|
| Type | Description |
bool | True if the bucket exists in Cloud Storage. |
from_string
from_string(uri, client=None)

Get a constructor for bucket object by URI.
| Parameters | |
|---|---|
| Name | Description |
| uri | str: The bucket URI to pass to get the bucket object. |
| client | Client or None: (Optional) The client to use. Application code should always pass client. |
| Returns | |
|---|---|
| Type | Description |
| Bucket | The bucket object created. |

Example: Get a constructor for bucket object by URI.

>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client=client)
generate_signed_url
generate_signed_url(expiration=None, api_access_endpoint="https://storage.googleapis.com", method="GET", headers=None, query_parameters=None, client=None, credentials=None, version=None, virtual_hosted_style=False, bucket_bound_hostname=None, scheme="http")

Generates a signed URL for this bucket.

| Parameters | |
|---|---|
| Name | Description |
| expiration | Union[Integer, datetime.datetime, datetime.timedelta]: Point in time when the signed URL should expire. If a naive datetime instance is used (no timezone info), it is assumed to be UTC. |
| api_access_endpoint | str: (Optional) URI base. |
| method | str: The HTTP verb that will be used when requesting the URL. |
| headers | dict: (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL. |
| query_parameters | dict: (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| credentials | google.auth.credentials.Credentials: (Optional) The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. |
| version | str: (Optional) The version of signed credential to create. Must be one of 'v2' or 'v4'. |
| virtual_hosted_style | bool: (Optional) If true, then construct the URL relative to the bucket's virtual hostname. |
| bucket_bound_hostname | str: (Optional) If passed, then construct the URL relative to the bucket-bound hostname. The value can be a bare hostname or include a scheme, e.g., 'example.com' or 'http://example.com'. See: https://cloud.google.com/storage/docs/request-endpoints#cname |
| scheme | str: (Optional) If bucket_bound_hostname is passed as a bare hostname, use this value as the scheme. Defaults to 'http'. |
| Exceptions | |
|---|---|
| Type | Description |
| ValueError | when version is invalid. |
| TypeError | when expiration is not a valid type. |
| AttributeError | if credentials is not an instance of google.auth.credentials.Signing. |
| Returns | |
|---|---|
| Type | Description |
| str | A signed URL you can use to access the resource until expiration. |
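Since expiration accepts an int (seconds), a datetime, or a timedelta, normalizing to an absolute UTC datetime first makes the behavior easy to reason about. This stdlib-only sketch is illustrative of that normalization, not the library's internal logic:

```python
import datetime

def normalize_expiration(expiration, now):
    """Normalize an int (seconds), timedelta, or datetime to an absolute datetime."""
    if isinstance(expiration, int):
        expiration = datetime.timedelta(seconds=expiration)
    if isinstance(expiration, datetime.timedelta):
        return now + expiration
    return expiration  # already an absolute datetime

# A one-hour expiration, anchored at a fixed "now" for reproducibility.
now = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)
expires_at = normalize_expiration(datetime.timedelta(hours=1), now)
```

In real calls one would simply pass `expiration=datetime.timedelta(hours=1)` to generate_signed_url and let the library do the equivalent conversion.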
generate_upload_policy
generate_upload_policy(conditions, expiration=None, client=None)

Create a signed upload policy for uploading objects.

This method generates and signs a policy document. You can use `policy documents`_ to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.
For example:
.. literalinclude:: snippets.py :start-after: [START policy_document] :end-before: [END policy_document] :dedent: 4
.. _policy documents: https://cloud.google.com/storage/docs/xml-api/post-object#policydocument
| Parameters | |
|---|---|
| Name | Description |
| expiration | datetime: (Optional) Expiration in UTC. If not specified, the policy will expire in 1 hour. |
| conditions | list: A list of conditions as described in the policy documents documentation. |
| client | Client: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| Returns | |
|---|---|
| Type | Description |
dict | A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature. |
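Policy-document conditions are plain lists and dicts. A sketch of a conditions list in the POST policy document format (the prefix and size bounds are illustrative):

```python
# Conditions follow the POST policy document format: list-style
# conditions like ["starts-with", ...] and dict-style exact matches.
conditions = [
    ["starts-with", "$key", "uploads/"],   # object name must begin with "uploads/"
    ["content-length-range", 0, 1048576],  # upload must be at most 1 MiB
    {"acl": "public-read"},                # exact-match condition
]
```

Such a list would be passed as `bucket.generate_upload_policy(conditions)`, and the returned fields embedded in the HTML form.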
get_blob
get_blob(blob_name, client=None, encryption_key=None, generation=None, if_etag_match=None, if_etag_not_match=None, if_generation_match=None, if_generation_not_match=None, if_metageneration_match=None, if_metageneration_not_match=None, timeout=60, retry=<google.api_core.retry.Retry object>, **kwargs)

Get a blob object by name.
This will return None if the blob doesn't exist:
.. literalinclude:: snippets.py :start-after: [START get_blob] :end-before: [END get_blob] :dedent: 4
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| blob_name | str: The name of the blob to retrieve. |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| encryption_key | bytes: (Optional) 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied. |
| generation | long: (Optional) If present, selects a specific revision of this object. |
| if_etag_match | Union[str, Set[str]]: (Optional) See :ref:`using-if-etag-match` |
| if_etag_not_match | Union[str, Set[str]]: (Optional) See :ref:`using-if-etag-not-match` |
| if_generation_match | long: (Optional) See :ref:`using-if-generation-match` |
| if_generation_not_match | long: (Optional) See :ref:`using-if-generation-not-match` |
| if_metageneration_match | long: (Optional) See :ref:`using-if-metageneration-match` |
| if_metageneration_not_match | long: (Optional) See :ref:`using-if-metageneration-not-match` |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Returns | |
|---|---|
| Type | Description |
Blob or None | The blob object if it exists, otherwise None. |
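Because get_blob returns None rather than raising when the object is absent, callers typically branch on the result. A minimal sketch of that pattern; `fetch_size` and `FakeBlob` are hypothetical helpers standing in for real return values:

```python
def fetch_size(blob):
    """Return the blob's size, or None when get_blob returned None (absent object)."""
    if blob is None:
        return None  # blob does not exist; no exception was raised
    return blob.size

class FakeBlob:
    """Stand-in for a Blob exposing only the attribute this sketch reads."""
    size = 11

present = fetch_size(FakeBlob())  # as if get_blob found the object
absent = fetch_size(None)         # as if get_blob found nothing
```

The same branch-on-None shape applies to any attribute read off the returned Blob.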
get_iam_policy
get_iam_policy(client=None, requested_policy_version=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Retrieve the IAM policy for the bucket.

See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| requested_policy_version | int or None: (Optional) The version of IAM policies to request. If a policy with a condition is requested without setting this, the server will return an error. This must be set to a value of 3 to retrieve IAM policies containing conditions. This is to prevent client code that isn't aware of IAM conditions from interpreting and modifying policies incorrectly. The service might return a policy with version lower than the one that was requested, based on the feature syntax in the policy fetched. |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Returns | |
|---|---|
| Type | Description |
| google.api_core.iam.Policy | The policy instance, based on the resource returned from the getIamPolicy API request. |

Example:

.. code-block:: python

    from google.cloud.storage.iam import STORAGE_OBJECT_VIEWER_ROLE

    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.version = 3

    # Add a binding to the policy via its bindings property.
    policy.bindings.append({
        "role": STORAGE_OBJECT_VIEWER_ROLE,
        "members": {"serviceAccount:account@project.iam.gserviceaccount.com", ...},
        # Optional:
        "condition": {
            "title": "prefix",
            "description": "Objects matching prefix",
            "expression": 'resource.name.startsWith("projects/project-name/buckets/bucket-name/objects/prefix")',
        },
    })

    bucket.set_iam_policy(policy)
get_logging
get_logging()

Return info about access logging for this bucket.
| Returns | |
|---|---|
| Type | Description |
| dict or None | A dict with keys logBucket and logObjectPrefix (if logging is enabled), or None (if not). |
get_notification
get_notification(notification_id, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

Get Pub/Sub notification for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/get
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
| notification_id | str: The notification id to retrieve the notification configuration. |
| client | Client or None: (Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
| timeout | float or tuple: (Optional) The amount of time, in seconds, to wait for the server response. See: :ref:`configuring_timeouts` |
| retry | google.api_core.retry.Retry or google.cloud.storage.retry.ConditionalRetryPolicy: (Optional) How to retry the RPC. See: :ref:`configuring_retries` |
| Returns | |
|---|---|
| Type | Description |
| BucketNotification | The notification instance. |

Example: Get notification using notification id.

>>> from google.cloud import storage
>>> client = storage.Client()
>>> bucket = client.get_bucket('my-bucket-name')  # API request.
>>> notification = bucket.get_notification(notification_id='id')  # API request.
list_blobs
list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, start_offset=None, end_offset=None, include_trailing_delimiter=None, versions=None, projection='noAcl', fields=None, client=None, timeout=60, retry=<google.api_core.retry.Retry object>)

DEPRECATED. Return an iterator used to find blobs in the bucket.

Note: Direct use of this method is deprecated. Use Client.list_blobs instead.

If user_project is set, bills the API request to that project.

| Parameters | |
|---|---|
| Name | Description |
max_results | int(Optional) The maximum number of blobs to return. |
page_token | str(Optional) If present, return the next batch of blobs, using the value, which must correspond to the |
prefix | str(Optional) Prefix used to filter blobs. |
delimiter | str(Optional) Delimiter, used with |
start_offset | str(Optional) Filter results to objects whose names are lexicographically equal to or after |
end_offset | str(Optional) Filter results to objects whose names are lexicographically before |
include_trailing_delimiter | boolean(Optional) If true, objects that end in exactly one instance of |
versions | bool(Optional) Whether object versions should be returned as separate blobs. |
projection | str(Optional) If used, must be 'full' or 'noAcl'. Defaults to |
fields | str(Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the name and language of each blob returned: |
client | Client(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Returns | |
|---|---|
| Type | Description |
| Iterator of all Blob in this bucket matching the arguments. Example: List blobs in the bucket with user_project. >>> from google.cloud import storage >>> client = storage.Client() >>> bucket = storage.Bucket(client, "my-bucket-name", user_project="my-project") >>> all_blobs = list(client.list_blobs(bucket)) |
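Since direct use of this method is deprecated, a sketch built on Client.list_blobs instead (helper name hypothetical) shows the delimiter behavior: with delimiter="/", object names containing a further "/" are reported via the iterator's prefixes attribute rather than as blobs, and that attribute only populates once the iterator has been consumed.

```python
def list_top_level(client, bucket_name):
    """List top-level objects and simulated 'directories' of a bucket.

    `client` is assumed to be a google.cloud.storage Client (or any object
    exposing the same list_blobs contract).
    """
    iterator = client.list_blobs(bucket_name, delimiter="/")
    blobs = [blob.name for blob in iterator]  # consumes the iterator
    return blobs, sorted(iterator.prefixes)   # prefixes populate during iteration
```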
list_notifications
list_notifications(client=None,timeout=60,retry=<google.api_core.retry.Retry object>)List Pub/Sub notifications for this bucket.
See:https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Returns | |
|---|---|
| Type | Description |
list of BucketNotification | notification instances. |
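A small sketch of consuming that list (helper name hypothetical; `notification_id` and `topic_name` are properties of the returned BucketNotification instances):

```python
def notification_topics(bucket):
    """Map each notification configuration's ID to its Pub/Sub topic name.

    `bucket` is assumed to be a google.cloud.storage Bucket (or any object
    exposing the same list_notifications contract).
    """
    return {
        n.notification_id: n.topic_name
        for n in bucket.list_notifications()  # API request on a real Bucket
    }
```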
lock_retention_policy
lock_retention_policy(client=None,timeout=60,retry=<google.api_core.retry.Retry object>)Lock the bucket's retention policy.
| Parameters | |
|---|---|
| Name | Description |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Exceptions | |
|---|---|
| Type | Description |
ValueError | if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket's retention policy is already locked. |
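Given the ValueError conditions above, a sketch of locking a policy defensively (helper name hypothetical; `retention_policy_locked` is a Bucket property, and locking is irreversible):

```python
def lock_retention(bucket):
    """Lock a bucket's retention policy after refreshing its state.

    Reloading first ensures the metageneration is known; checking
    retention_policy_locked avoids the ValueError raised when the
    policy is already locked.
    """
    bucket.reload()  # fetch metageneration and current policy (API request)
    if bucket.retention_policy_locked:
        return False  # nothing to do
    bucket.lock_retention_policy()  # API request; cannot be undone
    return True
```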
make_private
make_private(recursive=False,future=False,client=None,timeout=60,if_metageneration_match=None,if_metageneration_not_match=None,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Update bucket's ACL, revoking read access for anonymous users.
| Parameters | |
|---|---|
| Name | Description |
recursive | boolIf True, this will make all blobs inside the bucket private as well. |
future | boolIf True, this will make all objects created in the future private as well. |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
if_metageneration_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Exceptions | |
|---|---|
| Type | Description |
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_private for each blob. |
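The per-blob fallback the ValueError description suggests can be sketched like this (helper name hypothetical; Blob objects also expose a make_private method):

```python
def force_private(bucket):
    """Make a bucket and its blobs private, however many blobs it holds.

    make_private(recursive=True) raises ValueError on buckets with more
    than 256 blobs; this falls back to updating each blob's ACL in turn.
    """
    try:
        bucket.make_private(recursive=True, future=True)
    except ValueError:
        bucket.make_private(future=True)  # bucket-level ACL only
        for blob in bucket.list_blobs():  # then each blob individually
            blob.make_private()
```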
make_public
make_public(recursive=False,future=False,client=None,timeout=60,if_metageneration_match=None,if_metageneration_not_match=None,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Update bucket's ACL, granting read access to anonymous users.
| Parameters | |
|---|---|
| Name | Description |
recursive | boolIf True, this will make all blobs inside the bucket public as well. |
future | boolIf True, this will make all objects created in the future public as well. |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
if_metageneration_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Exceptions | |
|---|---|
| Type | Description |
ValueError | If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs and call make_public for each blob. |
notification
notification(topic_name=None,topic_project=None,custom_attributes=None,event_types=None,blob_name_prefix=None,payload_format="NONE",notification_id=None)Factory: create a notification resource for the bucket.
See BucketNotification for parameters.
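Since this is only a factory, the returned resource exists locally until its create() method is called. A sketch (helper name and topic hypothetical; the event type and payload format values follow the notifications API):

```python
def add_finalize_notification(bucket, topic_name):
    """Create an OBJECT_FINALIZE notification for the bucket.

    bucket.notification(...) builds the BucketNotification resource
    locally; create() sends the actual API request.
    """
    notification = bucket.notification(
        topic_name=topic_name,
        event_types=["OBJECT_FINALIZE"],
        payload_format="JSON_API_V1",
    )
    notification.create()  # API request on a real bucket
    return notification
```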
patch
patch(client=None,timeout=60,if_metageneration_match=None,if_metageneration_not_match=None,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Sends all changed properties in a PATCH request.
Updates the _properties with the response from the backend.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
if_metageneration_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
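A sketch of a guarded patch (helper name hypothetical): stage a change locally, then send only the changed fields, using the metageneration precondition so the write fails rather than clobbering a concurrent update.

```python
def set_bucket_labels(bucket, labels):
    """Stage label changes locally, then PATCH only the changed fields."""
    bucket.reload()         # pick up the current metageneration (API request)
    bucket.labels = labels  # recorded as a pending local change
    bucket.patch(if_metageneration_match=bucket.metageneration)  # API request
```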
path_helper
path_helper(bucket_name)Relative URL path for a bucket.
| Parameter | |
|---|---|
| Name | Description |
bucket_name | strThe bucket name in the path. |
| Returns | |
|---|---|
| Type | Description |
str | The relative URL path for bucket_name. |
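The helper's behavior can be mirrored by a plain function, a sketch of the relative-URL scheme (verify against your installed version):

```python
def bucket_path(bucket_name):
    # Mirrors Bucket.path_helper: the bucket's relative URL path.
    return "/b/" + bucket_name

print(bucket_path("my-bucket"))  # /b/my-bucket
```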
reload
reload(client=None,projection='noAcl',timeout=60,if_etag_match=None,if_etag_not_match=None,if_metageneration_match=None,if_metageneration_not_match=None,retry=<google.api_core.retry.Retry object>)Reload properties from Cloud Storage.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
projection | str(Optional) If used, must be 'full' or 'noAcl'. Defaults to |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
if_etag_match | Union[str, Set[str]](Optional) Make the operation conditional on whether the bucket's current ETag matches the given value. |
if_etag_not_match | Union[str, Set[str]](Optional) Make the operation conditional on whether the bucket's current ETag does not match the given value. |
if_metageneration_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
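A short sketch of the projection and timeout parameters (helper name hypothetical): the default "noAcl" projection omits ACL fields, and a tuple timeout is interpreted as (connect, read) seconds.

```python
def reload_full(bucket):
    """Refresh all bucket metadata, ACL fields included."""
    bucket.reload(projection="full", timeout=(3.05, 10))  # API request
```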
rename_blob
rename_blob(blob,new_name,client=None,if_generation_match=None,if_generation_not_match=None,if_metageneration_match=None,if_metageneration_not_match=None,if_source_generation_match=None,if_source_generation_not_match=None,if_source_metageneration_match=None,if_source_metageneration_not_match=None,timeout=60,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Rename the given blob using copy and delete operations.
If user_project is set, bills the API request to that project.
Effectively, copies the blob to the same bucket with a new name, then deletes the blob.
Warning: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming could be a (temporarily) very costly or very slow operation. If you need more control over the copy and deletion, instead use <xref uid="google.cloud.storage.blob.Blob">google.cloud.storage.blob.Blob</xref>.copy_to and <xref uid="google.cloud.storage.blob.Blob.delete">google.cloud.storage.blob.Blob.delete</xref> directly.| Parameters | |
|---|---|
| Name | Description |
blob | BlobThe blob to be renamed. |
new_name | strThe new name for this blob. |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
if_generation_match | long(Optional) See :ref: |
if_generation_not_match | long(Optional) See :ref: |
if_metageneration_match | long(Optional) See :ref: |
if_metageneration_not_match | long(Optional) See :ref: |
if_source_generation_match | long(Optional) Makes the operation conditional on whether the source object's generation matches the given value. Also used in the (implied) delete request. |
if_source_generation_not_match | long(Optional) Makes the operation conditional on whether the source object's generation does not match the given value. Also used in the (implied) delete request. |
if_source_metageneration_match | long(Optional) Makes the operation conditional on whether the source object's current metageneration matches the given value. Also used in the (implied) delete request. |
if_source_metageneration_not_match | long(Optional) Makes the operation conditional on whether the source object's current metageneration does not match the given value. Also used in the (implied) delete request. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Returns | |
|---|---|
| Type | Description |
Blob | The newly-renamed blob. |
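Because a rename is a copy followed by a delete, the generation preconditions can guard both ends. A sketch (helper name hypothetical): if_generation_match=0 requires that no object already exists under new_name, and if_source_generation_match pins the exact source object that is copied and then deleted.

```python
def rename_guarded(bucket, blob, new_name):
    """Rename a blob with preconditions on both ends of the copy."""
    return bucket.rename_blob(
        blob,
        new_name,
        if_generation_match=0,                       # destination must not exist
        if_source_generation_match=blob.generation,  # pin the source object
    )
```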
set_iam_policy
set_iam_policy(policy,client=None,timeout=60,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
policy | policy instance used to update bucket's IAM policy. |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Returns | |
|---|---|
| Type | Description |
| the policy instance, based on the resource returned from the setIamPolicy API request. |
test_iam_permissions
test_iam_permissions(permissions,client=None,timeout=60,retry=<google.api_core.retry.Retry object>)API call: test permissions
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
permissions | list of stringthe permissions to check |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
| Returns | |
|---|---|
| Type | Description |
list of string | the permissions returned by the testIamPermissions API request. |
update
update(client=None,timeout=60,if_metageneration_match=None,if_metageneration_not_match=None,retry=<google.cloud.storage.retry.ConditionalRetryPolicy object>)Sends all properties in a PUT request.
Updates the _properties with the response from the backend.
If user_project is set, bills the API request to that project.
| Parameters | |
|---|---|
| Name | Description |
client | Client or None(Optional) The client to use. If not passed, falls back to the client stored on the current bucket. |
timeout | float or tuple(Optional) The amount of time, in seconds, to wait for the server response. See: |
if_metageneration_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration matches the given value. |
if_metageneration_not_match | long(Optional) Make the operation conditional on whether the bucket's current metageneration does not match the given value. |
retry | google.api_core.retry.Retry orgoogle.cloud.storage.retry.ConditionalRetryPolicy(Optional) How to retry the RPC. See: |
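Unlike patch(), update() sends every property in a full PUT, so stale local state can overwrite server-side values. A sketch of a safe pattern (helper name hypothetical): reload first, make the change, and guard with a metageneration precondition.

```python
def enable_versioning(bucket):
    """Enable versioning via a full PUT of the bucket's properties."""
    bucket.reload()  # refresh local state first (API request)
    bucket.versioning_enabled = True
    bucket.update(if_metageneration_match=bucket.metageneration)  # API request
```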
Except as otherwise noted, the content of this page is licensed under theCreative Commons Attribution 4.0 License, and code samples are licensed under theApache 2.0 License. For details, see theGoogle Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-11-05 UTC.