
Class: Aws::S3::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb

Overview

An API client for S3. To construct a client, you need to configure a :region and :credentials.

client = Aws::S3::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials, see the developer guide.

See #initialize for a full list of supported configuration options.
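
For example, a client with statically configured credentials might be constructed as follows. This is a minimal sketch; the region and key values are placeholders.

require 'aws-sdk-s3'

# Static, non-refreshing credentials; the values here are placeholders.
credentials = Aws::Credentials.new('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

client = Aws::S3::Client.new(
  region: 'us-east-1',
  credentials: credentials
)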

Instance Attribute Summary

Attributes inherited from Seahorse::Client::Base

#config, #handlers

API Operations

Instance Method Summary

Methods included from ClientStubs

#api_requests, #stub_data, #stub_responses

Methods inherited from Seahorse::Client::Base

add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins

Methods included from Seahorse::Client::HandlerBuilder

#handle, #handle_request, #handle_response

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials used for authentication. This can be any class that includes and implements Aws::CredentialProvider, or an instance of any one of the following classes:

    • Aws::Credentials - Used for configuring static, non-refreshing credentials.

    • Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.

    • Aws::AssumeRoleCredentials - Used when you need to assume a role.

    • Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.

    • Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.

    • Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.

    • Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • Aws::ECSCredentials - Used for loading credentials from instances running in ECS.

    • Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.

    When :credentials are not configured directly, the following locations will be searched for credentials:

    • Aws.config[:credentials]

    • The :access_key_id, :secret_access_key, :session_token, and :account_id options.

    • ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID'].

    • ~/.aws/credentials

    • ~/.aws/config

    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.

  • :region (required, String)

    The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:

    • Aws.config[:region]
    • ENV['AWS_REGION']
    • ENV['AMAZON_REGION']
    • ENV['AWS_DEFAULT_REGION']
    • ~/.aws/credentials
    • ~/.aws/config
  • :access_grants (Boolean) — default: false

    When true, the S3 client will use the S3 Access Grants feature to authenticate requests. Bucket credentials will be fetched from S3 Control using the get_data_access API.

  • :access_grants_credentials_provider (Aws::S3::AccessGrantsCredentialsProvider)

    When access_grants is true, this option can be used to provide additional options to the credentials provider, including a privilege setting, caching, and fallback behavior.

  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to true, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to false.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in adaptive retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError and will not retry instead of sleeping.

  • :auth_scheme_preference (Array<String>)

    A list of preferred authentication schemes to use when making a request. Supported values are: sigv4, sigv4a, httpBearerAuth, and noAuth. When set using ENV['AWS_AUTH_SCHEME_PREFERENCE'] or in shared config as auth_scheme_preference, the value should be a comma-separated list.

  • :client_side_monitoring (Boolean) — default: false

    When true, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :compute_checksums (Boolean) — default: true

    This option is deprecated. Please use :request_checksum_calculation instead. When false, request_checksum_calculation is overridden to when_required.

  • :convert_params (Boolean) — default: true

    When true, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in standard and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    When true, the SDK will not prepend the modeled host prefix to the endpoint.

  • :disable_request_compression (Boolean) — default: false

    When set to true, the request body will not be compressed for supported operations.

  • :disable_s3_express_session_auth (Boolean)

    Parameter to indicate whether S3 Express session auth should be disabled.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the :endpoint option directly. This is normally constructed from the :region option. Configuring :endpoint is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    • 'http://example.com'
    • 'https://example.com'
    • 'http://example.com:123'
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Used for the maximum threads in use for polling endpoints to be cached. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval in seconds for making requests to fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to true, endpoint discovery will be enabled for operations when available.

  • :event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it will be used as a callback for each chunk of event stream response received along the way.

  • :express_credentials_provider (Aws::S3::ExpressCredentialsProvider)

    Credential Provider for S3 Express endpoints. Manages credentials for different buckets.

  • :follow_redirects (Boolean) — default: true

    When true, this client will follow 307 redirects returned by Amazon S3.

  • :force_path_style (Boolean) — default: false

    When set to true, the bucket name is always left in the request URI and never moved to the host as a sub-domain.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :input_event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it can be used for sending events for the event stream.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the :logger at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in standard and adaptive retry modes.

  • :output_event_stream_handler (Proc)

    When an EventStream or Proc object is provided, it will be used as a callback for each chunk of event stream response received along the way.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.

  • :request_checksum_calculation (String) — default: "when_supported"

    Determines when a checksum will be calculated for request payloads. Values are:

    • when_supported - (default) When set, a checksum will be calculated for all request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true and/or a requestAlgorithmMember is modeled.
    • when_required - When set, a checksum will only be calculated for request payloads of operations modeled with the httpChecksum trait where requestChecksumRequired is true or where a requestAlgorithmMember is modeled and supplied.
  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :require_https_for_sse_cpk (Boolean) — default: true

    When true, the endpoint must be HTTPS for all operations where server-side encryption is used with customer-provided keys. This should only be disabled for local testing.

  • :response_checksum_validation (String) — default: "when_supported"

    Determines when checksum validation will be performed on response payloads. Values are:

    • when_supported - (default) When set, checksum validation is performed on all response payloads of operations modeled with the httpChecksum trait where responseAlgorithms is modeled, except when no modeled checksum algorithms are supported.
    • when_required - When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the requestValidationModeMember member is set to ENABLED.
  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the legacy retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name (:none, :equal, :full); otherwise, provide a Proc that takes and returns a number. This option is only used in the legacy retry mode.

    See https://www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500 level server errors and certain ~400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the legacy retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the legacy retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client-side throttling. This is a provisional mode that may change behavior in the future.

  • :s3_disable_multiregion_access_points (Boolean) — default: false

    When set to true, this option will raise errors when multi-region access point ARNs are used. Multi-region access points can potentially result in cross-region requests.

  • :s3_us_east_1_regional_endpoint (String) — default: "legacy"

    Pass in regional to enable the us-east-1 regional endpoint. Defaults to legacy mode, which uses the global endpoint.

  • :s3_use_arn_region (Boolean) — default: true

    For S3 ARNs passed into the :bucket parameter, this option will use the region in the ARN, allowing for cross-region requests to be made. Set to false to use the client's region instead.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default :sigv4a_signing_region_set is searched for in the following locations:

    • Aws.config[:sigv4a_signing_region_set]
    • ENV['AWS_SIGV4A_SIGNING_REGION_SET']
    • ~/.aws/config
  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information, and see the sketch after #initialize below for a usage example.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider, which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider for the telemetry provider.
  • :token_provider (Aws::TokenProvider)

    Your Bearer token used for authentication. This can be any class that includes and implements Aws::TokenProvider, or an instance of any one of the following classes:

    • Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.

    • Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

    When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.

  • :use_accelerate_endpoint (Boolean) — default: false

    When set to true, accelerated bucket endpoints will be used for all object operations. You must first enable accelerate for each bucket. See the Amazon S3 Transfer Acceleration documentation for more information.

  • :use_dualstack_endpoint (Boolean)

    When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to true, FIPS compatible endpoints will be used if available. When a fips region is used, the region is normalized and this config is set to true.

  • :validate_params (Boolean) — default: true

    When true, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::S3::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to #resolve_endpoint(parameters) where parameters is a Struct similar to Aws::S3::EndpointParameters.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has the "Expect" header set to "100-continue". Defaults to nil, which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like 'http://proxy.com:123'.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When true, HTTP debug output will be sent to the :logger.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as a callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a content-length).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as a callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When true, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass :ssl_ca_bundle or :ssl_ca_directory, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store used to verify peer certificates.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating HTTP connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating HTTP connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When true, SSL peer certificates are verified when establishing a connection.

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 577

def initialize(*args)
  super
end
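
As a usage illustration, here is a minimal sketch combining several of the options above: an instance profile credential provider with retries and longer timeouts (as suggested in the :credentials documentation), the standard retry mode, and response stubbing for tests. The option values shown are arbitrary examples, not recommendations.

require 'aws-sdk-s3'

# Instance profile credentials with retries and extended timeouts.
imds_credentials = Aws::InstanceProfileCredentials.new(
  retries: 3,
  http_open_timeout: 2,
  http_read_timeout: 2
)

client = Aws::S3::Client.new(
  region: 'us-west-2',
  credentials: imds_credentials,
  retry_mode: 'standard',
  max_attempts: 5
)

# A separate client with stubbed responses for testing; no HTTP
# requests are made and retries are disabled.
test_client = Aws::S3::Client.new(stub_responses: true)
test_client.stub_responses(:list_buckets, buckets: [{ name: 'my-bucket' }])
test_client.list_buckets.buckets.map(&:name) #=> ["my-bucket"]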

Instance Method Details

#abort_multipart_upload(params = {}) ⇒ Types::AbortMultipartUploadOutput

This operation aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.

To verify that all parts have been removed and prevent getting charged for the part storage, you should call the ListParts API operation and ensure that the parts list is empty, as sketched below.
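
As a rough sketch of that verification (assuming client, bucket, key, and upload_id are already defined):

client.abort_multipart_upload(bucket: bucket, key: key, upload_id: upload_id)

begin
  resp = client.list_parts(bucket: bucket, key: key, upload_id: upload_id)
  # If parts remain, repeat the abort until the parts list is empty.
  warn "#{resp.parts.size} part(s) still present" unless resp.parts.empty?
rescue Aws::S3::Errors::NoSuchUpload
  # The upload no longer exists, so no part storage remains.
end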

Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. To delete these in-progress multipart uploads, use the ListMultipartUploads operation to list the in-progress multipart uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - For information about permissions required to use the multipart upload, see Multipart Upload and Permissions in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to AbortMultipartUpload: CreateMultipartUpload, UploadPart, CompleteMultipartUpload, ListParts, and ListMultipartUploads.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To abort a multipart upload

# The following example aborts a multipart upload.
resp = client.abort_multipart_upload({
  bucket: "examplebucket",
  key: "bigobject",
  upload_id: "xadcOB_7YPBOJuoFiQ9cz4P3Pe6FIZwO4f7wN93uHsNBEw97pl5eNwzExg0LAT2dUN91cOmrEQHDsP3WA60CEg--",
})

# resp.to_h outputs the following:
{
}

Request syntax with placeholder values

resp = client.abort_multipart_upload({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  upload_id: "MultipartUploadId", # required
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  if_match_initiated_time: Time.now,
})

Response structure

resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name to which the upload was taking place.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key (required, String)

    Key of the object for which the multipart upload was initiated.

  • :upload_id (required, String)

    Upload ID that identifies the multipart upload.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :if_match_initiated_time (Time, DateTime, Date, Integer, String)

    If present, this header aborts an in-progress multipart upload only if it was initiated on the provided timestamp. If the initiated timestamp of the multipart upload does not match the provided value, the operation returns a 412 Precondition Failed error. If the initiated timestamp matches or if the multipart upload doesn't exist, the operation returns a 204 Success (No Content) response.

    This functionality is only supported for directory buckets.

Returns:

  • (Types::AbortMultipartUploadOutput) — Returns a response object which responds to the following members:

    • #request_charged => String

See Also:

  • AWS API Documentation

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 792

def abort_multipart_upload(params = {}, options = {})
  req = build_request(:abort_multipart_upload, params)
  req.send_request(options)
end

#complete_multipart_upload(params = {}) ⇒ Types::CompleteMultipartUploadOutput

Completes a multipart upload by assembling previously uploaded parts.

You first initiate the multipart upload and then upload all parts using the UploadPart operation or the UploadPartCopy operation. After successfully uploading all relevant parts of an upload, you call this CompleteMultipartUpload operation to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the CompleteMultipartUpload request, you must provide the parts list and ensure that the parts list is complete. The CompleteMultipartUpload API operation concatenates the parts that you provide in the list. For each part in the list, you must provide the PartNumber value and the ETag value that are returned after that part was uploaded. A sketch of assembling this parts list follows.
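
As a rough sketch of that flow (assuming client and bucket are defined and body_parts is an array of part payloads of valid sizes), the PartNumber/ETag pairs are collected from each upload_part response:

create = client.create_multipart_upload(bucket: bucket, key: 'bigobject')

parts = body_parts.each_with_index.map do |body, i|
  part = client.upload_part(
    bucket: bucket,
    key: 'bigobject',
    upload_id: create.upload_id,
    part_number: i + 1, # part numbers start at 1
    body: body
  )
  { part_number: i + 1, etag: part.etag }
end

client.complete_multipart_upload(
  bucket: bucket,
  key: 'bigobject',
  upload_id: create.upload_id,
  multipart_upload: { parts: parts }
)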

The processing of a CompleteMultipartUpload request could take several minutes to finalize. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white space characters to keep the connection from timing out. A request could fail after the initial 200 OK response has been sent. This means that a 200 OK response can contain either a success or an error. The error response might be embedded in the 200 OK response. If you call this API operation directly, make sure to design your application to parse the contents of the response and handle it appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply error handling per your configuration settings (including automatically retrying the request as appropriate). If the condition persists, the SDKs throw an exception (or, for the SDKs that don't use exceptions, they return an error).

Note that if CompleteMultipartUpload fails, applications should be prepared to retry any failed requests (including 500 error responses). For more information, see Amazon S3 Error Best Practices.

You can't use Content-Type: application/x-www-form-urlencoded for the CompleteMultipartUpload requests. Also, if you don't provide a Content-Type header, CompleteMultipartUpload can still return a 200 OK response.

For more information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions in the Amazon S3 User Guide.

    If you provide an additional checksum value in your MultipartUpload requests and the object is encrypted with Key Management Service, you must have permission to use the kms:Decrypt action for the CompleteMultipartUpload request to succeed.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

Special errors
  • Error Code: EntityTooSmall

    • Description: Your proposed upload is smaller than the minimum allowed object size. Each part must be at least 5 MB in size, except the last part.

    • HTTP Status Code: 400 Bad Request

  • Error Code: InvalidPart

    • Description: One or more of the specified parts could not be found. The part might not have been uploaded, or the specified ETag might not have matched the uploaded part's ETag.

    • HTTP Status Code: 400 Bad Request

  • Error Code: InvalidPartOrder

    • Description: The list of parts was not in ascending order. The parts list must be specified in order by part number.

    • HTTP Status Code: 400 Bad Request

  • Error Code: NoSuchUpload

    • Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.

    • HTTP Status Code: 404 Not Found

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to CompleteMultipartUpload: CreateMultipartUpload, UploadPart, AbortMultipartUpload, ListParts, and ListMultipartUploads.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To complete multipart upload

# The following example completes a multipart upload.
resp = client.complete_multipart_upload({
  bucket: "examplebucket",
  key: "bigobject",
  multipart_upload: {
    parts: [
      {
        etag: "\"d8c2eafd90c266e19ab9dcacc479f8af\"",
        part_number: 1,
      },
      {
        etag: "\"d8c2eafd90c266e19ab9dcacc479f8af\"",
        part_number: 2,
      },
    ],
  },
  upload_id: "7YPBOJuoFiQ9cz4P3Pe6FIZwO4f7wN93uHsNBEw97pl5eNwzExg0LAT2dUN91cOmrEQHDsP3WA60CEg--",
})

# resp.to_h outputs the following:
{
  bucket: "acexamplebucket",
  etag: "\"4d9031c7644d8081c2829f4ea23c55f7-2\"",
  key: "bigobject",
  location: "https://examplebucket.s3.<Region>.amazonaws.com/bigobject",
}

Request syntax with placeholder values

resp = client.complete_multipart_upload({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  multipart_upload: {
    parts: [
      {
        etag: "ETag",
        checksum_crc32: "ChecksumCRC32",
        checksum_crc32c: "ChecksumCRC32C",
        checksum_crc64nvme: "ChecksumCRC64NVME",
        checksum_sha1: "ChecksumSHA1",
        checksum_sha256: "ChecksumSHA256",
        part_number: 1,
      },
    ],
  },
  upload_id: "MultipartUploadId", # required
  checksum_crc32: "ChecksumCRC32",
  checksum_crc32c: "ChecksumCRC32C",
  checksum_crc64nvme: "ChecksumCRC64NVME",
  checksum_sha1: "ChecksumSHA1",
  checksum_sha256: "ChecksumSHA256",
  checksum_type: "COMPOSITE", # accepts COMPOSITE, FULL_OBJECT
  mpu_object_size: 1,
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  if_match: "IfMatch",
  if_none_match: "IfNoneMatch",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
})

Response structure

resp.location #=> String
resp.bucket #=> String
resp.key #=> String
resp.expiration #=> String
resp.etag #=> String
resp.checksum_crc32 #=> String
resp.checksum_crc32c #=> String
resp.checksum_crc64nvme #=> String
resp.checksum_sha1 #=> String
resp.checksum_sha256 #=> String
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.version_id #=> String
resp.ssekms_key_id #=> String
resp.bucket_key_enabled #=> Boolean
resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    Name of the bucket to which the multipart upload was initiated.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key (required, String)

    Object key for which the multipart upload was initiated.

  • :multipart_upload (Types::CompletedMultipartUpload)

    The container for the multipart upload request information.

  • :upload_id (required, String)

    ID for the initiated multipart upload.

  • :checksum_crc32 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc32c (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc64nvme (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME checksum of the object. The CRC64NVME checksum is always a full object checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha1 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha256 (String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_type (String)

    This header specifies the checksum type of the object, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header as a data integrity check to verify that the checksum type that is received is the same checksum that was specified. If the checksum type doesn't match the checksum type that was specified for the object during the CreateMultipartUpload request, it'll result in a BadDigest error. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :mpu_object_size (Integer)

    The expected total object size of the multipart upload request. If there's a mismatch between the specified object size value and the actual object size value, it results in an HTTP 400 InvalidRequest error.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :if_match (String)

    Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed error.

    If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should fetch the object's ETag, re-initiate the multipart upload with CreateMultipartUpload, and re-upload each part.

    Expects the ETag value as a string.

    For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.

  • :if_none_match (String)

    Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed error.

    If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should re-initiate the multipart upload with CreateMultipartUpload and re-upload each part.

    Expects the '*' (asterisk) character.

    For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.

  • :sse_customer_algorithm (String)

    The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is required only when the object was created using a checksum algorithm or if your bucket policy requires the use of SSE-C. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key (String)

    The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5 (String)

    The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

Returns:

  • (Types::CompleteMultipartUploadOutput) — Returns a response object which responds to the members listed in the response structure above.

See Also:

  • AWS API Documentation

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 1292

def complete_multipart_upload(params = {}, options = {})
  req = build_request(:complete_multipart_upload, params)
  req.send_request(options)
end

#copy_object(params = {}) ⇒ Types::CopyObjectOutput

Creates a copy of an object that is already stored in Amazon S3.

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

You can store individual objects of up to 50 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic action using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API, as sketched below. For more information, see Copy Object Using the REST Multipart Upload API.
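
As a rough sketch of the multipart copy flow for objects over 5 GB (assuming client, src_bucket, src_key, dest_bucket, dest_key, and total_size, the source object size in bytes, are defined):

part_size = 5 * 1024 * 1024 * 1024 # copy in 5 GB ranges
create = client.create_multipart_upload(bucket: dest_bucket, key: dest_key)

parts = (0...total_size).step(part_size).each_with_index.map do |offset, i|
  last_byte = [offset + part_size, total_size].min - 1
  part = client.upload_part_copy(
    bucket: dest_bucket,
    key: dest_key,
    upload_id: create.upload_id,
    part_number: i + 1,
    copy_source: "#{src_bucket}/#{src_key}",
    copy_source_range: "bytes=#{offset}-#{last_byte}"
  )
  { part_number: i + 1, etag: part.copy_part_result.etag }
end

client.complete_multipart_upload(
  bucket: dest_bucket,
  key: dest_key,
  upload_id: create.upload_id,
  multipart_upload: { parts: parts }
)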

You can copy individual objects between general purpose buckets, between directory buckets, and between general purpose buckets and directory buckets.

  • Amazon S3 supports copy operations using Multi-Region Access Points only as a destination when using the Multi-Region Access Point ARN.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

  • VPC endpoints don't support cross-Region requests (including copies). If you're using VPC endpoints, your source and destination buckets should be in the same Amazon Web Services Region as your VPC endpoint.

Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable or disable a Region for standalone accounts in the Amazon Web Services Account Management Guide.

Amazon S3 transfer acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.

Authentication and authorization

All CopyObject requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject API operation, instead of using the temporary security credentials through the CreateSession API operation.

Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.

Permissions

You must have read access to the source object and write access to the destination bucket.

  • General purpose bucket permissions - You must have permissions in an IAM policy based on the source and destination bucket types in a CopyObject operation.

    • If the source object is in a general purpose bucket, you must have s3:GetObject permission to read the source object that is being copied.

    • If the destination bucket is a general purpose bucket, you must have s3:PutObject permission to write the object copy to the destination bucket.

  • Directory bucket permissions - You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination bucket types in a CopyObject operation.

    • If the source object that you want to copy is in a directory bucket, you must have the s3express:CreateSession permission in the Action element of a policy to read the object. By default, the session is in the ReadWrite mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the copy source bucket.

    • If the copy destination is a directory bucket, you must have the s3express:CreateSession permission in the Action element of a policy to write the object to the destination. The s3express:SessionMode condition key can't be set to ReadOnly on the copy destination bucket. If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

    For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.

Response and special errors

When the request is an HTTP 1.1 request, the response is chunk encoded. When the request is not an HTTP 1.1 request, the response would not contain the Content-Length. You always need to read the entire response body to check if the copy succeeds.

  • If the copy is successful, you receive a response with information about the copied object.

  • A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. A 200 OK response can contain either a success or an error; see the error-handling sketch after this list.

    • If the error occurs before the copy action starts, you receive a standard Amazon S3 error.

    • If the error occurs during the copy operation, the error response is embedded in the 200 OK response. For example, in a cross-region copy, you may encounter throttling and receive a 200 OK response. For more information, see Resolve the Error 200 response when copying objects to Amazon S3. The 200 OK status code means the copy was accepted, but it doesn't mean the copy is complete. Another example is when you disconnect from Amazon S3 before the copy is complete, Amazon S3 might cancel the copy and you may receive a 200 OK response. You must stay connected to Amazon S3 until the entire response is successfully received and processed.

      If you call this API operation directly, make sure to design your application to parse the content of the response and handle it appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the embedded error and apply error handling per your configuration settings (including automatically retrying the request as appropriate). If the condition persists, the SDKs throw an exception (or, for the SDKs that don't use exceptions, they return an error).
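
In the Ruby SDK, an embedded error in a 200 OK response surfaces as a raised service error once the SDK's own retry handling is exhausted, so it is enough to rescue around the call. A minimal sketch, assuming client and the bucket and key names are defined:

begin
  resp = client.copy_object(
    bucket: dest_bucket,
    copy_source: "#{src_bucket}/#{src_key}",
    key: dest_key
  )
  resp.copy_object_result.etag # available on success
rescue Aws::S3::Errors::ServiceError => e
  # Client-configured retries have already been applied by this point.
  warn "copy failed: #{e.code}: #{e.message}"
end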

Charge

The copy request charge is based on the storage class and Region that you specify for the destination object. The request can also result in a data retrieval charge for the source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see Amazon S3 pricing.

HTTP Host header syntax
  • Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

  • Amazon S3 on Outposts - When you use this action with S3 on Outposts through the REST API, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. The hostname isn't required when you use the Amazon Web Services CLI or SDKs.

The following operations are related to CopyObject: PutObject and GetObject.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To copy an object

# The following example copies an object from one bucket to another.
resp = client.copy_object({
  bucket: "destinationbucket",
  copy_source: "/sourcebucket/HappyFacejpg",
  key: "HappyFaceCopyjpg",
})

# resp.to_h outputs the following:
{
  copy_object_result: {
    etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
    last_modified: Time.parse("2016-12-15T17:38:53.000Z"),
  },
}

Request syntax with placeholder values

resp = client.copy_object({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control
  bucket: "BucketName", # required
  cache_control: "CacheControl",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  content_disposition: "ContentDisposition",
  content_encoding: "ContentEncoding",
  content_language: "ContentLanguage",
  content_type: "ContentType",
  copy_source: "CopySource", # required
  copy_source_if_match: "CopySourceIfMatch",
  copy_source_if_modified_since: Time.now,
  copy_source_if_none_match: "CopySourceIfNoneMatch",
  copy_source_if_unmodified_since: Time.now,
  expires: Time.now,
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write_acp: "GrantWriteACP",
  if_match: "IfMatch",
  if_none_match: "IfNoneMatch",
  key: "ObjectKey", # required
  metadata: {
    "MetadataKey" => "MetadataValue",
  },
  metadata_directive: "COPY", # accepts COPY, REPLACE
  tagging_directive: "COPY", # accepts COPY, REPLACE
  server_side_encryption: "AES256", # accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
  storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
  website_redirect_location: "WebsiteRedirectLocation",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  ssekms_key_id: "SSEKMSKeyId",
  ssekms_encryption_context: "SSEKMSEncryptionContext",
  bucket_key_enabled: false,
  copy_source_sse_customer_algorithm: "CopySourceSSECustomerAlgorithm",
  copy_source_sse_customer_key: "CopySourceSSECustomerKey",
  copy_source_sse_customer_key_md5: "CopySourceSSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  tagging: "TaggingHeader",
  object_lock_mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
  object_lock_retain_until_date: Time.now,
  object_lock_legal_hold_status: "ON", # accepts ON, OFF
  expected_bucket_owner: "AccountId",
  expected_source_bucket_owner: "AccountId",
})

Response structure

resp.copy_object_result.etag #=> String
resp.copy_object_result.last_modified #=> Time
resp.copy_object_result.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.copy_object_result.checksum_crc32 #=> String
resp.copy_object_result.checksum_crc32c #=> String
resp.copy_object_result.checksum_crc64nvme #=> String
resp.copy_object_result.checksum_sha1 #=> String
resp.copy_object_result.checksum_sha256 #=> String
resp.expiration #=> String
resp.copy_source_version_id #=> String
resp.version_id #=> String
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.ssekms_encryption_context #=> String
resp.bucket_key_enabled #=> Boolean
resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :acl (String)

    The canned access control list (ACL) to apply to the object.

    When you copy an object, the ACL metadata is not preserved and is set to private by default. Only the owner has full access control. To override the default ACL setting, specify a new ACL when you generate a copy request. For more information, see Using ACLs.

    If the destination bucket that you're copying objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    • If your destination bucket uses the bucket owner enforced setting for Object Ownership, all objects written to the bucket by any account will be owned by the bucket owner.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :bucket (required, String)

    The name of the destination bucket.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Copying objects across different Amazon Web Services Regions isn't supported when the source or destination bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon Web Services Region. Otherwise, you get an HTTP 400 Bad Request error with the error code InvalidRequest.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must use the Outpost bucket access point ARN or the access point alias for the destination bucket. You can only copy objects within the same Outpost bucket. It's not supported to copy objects across different Amazon Web Services Outposts, between buckets on the same Outposts, or between Outposts buckets and any other bucket types. For more information about S3 on Outposts, see What is S3 on Outposts? in the S3 on Outposts guide. When you use this action with S3 on Outposts through the REST API, you must direct requests to the S3 on Outposts hostname, in the format AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. The hostname isn't required when you use the Amazon Web Services CLI or SDKs.

  • :cache_control(String)

    Specifies the caching behavior along the request/reply chain.

  • :checksum_algorithm(String)

    Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

    When you copy an object, if the source object has a checksum, that checksum value will be copied to the new object by default. If the CopyObject request does not include this x-amz-checksum-algorithm header, the checksum algorithm will be copied from the source object to the destination object (if it's present on the source object). You can optionally specify a different checksum algorithm to use with the x-amz-checksum-algorithm header. Unrecognized or unsupported values will respond with the HTTP status code 400 Bad Request.

    For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.

  • :content_disposition(String)

    Specifies presentational information for the object. Indicates whether an object should be displayed in a web browser or downloaded as a file. It allows specifying the desired filename for the downloaded file.

  • :content_encoding(String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

    For directory buckets, only the aws-chunked value is supported in this header field.

  • :content_language(String)

    The language the content is in.

  • :content_type(String)

    A standard MIME type that describes the format of the object data.

  • :copy_source(required,String)

    Specifies the source object for the copy operation. The source object can be up to 5 GB. If the source object is an object that was uploaded by using a multipart upload, the object copy will be a single part object after the source object is copied to the destination bucket.

    You specify the value of the copy source in one of two formats, depending on whether you want to access the source object through an access point:

    • For objects not accessed through an access point, specify the name of the source bucket and the key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the general purpose bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded. To copy the object reports/january.pdf from the directory bucket awsexamplebucket--use1-az5--x-s3, use awsexamplebucket--use1-az5--x-s3/reports/january.pdf. The value must be URL-encoded.

    • For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL-encoded.

      • Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.

      • Access points are not supported by directory buckets.

      Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.

    If your source bucket versioning is enabled, the x-amz-copy-source header by default identifies the current version of an object to copy. If the current version is a delete marker, Amazon S3 behaves as if the object was deleted. To copy a different version, use the versionId query parameter. Specifically, append ?versionId=<version-id> to the value (for example, awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893). If you don't specify a version ID, Amazon S3 copies the latest version of the source object.

    If you enable versioning on the destination bucket, Amazon S3 generates a unique version ID for the copied object. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id response header in the response.

    If you do not enable versioning or suspend it on the destination bucket, the version ID that Amazon S3 generates in the x-amz-version-id response header is always null.

    Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.
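    As a minimal sketch (not part of the generated reference), copying a specific version of an object with this client; the bucket names, key, and version ID below are hypothetical placeholders:

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-west-2")

    # The version ID is appended as a query parameter on the copy source.
    resp = client.copy_object({
      bucket: "amzn-s3-demo-destination-bucket", # hypothetical destination
      key: "reports/january.pdf",
      copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf" \
                   "?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893", # hypothetical version ID
    })

    puts resp.copy_source_version_id # version of the source that was copied
    puts resp.version_id             # version ID of the new copy, if destination versioning is enabled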

  • :copy_source_if_match(String)

    Copies the object if its entity tag (ETag) matches the specified tag.

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false

  • :copy_source_if_modified_since(Time,DateTime,Date,Integer,String)

    Copies the object if it has been modified since the specified time.

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

  • :copy_source_if_none_match(String)

    Copies the object if its entity tag (ETag) is different than the specified ETag.

    If both the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed response code:

    • x-amz-copy-source-if-none-match condition evaluates to false

    • x-amz-copy-source-if-modified-since condition evaluates to true

  • :copy_source_if_unmodified_since(Time,DateTime,Date,Integer,String)

    Copies the object if it hasn't been modified since the specified time.

    If both the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:

    • x-amz-copy-source-if-match condition evaluates to true

    • x-amz-copy-source-if-unmodified-since condition evaluates to false
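    As a hedged sketch of combining the source-side preconditions above (the ETag and timestamp are hypothetical), the copy succeeds only while the source still matches the recorded state; otherwise S3 returns 412 Precondition Failed:

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    begin
      client.copy_object({
        bucket: "amzn-s3-demo-destination-bucket", # hypothetical
        key: "reports/january.pdf",
        copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf",
        copy_source_if_match: '"9b2cf535f27731c974343645a3985328"', # hypothetical ETag
        copy_source_if_unmodified_since: Time.utc(2025, 1, 1),
      })
    rescue Aws::S3::Errors::PreconditionFailed
      # The source object changed since its ETag/timestamp was recorded.
    end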

  • :expires(Time,DateTime,Date,Integer,String)

    The date and time at which the object is no longer cacheable.

  • :grant_full_control(String)

    Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read(String)

    Allows grantee to read the object data and its metadata.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read_acp(String)

    Allows grantee to read the object ACL.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_write_acp(String)

    Allows grantee to write the ACL for the applicable object.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :if_match(String)

    Copies the object if the entity tag (ETag) of the destination object matches the specified tag. If the ETag values do not match, the operation returns a 412 Precondition Failed error. If a concurrent operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should fetch the object's ETag and retry the upload.

    Expects the ETag value as a string.

    For more information about conditional requests, see RFC 7232.

  • :if_none_match(String)

    Copies the object only if the object key name at the destination does not already exist in the specified bucket. Otherwise, Amazon S3 returns a 412 Precondition Failed error. If a concurrent operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should retry the upload.

    Expects the '*' (asterisk) character.

    For more information about conditional requests, see RFC 7232.
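    A minimal sketch of such a conditional write (bucket and key names are hypothetical): the copy is created only if nothing exists at the destination key yet.

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    begin
      client.copy_object({
        bucket: "amzn-s3-demo-destination-bucket", # hypothetical
        key: "reports/january.pdf",
        copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf",
        if_none_match: "*", # fail with 412 if the destination key already exists
      })
    rescue Aws::S3::Errors::PreconditionFailed
      # An object already exists at the destination key.
    end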

  • :key(required,String)

    The key of the destination object.

  • :metadata(Hash<String,String>)

    A map of metadata to store with the object in S3.

  • :metadata_directive(String)

    Specifies whether the metadata is copied from the source object or replaced with metadata that's provided in the request. When copying an object, you can preserve all metadata (the default) or specify new metadata. If this header isn't specified, COPY is the default behavior.

    General purpose bucket - For general purpose buckets, when you grant permissions, you can use the s3:x-amz-metadata-directive condition key to enforce certain metadata behavior when objects are uploaded. For more information, see Amazon S3 condition key examples in the Amazon S3 User Guide.

    x-amz-website-redirect-location is unique to each object and is not copied when using the x-amz-metadata-directive header. To copy the value, you must specify x-amz-website-redirect-location in the request header.
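    For illustration, a hedged sketch of rewriting an object's metadata in place by copying it onto itself with REPLACE (the metadata keys are hypothetical):

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    # With metadata_directive: "REPLACE", the copy receives the metadata
    # supplied here instead of the source object's metadata.
    client.copy_object({
      bucket: "amzn-s3-demo-bucket", # hypothetical
      key: "reports/january.pdf",
      copy_source: "amzn-s3-demo-bucket/reports/january.pdf",
      metadata_directive: "REPLACE",
      metadata: { "department" => "finance" }, # hypothetical metadata
      content_type: "application/pdf",
    })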

  • :tagging_directive(String)

    Specifies whether the object tag-set is copied from the source object or replaced with the tag-set that's provided in the request.

    The default value is COPY.

    Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented response in any of the following situations:

    • When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.

    • When you attempt to REPLACE the tag-set of a source object and set a non-empty value to x-amz-tagging.

    • When you don't set the x-amz-tagging-directive header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive is COPY.

    Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the following situations are allowed:

    • When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.

    • When you attempt to REPLACE the tag-set of a directory bucket source object and set the x-amz-tagging value of the directory bucket destination object to empty.

    • When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging value of the directory bucket destination object to empty.

    • When you attempt to REPLACE the tag-set of a directory bucket source object and don't set the x-amz-tagging value of the directory bucket destination object. This is because the default value of x-amz-tagging is the empty value.

  • :server_side_encryption(String)

    The server-side encryption algorithm used when storing this object in Amazon S3. Unrecognized or unsupported values won't write a destination object and will receive a 400 Bad Request response.

    Amazon S3 automatically encrypts all new objects that are copied to an S3 bucket. When copying an object, if you don't specify encryption information in your copy request, the encryption setting of the target object is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a different default encryption configuration, Amazon S3 uses the corresponding encryption key to encrypt the target object copy.

    With server-side encryption, Amazon S3 encrypts your data as it writes your data to disks in its data centers and decrypts the data when you access it. For more information about server-side encryption, see Using Server-Side Encryption in the Amazon S3 User Guide.

    General purpose buckets

    • For general purpose buckets, there are the following supported options for server-side encryption: server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), and server-side encryption with customer-provided encryption keys (SSE-C). Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the target object copy.

    • When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.

    Directory buckets

    • For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

    • To encrypt new object copies to a directory bucket with SSE-KMS, we recommend you specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). The Amazon Web Services managed key (aws/s3) isn't supported. Your SSE-KMS configuration can only support 1 customer managed key per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration. Then, when you perform a CopyObject operation and want to specify server-side encryption settings for new object copies with SSE-KMS in the encryption-related request headers, you must ensure the encryption key is the same customer managed key that you specified for the directory bucket's default encryption configuration.

    • S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is aws:fsx. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.
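    To make the precedence rule concrete, a hedged sketch of overriding the destination bucket's default encryption for a single copy with SSE-KMS (the KMS key ARN is a hypothetical placeholder):

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    # Request-level encryption headers take precedence over the destination
    # bucket's default encryption configuration for this copy only.
    client.copy_object({
      bucket: "amzn-s3-demo-destination-bucket", # hypothetical
      key: "reports/january.pdf",
      copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf",
      server_side_encryption: "aws:kms",
      ssekms_key_id: "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555", # hypothetical
      bucket_key_enabled: true, # reduce KMS calls for general purpose buckets
    })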

  • :storage_class(String)

    If the x-amz-storage-class header is not used, the copied object will be stored in the STANDARD Storage Class by default. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class.

    • Directory buckets - Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones. Unsupported storage class values won't write a destination object and will respond with the HTTP status code 400 Bad Request.

    • Amazon S3 on Outposts - S3 on Outposts only uses the OUTPOSTS Storage Class.

    You can use the CopyObject action to change the storage class of an object that is already stored in Amazon S3 by using the x-amz-storage-class header. For more information, see Storage Classes in the Amazon S3 User Guide.

    Before using an object as a source object for the copy operation, you must restore a copy of it if it meets any of the following conditions:

    • The storage class of the source object is GLACIER or DEEP_ARCHIVE.

    • The storage class of the source object is INTELLIGENT_TIERING and its S3 Intelligent-Tiering access tier is Archive Access or Deep Archive Access.

    For more information, see RestoreObject and Copying Objects in the Amazon S3 User Guide.
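    As a short usage sketch, changing the storage class of an existing object by copying it onto itself (bucket and key are hypothetical):

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    # A self-copy with a new x-amz-storage-class transitions the object
    # without changing its data.
    client.copy_object({
      bucket: "amzn-s3-demo-bucket", # hypothetical
      key: "logs/2025/01/app.log",
      copy_source: "amzn-s3-demo-bucket/logs/2025/01/app.log",
      storage_class: "STANDARD_IA",
    })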

  • :website_redirect_location(String)

    If the destination bucket is configured as a website, redirects requests for this object copy to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. This value is unique to each object and is not copied when using the x-amz-metadata-directive header. Instead, you may opt to provide this header in combination with the x-amz-metadata-directive header.

    This functionality is not supported for directory buckets.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    When you perform a CopyObject operation, if you want to use a different type of encryption setting for the target object, you can specify appropriate encryption-related headers to encrypt the target object with an Amazon S3 managed key, a KMS key, or a customer-provided key. If the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded. Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :ssekms_key_id(String)

    Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. All GET and PUT requests for an object protected by KMS will fail if they're not made via SSL or using SigV4. For information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 User Guide.

    Directory buckets - To encrypt data using SSE-KMS, it's recommended to specify the x-amz-server-side-encryption header to aws:kms. Then, the x-amz-server-side-encryption-aws-kms-key-id header implicitly uses the bucket's default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id header, it must match the bucket's default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket's lifetime. The Amazon Web Services managed key (aws/s3) isn't supported. Incorrect key specification results in an HTTP 400 Bad Request error.

  • :ssekms_encryption_context(String)

    Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for the destination object encryption. The value of this header is a base64-encoded UTF-8 string holding JSON with the encryption context key-value pairs.

    General purpose buckets - This value must be explicitly added to specify encryption context for CopyObject requests if you want an additional encryption context for your destination object. The additional encryption context of the source object won't be copied to the destination object. For more information, see Encryption context in the Amazon S3 User Guide.

    Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.

  • :bucket_key_enabled(Boolean)

    Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS). If a target object uses SSE-KMS, you can enable an S3 Bucket Key for the object.

    Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Specifying this header with a COPY action doesn't affect bucket-level settings for S3 Bucket Key.

    For more information, see Amazon S3 Bucket Keys in the Amazon S3 User Guide.

    Directory buckets - S3 Bucket Keys aren't supported, when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.

  • :copy_source_sse_customer_algorithm(String)

    Specifies the algorithm to use when decrypting the source object (for example, AES256).

    If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be the same one that was used when the source object was created.

    If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    If the source object for the copy is stored in Amazon S3 using SSE-C, you must provide the necessary encryption information in your request so that Amazon S3 can decrypt the object for copying.

    This functionality is not supported when the source object is in a directory bucket.
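    Putting the source-side and destination-side SSE-C parameters together, a hedged sketch (the 256-bit keys are hypothetical placeholders; never hard-code real keys):

    require "aws-sdk-s3"
    require "digest"

    client = Aws::S3::Client.new(region: "us-east-1")

    source_key      = "0" * 32 # hypothetical key used when the source was written
    destination_key = "1" * 32 # hypothetical key for the new copy

    client.copy_object({
      bucket: "amzn-s3-demo-destination-bucket", # hypothetical
      key: "reports/january.pdf",
      copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf",
      # Decrypt the SSE-C source:
      copy_source_sse_customer_algorithm: "AES256",
      copy_source_sse_customer_key: source_key,
      copy_source_sse_customer_key_md5: Digest::MD5.base64digest(source_key),
      # Encrypt the destination copy with a (possibly different) SSE-C key:
      sse_customer_algorithm: "AES256",
      sse_customer_key: destination_key,
      sse_customer_key_md5: Digest::MD5.base64digest(destination_key),
    })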

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :tagging(String)

    The tag-set for the object copy in the destination bucket. This value must be used in conjunction with the x-amz-tagging-directive if you choose REPLACE for the x-amz-tagging-directive. If you choose COPY for the x-amz-tagging-directive, you don't need to set the x-amz-tagging header, because the tag-set will be copied from the source object directly. The tag-set must be encoded as URL Query parameters.

    The default value is the empty value.

    Directory buckets - For directory buckets in a CopyObject operation, only the empty tag-set is supported. Any requests that attempt to write non-empty tags into directory buckets will receive a 501 Not Implemented status code. When the destination bucket is a directory bucket, you will receive a 501 Not Implemented response in any of the following situations:

    • When you attempt to COPY the tag-set from an S3 source object that has non-empty tags.

    • When you attempt to REPLACE the tag-set of a source object and set a non-empty value to x-amz-tagging.

    • When you don't set the x-amz-tagging-directive header and the source object has non-empty tags. This is because the default value of x-amz-tagging-directive is COPY.

    Because only the empty tag-set is supported for directory buckets in a CopyObject operation, the following situations are allowed:

    • When you attempt to COPY the tag-set from a directory bucket source object that has no tags to a general purpose bucket. It copies an empty tag-set to the destination object.

    • When you attempt to REPLACE the tag-set of a directory bucket source object and set the x-amz-tagging value of the directory bucket destination object to empty.

    • When you attempt to REPLACE the tag-set of a general purpose bucket source object that has non-empty tags and set the x-amz-tagging value of the directory bucket destination object to empty.

    • When you attempt to REPLACE the tag-set of a directory bucket source object and don't set the x-amz-tagging value of the directory bucket destination object. This is because the default value of x-amz-tagging is the empty value.
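    For instance, a minimal sketch of replacing the tag-set on the copy in a general purpose bucket (the tags are hypothetical); note the URL-query encoding of the tag-set:

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-east-1")

    client.copy_object({
      bucket: "amzn-s3-demo-destination-bucket", # hypothetical
      key: "reports/january.pdf",
      copy_source: "amzn-s3-demo-source-bucket/reports/january.pdf",
      tagging_directive: "REPLACE",
      tagging: "project=alpha&owner=data-team", # hypothetical tags, URL-query encoded
    })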

  • :object_lock_mode(String)

    The Object Lock mode that you want to apply to the object copy.

    This functionality is not supported for directory buckets.

  • :object_lock_retain_until_date(Time,DateTime,Date,Integer,String)

    The date and time when you want the Object Lock of the object copy to expire.

    This functionality is not supported for directory buckets.

  • :object_lock_legal_hold_status(String)

    Specifies whether you want to apply a legal hold to the object copy.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :expected_source_bucket_owner(String)

    The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2418

def copy_object(params = {}, options = {})
  req = build_request(:copy_object, params)
  req.send_request(options)
end

#create_bucket(params = {}) ⇒Types::CreateBucketOutput

This action creates an Amazon S3 bucket. To create an Amazon S3 on Outposts bucket, see CreateBucket.

Creates a new S3 bucket. To create a bucket, you must set up Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.

There are two types of buckets: general purpose buckets and directory buckets. For more information about these bucket types, see Creating, configuring, and working with Amazon S3 buckets in the Amazon S3 User Guide.

• General purpose buckets - If you send your CreateBucket request to the s3.amazonaws.com global endpoint, the request goes to the us-east-1 Region. So the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle 307 redirects. For more information, see Virtual hosting of buckets in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - In addition to the s3:CreateBucket permission, the following permissions are required in a policy when your CreateBucket request includes specific headers:

    • Access control lists (ACLs) - In your CreateBucket request, if you specify an access control list (ACL) and set it to public-read, public-read-write, authenticated-read, or if you explicitly specify any other custom ACLs, both s3:CreateBucket and s3:PutBucketAcl permissions are required. In your CreateBucket request, if you set the ACL to private, or if you don't specify any ACLs, only the s3:CreateBucket permission is required.

    • Object Lock - In your CreateBucket request, if you set x-amz-bucket-object-lock-enabled to true, the s3:PutBucketObjectLockConfiguration and s3:PutBucketVersioning permissions are required.

    • S3 Object Ownership - If your CreateBucket request includes the x-amz-object-ownership header, then the s3:PutBucketOwnershipControls permission is required.

      To set an ACL on a bucket as part of a CreateBucket request, you must explicitly set S3 Object Ownership for the bucket to a different value than the default, BucketOwnerEnforced. Additionally, if your desired bucket ACL grants public access, you must first create the bucket (without the bucket ACL) and then explicitly disable Block Public Access on the bucket before using PutBucketAcl to set the ACL. If you try to create a bucket with a public ACL, the request will fail.

      For the majority of modern use cases in S3, we recommend that you keep all Block Public Access settings enabled and keep ACLs disabled. If you would like to share data with users outside of your account, you can use bucket policies as needed. For more information, see Controlling ownership of objects and disabling ACLs for your bucket and Blocking public access to your Amazon S3 storage in the Amazon S3 User Guide.

    • S3 Block Public Access - If your specific use case requires granting public access to your S3 resources, you can disable Block Public Access. Specifically, you can create a new bucket with Block Public Access enabled, then separately call the DeletePublicAccessBlock API. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. For more information about S3 Block Public Access, see Blocking public access to your Amazon S3 storage in the Amazon S3 User Guide.

  • Directory bucket permissions - You must have the s3express:CreateBucket permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

    The permissions for ACLs, Object Lock, S3 Object Ownership, and S3 Block Public Access are not supported for directory buckets. For directory buckets, all Block Public Access settings are enabled at the bucket level and S3 Object Ownership is set to Bucket owner enforced (ACLs disabled). These settings can't be modified.

    For more information about permissions for creating and working with directory buckets, see Directory buckets in the Amazon S3 User Guide. For more information about supported S3 features for directory buckets, see Features of S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to CreateBucket:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To create a bucket in a specific region

# The following example creates a bucket. The request specifies an AWS region where to create the bucket.

resp = client.create_bucket({
  bucket: "examplebucket",
  create_bucket_configuration: {
    location_constraint: "eu-west-1",
  },
})

resp.to_h outputs the following:
{
  location: "http://examplebucket.<Region>.s3.amazonaws.com/",
}

Example: To create a bucket

# The following example creates a bucket.

resp = client.create_bucket({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  location: "/examplebucket",
}

Request syntax with placeholder values

resp = client.create_bucket({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read
  bucket: "BucketName", # required
  create_bucket_configuration: {
    location_constraint: "af-south-1", # accepts af-south-1, ap-east-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-south-1, ap-south-2, ap-southeast-1, ap-southeast-2, ap-southeast-3, ap-southeast-4, ap-southeast-5, ca-central-1, cn-north-1, cn-northwest-1, EU, eu-central-1, eu-central-2, eu-north-1, eu-south-1, eu-south-2, eu-west-1, eu-west-2, eu-west-3, il-central-1, me-central-1, me-south-1, sa-east-1, us-east-2, us-gov-east-1, us-gov-west-1, us-west-1, us-west-2
    location: {
      type: "AvailabilityZone", # accepts AvailabilityZone, LocalZone
      name: "LocationNameAsString",
    },
    bucket: {
      data_redundancy: "SingleAvailabilityZone", # accepts SingleAvailabilityZone, SingleLocalZone
      type: "Directory", # accepts Directory
    },
    tags: [
      {
        key: "ObjectKey", # required
        value: "Value", # required
      },
    ],
  },
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write: "GrantWrite",
  grant_write_acp: "GrantWriteACP",
  object_lock_enabled_for_bucket: false,
  object_ownership: "BucketOwnerPreferred", # accepts BucketOwnerPreferred, ObjectWriter, BucketOwnerEnforced
})

Response structure

resp.location #=> String
resp.bucket_arn #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :acl(String)

    The canned ACL to apply to the bucket.

    This functionality is not supported for directory buckets.

  • :bucket(required,String)

    The name of the bucket to create.

    General purpose buckets - For information about bucket naming restrictions, see Bucket naming rules in the Amazon S3 User Guide.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :create_bucket_configuration(Types::CreateBucketConfiguration)

    The configuration information for the bucket.
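    As a hedged sketch of how this structure is used when creating a directory bucket (the zone ID and bucket name are hypothetical placeholders):

    require "aws-sdk-s3"

    client = Aws::S3::Client.new(region: "us-west-2")

    # Directory bucket names embed the zone ID and end with --x-s3.
    client.create_bucket({
      bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3", # hypothetical
      create_bucket_configuration: {
        location: {
          type: "AvailabilityZone",
          name: "usw2-az1", # hypothetical zone ID
        },
        bucket: {
          data_redundancy: "SingleAvailabilityZone",
          type: "Directory",
        },
      },
    })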

  • :grant_full_control(String)

    Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.

    This functionality is not supported for directory buckets.

  • :grant_read(String)

    Allows grantee to list the objects in the bucket.

    This functionality is not supported for directory buckets.

  • :grant_read_acp(String)

    Allows grantee to read the bucket ACL.

    This functionality is not supported for directory buckets.

  • :grant_write(String)

    Allows grantee to create new objects in the bucket.

    For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.

    This functionality is not supported for directory buckets.

  • :grant_write_acp(String)

    Allows grantee to write the ACL for the applicable bucket.

    This functionality is not supported for directory buckets.

  • :object_lock_enabled_for_bucket(Boolean)

    Specifies whether you want S3 Object Lock to be enabled for the new bucket.

    This functionality is not supported for directory buckets.

  • :object_ownership(String)

    The container element for object ownership for a bucket's ownership controls.

    BucketOwnerPreferred - Objects uploaded to the bucket change ownership to the bucket owner if the objects are uploaded with the bucket-owner-full-control canned ACL.

    ObjectWriter - The uploading account will own the object if the object is uploaded with the bucket-owner-full-control canned ACL.

    BucketOwnerEnforced - Access control lists (ACLs) are disabled and no longer affect permissions. The bucket owner automatically owns and has full control over every object in the bucket. The bucket only accepts PUT requests that don't specify an ACL or specify bucket owner full control ACLs (such as the predefined bucket-owner-full-control canned ACL or a custom ACL in XML format that grants the same permissions).

    By default, ObjectOwnership is set to BucketOwnerEnforced and ACLs are disabled. We recommend keeping ACLs disabled, except in uncommon use cases where you must control access for each object individually. For more information about S3 Object Ownership, see Controlling ownership of objects and disabling ACLs for your bucket in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets. Directory buckets use the bucket owner enforced setting for S3 Object Ownership.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2754

def create_bucket(params = {}, options = {})
  req = build_request(:create_bucket, params)
  req.send_request(options)
end

#create_bucket_metadata_configuration(params = {}) ⇒Struct

Creates an S3 Metadata V2 metadata configuration for a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

Permissions

To use this operation, you must have the following permissions. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

If you want to encrypt your metadata tables with server-side encryption with Key Management Service (KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

If you also want to integrate your table bucket with Amazon Web Services analytics services so that you can query your metadata table, you need additional permissions. For more information, see Integrating Amazon S3 Tables with Amazon Web Services analytics services in the Amazon S3 User Guide.

To query your metadata tables, you need additional permissions. For more information, see Permissions for querying metadata tables in the Amazon S3 User Guide.

  • s3:CreateBucketMetadataTableConfiguration

    The IAM policy action name is the same for the V1 and V2 API operations.

  • s3tables:CreateTableBucket

  • s3tables:CreateNamespace

  • s3tables:GetTable

  • s3tables:CreateTable

  • s3tables:PutTablePolicy

  • s3tables:PutTableEncryption

  • kms:DescribeKey

The following operations are related to CreateBucketMetadataConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.create_bucket_metadata_configuration({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  metadata_configuration: { # required
    journal_table_configuration: { # required
      record_expiration: { # required
        expiration: "ENABLED", # required, accepts ENABLED, DISABLED
        days: 1,
      },
      encryption_configuration: {
        sse_algorithm: "aws:kms", # required, accepts aws:kms, AES256
        kms_key_arn: "KmsKeyArn",
      },
    },
    inventory_table_configuration: {
      configuration_state: "ENABLED", # required, accepts ENABLED, DISABLED
      encryption_configuration: {
        sse_algorithm: "aws:kms", # required, accepts aws:kms, AES256
        kms_key_arn: "KmsKeyArn",
      },
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that you want to create the metadata configuration for.

  • :content_md5(String)

    The Content-MD5 header for the metadata configuration.

  • :checksum_algorithm(String)

    The checksum algorithm to use with your metadata configuration.

  • :metadata_configuration(required,Types::MetadataConfiguration)

    The contents of your metadata configuration.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that corresponds to your metadata configuration.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2883

def create_bucket_metadata_configuration(params = {}, options = {})
  req = build_request(:create_bucket_metadata_configuration, params)
  req.send_request(options)
end

#create_bucket_metadata_table_configuration(params = {}) ⇒Struct

We recommend that you create your S3 Metadata configurations by using the V2 CreateBucketMetadataConfiguration API operation. We no longer recommend using the V1 CreateBucketMetadataTableConfiguration API operation.

If you created your S3 Metadata configuration before July 15, 2025, we recommend that you delete and re-create your configuration by using CreateBucketMetadataConfiguration so that you can expire journal table records and create a live inventory table.

Creates a V1 S3 Metadata configuration for a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

Permissions

To use this operation, you must have the following permissions. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

If you want to encrypt your metadata tables with server-side encryption with Key Management Service (KMS) keys (SSE-KMS), you need additional permissions. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

If you also want to integrate your table bucket with Amazon Web Services analytics services so that you can query your metadata table, you need additional permissions. For more information, see Integrating Amazon S3 Tables with Amazon Web Services analytics services in the Amazon S3 User Guide.

  • s3:CreateBucketMetadataTableConfiguration

  • s3tables:CreateNamespace

  • s3tables:GetTable

  • s3tables:CreateTable

  • s3tables:PutTablePolicy

The following operations are related to CreateBucketMetadataTableConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.create_bucket_metadata_table_configuration({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  metadata_table_configuration: { # required
    s3_tables_destination: { # required
      table_bucket_arn: "S3TablesBucketArn", # required
      table_name: "S3TablesName", # required
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that you want to create the metadata table configuration for.

  • :content_md5(String)

    The Content-MD5 header for the metadata table configuration.

  • :checksum_algorithm(String)

    The checksum algorithm to use with your metadata table configuration.

  • :metadata_table_configuration(required,Types::MetadataTableConfiguration)

    The contents of your metadata table configuration.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that corresponds to your metadata table configuration.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 2989

def create_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:create_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#create_multipart_upload(params = {}) ⇒Types::CreateMultipartUploadOutput

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

This action initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.

After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.

If you have configured a lifecycle rule to abort incomplete multipart uploads, the created multipart upload must be completed within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort action and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Configuration.

• Directory buckets - S3 Lifecycle is not supported by directory buckets.

• Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Request signing

For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon S3 User Guide.
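To make that request sequence concrete, a hedged end-to-end sketch (bucket, key, and local file are hypothetical; production code should abort the upload on failure to stop storage charges):

require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-east-1")
bucket = "amzn-s3-demo-bucket" # hypothetical
key    = "large-object.bin"

# 1. Initiate the upload and keep the upload ID.
upload = client.create_multipart_upload(bucket: bucket, key: key)

# 2. Upload parts; every part except the last must be at least 5 MiB.
parts = []
File.open("large-object.bin", "rb") do |io| # hypothetical local file
  part_number = 1
  while (chunk = io.read(5 * 1024 * 1024))
    part = client.upload_part(
      bucket: bucket, key: key,
      upload_id: upload.upload_id,
      part_number: part_number,
      body: chunk,
    )
    parts << { etag: part.etag, part_number: part_number }
    part_number += 1
  end
end

# 3. Complete (or abort) so S3 stops charging for the stored parts.
client.complete_multipart_upload(
  bucket: bucket, key: key,
  upload_id: upload.upload_id,
  multipart_upload: { parts: parts },
)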

Permissions
  • General purpose bucket permissions - To perform a multipart upload with encryption using a Key Management Service (KMS) key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey actions on the key. The requester must also have permissions for the kms:GenerateDataKey action for the CreateMultipartUpload API. Then, the requester needs permissions for the kms:Decrypt action on the UploadPart and UploadPartCopy APIs. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions and Protecting data using server-side encryption with Amazon Web Services KMS in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create the session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

Encryption
  • General purpose buckets - Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. Amazon S3 automatically encrypts all new objects that are uploaded to an S3 bucket. When doing a multipart upload, if you don't specify encryption information in your request, the encryption setting of the uploaded parts is set to the default encryption configuration of the destination bucket. By default, all buckets have a base level of encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). If the destination bucket has a default encryption configuration that uses server-side encryption with a Key Management Service (KMS) key (SSE-KMS), or a customer-provided encryption key (SSE-C), Amazon S3 uses the corresponding KMS key, or a customer-provided key to encrypt the uploaded parts. When you perform a CreateMultipartUpload operation, if you want to use a different type of encryption setting for the uploaded parts, you can request that Amazon S3 encrypts the object with a different encryption key (such as an Amazon S3 managed key, a KMS key, or a customer-provided key). When the encryption setting in your request is different from the default encryption configuration of the destination bucket, the encryption setting in your request takes precedence. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the CreateMultipartUpload request.

    • Use KMS keys (SSE-KMS) that include the Amazon Web Services managed key (aws/s3) and KMS customer managed keys stored in Key Management Service (KMS) – If you want Amazon Web Services to manage the keys used to encrypt data, specify the following headers in the request.

      • x-amz-server-side-encryption

      • x-amz-server-side-encryption-aws-kms-key-id

      • x-amz-server-side-encryption-context

        If you specify x-amz-server-side-encryption:aws:kms, but don't provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3 key) in KMS to protect the data.

      • To perform a multipart upload with encryption by using an Amazon Web Services KMS key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey* actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information, see Multipart upload API and permissions and Protecting data using server-side encryption with Amazon Web Services KMS in the Amazon S3 User Guide.

      • If your Identity and Access Management (IAM) user or role is in the same Amazon Web Services account as the KMS key, then you must have these permissions on the key policy. If your IAM user or role is in a different account from the key, then you must have the permissions on both the key policy and your IAM user or role.

      • All GET and PUT requests for an object protected by KMS fail if you don't make them by using Secure Sockets Layer (SSL), Transport Layer Security (TLS), or Signature Version 4. For information about configuring any of the officially supported Amazon Web Services SDKs and Amazon Web Services CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 User Guide.

      For more information about server-side encryption with KMS keys (SSE-KMS), see Protecting Data Using Server-Side Encryption with KMS keys in the Amazon S3 User Guide.

    • Use customer-provided encryption keys (SSE-C) – If you want to manage your own encryption keys, provide all the following headers in the request.

  • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

    In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession request. You can't override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) that are specified in the CreateSession request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

    When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket's default encryption configuration for the CreateSession request. It's not supported to override the encryption settings values in the CreateSession request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.

    For directory buckets, when you perform a CreateMultipartUpload operation and an UploadPartCopy operation, the request headers you provide in the CreateMultipartUpload request must match the default encryption configuration of the destination bucket.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to CreateMultipartUpload:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To initiate a multipart upload

# The following example initiates a multipart upload.

resp = client.create_multipart_upload({
  bucket: "examplebucket",
  key: "largeobject",
})

resp.to_h outputs the following:
{
  bucket: "examplebucket",
  key: "largeobject",
  upload_id: "ibZBv_75gd9r8lH_gqXatLdxMVpAlj6ZQjEs.OwyF3953YdwbcQnMA2BLGn8Lx12fQNICtMw5KyteFeHw.Sjng--",
}

Request syntax with placeholder values

resp = client.create_multipart_upload({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control
  bucket: "BucketName", # required
  cache_control: "CacheControl",
  content_disposition: "ContentDisposition",
  content_encoding: "ContentEncoding",
  content_language: "ContentLanguage",
  content_type: "ContentType",
  expires: Time.now,
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write_acp: "GrantWriteACP",
  key: "ObjectKey", # required
  metadata: {
    "MetadataKey" => "MetadataValue",
  },
  server_side_encryption: "AES256", # accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
  storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
  website_redirect_location: "WebsiteRedirectLocation",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  ssekms_key_id: "SSEKMSKeyId",
  ssekms_encryption_context: "SSEKMSEncryptionContext",
  bucket_key_enabled: false,
  request_payer: "requester", # accepts requester
  tagging: "TaggingHeader",
  object_lock_mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
  object_lock_retain_until_date: Time.now,
  object_lock_legal_hold_status: "ON", # accepts ON, OFF
  expected_bucket_owner: "AccountId",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  checksum_type: "COMPOSITE", # accepts COMPOSITE, FULL_OBJECT
})

Response structure

resp.abort_date #=> Time
resp.abort_rule_id #=> String
resp.bucket #=> String
resp.key #=> String
resp.upload_id #=> String
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.ssekms_encryption_context #=> String
resp.bucket_key_enabled #=> Boolean
resp.request_charged #=> String, one of "requester"
resp.checksum_algorithm #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :acl(String)

    The canned ACL to apply to the object. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL in the Amazon S3 User Guide.

    By default, all objects are private. Only the owner has full access control. When uploading an object, you can grant access permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the new object. For more information, see Using ACLs. One way to grant the permissions using the request headers is to specify a canned ACL with the x-amz-acl request header.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :bucket(required,String)

    The name of the bucket where the multipart upload is initiated and where the object is uploaded.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :cache_control(String)

    Specifies caching behavior along the request/reply chain.

  • :content_disposition(String)

    Specifies presentational information for the object.

  • :content_encoding(String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

    For directory buckets, only the aws-chunked value is supported in this header field.

  • :content_language(String)

    The language that the content is in.

  • :content_type(String)

    A standard MIME type describing the format of the object data.

  • :expires(Time,DateTime,Date,Integer,String)

    The date and time at which the object is no longer cacheable.

  • :grant_full_control(String)

    Specify access permissions explicitly to give the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

    By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

    For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

    x-amz-grant-read: id="11112222333", id="444455556666"

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read(String)

    Specify access permissions explicitly to allow the grantee to read the object data and its metadata.

    By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

    For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

    x-amz-grant-read: id="11112222333", id="444455556666"

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read_acp(String)

    Specify access permissions explicitly to allow the grantee to read the object ACL.

    By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

    For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

    x-amz-grant-read: id="11112222333", id="444455556666"

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_write_acp(String)

    Specify access permissions explicitly to allow the grantee to write the ACL for the applicable object.

    By default, all objects are private. Only the owner has full access control. When uploading an object, you can use this header to explicitly grant access permissions to specific Amazon Web Services accounts or groups. This header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

    For example, the following x-amz-grant-read header grants the Amazon Web Services accounts identified by account IDs permissions to read object data and its metadata:

    x-amz-grant-read: id="11112222333", id="444455556666"

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :key(required,String)

    Object key for which the multipart upload is to be initiated.

  • :metadata(Hash<String,String>)

    A map of metadata to store with the object in S3.

  • :server_side_encryption(String)

    The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.

    • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

      In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession request. You can't override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) that are specified in the CreateSession request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

      When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket's default encryption configuration for the CreateSession request. It's not supported to override the encryption settings values in the CreateSession request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.

    • S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server-side encryption option is aws:fsx. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.

  • :storage_class(String)

    By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.

    • Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.

    • Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.

  • :website_redirect_location(String)

    If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.

    This functionality is not supported for directory buckets.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error (see the helper sketch after this parameter list).

    This functionality is not supported for directory buckets.

  • :ssekms_key_id(String)

    Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn't exist in the same account that's issuing the command, you must use the full Key ARN, not the Key ID.

    General purpose buckets - If you specify x-amz-server-side-encryption with aws:kms or aws:kms:dsse, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms or x-amz-server-side-encryption:aws:kms:dsse, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3) to protect the data.

    Directory buckets - To encrypt data using SSE-KMS, it's recommended to specify the x-amz-server-side-encryption header to aws:kms. Then, the x-amz-server-side-encryption-aws-kms-key-id header implicitly uses the bucket's default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id header, it must match the bucket's default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket's lifetime. The Amazon Web Services managed key (aws/s3) isn't supported. Incorrect key specification results in an HTTP 400 Bad Request error.

  • :ssekms_encryption_context(String)

    Specifies the Amazon Web Services KMS Encryption Context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs (see the helper sketch after this parameter list).

    Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.

  • :bucket_key_enabled(Boolean)

    Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).

    General purpose buckets - Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn't affect bucket-level settings for S3 Bucket Key.

    Directory buckets - S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :tagging(String)

    The tag-set for the object. The tag-set must be encoded as URL query parameters (see the helper sketch after this parameter list).

    This functionality is not supported for directory buckets.

  • :object_lock_mode(String)

    Specifies the Object Lock mode that you want to apply to the uploaded object.

    This functionality is not supported for directory buckets.

  • :object_lock_retain_until_date(Time,DateTime,Date,Integer,String)

    Specifies the date and time when you want the Object Lock to expire.

    This functionality is not supported for directory buckets.

  • :object_lock_legal_hold_status(String)

    Specifies whether you want to apply a legal hold to the uploaded object.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :checksum_algorithm(String)

    Indicates the algorithm that you want Amazon S3 to use to create the checksum for the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_type(String)

    Indicates the checksum type that you want Amazon S3 to use to calculate the object's checksum value. For more information, see Checking object integrity in the Amazon S3 User Guide.
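As referenced above, a hedged sketch of how several of the string-valued parameters in this list can be derived with the Ruby standard library; the variable names and the encryption-context pair are illustrative only, not part of the SDK.

require "securerandom"
require "digest"
require "base64"
require "json"
require "uri"

raw_key = SecureRandom.random_bytes(32)            # 256-bit SSE-C key for :sse_customer_key
key_md5 = Digest::MD5.base64digest(raw_key)        # RFC 1321 digest for :sse_customer_key_md5

tagging = URI.encode_www_form("project" => "blue") # URL query encoding for :tagging, "project=blue"

context = Base64.strict_encode64(                  # Base64 of UTF-8 JSON for :ssekms_encryption_context
  JSON.generate({ "department" => "finance" })     # hypothetical key-value pair
)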

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 3973

def create_multipart_upload(params = {}, options = {})
  req = build_request(:create_multipart_upload, params)
  req.send_request(options)
end

#create_session(params = {}) ⇒Types::CreateSessionOutput

Creates a session that establishes temporary security credentials to support fast authentication and authorization for the Zonal endpoint API operations on directory buckets. For more information about Zonal endpoint API operations that include the Availability Zone in the request endpoint, see S3 Express One Zone APIs in the Amazon S3 User Guide.

To make Zonal endpoint API requests on a directory bucket, use the CreateSession API operation. Specifically, you grant s3express:CreateSession permission to a bucket in a bucket policy or an IAM identity-based policy. Then, you use IAM credentials to make the CreateSession API request on the bucket, which returns temporary security credentials that include the access key ID, secret access key, session token, and expiration. These credentials have associated permissions to access the Zonal endpoint API operations. After the session is created, you don't need to use other policies to grant permissions to each Zonal endpoint API individually. Instead, in your Zonal endpoint API requests, you sign your requests by applying the temporary security credentials of the session to the request headers and following the SigV4 protocol for authentication. You also apply the session token to the x-amz-s3session-token request header for authorization. Temporary security credentials are scoped to the bucket and expire after 5 minutes. After the expiration time, any calls that you make with those credentials will fail. You must use IAM credentials again to make a CreateSession API request that generates a new set of temporary credentials for use. Temporary credentials cannot be extended or refreshed beyond the original specified interval.

If you use Amazon Web Services SDKs, SDKs handle the session token refreshes automatically to avoid service interruptions when a session expires. We recommend that you use the Amazon Web Services SDKs to initiate and manage requests to the CreateSession API. For more information, see Performance guidelines and design patterns in the Amazon S3 User Guide.
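A minimal sketch of the flow described above, assuming a hypothetical directory bucket name; the returned credentials are what the SDKs apply to subsequent Zonal endpoint requests.

resp = client.create_session({
  bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3", # hypothetical directory bucket
})
creds = resp.credentials
creds.access_key_id     #=> String
creds.secret_access_key #=> String
creds.session_token     #=> String, sent as x-amz-s3session-token
creds.expiration        #=> Time, roughly 5 minutes out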

• You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

  • CopyObject API operation - Unlike other Zonal endpoint API operations, the CopyObject API operation doesn't use the temporary security credentials returned from the CreateSession API operation for authentication and authorization. For information about authentication and authorization of the CopyObject API operation on directory buckets, see CopyObject.

  • HeadBucket API operation - Unlike other Zonal endpoint API operations, the HeadBucket API operation doesn't use the temporary security credentials returned from the CreateSession API operation for authentication and authorization. For information about authentication and authorization of the HeadBucket API operation on directory buckets, see HeadBucket.

Permissions

To obtain temporary security credentials, you must create a bucket policy or an IAM identity-based policy that grants s3express:CreateSession permission to the bucket. In a policy, you can use the s3express:SessionMode condition key to control who can create a ReadWrite or ReadOnly session. For more information about ReadWrite or ReadOnly sessions, see x-amz-create-session-mode. For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.

To grant cross-account access to Zonal endpoint API operations, the bucket policy should also grant both accounts the s3express:CreateSession permission.

If you want to encrypt objects with SSE-KMS, you must also have the kms:GenerateDataKey and the kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the target KMS key.

Encryption

For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

For Zonal endpoint (object-level) API operations except CopyObject and UploadPartCopy, you authenticate and authorize requests through CreateSession for low latency. To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session.

Only 1 customer managed key is supported per directory bucket for the lifetime of the bucket. The Amazon Web Services managed key (aws/s3) isn't supported. After you specify SSE-KMS as your bucket's default encryption configuration with a customer managed key, you can't change the customer managed key for the bucket's SSE-KMS configuration.

In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, you can't override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) from the CreateSession request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket's default encryption configuration for the CreateSession request. It's not supported to override the encryption settings values in the CreateSession request. Also, in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), it's not supported to override the values of the encryption settings from the CreateSession request.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.create_session({
  session_mode: "ReadOnly", # accepts ReadOnly, ReadWrite
  bucket: "BucketName", # required
  server_side_encryption: "AES256", # accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
  ssekms_key_id: "SSEKMSKeyId",
  ssekms_encryption_context: "SSEKMSEncryptionContext",
  bucket_key_enabled: false,
})

Response structure

resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.ssekms_key_id #=> String
resp.ssekms_encryption_context #=> String
resp.bucket_key_enabled #=> Boolean
resp.credentials.access_key_id #=> String
resp.credentials.secret_access_key #=> String
resp.credentials.session_token #=> String
resp.credentials.expiration #=> Time

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :session_mode(String)

    Specifies the mode of the session that will be created, either ReadWrite or ReadOnly. By default, a ReadWrite session is created. A ReadWrite session is capable of executing all the Zonal endpoint API operations on a directory bucket. A ReadOnly session is constrained to execute the following Zonal endpoint API operations: GetObject, HeadObject, ListObjectsV2, GetObjectAttributes, ListParts, and ListMultipartUploads. (A ReadOnly request sketch appears after this parameter list.)

  • :bucket(required,String)

    The name of the bucket that you create a session for.

  • :server_side_encryption(String)

    The server-side encryption algorithm to use when you store objects in the directory bucket.

    For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). By default, Amazon S3 encrypts data with SSE-S3. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.

    S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server-side encryption option is aws:fsx. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.

  • :ssekms_key_id(String)

    If you specify x-amz-server-side-encryption with aws:kms, you must specify the x-amz-server-side-encryption-aws-kms-key-id header with the ID (Key ID or Key ARN) of the KMS symmetric encryption customer managed key to use. Otherwise, you get an HTTP 400 Bad Request error. Only use the key ID or key ARN. The key alias format of the KMS key isn't supported. Also, if the KMS key doesn't exist in the same account that's issuing the command, you must use the full Key ARN, not the Key ID.

    Your SSE-KMS configuration can only support 1 customer managed key per directory bucket's lifetime. The Amazon Web Services managed key (aws/s3) isn't supported.

  • :ssekms_encryption_context(String)

    Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

    General purpose buckets - This value must be explicitly added during CopyObject operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.

    Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.

  • :bucket_key_enabled(Boolean)

    Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using KMS keys (SSE-KMS).

    S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.
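As referenced under :session_mode, a minimal sketch requesting a read-only session (bucket name hypothetical):

resp = client.create_session({
  bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3",
  session_mode: "ReadOnly", # session limited to read operations such as GetObject
})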

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4279

def create_session(params = {}, options = {})
  req = build_request(:create_session, params)
  req.send_request(options)
end

#delete_bucket(params = {}) ⇒Struct

Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.

Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed.
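Because the bucket must be empty first, a hedged sketch of one common sequence for a general purpose bucket (bucket name hypothetical; versioned buckets additionally require deleting object versions and delete markers):

loop do
  listing = client.list_objects_v2(bucket: "examplebucket")
  break if listing.contents.empty?
  client.delete_objects(
    bucket: "examplebucket",
    delete: { objects: listing.contents.map { |o| { key: o.key } } }
  )
end
client.delete_bucket(bucket: "examplebucket")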

Permissions
  • General purpose bucket permissions - You must have the s3:DeleteBucket permission on the specified bucket in a policy.

  • Directory bucket permissions - You must have the s3express:DeleteBucket permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucket:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete a bucket

# The following example deletes the specified bucket.
resp = client.delete_bucket({
  bucket: "forrandall2",
})

Request syntax with placeholder values

resp = client.delete_bucket({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    Specifies the bucket being deleted.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4393

def delete_bucket(params = {}, options = {})
  req = build_request(:delete_bucket, params)
  req.send_request(options)
end

#delete_bucket_analytics_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).

To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.

The following operations are related to DeleteBucketAnalyticsConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_analytics_configuration({
  bucket: "BucketName", # required
  id: "AnalyticsId", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket from which an analytics configuration is deleted.

  • :id(required,String)

    The ID that identifies the analytics configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4463

def delete_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_analytics_configuration, params)
  req.send_request(options)
end

#delete_bucket_cors(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Deletes the cors configuration information set for the bucket.

To use this operation, you must have permission to perform the s3:PutBucketCORS action. The bucket owner has this permission by default and can grant this permission to others.

For information about cors, see Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.

Related Resources

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete cors configuration on a bucket.

# The following example deletes CORS configuration on a bucket.
resp = client.delete_bucket_cors({
  bucket: "examplebucket",
})

Request syntax with placeholder values

resp = client.delete_bucket_cors({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    Specifies the bucket whose cors configuration is being deleted.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4527

def delete_bucket_cors(params = {}, options = {})
  req = build_request(:delete_bucket_cors, params)
  req.send_request(options)
end

#delete_bucket_encryption(params = {}) ⇒Struct

This implementation of the DELETE action resets the default encryption for the bucket as server-side encryption with Amazon S3 managed keys (SSE-S3).

General purpose buckets - For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - The s3:PutEncryptionConfiguration permission is required in a policy. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Operations and Managing Access Permissions to Your Amazon S3 Resources.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:PutEncryptionConfiguration permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucketEncryption:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_encryption({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the server-side encryption configuration to delete.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4636

def delete_bucket_encryption(params = {}, options = {})
  req = build_request(:delete_bucket_encryption, params)
  req.send_request(options)
end

#delete_bucket_intelligent_tiering_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Deletes the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to DeleteBucketIntelligentTieringConfiguration include:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_intelligent_tiering_configuration({
  bucket: "BucketName", # required
  id: "IntelligentTieringId", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose configuration you want to modify or retrieve.

  • :id(required,String)

    The ID used to identify the S3 Intelligent-Tiering configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4713

def delete_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end

#delete_bucket_inventory_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Deletes an S3 Inventory configuration (identified by the inventory ID) from the bucket.

To use this operation, you must have permissions to perform the s3:PutInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

Operations related to DeleteBucketInventoryConfiguration include:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_inventory_configuration({
  bucket: "BucketName", # required
  id: "InventoryId", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the inventory configuration to delete.

  • :id(required,String)

    The ID used to identify the inventory configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4782

def delete_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_inventory_configuration, params)
  req.send_request(options)
end

#delete_bucket_lifecycle(params = {}) ⇒Struct

Deletes the lifecycle configuration from the specified bucket. Amazon S3 removes all the lifecycle configuration rules in the lifecycle subresource associated with the bucket. Your objects never expire, and Amazon S3 no longer automatically deletes any objects on the basis of rules contained in the deleted lifecycle configuration.

Permissions
  • General purpose bucket permissions - By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must have the s3:PutLifecycleConfiguration permission.

    For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.

  • Directory bucket permissions - You must have the s3express:PutLifecycleConfiguration permission in an IAM identity-based policy to use this operation. Cross-account access to this API operation isn't supported. The resource owner can optionally grant access permissions to others by creating a role or user for them as long as they are within the same account as the owner and resource.

    For more information about directory bucket policies and permissions, see Authorizing Regional endpoint APIs with IAM in the Amazon S3 User Guide.

    Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

For more information about object expiration, see Elements to Describe Lifecycle Actions.

Related actions include:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete lifecycle configuration on a bucket.

# The following example deletes lifecycle configuration on a bucket.
resp = client.delete_bucket_lifecycle({
  bucket: "examplebucket",
})

Request syntax with placeholder values

resp = client.delete_bucket_lifecycle({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name of the lifecycle to delete.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4896

def delete_bucket_lifecycle(params = {}, options = {})
  req = build_request(:delete_bucket_lifecycle, params)
  req.send_request(options)
end

#delete_bucket_metadata_configuration(params = {}) ⇒Struct

Deletes an S3 Metadata configuration from a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

You can use the V2 DeleteBucketMetadataConfiguration API operation with V1 or V2 metadata configurations. However, if you try to use the V1 DeleteBucketMetadataTableConfiguration API operation with V2 configurations, you will receive an HTTP 405 Method Not Allowed error.

Permissions

To use this operation, you must have the s3:DeleteBucketMetadataTableConfiguration permission. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

The IAM policy action name is the same for the V1 and V2 API operations.

The following operations are related to DeleteBucketMetadataConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_metadata_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that you want to remove the metadata configuration from.

  • :expected_bucket_owner(String)

    The expected bucket owner of the general purpose bucket that you want to remove the metadata table configuration from.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 4970

def delete_bucket_metadata_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metadata_configuration, params)
  req.send_request(options)
end

#delete_bucket_metadata_table_configuration(params = {}) ⇒Struct

We recommend that you delete your S3 Metadata configurations by using the V2 DeleteBucketMetadataConfiguration API operation. We no longer recommend using the V1 DeleteBucketMetadataTableConfiguration API operation.

If you created your S3 Metadata configuration before July 15, 2025, we recommend that you delete and re-create your configuration by using CreateBucketMetadataConfiguration so that you can expire journal table records and create a live inventory table.

Deletes a V1 S3 Metadata configuration from a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

You can use the V2 DeleteBucketMetadataConfiguration API operation with V1 or V2 metadata table configurations. However, if you try to use the V1 DeleteBucketMetadataTableConfiguration API operation with V2 configurations, you will receive an HTTP 405 Method Not Allowed error.

Make sure that you update your processes to use the new V2 API operations (CreateBucketMetadataConfiguration, GetBucketMetadataConfiguration, and DeleteBucketMetadataConfiguration) instead of the V1 API operations.

Permissions

To use this operation, you must have the s3:DeleteBucketMetadataTableConfiguration permission. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

The following operations are related to DeleteBucketMetadataTableConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_metadata_table_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that you want to remove the metadata table configuration from.

  • :expected_bucket_owner(String)

    The expected bucket owner of the general purpose bucket that you want to remove the metadata table configuration from.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5051

def delete_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#delete_bucket_metrics_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Deletes a metrics configuration for the Amazon CloudWatch request metrics (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.

To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to DeleteBucketMetricsConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_metrics_configuration({
  bucket: "BucketName", # required
  id: "MetricsId", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the metrics configuration to delete.

  • :id(required,String)

    The ID used to identify the metrics configuration. The ID has a 64 character limit and can only contain letters, numbers, periods, dashes, and underscores.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5125

def delete_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:delete_bucket_metrics_configuration, params)
  req.send_request(options)
end

#delete_bucket_ownership_controls(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Removes OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to DeleteBucketOwnershipControls:

  • GetBucketOwnershipControls

  • PutBucketOwnershipControls

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_bucket_ownership_controls({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The Amazon S3 bucket whose OwnershipControls you want to delete.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5179

def delete_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:delete_bucket_ownership_controls, params)
  req.send_request(options)
end

#delete_bucket_policy(params = {}) ⇒ Struct

Deletes the policy of a specified bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the DeleteBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have DeleteBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy, PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.

  • General purpose bucket permissions - The s3:DeleteBucketPolicy permission is required in a policy. For more information about general purpose buckets bucket policies, see Using Bucket Policies and User Policies in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:DeleteBucketPolicy permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to DeleteBucketPolicy:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete bucket policy

# The following example deletes bucket policy on the specified bucket.
resp = client.delete_bucket_policy({
  bucket: "examplebucket",
})

Request syntax with placeholder values

resp = client.delete_bucket_policy({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5313

def delete_bucket_policy(params = {}, options = {})
  req = build_request(:delete_bucket_policy, params)
  req.send_request(options)
end

#delete_bucket_replication(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Deletes the replication configuration from the bucket.

To use this operation, you must have permissions to perform the s3:PutReplicationConfiguration action. The bucket owner has these permissions by default and can grant it to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

It can take a while for the deletion of a replication configuration to fully propagate, as the sketch below illustrates.
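A minimal sketch of waiting out that propagation delay, using a placeholder bucket name and assuming the SDK surfaces the ReplicationConfigurationNotFoundError error code once the configuration is gone (error name assumed from the S3 error code; not confirmed by this reference):

client.delete_bucket_replication(bucket: "example")

# Poll GetBucketReplication until the deletion has propagated.
10.times do
  begin
    client.get_bucket_replication(bucket: "example")
    sleep 5 # configuration still visible; wait and check again
  rescue Aws::S3::Errors::ServiceError => e
    break if e.code == "ReplicationConfigurationNotFoundError" # assumed error code
    raise
  end
end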

For information about replication configuration, see Replication in the Amazon S3 User Guide.

The following operations are related to DeleteBucketReplication:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete bucket replication configuration

# The following example deletes replication configuration set on bucket.
resp = client.delete_bucket_replication({
  bucket: "example",
})

Request syntax with placeholder values

resp = client.delete_bucket_replication({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5387

def delete_bucket_replication(params = {}, options = {})
  req = build_request(:delete_bucket_replication, params)
  req.send_request(options)
end

#delete_bucket_tagging(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Deletes tags from the general purpose bucket if attribute-based access control (ABAC) is not enabled for the bucket. When you enable ABAC for a general purpose bucket, you can no longer use this operation for that bucket and must use UntagResource instead.

To use this operation, you must have permission to perform the s3:PutBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

The following operations are related to DeleteBucketTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete bucket tags

# The following example deletes bucket tags.
resp = client.delete_bucket_tagging({
  bucket: "examplebucket",
})

Request syntax with placeholder values

resp = client.delete_bucket_tagging({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket that has the tag set to be removed.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5456

def delete_bucket_tagging(params = {}, options = {})
  req = build_request(:delete_bucket_tagging, params)
  req.send_request(options)
end

#delete_bucket_website(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

This action removes the website configuration for a bucket. Amazon S3 returns a 200 OK response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404 response if the bucket specified in the request does not exist.

This DELETE action requires the S3:DeleteBucketWebsite permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite permission.

For more information about hosting websites, see Hosting Websites on Amazon S3.

The following operations are related to DeleteBucketWebsite:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete bucket website configuration

# The following example deletes bucket website configuration.
resp = client.delete_bucket_website({
  bucket: "examplebucket",
})

Request syntax with placeholder values

resp = client.delete_bucket_website({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name for which you want to remove the website configuration.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5528

def delete_bucket_website(params = {}, options = {})
  req = build_request(:delete_bucket_website, params)
  req.send_request(options)
end

#delete_object(params = {}) ⇒ Types::DeleteObjectOutput

Removes an object from a bucket. The behavior depends on the bucket's versioning state:

  • If bucket versioning is not enabled, the operation permanently deletes the object.

  • If bucket versioning is enabled, the operation inserts a delete marker, which becomes the current version of the object. To permanently delete an object in a versioned bucket, you must include the object's versionId in the request. For more information about versioning-enabled buckets, see Deleting object versions from a versioning-enabled bucket.

  • If bucket versioning is suspended, the operation removes the object that has a null versionId, if there is one, and inserts a delete marker that becomes the current version of the object. If there isn't an object with a null versionId, and all versions of the object have a versionId, Amazon S3 does not remove the object and only inserts a delete marker. To permanently delete an object that has a versionId, you must include the object's versionId in the request. For more information about versioning-suspended buckets, see Deleting objects from versioning-suspended buckets.

Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

To remove a specific version, you must use the versionId query parameter. Using this query parameter permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header x-amz-delete-marker to true.

If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa request header in the DELETE versionId request. Requests that include x-amz-mfa must use HTTPS. For more information about MFA Delete, see Using MFA Delete in the Amazon S3 User Guide. To see sample requests that use versioning, see Sample Request.

Directory buckets - MFA delete is not supported by directory buckets.
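A minimal sketch of permanently deleting one version under those rules, with placeholder bucket, key, version ID, and MFA values (the mfa string is the device serial number or ARN, a space, and the current token code, and is only needed when MFA Delete is enabled):

resp = client.delete_object({
  bucket: "amzn-s3-demo-bucket",
  key: "HappyFace.jpg",
  version_id: "ExampleVersionId", # placeholder; permanently deletes this version
  mfa: "arn:aws:iam::123456789012:mfa/user 123456", # omit unless MFA Delete is enabled
})

resp.delete_marker #=> true only if the removed version was itself a delete marker
resp.version_id   #=> the version that was removed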

You can delete objects by explicitly calling DELETE Object or calling (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject, s3:DeleteObjectVersion, and s3:PutLifeCycleConfiguration actions.

Directory buckets - S3 Lifecycle is not supported by directory buckets.

Permissions
  • General purpose bucket permissions - The following permissions are required in your policies when your DeleteObjects request includes specific headers.

    • s3:DeleteObject - To delete an object from a bucket, you must always have the s3:DeleteObject permission.

    • s3:DeleteObjectVersion - To delete a specific version of an object from a versioning-enabled bucket, you must have the s3:DeleteObjectVersion permission.

      If the s3:DeleteObject or s3:DeleteObjectVersion permissions are explicitly denied in your bucket policy, attempts to delete any unversioned objects result in a 403 Access Denied error.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.
HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following action is related to DeleteObject:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The If-Match header is supported for both general purpose and directory buckets. IfMatchLastModifiedTime and IfMatchSize are only supported for directory buckets.

Examples:

Example: To delete an object (from a non-versioned bucket)

# The following example deletes an object from a non-versioned bucket.
resp = client.delete_object({
  bucket: "ExampleBucket",
  key: "HappyFace.jpg",
})

Example: To delete an object

# The following example deletes an object from an S3 bucket.
resp = client.delete_object({
  bucket: "examplebucket",
  key: "objectkey.jpg",
})

resp.to_h outputs the following:
{
}

Request syntax with placeholder values

resp = client.delete_object({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  mfa: "MFA",
  version_id: "ObjectVersionId",
  request_payer: "requester", # accepts requester
  bypass_governance_retention: false,
  expected_bucket_owner: "AccountId",
  if_match: "IfMatch",
  if_match_last_modified_time: Time.now,
  if_match_size: 1,
})

Response structure

resp.delete_marker #=> Boolean
resp.version_id #=> String
resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name of the bucket containing the object.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key (required, String)

    Key name of the object to delete.

  • :mfa (String)

    The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.

    This functionality is not supported for directory buckets.

  • :version_id (String)

    Version ID used to reference a specific version of the object.

    For directory buckets in this API operation, only the null value of the version ID is supported.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :bypass_governance_retention (Boolean)

    Indicates whether S3 Object Lock should bypass Governance-mode restrictions to process this operation. To use this header, you must have the s3:BypassGovernanceRetention permission.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :if_match (String)

    Deletes the object if the ETag (entity tag) value provided during the delete operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed error. (See the sketch after this options list.)

    Expects the ETag value as a string. If-Match does accept a string value of an '*' (asterisk) character to denote a match of any ETag.

    For more information about conditional requests, see RFC 7232.

  • :if_match_last_modified_time (Time, DateTime, Date, Integer, String)

    If present, the object is deleted only if its modification time matches the provided Timestamp. If the Timestamp values do not match, the operation returns a 412 Precondition Failed error. If the Timestamp matches or if the object doesn't exist, the operation returns a 204 Success (No Content) response.

    This functionality is only supported for directory buckets.

  • :if_match_size (Integer)

    If present, the object is deleted only if its size matches the provided size in bytes. If the Size value does not match, the operation returns a 412 Precondition Failed error. If the Size matches or if the object doesn't exist, the operation returns a 204 Success (No Content) response.

    This functionality is only supported for directory buckets.

    You can use the If-Match, x-amz-if-match-last-modified-time and x-amz-if-match-size conditional headers in conjunction with each other or individually.
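A minimal sketch of a conditional delete using If-Match, with a placeholder bucket, key, and ETag (the rescued error class is assumed to follow the SDK's convention of generating classes from S3 error codes, here the 412 PreconditionFailed code):

begin
  client.delete_object({
    bucket: "amzn-s3-demo-bucket",
    key: "report.csv",
    if_match: '"placeholder-etag"', # delete only if this is still the current ETag
  })
rescue Aws::S3::Errors::PreconditionFailed
  # 412: the object changed since the ETag was read; re-fetch and decide again.
end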

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5859

def delete_object(params = {}, options = {})
  req = build_request(:delete_object, params)
  req.send_request(options)
end

#delete_object_tagging(params = {}) ⇒ Types::DeleteObjectTaggingOutput

This operation is not supported for directory buckets.

Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.

To use this operation, you must have permission to perform the s3:DeleteObjectTagging action.

To delete tags of a specific object version, add the versionId query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging action.

The following operations are related to DeleteObjectTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To remove tag set from an object

# The following example removes tag set associated with the specified object. If the bucket is versioning enabled, the
# operation removes tag set from the latest object version.
resp = client.delete_object_tagging({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  version_id: "null",
}

Example: To remove tag set from an object version

# The following example removes tag set associated with the specified object version. The request specifies both the
# object key and object version.
resp = client.delete_object_tagging({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
  version_id: "ydlaNkwWm0SfKJR.T1b1fIdPRbldTYRI",
})

resp.to_h outputs the following:
{
  version_id: "ydlaNkwWm0SfKJR.T1b1fIdPRbldTYRI",
}

Request syntax with placeholder values

resp = client.delete_object_tagging({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.version_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name containing the objects from which to remove the tags.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key (required, String)

    The key that identifies the object in the bucket from which to remove all tags.

  • :version_id (String)

    The versionId of the object that the tag-set will be removed from.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 5989

def delete_object_tagging(params = {}, options = {})
  req = build_request(:delete_object_tagging, params)
  req.send_request(options)
end

#delete_objects(params = {}) ⇒ Types::DeleteObjectsOutput

This operation enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.

The request can contain a list of up to 1,000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that delete, success or failure, in the response. If the object specified in the request isn't found, Amazon S3 confirms the deletion by returning the result as deleted.

Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

The operation supports two modes for the response: verbose and quiet. By default, the operation uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete operation encountered an error. For a successful deletion in a quiet mode, the operation does not return any information about the delete in the response body.
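A minimal sketch of quiet mode with placeholder bucket and keys; on full success resp.deleted is empty and only failures appear in resp.errors:

resp = client.delete_objects({
  bucket: "amzn-s3-demo-bucket",
  delete: {
    objects: [
      { key: "logs/2024-01-01.log" },
      { key: "logs/2024-01-02.log" },
    ],
    quiet: true, # suppress per-key success entries in the response
  },
})

resp.errors.each do |err|
  warn "#{err.key}: #{err.code} - #{err.message}" # only failed deletes are listed
end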

When performing this action on an MFA Delete enabled bucket, that attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, whether there are versioned keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete in the Amazon S3 User Guide.

Directory buckets - MFA delete is not supported by directory buckets.

Permissions
  • General purpose bucket permissions - The following permissions are required in your policies when your DeleteObjects request includes specific headers.

    • s3:DeleteObject - To delete an object from a bucket, you must always specify the s3:DeleteObject permission.

    • s3:DeleteObjectVersion - To delete a specific version of an object from a versioning-enabled bucket, you must specify the s3:DeleteObjectVersion permission.

      If the s3:DeleteObject or s3:DeleteObjectVersion permissions are explicitly denied in your bucket policy, attempts to delete any unversioned objects result in a 403 Access Denied error.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. Amazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.
Content-MD5 request header
  • General purpose bucket - The Content-MD5 request header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.

  • Directory bucket - The Content-MD5 request header or an additional checksum request header (including x-amz-checksum-crc32, x-amz-checksum-crc32c, x-amz-checksum-sha1, or x-amz-checksum-sha256) is required for all Multi-Object Delete requests. (See the sketch below.)
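The Ruby SDK computes this required request checksum for you; as a minimal sketch with placeholder names, you can choose which additional checksum is sent via the checksum_algorithm parameter documented further below:

client.delete_objects({
  bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3", # placeholder directory bucket name
  delete: {
    objects: [{ key: "tmp/object-1" }],
  },
  checksum_algorithm: "CRC32C", # SDK sends the matching CRC32C checksum header
})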

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to DeleteObjects:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To delete multiple objects from a versioned bucket

# The following example deletes objects from a bucket. The bucket is versioned, and the request does not specify the
# object version to delete. In this case, all versions remain in the bucket and S3 adds a delete marker.
resp = client.delete_objects({
  bucket: "examplebucket",
  delete: {
    objects: [
      {
        key: "objectkey1",
      },
      {
        key: "objectkey2",
      },
    ],
    quiet: false,
  },
})

resp.to_h outputs the following:
{
  deleted: [
    {
      delete_marker: true,
      delete_marker_version_id: "A._w1z6EFiCF5uhtQMDal9JDkID9tQ7F",
      key: "objectkey1",
    },
    {
      delete_marker: true,
      delete_marker_version_id: "iOd_ORxhkKe_e8G8_oSGxt2PjsCZKlkt",
      key: "objectkey2",
    },
  ],
}

Example: To delete multiple object versions from a versioned bucket

# The following example deletes objects from a bucket. The request specifies object versions. S3 deletes specific object
# versions and returns the key and versions of deleted objects in the response.
resp = client.delete_objects({
  bucket: "examplebucket",
  delete: {
    objects: [
      {
        key: "HappyFace.jpg",
        version_id: "2LWg7lQLnY41.maGB5Z6SWW.dcq0vx7b",
      },
      {
        key: "HappyFace.jpg",
        version_id: "yoz3HB.ZhCS_tKVEmIOr7qYyyAaZSKVd",
      },
    ],
    quiet: false,
  },
})

resp.to_h outputs the following:
{
  deleted: [
    {
      key: "HappyFace.jpg",
      version_id: "yoz3HB.ZhCS_tKVEmIOr7qYyyAaZSKVd",
    },
    {
      key: "HappyFace.jpg",
      version_id: "2LWg7lQLnY41.maGB5Z6SWW.dcq0vx7b",
    },
  ],
}

Request syntax with placeholder values

resp = client.delete_objects({
  bucket: "BucketName", # required
  delete: { # required
    objects: [ # required
      {
        key: "ObjectKey", # required
        version_id: "ObjectVersionId",
        etag: "ETag",
        last_modified_time: Time.now,
        size: 1,
      },
    ],
    quiet: false,
  },
  mfa: "MFA",
  request_payer: "requester", # accepts requester
  bypass_governance_retention: false,
  expected_bucket_owner: "AccountId",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
})

Response structure

resp.deleted #=> Array
resp.deleted[0].key #=> String
resp.deleted[0].version_id #=> String
resp.deleted[0].delete_marker #=> Boolean
resp.deleted[0].delete_marker_version_id #=> String
resp.request_charged #=> String, one of "requester"
resp.errors #=> Array
resp.errors[0].key #=> String
resp.errors[0].version_id #=> String
resp.errors[0].code #=> String
resp.errors[0].message #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name containing the objects to delete.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :delete (required, Types::Delete)

    Container for the request.

  • :mfa (String)

    The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. Required to permanently delete a versioned object if versioning is configured with MFA delete enabled.

    When performing the DeleteObjects operation on an MFA delete enabled bucket, which attempts to delete the specified versioned objects, you must include an MFA token. If you don't provide an MFA token, the entire request will fail, even if there are non-versioned objects that you are trying to delete. If you provide an invalid token, whether there are versioned object keys in the request or not, the entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA Delete in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :bypass_governance_retention (Boolean)

    Specifies whether you want to delete this object even if it has a Governance-type Object Lock in place. To use this header, you must have the s3:BypassGovernanceRetention permission.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :checksum_algorithm (String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

    For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

    • CRC32

    • CRC32C

    • CRC64NVME

    • SHA1

    • SHA256

    For more information, see Checking object integrity in the Amazon S3 User Guide.

    If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 fails the request with a BadDigest error.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6382

def delete_objects(params = {}, options = {})
  req = build_request(:delete_objects, params)
  req.send_request(options)
end

#delete_public_access_block(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Removes the PublicAccessBlock configuration for an Amazon S3 bucket. This operation removes the bucket-level configuration only. The effective public access behavior will still be governed by account-level settings (which may inherit from organization-level policies). To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

The following operations are related to DeletePublicAccessBlock:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.delete_public_access_block({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The Amazon S3 bucket whose PublicAccessBlock configuration you want to delete.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6446

def delete_public_access_block(params = {}, options = {})
  req = build_request(:delete_public_access_block, params)
  req.send_request(options)
end

#get_bucket_abac(params = {}) ⇒ Types::GetBucketAbacOutput

Returns the attribute-based access control (ABAC) property of the general purpose bucket. If ABAC is enabled on your bucket, you can use tags on the bucket for access control. For more information, see Enabling ABAC in general purpose buckets.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_abac({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.abac_status.status #=> String, one of "Enabled", "Disabled"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The name of the general purpose bucket.

  • :expected_bucket_owner (String)

    The Amazon Web Services account ID of the general purpose bucket's owner.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6486

def get_bucket_abac(params = {}, options = {})
  req = build_request(:get_bucket_abac, params)
  req.send_request(options)
end

#get_bucket_accelerate_configuration(params = {}) ⇒ Types::GetBucketAccelerateConfigurationOutput

This operation is not supported for directory buckets.

This implementation of the GET action uses the accelerate subresource to return the Transfer Acceleration state of a bucket, which is either Enabled or Suspended. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.

To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

You set the Transfer Acceleration state of an existing bucket to Enabled or Suspended by using the PutBucketAccelerateConfiguration operation.

A GET accelerate request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.

For more information about transfer acceleration, see Transfer Acceleration in the Amazon S3 User Guide.
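A minimal sketch of handling that absent state, with a placeholder bucket name; status is simply nil when acceleration has never been configured:

resp = client.get_bucket_accelerate_configuration(bucket: "amzn-s3-demo-bucket")

case resp.status
when "Enabled"   then puts "Transfer Acceleration is on"
when "Suspended" then puts "Transfer Acceleration is suspended"
else                  puts "Acceleration has never been configured" # resp.status is nil
end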

The following operations are related to GetBucketAccelerateConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_accelerate_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
  request_payer: "requester", # accepts requester
})

Response structure

resp.status #=> String, one of "Enabled", "Suspended"
resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket for which the accelerate configuration is retrieved.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6586

def get_bucket_accelerate_configuration(params = {}, options = {})
  req = build_request(:get_bucket_accelerate_configuration, params)
  req.send_request(options)
end

#get_bucket_acl(params = {}) ⇒ Types::GetBucketAclOutput

This operation is not supported for directory buckets.

This implementation of the GET action uses the acl subresource to return the access control list (ACL) of a bucket. To use GET to return the ACL of the bucket, you must have the READ_ACP access to the bucket. If READ_ACP permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The following operations are related to GetBucketAcl:


Examples:

Request syntax with placeholder values

resp = client.get_bucket_acl({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.owner.display_name #=> String
resp.owner.id #=> String
resp.grants #=> Array
resp.grants[0].grantee.display_name #=> String
resp.grants[0].grantee.email_address #=> String
resp.grants[0].grantee.id #=> String
resp.grants[0].grantee.type #=> String, one of "CanonicalUser", "AmazonCustomerByEmail", "Group"
resp.grants[0].grantee.uri #=> String
resp.grants[0].permission #=> String, one of "FULL_CONTROL", "WRITE", "WRITE_ACP", "READ", "READ_ACP"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    Specifies the S3 bucket whose ACL is being requested.

    When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6686

def get_bucket_acl(params = {}, options = {})
  req = build_request(:get_bucket_acl, params)
  req.send_request(options)
end

#get_bucket_analytics_configuration(params = {}) ⇒ Types::GetBucketAnalyticsConfigurationOutput

This operation is not supported for directory buckets.

This implementation of the GET action returns an analytics configuration (identified by the analytics configuration ID) from the bucket.

To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.

For information about Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis in the Amazon S3 User Guide.

The following operations are related to GetBucketAnalyticsConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_analytics_configuration({
  bucket: "BucketName", # required
  id: "AnalyticsId", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.analytics_configuration.id #=> String
resp.analytics_configuration.filter.prefix #=> String
resp.analytics_configuration.filter.tag.key #=> String
resp.analytics_configuration.filter.tag.value #=> String
resp.analytics_configuration.filter.and.prefix #=> String
resp.analytics_configuration.filter.and.tags #=> Array
resp.analytics_configuration.filter.and.tags[0].key #=> String
resp.analytics_configuration.filter.and.tags[0].value #=> String
resp.analytics_configuration.storage_class_analysis.data_export.output_schema_version #=> String, one of "V_1"
resp.analytics_configuration.storage_class_analysis.data_export.destination.s3_bucket_destination.format #=> String, one of "CSV"
resp.analytics_configuration.storage_class_analysis.data_export.destination.s3_bucket_destination.bucket_account_id #=> String
resp.analytics_configuration.storage_class_analysis.data_export.destination.s3_bucket_destination.bucket #=> String
resp.analytics_configuration.storage_class_analysis.data_export.destination.s3_bucket_destination.prefix #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket from which an analytics configuration is retrieved.

  • :id (required, String)

    The ID that identifies the analytics configuration.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6776

def get_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_analytics_configuration, params)
  req.send_request(options)
end

#get_bucket_cors(params = {}) ⇒ Types::GetBucketCorsOutput

This operation is not supported for directory buckets.

Returns the Cross-Origin Resource Sharing (CORS) configuration information set for the bucket.

To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

For more information about CORS, see Enabling Cross-Origin Resource Sharing.

The following operations are related to GetBucketCors:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get cors configuration set on a bucket

# The following example returns cross-origin resource sharing (CORS) configuration set on a bucket.
resp = client.get_bucket_cors({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  cors_rules: [
    {
      allowed_headers: [
        "Authorization",
      ],
      allowed_methods: [
        "GET",
      ],
      allowed_origins: [
        "*",
      ],
      max_age_seconds: 3000,
    },
  ],
}

Request syntax with placeholder values

resp = client.get_bucket_cors({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.cors_rules #=> Array
resp.cors_rules[0].id #=> String
resp.cors_rules[0].allowed_headers #=> Array
resp.cors_rules[0].allowed_headers[0] #=> String
resp.cors_rules[0].allowed_methods #=> Array
resp.cors_rules[0].allowed_methods[0] #=> String
resp.cors_rules[0].allowed_origins #=> Array
resp.cors_rules[0].allowed_origins[0] #=> String
resp.cors_rules[0].expose_headers #=> Array
resp.cors_rules[0].expose_headers[0] #=> String
resp.cors_rules[0].max_age_seconds #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The bucket name for which to get the cors configuration.

    When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 6900

def get_bucket_cors(params = {}, options = {})
  req = build_request(:get_bucket_cors, params)
  req.send_request(options)
end

#get_bucket_encryption(params = {}) ⇒ Types::GetBucketEncryptionOutput

Returns the default encryption configuration for an Amazon S3 bucket. By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). This operation also returns the BucketKeyEnabled and BlockedEncryptionTypes statuses.

General purpose buckets - For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - The s3:GetEncryptionConfiguration permission is required in a policy. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Operations and Managing Access Permissions to Your Amazon S3 Resources.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:GetEncryptionConfiguration permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to GetBucketEncryption:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_encryption({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.server_side_encryption_configuration.rules #=> Array
resp.server_side_encryption_configuration.rules[0].apply_server_side_encryption_by_default.sse_algorithm #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.server_side_encryption_configuration.rules[0].apply_server_side_encryption_by_default.kms_master_key_id #=> String
resp.server_side_encryption_configuration.rules[0].bucket_key_enabled #=> Boolean
resp.server_side_encryption_configuration.rules[0].blocked_encryption_types.encryption_type #=> Array
resp.server_side_encryption_configuration.rules[0].blocked_encryption_types.encryption_type[0] #=> String, one of "NONE", "SSE-C"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket from which the server-side encryption configuration is retrieved.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7024

def get_bucket_encryption(params = {}, options = {})
  req = build_request(:get_bucket_encryption, params)
  req.send_request(options)
end

#get_bucket_intelligent_tiering_configuration(params = {}) ⇒Types::GetBucketIntelligentTieringConfigurationOutput

This operation is not supported for directory buckets.

Gets the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to GetBucketIntelligentTieringConfiguration include:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_intelligent_tiering_configuration({
  bucket: "BucketName", # required
  id: "IntelligentTieringId", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.intelligent_tiering_configuration.id #=> String
resp.intelligent_tiering_configuration.filter.prefix #=> String
resp.intelligent_tiering_configuration.filter.tag.key #=> String
resp.intelligent_tiering_configuration.filter.tag.value #=> String
resp.intelligent_tiering_configuration.filter.and.prefix #=> String
resp.intelligent_tiering_configuration.filter.and.tags #=> Array
resp.intelligent_tiering_configuration.filter.and.tags[0].key #=> String
resp.intelligent_tiering_configuration.filter.and.tags[0].value #=> String
resp.intelligent_tiering_configuration.status #=> String, one of "Enabled", "Disabled"
resp.intelligent_tiering_configuration.tierings #=> Array
resp.intelligent_tiering_configuration.tierings[0].days #=> Integer
resp.intelligent_tiering_configuration.tierings[0].access_tier #=> String, one of "ARCHIVE_ACCESS", "DEEP_ARCHIVE_ACCESS"
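
A short sketch of reading the archive tiers from a configuration (the bucket name and configuration ID are illustrative):

resp = client.get_bucket_intelligent_tiering_configuration(
  bucket: "amzn-s3-demo-bucket",
  id: "ExampleTieringConfig"
)
resp.intelligent_tiering_configuration.tierings.each do |tiering|
  # Each tiering entry says after how many consecutive days without
  # access objects move to the given archive access tier.
  puts "#{tiering.access_tier} after #{tiering.days} days"
end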

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose configuration you want to modify or retrieve.

  • :id(required,String)

    The ID used to identify the S3 Intelligent-Tiering configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7118

def get_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:get_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end

#get_bucket_inventory_configuration(params = {}) ⇒Types::GetBucketInventoryConfigurationOutput

This operation is not supported for directory buckets.

Returns an S3 Inventory configuration (identified by the inventory configuration ID) from the bucket.

To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

The following operations are related to GetBucketInventoryConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_inventory_configuration({
  bucket: "BucketName", # required
  id: "InventoryId", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.inventory_configuration.destination.s3_bucket_destination.account_id #=> String
resp.inventory_configuration.destination.s3_bucket_destination.bucket #=> String
resp.inventory_configuration.destination.s3_bucket_destination.format #=> String, one of "CSV", "ORC", "Parquet"
resp.inventory_configuration.destination.s3_bucket_destination.prefix #=> String
resp.inventory_configuration.destination.s3_bucket_destination.encryption.ssekms.key_id #=> String
resp.inventory_configuration.is_enabled #=> Boolean
resp.inventory_configuration.filter.prefix #=> String
resp.inventory_configuration.id #=> String
resp.inventory_configuration.included_object_versions #=> String, one of "All", "Current"
resp.inventory_configuration.optional_fields #=> Array
resp.inventory_configuration.optional_fields[0] #=> String, one of "Size", "LastModifiedDate", "StorageClass", "ETag", "IsMultipartUploaded", "ReplicationStatus", "EncryptionStatus", "ObjectLockRetainUntilDate", "ObjectLockMode", "ObjectLockLegalHoldStatus", "IntelligentTieringAccessTier", "BucketKeyStatus", "ChecksumAlgorithm", "ObjectAccessControlList", "ObjectOwner", "LifecycleExpirationDate"
resp.inventory_configuration.schedule.frequency #=> String, one of "Daily", "Weekly"
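
A minimal sketch of inspecting where and how often inventory reports are delivered (the bucket name and configuration ID are illustrative):

resp = client.get_bucket_inventory_configuration(
  bucket: "amzn-s3-demo-bucket",
  id: "report1"
)
config = resp.inventory_configuration
dest = config.destination.s3_bucket_destination
puts "#{dest.bucket} (#{dest.format}), #{config.schedule.frequency}, enabled: #{config.is_enabled}"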

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the inventory configuration to retrieve.

  • :id(required,String)

    The ID used to identify the inventory configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7205

def get_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:get_bucket_inventory_configuration, params)
  req.send_request(options)
end

#get_bucket_lifecycle(params = {}) ⇒Types::GetBucketLifecycleOutput

For an updated version of this API, see GetBucketLifecycleConfiguration. If you configured a bucket lifecycle using the filter element, you should see the updated version of this topic. This topic is provided for backward compatibility.

This operation is not supported for directory buckets.

Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.

To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

GetBucketLifecycle has the following special error:

  • Error code: NoSuchLifecycleConfiguration

    • Description: The lifecycle configuration does not exist.

    • HTTP Status Code: 404 Not Found

    • SOAP Fault Code Prefix: Client
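
A minimal sketch of handling this error with the Ruby SDK (hedged: it rescues the general service error and checks the error code rather than relying on a dynamically generated error class; the bucket name is illustrative):

begin
  resp = client.get_bucket_lifecycle(bucket: "amzn-s3-demo-bucket")
rescue Aws::S3::Errors::ServiceError => e
  raise unless e.code == "NoSuchLifecycleConfiguration"
  puts "No lifecycle configuration is set on this bucket."
end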

The following operations are related to GetBucketLifecycle:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get lifecycle configuration on a bucket

# The following example gets the lifecycle configuration on the specified bucket.

resp = client.get_bucket_lifecycle({
  bucket: "acl1",
})

resp.to_h outputs the following:
{
  rules: [
    {
      expiration: {
        days: 1,
      },
      id: "delete logs",
      prefix: "123/",
      status: "Enabled",
    },
  ],
}

Request syntax with placeholder values

resp = client.get_bucket_lifecycle({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.rules #=> Array
resp.rules[0].expiration.date #=> Time
resp.rules[0].expiration.days #=> Integer
resp.rules[0].expiration.expired_object_delete_marker #=> Boolean
resp.rules[0].id #=> String
resp.rules[0].prefix #=> String
resp.rules[0].status #=> String, one of "Enabled", "Disabled"
resp.rules[0].transition.date #=> Time
resp.rules[0].transition.days #=> Integer
resp.rules[0].transition.storage_class #=> String, one of "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "GLACIER_IR"
resp.rules[0].noncurrent_version_transition.noncurrent_days #=> Integer
resp.rules[0].noncurrent_version_transition.storage_class #=> String, one of "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "GLACIER_IR"
resp.rules[0].noncurrent_version_transition.newer_noncurrent_versions #=> Integer
resp.rules[0].noncurrent_version_expiration.noncurrent_days #=> Integer
resp.rules[0].noncurrent_version_expiration.newer_noncurrent_versions #=> Integer
resp.rules[0].abort_incomplete_multipart_upload.days_after_initiation #=> Integer

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the lifecycle information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7327

def get_bucket_lifecycle(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle, params)
  req.send_request(options)
end

#get_bucket_lifecycle_configuration(params = {}) ⇒Types::GetBucketLifecycleConfigurationOutput

Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.

Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API, which is compatible with the new functionality. The previous version of the API supported filtering based only on an object key name prefix, which is supported for general purpose buckets for backward compatibility. For the related API description, see GetBucketLifecycle.

Lifecycle configurations for directory buckets only support expiring objects and cancelling multipart uploads. Expiring of versioned objects, transitions, and tag filters are not supported.

Permissions
  • General purpose bucket permissions - By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must have the s3:GetLifecycleConfiguration permission.

    For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.

  • Directory bucket permissions - You must have the s3express:GetLifecycleConfiguration permission in an IAM identity-based policy to use this operation. Cross-account access to this API operation isn't supported. The resource owner can optionally grant access permissions to others by creating a role or user for them as long as they are within the same account as the owner and resource.

    For more information about directory bucket policies and permissions, see Authorizing Regional endpoint APIs with IAM in the Amazon S3 User Guide.

    Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

GetBucketLifecycleConfiguration has the following special error:

  • Error code: NoSuchLifecycleConfiguration

    • Description: The lifecycle configuration does not exist.

    • HTTP Status Code: 404 Not Found

    • SOAP Fault Code Prefix: Client

The following operations are related to GetBucketLifecycleConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get lifecycle configuration on a bucket

# The following example retrieves the lifecycle configuration set on a bucket.

resp = client.get_bucket_lifecycle_configuration({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  rules: [
    {
      id: "Rule for TaxDocs/",
      prefix: "TaxDocs",
      status: "Enabled",
      transitions: [
        {
          days: 365,
          storage_class: "STANDARD_IA",
        },
      ],
    },
  ],
}

Request syntax with placeholder values

resp = client.get_bucket_lifecycle_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.rules #=> Array
resp.rules[0].expiration.date #=> Time
resp.rules[0].expiration.days #=> Integer
resp.rules[0].expiration.expired_object_delete_marker #=> Boolean
resp.rules[0].id #=> String
resp.rules[0].prefix #=> String
resp.rules[0].filter.prefix #=> String
resp.rules[0].filter.tag.key #=> String
resp.rules[0].filter.tag.value #=> String
resp.rules[0].filter.object_size_greater_than #=> Integer
resp.rules[0].filter.object_size_less_than #=> Integer
resp.rules[0].filter.and.prefix #=> String
resp.rules[0].filter.and.tags #=> Array
resp.rules[0].filter.and.tags[0].key #=> String
resp.rules[0].filter.and.tags[0].value #=> String
resp.rules[0].filter.and.object_size_greater_than #=> Integer
resp.rules[0].filter.and.object_size_less_than #=> Integer
resp.rules[0].status #=> String, one of "Enabled", "Disabled"
resp.rules[0].transitions #=> Array
resp.rules[0].transitions[0].date #=> Time
resp.rules[0].transitions[0].days #=> Integer
resp.rules[0].transitions[0].storage_class #=> String, one of "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "GLACIER_IR"
resp.rules[0].noncurrent_version_transitions #=> Array
resp.rules[0].noncurrent_version_transitions[0].noncurrent_days #=> Integer
resp.rules[0].noncurrent_version_transitions[0].storage_class #=> String, one of "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "GLACIER_IR"
resp.rules[0].noncurrent_version_transitions[0].newer_noncurrent_versions #=> Integer
resp.rules[0].noncurrent_version_expiration.noncurrent_days #=> Integer
resp.rules[0].noncurrent_version_expiration.newer_noncurrent_versions #=> Integer
resp.rules[0].abort_incomplete_multipart_upload.days_after_initiation #=> Integer
resp.transition_default_minimum_object_size #=> String, one of "varies_by_storage_class", "all_storage_classes_128K"
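
A short sketch of walking the returned rules (the bucket name is illustrative; only enabled rules with transitions are printed):

resp = client.get_bucket_lifecycle_configuration(bucket: "amzn-s3-demo-bucket")
resp.rules.each do |rule|
  next unless rule.status == "Enabled"
  rule.transitions.each do |transition|
    puts "#{rule.id}: move to #{transition.storage_class} after #{transition.days} days"
  end
end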

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the lifecycle information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7517

def get_bucket_lifecycle_configuration(params = {}, options = {})
  req = build_request(:get_bucket_lifecycle_configuration, params)
  req.send_request(options)
end

#get_bucket_location(params = {}) ⇒Types::GetBucketLocationOutput

Using the GetBucketLocation operation is no longer a best practice. To return the Region that a bucket resides in, we recommend that you use the HeadBucket operation instead. For backward compatibility, Amazon S3 continues to support the GetBucketLocation operation.

Returns the Region the bucket resides in. You set the bucket's Region using the LocationConstraint request parameter in a CreateBucket request. For more information, see CreateBucket.

In a bucket's home Region, calls to the GetBucketLocation operation are governed by the bucket's policy. In other Regions, the bucket policy doesn't apply, which means that cross-account access won't be authorized. However, calls to the HeadBucket operation always return the bucket's location through an HTTP response header, whether access to the bucket is authorized or not. Therefore, we recommend using the HeadBucket operation for bucket Region discovery and to avoid using the GetBucketLocation operation.
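
A minimal sketch of the recommended HeadBucket approach (assuming a recent SDK version whose HeadBucket output exposes bucket_region; the bucket name is illustrative):

resp = client.head_bucket(bucket: "amzn-s3-demo-bucket")
# The Region is also returned in the x-amz-bucket-region response header.
puts resp.bucket_region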

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

This operation is not supported for directory buckets.

The following operations are related to GetBucketLocation:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get bucket location

# The following example returns bucket location.

resp = client.get_bucket_location({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  location_constraint: "us-west-2",
}

Request syntax with placeholder values

resp = client.get_bucket_location({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.location_constraint #=> String, one of "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ap-southeast-5", "ca-central-1", "cn-north-1", "cn-northwest-1", "EU", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the location.

    When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7628

def get_bucket_location(params = {}, options = {})
  req = build_request(:get_bucket_location, params)
  req.send_request(options)
end

#get_bucket_logging(params = {}) ⇒Types::GetBucketLoggingOutput

This operation is not supported for directory buckets.

Returns the logging status of a bucket and the permissions users have to view and modify that status.

The following operations are related to GetBucketLogging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_logging({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.logging_enabled.target_bucket #=> String
resp.logging_enabled.target_grants #=> Array
resp.logging_enabled.target_grants[0].grantee.display_name #=> String
resp.logging_enabled.target_grants[0].grantee.email_address #=> String
resp.logging_enabled.target_grants[0].grantee.id #=> String
resp.logging_enabled.target_grants[0].grantee.type #=> String, one of "CanonicalUser", "AmazonCustomerByEmail", "Group"
resp.logging_enabled.target_grants[0].grantee.uri #=> String
resp.logging_enabled.target_grants[0].permission #=> String, one of "FULL_CONTROL", "READ", "WRITE"
resp.logging_enabled.target_prefix #=> String
resp.logging_enabled.target_object_key_format.partitioned_prefix.partition_date_source #=> String, one of "EventTime", "DeliveryTime"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name for which to get the logging information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7691

def get_bucket_logging(params = {}, options = {})
  req = build_request(:get_bucket_logging, params)
  req.send_request(options)
end

#get_bucket_metadata_configuration(params = {}) ⇒Types::GetBucketMetadataConfigurationOutput

Retrieves the S3 Metadata configuration for a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

You can use the V2 GetBucketMetadataConfiguration API operation with V1 or V2 metadata configurations. However, if you try to use the V1 GetBucketMetadataTableConfiguration API operation with V2 configurations, you will receive an HTTP 405 Method Not Allowed error.

Permissions

To use this operation, you must have the s3:GetBucketMetadataTableConfiguration permission. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

The IAM policy action name is the same for the V1 and V2 API operations.

The following operations are related to GetBucketMetadataConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_metadata_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.get_bucket_metadata_configuration_result.metadata_configuration_result.destination_result.table_bucket_type #=> String, one of "aws", "customer"
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.destination_result.table_bucket_arn #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.destination_result.table_namespace #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.table_status #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.error.error_code #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.error.error_message #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.table_name #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.table_arn #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.record_expiration.expiration #=> String, one of "ENABLED", "DISABLED"
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.journal_table_configuration_result.record_expiration.days #=> Integer
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.configuration_state #=> String, one of "ENABLED", "DISABLED"
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.table_status #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.error.error_code #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.error.error_message #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.table_name #=> String
resp.get_bucket_metadata_configuration_result.metadata_configuration_result.inventory_table_configuration_result.table_arn #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that corresponds to the metadata configuration that you want to retrieve.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that you want to retrieve the metadata table configuration for.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7786

def get_bucket_metadata_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metadata_configuration, params)
  req.send_request(options)
end

#get_bucket_metadata_table_configuration(params = {}) ⇒Types::GetBucketMetadataTableConfigurationOutput

We recommend that you retrieve your S3 Metadata configurations by using the V2 GetBucketMetadataConfiguration API operation. We no longer recommend using the V1 GetBucketMetadataTableConfiguration API operation.

If you created your S3 Metadata configuration before July 15, 2025, we recommend that you delete and re-create your configuration by using CreateBucketMetadataConfiguration so that you can expire journal table records and create a live inventory table.

Retrieves the V1 S3 Metadata configuration for a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

You can use the V2 GetBucketMetadataConfiguration API operation with V1 or V2 metadata table configurations. However, if you try to use the V1 GetBucketMetadataTableConfiguration API operation with V2 configurations, you will receive an HTTP 405 Method Not Allowed error.

Make sure that you update your processes to use the new V2 API operations (CreateBucketMetadataConfiguration, GetBucketMetadataConfiguration, and DeleteBucketMetadataConfiguration) instead of the V1 API operations.

Permissions

To use this operation, you must have the s3:GetBucketMetadataTableConfiguration permission. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

The following operations are related to GetBucketMetadataTableConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_metadata_table_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.get_bucket_metadata_table_configuration_result.metadata_table_configuration_result.s3_tables_destination_result.table_bucket_arn #=> String
resp.get_bucket_metadata_table_configuration_result.metadata_table_configuration_result.s3_tables_destination_result.table_name #=> String
resp.get_bucket_metadata_table_configuration_result.metadata_table_configuration_result.s3_tables_destination_result.table_arn #=> String
resp.get_bucket_metadata_table_configuration_result.metadata_table_configuration_result.s3_tables_destination_result.table_namespace #=> String
resp.get_bucket_metadata_table_configuration_result.status #=> String
resp.get_bucket_metadata_table_configuration_result.error.error_code #=> String
resp.get_bucket_metadata_table_configuration_result.error.error_message #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that corresponds to the metadata table configuration that you want to retrieve.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that you want to retrieve the metadata table configuration for.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7879

def get_bucket_metadata_table_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metadata_table_configuration, params)
  req.send_request(options)
end

#get_bucket_metrics_configuration(params = {}) ⇒Types::GetBucketMetricsConfigurationOutput

This operation is not supported for directory buckets.

Gets a metrics configuration (specified by the metrics configuration ID) from the bucket. Note that this doesn't include the daily storage metrics.

To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to GetBucketMetricsConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_metrics_configuration({
  bucket: "BucketName", # required
  id: "MetricsId", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.metrics_configuration.id #=> String
resp.metrics_configuration.filter.prefix #=> String
resp.metrics_configuration.filter.tag.key #=> String
resp.metrics_configuration.filter.tag.value #=> String
resp.metrics_configuration.filter.access_point_arn #=> String
resp.metrics_configuration.filter.and.prefix #=> String
resp.metrics_configuration.filter.and.tags #=> Array
resp.metrics_configuration.filter.and.tags[0].key #=> String
resp.metrics_configuration.filter.and.tags[0].value #=> String
resp.metrics_configuration.filter.and.access_point_arn #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the metrics configuration to retrieve.

  • :id(required,String)

    The ID used to identify the metrics configuration. The ID has a 64 character limit and can only contain letters, numbers, periods, dashes, and underscores.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 7969

def get_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:get_bucket_metrics_configuration, params)
  req.send_request(options)
end

#get_bucket_notification(params = {}) ⇒Types::NotificationConfiguration (Deprecated)

This operation is not supported for directory buckets.

No longer used, see GetBucketNotificationConfiguration.

Examples:

Example: To get notification configuration set on a bucket

# The following example returns notification configuration set on a bucket.

resp = client.get_bucket_notification({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  queue_configuration: {
    event: "s3:ObjectCreated:Put",
    events: [
      "s3:ObjectCreated:Put",
    ],
    id: "MDQ2OGQ4NDEtOTBmNi00YTM4LTk0NzYtZDIwN2I3NWQ1NjIx",
    queue: "arn:aws:sqs:us-east-1:acct-id:S3ObjectCreatedEventQueue",
  },
  topic_configuration: {
    event: "s3:ObjectCreated:Copy",
    events: [
      "s3:ObjectCreated:Copy",
    ],
    id: "YTVkMWEzZGUtNTY1NS00ZmE2LWJjYjktMmRlY2QwODFkNTJi",
    topic: "arn:aws:sns:us-east-1:acct-id:S3ObjectCreatedEventTopic",
  },
}

Request syntax with placeholder values

resp = client.get_bucket_notification({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.topic_configuration.id #=> String
resp.topic_configuration.events #=> Array
resp.topic_configuration.events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.topic_configuration.event #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.topic_configuration.topic #=> String
resp.queue_configuration.id #=> String
resp.queue_configuration.event #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.queue_configuration.events #=> Array
resp.queue_configuration.events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.queue_configuration.queue #=> String
resp.cloud_function_configuration.id #=> String
resp.cloud_function_configuration.event #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.cloud_function_configuration.events #=> Array
resp.cloud_function_configuration.events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.cloud_function_configuration.cloud_function #=> String
resp.cloud_function_configuration.invocation_role #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the notification configuration.

    When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8100

def get_bucket_notification(params = {}, options = {})
  req = build_request(:get_bucket_notification, params)
  req.send_request(options)
end

#get_bucket_notification_configuration(params = {}) ⇒Types::NotificationConfiguration

This operation is not supported for directory buckets.

Returns the notification configuration of a bucket.

If notifications are not enabled on the bucket, the action returns an empty NotificationConfiguration element.

By default, you must be the bucket owner to read the notification configuration of a bucket. However, the bucket owner can use a bucket policy to grant permission to other users to read this configuration with the s3:GetBucketNotification permission.

When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

For more information about setting and reading the notification configuration on a bucket, see Setting Up Notification of Bucket Events. For more information about bucket policies, see Using Bucket Policies.

The following action is related to GetBucketNotification:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_notification_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.topic_configurations #=> Array
resp.topic_configurations[0].id #=> String
resp.topic_configurations[0].topic_arn #=> String
resp.topic_configurations[0].events #=> Array
resp.topic_configurations[0].events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.topic_configurations[0].filter.key.filter_rules #=> Array
resp.topic_configurations[0].filter.key.filter_rules[0].name #=> String, one of "prefix", "suffix"
resp.topic_configurations[0].filter.key.filter_rules[0].value #=> String
resp.queue_configurations #=> Array
resp.queue_configurations[0].id #=> String
resp.queue_configurations[0].queue_arn #=> String
resp.queue_configurations[0].events #=> Array
resp.queue_configurations[0].events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.queue_configurations[0].filter.key.filter_rules #=> Array
resp.queue_configurations[0].filter.key.filter_rules[0].name #=> String, one of "prefix", "suffix"
resp.queue_configurations[0].filter.key.filter_rules[0].value #=> String
resp.lambda_function_configurations #=> Array
resp.lambda_function_configurations[0].id #=> String
resp.lambda_function_configurations[0].lambda_function_arn #=> String
resp.lambda_function_configurations[0].events #=> Array
resp.lambda_function_configurations[0].events[0] #=> String, one of "s3:ReducedRedundancyLostObject", "s3:ObjectCreated:*", "s3:ObjectCreated:Put", "s3:ObjectCreated:Post", "s3:ObjectCreated:Copy", "s3:ObjectCreated:CompleteMultipartUpload", "s3:ObjectRemoved:*", "s3:ObjectRemoved:Delete", "s3:ObjectRemoved:DeleteMarkerCreated", "s3:ObjectRestore:*", "s3:ObjectRestore:Post", "s3:ObjectRestore:Completed", "s3:Replication:*", "s3:Replication:OperationFailedReplication", "s3:Replication:OperationNotTracked", "s3:Replication:OperationMissedThreshold", "s3:Replication:OperationReplicatedAfterThreshold", "s3:ObjectRestore:Delete", "s3:LifecycleTransition", "s3:IntelligentTiering", "s3:ObjectAcl:Put", "s3:LifecycleExpiration:*", "s3:LifecycleExpiration:Delete", "s3:LifecycleExpiration:DeleteMarkerCreated", "s3:ObjectTagging:*", "s3:ObjectTagging:Put", "s3:ObjectTagging:Delete"
resp.lambda_function_configurations[0].filter.key.filter_rules #=> Array
resp.lambda_function_configurations[0].filter.key.filter_rules[0].name #=> String, one of "prefix", "suffix"
resp.lambda_function_configurations[0].filter.key.filter_rules[0].value #=> String
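
A short sketch enumerating all three configuration types from the response (the bucket name is illustrative):

resp = client.get_bucket_notification_configuration(bucket: "amzn-s3-demo-bucket")
resp.topic_configurations.each { |c| puts "SNS #{c.topic_arn}: #{c.events.join(', ')}" }
resp.queue_configurations.each { |c| puts "SQS #{c.queue_arn}: #{c.events.join(', ')}" }
resp.lambda_function_configurations.each { |c| puts "Lambda #{c.lambda_function_arn}: #{c.events.join(', ')}" }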

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the notification configuration.

    When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8219

def get_bucket_notification_configuration(params = {}, options = {})
  req = build_request(:get_bucket_notification_configuration, params)
  req.send_request(options)
end

#get_bucket_ownership_controls(params = {}) ⇒Types::GetBucketOwnershipControlsOutput

This operation is not supported for directory buckets.

Retrieves OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying permissions in a policy.

A bucket doesn't have OwnershipControls settings in the following cases:

  • The bucket was created before the BucketOwnerEnforced ownership setting was introduced and you've never explicitly applied this value

  • You've manually deleted the bucket ownership control value using the DeleteBucketOwnershipControls API operation.

By default, Amazon S3 sets OwnershipControls for all newly created buckets.

For information about Amazon S3 Object Ownership, see Using Object Ownership.

The following operations are related to GetBucketOwnershipControls:

  • PutBucketOwnershipControls

  • DeleteBucketOwnershipControls

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_ownership_controls({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.ownership_controls.rules #=> Array
resp.ownership_controls.rules[0].object_ownership #=> String, one of "BucketOwnerPreferred", "ObjectWriter", "BucketOwnerEnforced"
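
A minimal sketch of checking whether ACLs are effectively disabled on a bucket (BucketOwnerEnforced is the setting that disables ACLs; the bucket name is illustrative):

resp = client.get_bucket_ownership_controls(bucket: "amzn-s3-demo-bucket")
ownership = resp.ownership_controls.rules.first&.object_ownership
puts "ACLs are disabled" if ownership == "BucketOwnerEnforced"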

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose OwnershipControls you want to retrieve.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8295

def get_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:get_bucket_ownership_controls, params)
  req.send_request(options)
end

#get_bucket_policy(params = {}) ⇒Types::GetBucketPolicyOutput

Returns the policy of a specified bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the GetBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have GetBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy, PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.

  • General purpose bucket permissions - The s3:GetBucketPolicy permission is required in a policy. For more information about general purpose bucket policies, see Using Bucket Policies and User Policies in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:GetBucketPolicy permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

Example bucket policies

General purpose buckets example bucket policies - See Bucket policy examples in the Amazon S3 User Guide.

Directory bucket example bucket policies - See Example bucket policies for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following action is related to GetBucketPolicy:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get bucket policy

# The following example returns bucket policy associated with a bucket.

resp = client.get_bucket_policy({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  policy: "{\"Version\":\"2008-10-17\",\"Id\":\"LogPolicy\",\"Statement\":[{\"Sid\":\"Enables the log delivery group to publish logs to your bucket \",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"111122223333\"},\"Action\":[\"s3:GetBucketAcl\",\"s3:GetObjectAcl\",\"s3:PutObject\"],\"Resource\":[\"arn:aws:s3:::policytest1/*\",\"arn:aws:s3:::policytest1\"]}]}",
}

Request syntax with placeholder values

resp = client.get_bucket_policy({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.policy #=> String
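
Because the policy is returned as a JSON document, it can be parsed with the standard library. A hedged sketch (in this SDK the policy body may arrive as an IO-like object, so it is read defensively; the bucket name is illustrative):

require "json"

resp = client.get_bucket_policy(bucket: "amzn-s3-demo-bucket")
# Read the body whether it is a plain String or a streamed IO object.
raw = resp.policy.respond_to?(:read) ? resp.policy.read : resp.policy
policy = JSON.parse(raw)
policy["Statement"].each { |s| puts "#{s['Sid']}: #{s['Effect']}" }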

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name to get the bucket policy for.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this API operation with an access point, provide the alias of the access point in place of the bucket name.

    Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

    Object Lambda access points are not supported by directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8464

def get_bucket_policy(params = {}, options = {}, &block)
  req = build_request(:get_bucket_policy, params)
  req.send_request(options, &block)
end

#get_bucket_policy_status(params = {}) ⇒Types::GetBucketPolicyStatusOutput

This operation is not supported for directory buckets.

Retrieves the policy status for an Amazon S3 bucket, indicating whether the bucket is public. In order to use this operation, you must have the s3:GetBucketPolicyStatus permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

For more information about when Amazon S3 considers a bucket public, see The Meaning of "Public".

The following operations are related to GetBucketPolicyStatus:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_bucket_policy_status({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.policy_status.is_public #=> Boolean
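
A small sketch that pairs this call with the SDK's built-in response stubbing for tests (stub_responses comes from the included ClientStubs module; the stubbed value is illustrative):

stubbed = Aws::S3::Client.new(stub_responses: true)
stubbed.stub_responses(:get_bucket_policy_status, policy_status: { is_public: false })
resp = stubbed.get_bucket_policy_status(bucket: "amzn-s3-demo-bucket")
puts resp.policy_status.is_public #=> false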

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose policy status you want to retrieve.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8533

def get_bucket_policy_status(params = {}, options = {})
  req = build_request(:get_bucket_policy_status, params)
  req.send_request(options)
end

#get_bucket_replication(params = {}) ⇒Types::GetBucketReplicationOutput

This operation is not supported for directory buckets.

Returns the replication configuration of a bucket.

It can take a while for a put or delete of a replication configuration to propagate to all Amazon S3 systems. Therefore, a get request soon after a put or delete can return a wrong result.
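
Because of that propagation delay, a read issued immediately after a write may need to be retried. A hedged polling sketch (the rule ID and retry budget are illustrative, not an SDK feature):

attempts = 0
begin
  resp = client.get_bucket_replication(bucket: "amzn-s3-demo-bucket")
  found = resp.replication_configuration.rules.any? { |r| r.id == "example-rule" }
  raise "replication rule not visible yet" unless found
rescue Aws::S3::Errors::ServiceError, RuntimeError
  attempts += 1
  raise if attempts > 3
  sleep(2**attempts) # simple exponential backoff
  retry
end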

For information about replication configuration, see Replication in the Amazon S3 User Guide.

This action requires permissions for the s3:GetReplicationConfiguration action. For more information about permissions, see Using Bucket Policies and User Policies.

If you include the Filter element in a replication configuration, you must also include the DeleteMarkerReplication and Priority elements. The response also returns those elements.

For information about GetBucketReplication errors, see List of replication-related error codes.

The following operations are related to GetBucketReplication:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get replication configuration set on a bucket

# The following example returns replication configuration set on a bucket.
resp = client.get_bucket_replication({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  replication_configuration: {
    role: "arn:aws:iam::acct-id:role/example-role",
    rules: [
      {
        destination: {
          bucket: "arn:aws:s3:::destination-bucket",
        },
        id: "MWIwNTkwZmItMTE3MS00ZTc3LWJkZDEtNzRmODQwYzc1OTQy",
        prefix: "Tax",
        status: "Enabled",
      },
    ],
  },
}

Request syntax with placeholder values

resp = client.get_bucket_replication({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.replication_configuration.role #=> String
resp.replication_configuration.rules #=> Array
resp.replication_configuration.rules[0].id #=> String
resp.replication_configuration.rules[0].priority #=> Integer
resp.replication_configuration.rules[0].prefix #=> String
resp.replication_configuration.rules[0].filter.prefix #=> String
resp.replication_configuration.rules[0].filter.tag.key #=> String
resp.replication_configuration.rules[0].filter.tag.value #=> String
resp.replication_configuration.rules[0].filter.and.prefix #=> String
resp.replication_configuration.rules[0].filter.and.tags #=> Array
resp.replication_configuration.rules[0].filter.and.tags[0].key #=> String
resp.replication_configuration.rules[0].filter.and.tags[0].value #=> String
resp.replication_configuration.rules[0].status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].source_selection_criteria.sse_kms_encrypted_objects.status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].source_selection_criteria.replica_modifications.status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].existing_object_replication.status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].destination.bucket #=> String
resp.replication_configuration.rules[0].destination.account #=> String
resp.replication_configuration.rules[0].destination.storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.replication_configuration.rules[0].destination.access_control_translation.owner #=> String, one of "Destination"
resp.replication_configuration.rules[0].destination.encryption_configuration.replica_kms_key_id #=> String
resp.replication_configuration.rules[0].destination.replication_time.status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].destination.replication_time.time.minutes #=> Integer
resp.replication_configuration.rules[0].destination.metrics.status #=> String, one of "Enabled", "Disabled"
resp.replication_configuration.rules[0].destination.metrics.event_threshold.minutes #=> Integer
resp.replication_configuration.rules[0].delete_marker_replication.status #=> String, one of "Enabled", "Disabled"
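As a usage sketch (the bucket name is a placeholder, and the bucket is assumed to already have a replication configuration), you might summarize each rule like this:

resp = client.get_bucket_replication(bucket: "amzn-s3-demo-bucket")
resp.replication_configuration.rules.each do |rule|
  puts "#{rule.id}: #{rule.status} -> #{rule.destination.bucket}"
end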

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name for which to get the replication information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8660

def get_bucket_replication(params = {}, options = {})
  req = build_request(:get_bucket_replication, params)
  req.send_request(options)
end

#get_bucket_request_payment(params = {}) ⇒Types::GetBucketRequestPaymentOutput

This operation is not supported for directory buckets.

Returns the request payment configuration of a bucket. To use this version of the operation, you must be the bucket owner. For more information, see Requester Pays Buckets.

The following operations are related to GetBucketRequestPayment:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get bucket request payment configuration

# The following example retrieves the request payment configuration of a bucket.
resp = client.get_bucket_request_payment({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  payer: "BucketOwner",
}

Request syntax with placeholder values

resp = client.get_bucket_request_payment({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.payer #=> String, one of "Requester", "BucketOwner"
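A small illustrative sketch (placeholder bucket name) that checks who pays for downloads:

resp = client.get_bucket_request_payment(bucket: "amzn-s3-demo-bucket")
puts(resp.payer == "Requester" ? "requester pays for downloads" : "bucket owner pays")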

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the payment request configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8730

def get_bucket_request_payment(params = {}, options = {})
  req = build_request(:get_bucket_request_payment, params)
  req.send_request(options)
end

#get_bucket_tagging(params = {}) ⇒Types::GetBucketTaggingOutput

This operation is not supported for directory buckets.

Returns the tag set associated with the general purpose bucket if ABAC is not enabled for the bucket. When you enable ABAC for a general purpose bucket, you can no longer use this operation for that bucket and must use ListTagsForResource instead.

To use this operation, you must have permission to perform the s3:GetBucketTagging action. By default, the bucket owner has this permission and can grant this permission to others.

GetBucketTagging has the following special error:

  • Error code: NoSuchTagSet

    • Description: There is no tag set associated with the bucket.

The following operations are related to GetBucketTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get tag set associated with a bucket

# The following example returns tag set associated with a bucket
resp = client.get_bucket_tagging({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  tag_set: [
    {
      key: "key1",
      value: "value1",
    },
    {
      key: "key2",
      value: "value2",
    },
  ],
}

Request syntax with placeholder values

resp = client.get_bucket_tagging({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.tag_set #=> Array
resp.tag_set[0].key #=> String
resp.tag_set[0].value #=> String
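As a hedged sketch (placeholder bucket name), you can fold the tag set into a plain Hash; the rescue assumes the SDK surfaces the special NoSuchTagSet error code described above as Aws::S3::Errors::NoSuchTagSet:

begin
  resp = client.get_bucket_tagging(bucket: "amzn-s3-demo-bucket")
  tags = resp.tag_set.map { |t| [t.key, t.value] }.to_h
  puts tags.inspect
rescue Aws::S3::Errors::NoSuchTagSet
  puts "no tag set on this bucket"
end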

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the tagging information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8826

def get_bucket_tagging(params = {}, options = {})
  req = build_request(:get_bucket_tagging, params)
  req.send_request(options)
end

#get_bucket_versioning(params = {}) ⇒Types::GetBucketVersioningOutput

This operation is not supported for directory buckets.

Returns the versioning state of a bucket.

To retrieve the versioning state of a bucket, you must be the bucket owner.

This implementation also returns the MFA Delete status of the versioning state. If the MFA Delete status is enabled, the bucket owner must use an authentication device to change the versioning state of the bucket.

The following operations are related to GetBucketVersioning:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get bucket versioning configuration

# The following example retrieves bucket versioning configuration.
resp = client.get_bucket_versioning({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  mfa_delete: "Disabled",
  status: "Enabled",
}

Request syntax with placeholder values

resp = client.get_bucket_versioning({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.status #=> String, one of "Enabled", "Suspended"
resp.mfa_delete #=> String, one of "Enabled", "Disabled"
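A brief sketch (placeholder bucket name). Note that a bucket that has never had versioning configured returns an empty configuration, so the status field may be nil:

resp = client.get_bucket_versioning(bucket: "amzn-s3-demo-bucket")
puts resp.status || "versioning has never been configured"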

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to get the versioning information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 8907

def get_bucket_versioning(params = {}, options = {})
  req = build_request(:get_bucket_versioning, params)
  req.send_request(options)
end

#get_bucket_website(params = {}) ⇒Types::GetBucketWebsiteOutput

This operation is not supported for directory buckets.

Returns the website configuration for a bucket. To host a website on Amazon S3, you can configure a bucket as a website by adding a website configuration. For more information about hosting websites, see Hosting Websites on Amazon S3.

This GET action requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. However, bucket owners can allow other users to read the website configuration by writing a bucket policy granting them the S3:GetBucketWebsite permission.

The following operations are related to GetBucketWebsite:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To get bucket website configuration

# The following example retrieves website configuration of a bucket.
resp = client.get_bucket_website({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  error_document: {
    key: "error.html",
  },
  index_document: {
    suffix: "index.html",
  },
}

Request syntax with placeholder values

resp = client.get_bucket_website({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.redirect_all_requests_to.host_name #=> String
resp.redirect_all_requests_to.protocol #=> String, one of "http", "https"
resp.index_document.suffix #=> String
resp.error_document.key #=> String
resp.routing_rules #=> Array
resp.routing_rules[0].condition.http_error_code_returned_equals #=> String
resp.routing_rules[0].condition.key_prefix_equals #=> String
resp.routing_rules[0].redirect.host_name #=> String
resp.routing_rules[0].redirect.http_redirect_code #=> String
resp.routing_rules[0].redirect.protocol #=> String, one of "http", "https"
resp.routing_rules[0].redirect.replace_key_prefix_with #=> String
resp.routing_rules[0].redirect.replace_key_with #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name for which to get the website configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9003

def get_bucket_website(params = {}, options = {})
  req = build_request(:get_bucket_website, params)
  req.send_request(options)
end

#get_object(params = {}) ⇒Types::GetObjectOutput

Retrieves an object from Amazon S3.

In theGetObject request, specify the full key name for the object.

General purpose buckets - Both the virtual-hosted-style requests and the path-style requests are supported. For a virtual-hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the object key name as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the object key name as /examplebucket/photos/2006/February/sample.jpg. For more information about request types, see HTTP Host Header Bucket Specification in the Amazon S3 User Guide.

Directory buckets - Only virtual-hosted-style requests are supported. For a virtual-hosted-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named amzn-s3-demo-bucket--usw2-az1--x-s3, specify the object key name as /photos/2006/February/sample.jpg. Also, when you make requests to this API operation, your requests are sent to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - You must have the required permissions in a policy. To use GetObject, you must have READ access to the object (or version). If you grant READ access to the anonymous user, the GetObject operation returns the object without using an authorization header. For more information, see Specifying permissions in a policy in the Amazon S3 User Guide.

    If you include a versionId in your request header, you must have the s3:GetObjectVersion permission to access a specific version of an object. The s3:GetObject permission is not required in this scenario.

    If you request the current version of an object without a specific versionId in the request header, only the s3:GetObject permission is required. The s3:GetObjectVersion permission is not required in this scenario.

    If the object that you request doesn’t exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

    • If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found error.

    • If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Access Denied error.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create the session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If the object is encrypted using SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

Storage classes

If the object you are retrieving is stored in the S3 Glacier Flexible Retrieval storage class, the S3 Glacier Deep Archive storage class, the S3 Intelligent-Tiering Archive Access tier, or the S3 Intelligent-Tiering Deep Archive Access tier, before you can retrieve the object you must first restore a copy using RestoreObject. Otherwise, this operation returns an InvalidObjectState error. For information about restoring archived objects, see Restoring Archived Objects in the Amazon S3 User Guide.

Directory buckets - Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones. Unsupported storage class values won't write a destination object and will respond with the HTTP status code 400 Bad Request.

Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for GetObject requests if your object uses server-side encryption with Amazon S3 managed encryption keys (SSE-S3), server-side encryption with Key Management Service (KMS) keys (SSE-KMS), or dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you include the header in your GetObject requests for an object that uses these types of keys, you'll get an HTTP 400 Bad Request error.

Directory buckets - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.

Overriding response header values through the request

There are times when you want to override certain response header values of a GetObject response. For example, you might override the Content-Disposition response header value through your GetObject request.

You can override values for a set of response headers. These modified response header values are included only in a successful response, that is, when the HTTP status code 200 OK is returned. The headers you can override using the following query parameters in the request are a subset of the headers that Amazon S3 accepts when you create an object.

The response headers that you can override for the GetObject response are Cache-Control, Content-Disposition, Content-Encoding, Content-Language, Content-Type, and Expires.

To override values for a set of response headers in the GetObject response, you can use the following query parameters in the request.

  • response-cache-control

  • response-content-disposition

  • response-content-encoding

  • response-content-language

  • response-content-type

  • response-expires

When you use these parameters, you must sign the request by using either an Authorization header or a presigned URL. These parameters cannot be used with an unsigned (anonymous) request.
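As an illustrative sketch of applying these overrides through the SDK (the bucket and key names are placeholders), each query parameter maps to the corresponding response_* request option documented below:

resp = client.get_object(
  bucket: "amzn-s3-demo-bucket",
  key: "reports/q1.pdf",
  response_content_type: "application/pdf",
  response_content_disposition: 'attachment; filename="q1.pdf"'
)
resp.content_type #=> "application/pdf"
resp.content_disposition #=> "attachment; filename=\"q1.pdf\""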

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to GetObject:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To retrieve a byte range of an object

# The following example retrieves an object for an S3 bucket. The request specifies the range header to retrieve a
# specific byte range.
resp = client.get_object({
  bucket: "examplebucket",
  key: "SampleFile.txt",
  range: "bytes=0-9",
})

resp.to_h outputs the following:
{
  accept_ranges: "bytes",
  content_length: 10,
  content_range: "bytes 0-9/43",
  content_type: "text/plain",
  etag: "\"0d94420ffd0bc68cd3d152506b97a9cc\"",
  last_modified: Time.parse("2014-10-09T22:57:28.000Z"),
  metadata: {},
  version_id: "null",
}

Example: To retrieve an object

# The following example retrieves an object for an S3 bucket.
resp = client.get_object({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  accept_ranges: "bytes",
  content_length: 3191,
  content_type: "image/jpeg",
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  last_modified: Time.parse("2016-12-15T01:19:41.000Z"),
  metadata: {},
  tag_count: 2,
  version_id: "null",
}

Download an object to disk

# stream object directly to disk
resp = s3.get_object(
  response_target: '/path/to/file',
  bucket: 'bucket-name',
  key: 'object-key')

# you can still access other response data
resp.metadata #=> { ... }
resp.etag #=> "..."

Download object into memory

# omit :response_target to download to a StringIO in memory
resp = s3.get_object(bucket: 'bucket-name', key: 'object-key')

# call #read or #string on the response body
resp.body.read #=> '...'

Streaming data to a block

# WARNING: yielding data to a block disables retries of networking errors
# However truncation of the body will be retried automatically using a range request
File.open('/path/to/file', 'wb') do |file|
  s3.get_object(bucket: 'bucket-name', key: 'object-key') do |chunk, headers|
    # headers['content-length']
    file.write(chunk)
  end
end

Request syntax with placeholder values

resp = client.get_object({
  bucket: "BucketName", # required
  if_match: "IfMatch",
  if_modified_since: Time.now,
  if_none_match: "IfNoneMatch",
  if_unmodified_since: Time.now,
  key: "ObjectKey", # required
  range: "Range",
  response_cache_control: "ResponseCacheControl",
  response_content_disposition: "ResponseContentDisposition",
  response_content_encoding: "ResponseContentEncoding",
  response_content_language: "ResponseContentLanguage",
  response_content_type: "ResponseContentType",
  response_expires: Time.now,
  version_id: "ObjectVersionId",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  part_number: 1,
  expected_bucket_owner: "AccountId",
  checksum_mode: "ENABLED", # accepts ENABLED
})

Response structure

resp.body #=> IO
resp.delete_marker #=> Boolean
resp.accept_ranges #=> String
resp.expiration #=> String
resp.restore #=> String
resp.last_modified #=> Time
resp.content_length #=> Integer
resp.etag #=> String
resp.checksum_crc32 #=> String
resp.checksum_crc32c #=> String
resp.checksum_crc64nvme #=> String
resp.checksum_sha1 #=> String
resp.checksum_sha256 #=> String
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.missing_meta #=> Integer
resp.version_id #=> String
resp.cache_control #=> String
resp.content_disposition #=> String
resp.content_encoding #=> String
resp.content_language #=> String
resp.content_range #=> String
resp.content_type #=> String
resp.expires #=> Time
resp.expires_string #=> String
resp.website_redirect_location #=> String
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.metadata #=> Hash
resp.metadata["MetadataKey"] #=> String
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.bucket_key_enabled #=> Boolean
resp.storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.request_charged #=> String, one of "requester"
resp.replication_status #=> String, one of "COMPLETE", "PENDING", "FAILED", "REPLICA", "COMPLETED"
resp.parts_count #=> Integer
resp.tag_count #=> Integer
resp.object_lock_mode #=> String, one of "GOVERNANCE", "COMPLIANCE"
resp.object_lock_retain_until_date #=> Time
resp.object_lock_legal_hold_status #=> String, one of "ON", "OFF"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :response_target(String,IO)

    Where to write response data, file path, or IO object.

  • :bucket(required,String)

    The bucket name containing the object.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points - When you use this action with an Object Lambda access point, you must direct requests to the Object Lambda access point hostname. The Object Lambda access point hostname takes the form AccessPointName-AccountId.s3-object-lambda.Region.amazonaws.com.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :if_match(String)

    Return the object only if its entity tag (ETag) is the same as the one specified in this header; otherwise, return a 412 Precondition Failed error.

    If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then S3 returns 200 OK and the data requested.

    For more information about conditional requests, see RFC 7232.

  • :if_modified_since(Time,DateTime,Date,Integer,String)

    Return the object only if it has been modified since the specified time; otherwise, return a 304 Not Modified error.

    If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then S3 returns the 304 Not Modified status code.

    For more information about conditional requests, see RFC 7232.

  • :if_none_match(String)

    Return the object only if its entity tag (ETag) is different from the one specified in this header; otherwise, return a 304 Not Modified error.

    If both the If-None-Match and If-Modified-Since headers are present in the request, and the If-None-Match condition evaluates to false while the If-Modified-Since condition evaluates to true, then S3 returns the 304 Not Modified HTTP status code.

    For more information about conditional requests, see RFC 7232.

  • :if_unmodified_since(Time,DateTime,Date,Integer,String)

    Return the object only if it has not been modified since the specified time; otherwise, return a 412 Precondition Failed error.

    If both the If-Match and If-Unmodified-Since headers are present in the request, and the If-Match condition evaluates to true while the If-Unmodified-Since condition evaluates to false, then S3 returns 200 OK and the data requested.

    For more information about conditional requests, see RFC 7232.

  • :key(required,String)

    Key of the object to get.

  • :range(String)

    Downloads the specified byte range of an object. For more information about the HTTP Range header, see https://www.rfc-editor.org/rfc/rfc9110.html#name-range.

    Amazon S3 doesn't support retrieving multiple ranges of data per GET request.

  • :response_cache_control(String)

    Sets the Cache-Control header of the response.

  • :response_content_disposition(String)

    Sets the Content-Disposition header of the response.

  • :response_content_encoding(String)

    Sets the Content-Encoding header of the response.

  • :response_content_language(String)

    Sets the Content-Language header of the response.

  • :response_content_type(String)

    Sets the Content-Type header of the response.

  • :response_expires(Time,DateTime,Date,Integer,String)

    Sets the Expires header of the response.

  • :version_id(String)

    Version ID used to reference a specific version of the object.

    By default, the GetObject operation returns the current version of an object. To return a different version, use the versionId subresource.

    • If you include a versionId in your request header, you must have the s3:GetObjectVersion permission to access a specific version of an object. The s3:GetObject permission is not required in this scenario.

    • If you request the current version of an object without a specific versionId in the request header, only the s3:GetObject permission is required. The s3:GetObjectVersion permission is not required in this scenario.

    • Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

    For more information about versioning, see PutBucketVersioning.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when decrypting the object (for example, AES256).

    If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

    • x-amz-server-side-encryption-customer-algorithm

    • x-amz-server-side-encryption-customer-key

    • x-amz-server-side-encryption-customer-key-MD5

    For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key that you originally provided for Amazon S3 to encrypt the data before storing it. This value is used to decrypt the object when recovering it and must match the one used when storing the data. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

    • x-amz-server-side-encryption-customer-algorithm

    • x-amz-server-side-encryption-customer-key

    • x-amz-server-side-encryption-customer-key-MD5

    For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the customer-provided encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you GET the object, you must use the following headers:

    • x-amz-server-side-encryption-customer-algorithm

    • x-amz-server-side-encryption-customer-key

    • x-amz-server-side-encryption-customer-key-MD5

    For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :part_number(Integer)

    Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a 'ranged' GET request for the part specified. Useful for downloading just a part of an object.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :checksum_mode(String)

    To retrieve the checksum, this mode must be enabled.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9675

def get_object(params = {}, options = {}, &block)
  req = build_request(:get_object, params)
  req.send_request(options, &block)
end

#get_object_acl(params = {}) ⇒Types::GetObjectAclOutput

This operation is not supported for directory buckets.

Returns the access control list (ACL) of an object. To use this operation, you must have s3:GetObjectAcl permissions or READ_ACP access to the object. For more information, see Mapping of ACL permissions and access policy permissions in the Amazon S3 User Guide.

This functionality is not supported for Amazon S3 on Outposts.

By default, GET returns ACL information about the current version of an object. To return ACL information about a different version, use the versionId subresource.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, requests to read ACLs are still supported and return the bucket-owner-full-control ACL with the owner being the account that created the bucket. For more information, see Controlling object ownership and disabling ACLs in the Amazon S3 User Guide.

The following operations are related to GetObjectAcl:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To retrieve object ACL

# The following example retrieves access control list (ACL) of an object.
resp = client.get_object_acl({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  grants: [
    {
      grantee: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
        type: "CanonicalUser",
      },
      permission: "WRITE",
    },
    {
      grantee: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
        type: "CanonicalUser",
      },
      permission: "WRITE_ACP",
    },
    {
      grantee: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
        type: "CanonicalUser",
      },
      permission: "READ",
    },
    {
      grantee: {
        display_name: "owner-display-name",
        id: "852b113eexamplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
        type: "CanonicalUser",
      },
      permission: "READ_ACP",
    },
  ],
  owner: {
    display_name: "owner-display-name",
    id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
  },
}

Request syntax with placeholder values

resp = client.get_object_acl({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Response structure

resp.owner.display_name #=> String
resp.owner.id #=> String
resp.grants #=> Array
resp.grants[0].grantee.display_name #=> String
resp.grants[0].grantee.email_address #=> String
resp.grants[0].grantee.id #=> String
resp.grants[0].grantee.type #=> String, one of "CanonicalUser", "AmazonCustomerByEmail", "Group"
resp.grants[0].grantee.uri #=> String
resp.grants[0].permission #=> String, one of "FULL_CONTROL", "WRITE", "WRITE_ACP", "READ", "READ_ACP"
resp.request_charged #=> String, one of "requester"
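A short sketch (placeholder bucket and key names) that prints each grant in the returned ACL:

resp = client.get_object_acl(bucket: "amzn-s3-demo-bucket", key: "HappyFace.jpg")
resp.grants.each do |grant|
  who = grant.grantee.display_name || grant.grantee.uri || grant.grantee.id
  puts "#{who}: #{grant.permission}"
end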

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name that contains the object for which to get the ACL information.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :key(required,String)

    The key of the object for which to get the ACL information.

  • :version_id(String)

    Version ID used to reference a specific version of the object.

    This functionality is not supported for directory buckets.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 9865

def get_object_acl(params = {}, options = {})
  req = build_request(:get_object_acl, params)
  req.send_request(options)
end

#get_object_attributes(params = {}) ⇒Types::GetObjectAttributesOutput

Retrieves all of the metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata.

GetObjectAttributes combines the functionality of HeadObject and ListParts. All of the data returned with both of those individual calls can be returned with a single call to GetObjectAttributes.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - To use GetObjectAttributes, you must have READ access to the object.

    The other permissions that you need to use this operation depend on whether the bucket is versioned and if a version ID is passed in the GetObjectAttributes request.

    • If you pass a version ID in your request, you need both the s3:GetObjectVersion and s3:GetObjectVersionAttributes permissions.

    • If you do not pass a version ID in your request, you need the s3:GetObject and s3:GetObjectAttributes permissions. For more information, see Specifying Permissions in a Policy in the Amazon S3 User Guide.

    If the object that you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

    • If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found ("no such key") error.

    • If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden ("access denied") error.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create the session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a GET request for an object that uses these types of keys, you'll get an HTTP 400 Bad Request error. This is because the encryption method can't be changed when you retrieve the object.

If you encrypted an object when you stored the object in Amazon S3 by using server-side encryption with customer-provided encryption keys (SSE-C), then when you retrieve the metadata from the object, you must use the following headers. These headers provide the server with the encryption key required to retrieve the object's metadata. The headers are:

  • x-amz-server-side-encryption-customer-algorithm

  • x-amz-server-side-encryption-customer-key

  • x-amz-server-side-encryption-customer-key-MD5

For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

Directory bucket permissions - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

Versioning

Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

Conditional request headers

Consider the following when using request headers:

  • If both the If-Match and If-Unmodified-Since headers are present in the request as follows, then Amazon S3 returns the HTTP status code 200 OK and the data requested:

    • If-Match condition evaluates to true.

    • If-Unmodified-Since condition evaluates to false.

    For more information about conditional requests, see RFC 7232.

  • If both the If-None-Match and If-Modified-Since headers are present in the request as follows, then Amazon S3 returns the HTTP status code 304 Not Modified:

    • If-None-Match condition evaluates to false.

    • If-Modified-Since condition evaluates to true.

    For more information about conditional requests, see RFC 7232.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following actions are related to GetObjectAttributes:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_object_attributes({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  max_parts: 1,
  part_number_marker: 1,
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  object_attributes: ["ETag"], # required, accepts ETag, Checksum, ObjectParts, StorageClass, ObjectSize
})

Response structure

resp.delete_marker #=> Boolean
resp.last_modified #=> Time
resp.version_id #=> String
resp.request_charged #=> String, one of "requester"
resp.etag #=> String
resp.checksum.checksum_crc32 #=> String
resp.checksum.checksum_crc32c #=> String
resp.checksum.checksum_crc64nvme #=> String
resp.checksum.checksum_sha1 #=> String
resp.checksum.checksum_sha256 #=> String
resp.checksum.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.object_parts.total_parts_count #=> Integer
resp.object_parts.part_number_marker #=> Integer
resp.object_parts.next_part_number_marker #=> Integer
resp.object_parts.max_parts #=> Integer
resp.object_parts.is_truncated #=> Boolean
resp.object_parts.parts #=> Array
resp.object_parts.parts[0].part_number #=> Integer
resp.object_parts.parts[0].size #=> Integer
resp.object_parts.parts[0].checksum_crc32 #=> String
resp.object_parts.parts[0].checksum_crc32c #=> String
resp.object_parts.parts[0].checksum_crc64nvme #=> String
resp.object_parts.parts[0].checksum_sha1 #=> String
resp.object_parts.parts[0].checksum_sha256 #=> String
resp.storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.object_size #=> Integer
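For example, a minimal sketch (placeholder bucket and key names) that asks only for the object's size and storage class:

resp = client.get_object_attributes(
  bucket: "amzn-s3-demo-bucket",
  key: "large-upload.bin",
  object_attributes: ["ObjectSize", "StorageClass"]
)
puts "#{resp.object_size} bytes, stored as #{resp.storage_class}"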

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket that contains the object.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key(required,String)

    The object key.

  • :version_id(String)

    The version ID used to reference a specific version of the object.

    S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

  • :max_parts(Integer)

    Sets the maximum number of parts to return. For more information, see Uploading and copying objects using multipart upload in Amazon S3 in the Amazon Simple Storage Service user guide.

  • :part_number_marker(Integer)

    Specifies the part after which listing should begin. Only parts with higher part numbers will be listed. For more information, see Uploading and copying objects using multipart upload in Amazon S3 in the Amazon Simple Storage Service user guide.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported for directory buckets.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :object_attributes(required,Array<String>)

    Specifies the fields at the root level that you want returned in the response. Fields that you do not specify are not returned.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10256

def get_object_attributes(params = {}, options = {})
  req = build_request(:get_object_attributes, params)
  req.send_request(options)
end

#get_object_legal_hold(params = {}) ⇒Types::GetObjectLegalHoldOutput

This operation is not supported for directory buckets.

Gets an object's current legal hold status. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectLegalHold:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_object_legal_hold({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Response structure

resp.legal_hold.status #=> String, one of "ON", "OFF"
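A minimal sketch (placeholder bucket and key names; assumes Object Lock is enabled on the bucket and a legal hold setting exists on the object):

resp = client.get_object_legal_hold(bucket: "amzn-s3-demo-bucket", key: "contract.pdf")
puts "legal hold is #{resp.legal_hold.status}"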

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object whose legal hold status you want to retrieve.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :key(required,String)

    The key name for the object whose legal hold status you want to retrieve.

  • :version_id(String)

    The version ID of the object whose legal hold status you want toretrieve.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10358

def get_object_legal_hold(params = {}, options = {})
  req = build_request(:get_object_legal_hold, params)
  req.send_request(options)
end

#get_object_lock_configuration(params = {}) ⇒Types::GetObjectLockConfigurationOutput

This operation is not supported for directory buckets.

Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.

The following action is related to GetObjectLockConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_object_lock_configuration({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.object_lock_configuration.object_lock_enabled #=> String, one of "Enabled"
resp.object_lock_configuration.rule.default_retention.mode #=> String, one of "GOVERNANCE", "COMPLIANCE"
resp.object_lock_configuration.rule.default_retention.days #=> Integer
resp.object_lock_configuration.rule.default_retention.years #=> Integer
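A minimal sketch (placeholder bucket name; assumes Object Lock is enabled on the bucket, and that a period is configured in days rather than years) that reports the default retention rule, if any:

resp = client.get_object_lock_configuration(bucket: "amzn-s3-demo-bucket")
rule = resp.object_lock_configuration.rule
if rule
  puts "default retention: #{rule.default_retention.mode} for #{rule.default_retention.days} days"
else
  puts "no default retention rule"
end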

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket whose Object Lock configuration you want to retrieve.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10434

def get_object_lock_configuration(params = {}, options = {})
  req = build_request(:get_object_lock_configuration, params)
  req.send_request(options)
end

#get_object_retention(params = {}) ⇒Types::GetObjectRetentionOutput

This operation is not supported for directory buckets.

Retrieves an object's retention settings. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectRetention:


You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_object_retention({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Response structure

resp.retention.mode #=> String, one of "GOVERNANCE", "COMPLIANCE"
resp.retention.retain_until_date #=> Time

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object whose retention settings you want to retrieve.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :key(required,String)

    The key name for the object whose retention settings you want to retrieve.

  • :version_id(String)

    The version ID for the object whose retention settings you want to retrieve.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10537

def get_object_retention(params = {}, options = {})
  req = build_request(:get_object_retention, params)
  req.send_request(options)
end

#get_object_tagging(params = {}) ⇒Types::GetObjectTaggingOutput

This operation is not supported for directory buckets.

Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.

To use this operation, you must have permission to perform the s3:GetObjectTagging action. By default, the GET action returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter. You also need permission for the s3:GetObjectVersionTagging action.

By default, the bucket owner has this permission and can grant this permission to others.

For information about the Amazon S3 object tagging feature, see Object Tagging.

The following actions are related to GetObjectTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To retrieve tag set of a specific object version

# The following example retrieves tag set of an object. The request specifies object version.

resp = client.get_object_tagging({
  bucket: "examplebucket",
  key: "exampleobject",
  version_id: "ydlaNkwWm0SfKJR.T1b1fIdPRbldTYRI",
})

resp.to_h outputs the following:
{
  tag_set: [
    {
      key: "Key1",
      value: "Value1",
    },
  ],
  version_id: "ydlaNkwWm0SfKJR.T1b1fIdPRbldTYRI",
}

Example: To retrieve tag set of an object

# The following example retrieves tag set of an object.

resp = client.get_object_tagging({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  tag_set: [
    {
      key: "Key4",
      value: "Value4",
    },
    {
      key: "Key3",
      value: "Value3",
    },
  ],
  version_id: "null",
}

Request syntax with placeholder values

resp = client.get_object_tagging({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  expected_bucket_owner: "AccountId",
  request_payer: "requester", # accepts requester
})

Response structure

resp.version_id #=> String
resp.tag_set #=> Array
resp.tag_set[0].key #=> String
resp.tag_set[0].value #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object for which to get the tagging information.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key(required,String)

    Object key for which to get the tagging information.

  • :version_id(String)

    The versionId of the object for which to get the tagging information.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10712

def get_object_tagging(params = {}, options = {})
  req = build_request(:get_object_tagging, params)
  req.send_request(options)
end

#get_object_torrent(params = {}) ⇒Types::GetObjectTorrentOutput

This operation is not supported for directory buckets.

Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files.

You can get torrent only for objects that are less than 5 GB in size, and that are not encrypted using server-side encryption with a customer-provided encryption key.

To use GET, you must have READ access to the object.

This functionality is not supported for Amazon S3 on Outposts.

The following action is related to GetObjectTorrent:


You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To retrieve torrent files for an object

# The following example retrieves torrent files of an object.

resp = client.get_object_torrent({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
}

Request syntax with placeholder values

resp = client.get_object_torrent({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Response structure

resp.body #=> IO
resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :response_target(String,IO)

    Where to write response data, file path, or IO object.

  • :bucket(required,String)

    The name of the bucket containing the object for which to get the torrent files.

  • :key(required,String)

    The object key for which to get the information.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10817

def get_object_torrent(params = {}, options = {}, &block)
  req = build_request(:get_object_torrent, params)
  req.send_request(options, &block)
end

#get_public_access_block(params = {}) ⇒Types::GetPublicAccessBlockOutput

This operation is not supported for directory buckets.

Retrieves the PublicAccessBlock configuration for an Amazon S3 bucket. This operation returns the bucket-level configuration only. To understand the effective public access behavior, you must also consider account-level settings (which may inherit from organization-level policies). To use this operation, you must have the s3:GetBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. Account-level settings automatically inherit from organization-level policies when present. If the PublicAccessBlock settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
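Since each flag is restrictive when true, the "most restrictive combination" works out to a per-flag logical OR. A minimal sketch of that evaluation, assuming you have already fetched both configurations (the account-level one comes from the separate S3 Control GetPublicAccessBlock API, which is not shown here):

# Hypothetical helper: mirrors how S3 combines bucket-level and
# account-level PublicAccessBlock settings -- true (restrictive) wins.
def effective_public_access_block(bucket_cfg, account_cfg)
  {
    block_public_acls:       bucket_cfg.block_public_acls       || account_cfg.block_public_acls,
    ignore_public_acls:      bucket_cfg.ignore_public_acls      || account_cfg.ignore_public_acls,
    block_public_policy:     bucket_cfg.block_public_policy     || account_cfg.block_public_policy,
    restrict_public_buckets: bucket_cfg.restrict_public_buckets || account_cfg.restrict_public_buckets,
  }
end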

For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".

The following operations are related to GetPublicAccessBlock:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.get_public_access_block({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.public_access_block_configuration.block_public_acls #=> Boolean
resp.public_access_block_configuration.ignore_public_acls #=> Boolean
resp.public_access_block_configuration.block_public_policy #=> Boolean
resp.public_access_block_configuration.restrict_public_buckets #=> Boolean

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose PublicAccessBlock configuration you want to retrieve.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 10900

def get_public_access_block(params = {}, options = {})
  req = build_request(:get_public_access_block, params)
  req.send_request(options)
end

#head_bucket(params = {}) ⇒Types::HeadBucketOutput

You can use this operation to determine if a bucket exists and if you have permission to access it. The action returns a 200 OK HTTP status code if the bucket exists and you have permission to access it. You can make a HeadBucket call on any bucket name to any Region in the partition, and regardless of the permissions on the bucket, you will receive a response header with the correct bucket location so that you can then make a proper, signed request to the appropriate Regional endpoint.

If the bucket doesn't exist or you don't have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden, or 404 Not Found HTTP status code. A message body isn't included, so you can't determine the exception beyond these HTTP response codes.
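In the Ruby SDK those bare status codes surface as raised error classes; a minimal sketch of telling them apart, assuming the SDK's convention of mapping empty-body 404 and 403 responses to Aws::S3::Errors::NotFound and Aws::S3::Errors::Forbidden (the bucket name is a placeholder):

begin
  client.head_bucket(bucket: "amzn-s3-demo-bucket") # placeholder name
  puts "bucket exists and you can access it"
rescue Aws::S3::Errors::NotFound
  puts "bucket does not exist"
rescue Aws::S3::Errors::Forbidden
  puts "bucket exists, but you do not have permission to access it"
end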

Authentication and authorization

General purpose buckets - Requests to public buckets that grant the s3:ListBucket permission publicly do not need to be signed. All other HeadBucket requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use IAM credentials to authenticate and authorize your access to the HeadBucket API operation, instead of using the temporary security credentials through the CreateSession API operation.

The Amazon Web Services CLI and SDKs handle authentication and authorization on your behalf.

Permissions


HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

You must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The following waiters are defined for this operation (see#wait_until for detailed usage):

  • bucket_exists
  • bucket_not_exists

Examples:

Example: To determine if bucket exists

# This operation checks to see if a bucket exists.

resp = client.head_bucket({
  bucket: "acl1",
})
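The bucket_exists and bucket_not_exists waiters listed above poll this operation for you; a minimal sketch (the bucket name is a placeholder, and the delay and max_attempts values are illustrative, not required):

# Poll head_bucket until the bucket is visible; raises
# Aws::Waiters::Errors::WaiterFailed if it never appears.
client.wait_until(:bucket_exists, { bucket: "amzn-s3-demo-bucket" }) do |w|
  w.delay = 5         # seconds between polls (illustrative)
  w.max_attempts = 20 # illustrative
end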

Request syntax with placeholder values

resp = client.head_bucket({
  bucket: "BucketName", # required
  expected_bucket_owner: "AccountId",
})

Response structure

resp.bucket_arn #=> String
resp.bucket_location_type #=> String, one of "AvailabilityZone", "LocalZone"
resp.bucket_location_name #=> String
resp.bucket_region #=> String
resp.access_point_alias #=> Boolean

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points - When you use this API operation with an Object Lambda access point, provide the alias of the Object Lambda access point in place of the bucket name. If the Object Lambda access point alias in a request is not valid, the error code InvalidAccessPointAliasError is returned. For more information about InvalidAccessPointAliasError, see List of Error Codes.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11094

def head_bucket(params = {}, options = {})
  req = build_request(:head_bucket, params)
  req.send_request(options)
end

#head_object(params = {}) ⇒Types::HeadObjectOutput

The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're interested only in an object's metadata.

A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body. Because of this, if the HEAD request generates an error, it returns a generic code, such as 400 Bad Request, 403 Forbidden, 404 Not Found, 405 Method Not Allowed, 412 Precondition Failed, or 304 Not Modified. It's not possible to retrieve the exact exception of these error codes.

Request headers are limited to 8 KB in size. For more information, see Common Request Headers.

Permissions


  • General purpose bucket permissions - To use HEAD, you must have the s3:GetObject permission. You need the relevant read object (or version) permission for this operation. For more information, see Actions, resources, and condition keys for Amazon S3 in the Amazon S3 User Guide. For more information about the permissions to S3 API operations by S3 resource types, see Required permissions for Amazon S3 API operations in the Amazon S3 User Guide.

    If the object you request doesn't exist, the error that Amazon S3 returns depends on whether you also have the s3:ListBucket permission.

    • If you have the s3:ListBucket permission on the bucket, Amazon S3 returns an HTTP status code 404 Not Found error.

    • If you don't have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden error.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create sessions and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If you enable x-amz-checksum-mode in the request and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object.

Encryption

Encryption request headers, like x-amz-server-side-encryption, should not be sent for HEAD requests if your object uses server-side encryption with Key Management Service (KMS) keys (SSE-KMS), dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), or server-side encryption with Amazon S3 managed encryption keys (SSE-S3). The x-amz-server-side-encryption header is used when you PUT an object to S3 and want to specify the encryption method. If you include this header in a HEAD request for an object that uses these types of keys, you'll get an HTTP 400 Bad Request error. This is because the encryption method can't be changed when you retrieve the object.

If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers to provide the encryption key for the server to be able to retrieve the object's metadata. The headers are:

  • x-amz-server-side-encryption-customer-algorithm

  • x-amz-server-side-encryption-customer-key

  • x-amz-server-side-encryption-customer-key-MD5

For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

Directory bucket - For directory buckets, there are only two supported options for server-side encryption: SSE-S3 and SSE-KMS. SSE-C isn't supported. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.
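In this client, those three headers map to the sse_customer_* request parameters shown in the request syntax below. A minimal sketch of a HEAD on an SSE-C object (the bucket, key, and key value are placeholders; the SDK can derive the MD5 header when sse_customer_key_md5 is omitted):

sse_c_key = "0" * 32 # placeholder: must be the exact 256-bit key used when the object was stored

resp = client.head_object({
  bucket: "amzn-s3-demo-bucket", # placeholder name
  key: "example-object",
  sse_customer_algorithm: "AES256",
  sse_customer_key: sse_c_key,
})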

Versioning

  • If the current version of the object is a delete marker, Amazon S3 behaves as if the object was deleted and includes x-amz-delete-marker: true in the response.

  • If the specified version is a delete marker, the response returns a 405 Method Not Allowed error and the Last-Modified: timestamp response header.

    Directory buckets - Delete marker is not supported for directory buckets.

  • Directory buckets - S3 Versioning isn't enabled and supported for directory buckets. For this API operation, only the null value of the version ID is supported by directory buckets. You can only specify null to the versionId query parameter in the request.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

The following actions are related to HeadObject:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The following waiters are defined for this operation (see#wait_until for detailed usage):

  • object_exists
  • object_not_exists

Examples:

Example: To retrieve metadata of an object without returning the object itself

# The following example retrieves an object metadata.

resp = client.head_object({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  accept_ranges: "bytes",
  content_length: 3191,
  content_type: "image/jpeg",
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  last_modified: Time.parse("2016-12-15T01:19:41.000Z"),
  metadata: {},
  version_id: "null",
}
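As with the bucket waiters on #head_bucket, the object_exists and object_not_exists waiters listed above poll this operation; a minimal sketch (delay and max_attempts are illustrative):

# Block until the object is visible; raises
# Aws::Waiters::Errors::WaiterFailed if it never appears.
client.wait_until(:object_exists, { bucket: "examplebucket", key: "HappyFace.jpg" }) do |w|
  w.delay = 5         # seconds between HEAD polls (illustrative)
  w.max_attempts = 20 # illustrative
end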

Request syntax with placeholder values

resp = client.head_object({
  bucket: "BucketName", # required
  if_match: "IfMatch",
  if_modified_since: Time.now,
  if_none_match: "IfNoneMatch",
  if_unmodified_since: Time.now,
  key: "ObjectKey", # required
  range: "Range",
  response_cache_control: "ResponseCacheControl",
  response_content_disposition: "ResponseContentDisposition",
  response_content_encoding: "ResponseContentEncoding",
  response_content_language: "ResponseContentLanguage",
  response_content_type: "ResponseContentType",
  response_expires: Time.now,
  version_id: "ObjectVersionId",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  part_number: 1,
  expected_bucket_owner: "AccountId",
  checksum_mode: "ENABLED", # accepts ENABLED
})

Response structure

resp.delete_marker #=> Boolean
resp.accept_ranges #=> String
resp.expiration #=> String
resp.restore #=> String
resp.archive_status #=> String, one of "ARCHIVE_ACCESS", "DEEP_ARCHIVE_ACCESS"
resp.last_modified #=> Time
resp.content_length #=> Integer
resp.checksum_crc32 #=> String
resp.checksum_crc32c #=> String
resp.checksum_crc64nvme #=> String
resp.checksum_sha1 #=> String
resp.checksum_sha256 #=> String
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.etag #=> String
resp.missing_meta #=> Integer
resp.version_id #=> String
resp.cache_control #=> String
resp.content_disposition #=> String
resp.content_encoding #=> String
resp.content_language #=> String
resp.content_type #=> String
resp.content_range #=> String
resp.expires #=> Time
resp.expires_string #=> String
resp.website_redirect_location #=> String
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.metadata #=> Hash
resp.metadata["MetadataKey"] #=> String
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.bucket_key_enabled #=> Boolean
resp.storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.request_charged #=> String, one of "requester"
resp.replication_status #=> String, one of "COMPLETE", "PENDING", "FAILED", "REPLICA", "COMPLETED"
resp.parts_count #=> Integer
resp.tag_count #=> Integer
resp.object_lock_mode #=> String, one of "GOVERNANCE", "COMPLIANCE"
resp.object_lock_retain_until_date #=> Time
resp.object_lock_legal_hold_status #=> String, one of "ON", "OFF"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket that contains the object.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :if_match(String)

    Return the object only if its entity tag (ETag) is the same as the one specified; otherwise, return a 412 (precondition failed) error.

    If both of the If-Match and If-Unmodified-Since headers are present in the request as follows:

    • If-Match condition evaluates to true, and;

    • If-Unmodified-Since condition evaluates to false;

    Then Amazon S3 returns 200 OK and the data requested.

    For more information about conditional requests, see RFC 7232.

  • :if_modified_since(Time,DateTime,Date,Integer,String)

    Return the object only if it has been modified since the specified time; otherwise, return a 304 (not modified) error.

    If both of the If-None-Match and If-Modified-Since headers are present in the request as follows:

    • If-None-Match condition evaluates to false, and;

    • If-Modified-Since condition evaluates to true;

    Then Amazon S3 returns the 304 Not Modified response code.

    For more information about conditional requests, see RFC 7232.

  • :if_none_match(String)

    Return the object only if its entity tag (ETag) is different from the one specified; otherwise, return a 304 (not modified) error.

    If both of the If-None-Match and If-Modified-Since headers are present in the request as follows:

    • If-None-Match condition evaluates to false, and;

    • If-Modified-Since condition evaluates to true;

    Then Amazon S3 returns the 304 Not Modified response code.

    For more information about conditional requests, see RFC 7232.

  • :if_unmodified_since(Time,DateTime,Date,Integer,String)

    Return the object only if it has not been modified since the specified time; otherwise, return a 412 (precondition failed) error.

    If both of the If-Match and If-Unmodified-Since headers are present in the request as follows:

    • If-Match condition evaluates to true, and;

    • If-Unmodified-Since condition evaluates to false;

    Then Amazon S3 returns 200 OK and the data requested.

    For more information about conditional requests, see RFC 7232.

  • :key(required,String)

    The object key.

  • :range(String)

    HeadObject returns only the metadata for an object. If the Range is satisfiable, only the ContentLength is affected in the response. If the Range is not satisfiable, S3 returns a 416 - Requested Range Not Satisfiable error.

  • :response_cache_control(String)

    Sets the Cache-Control header of the response.

  • :response_content_disposition(String)

    Sets the Content-Disposition header of the response.

  • :response_content_encoding(String)

    Sets the Content-Encoding header of the response.

  • :response_content_language(String)

    Sets the Content-Language header of the response.

  • :response_content_type(String)

    Sets the Content-Type header of the response.

  • :response_expires(Time,DateTime,Date,Integer,String)

    Sets the Expires header of the response.

  • :version_id(String)

    Version ID used to reference a specific version of the object.

    For directory buckets in this API operation, only the null value of the version ID is supported.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported for directory buckets.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :part_number(Integer)

    Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a 'ranged' HEAD request for the part specified. Useful for querying the size of the part and the number of parts in this object; see the sketch after this options list.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :checksum_mode(String)

    To retrieve the checksum, this parameter must be enabled.

    General purpose buckets - If you enable checksum mode and the object is uploaded with a checksum and encrypted with a Key Management Service (KMS) key, you must have permission to use the kms:Decrypt action to retrieve the checksum.

    Directory buckets - If you enable ChecksumMode and the object is encrypted with Amazon Web Services Key Management Service (Amazon Web Services KMS), you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key to retrieve the checksum of the object.
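As referenced in the :part_number description above, a ranged HEAD can reveal how a multipart object is laid out; a minimal sketch (bucket and key are placeholders):

# HEAD part 1 of a multipart object: content_length is the size of
# that part and parts_count the total number of parts.
resp = client.head_object({
  bucket: "amzn-s3-demo-bucket", # placeholder name
  key: "large-object",
  part_number: 1,
})
puts "part 1 is #{resp.content_length} bytes; object has #{resp.parts_count} parts"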

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11634

def head_object(params = {}, options = {})
  req = build_request(:head_object, params)
  req.send_request(options)
end

#list_bucket_analytics_configurations(params = {}) ⇒Types::ListBucketAnalyticsConfigurationsOutput

This operation is not supported for directory buckets.

Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. You should always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there will be a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.
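That token handshake reduces to a simple loop in the SDK; a minimal sketch (the bucket name is a placeholder):

# Follow next_continuation_token until is_truncated is false,
# accumulating every analytics configuration.
configs = []
params = { bucket: "amzn-s3-demo-bucket" } # placeholder name
loop do
  resp = client.list_bucket_analytics_configurations(params)
  configs.concat(resp.analytics_configuration_list)
  break unless resp.is_truncated
  params[:continuation_token] = resp.next_continuation_token
end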

To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.

The following operations are related to ListBucketAnalyticsConfigurations:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.list_bucket_analytics_configurations({
  bucket: "BucketName", # required
  continuation_token: "Token",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.is_truncated #=> Boolean
resp.continuation_token #=> String
resp.next_continuation_token #=> String
resp.analytics_configuration_list #=> Array
resp.analytics_configuration_list[0].id #=> String
resp.analytics_configuration_list[0].filter.prefix #=> String
resp.analytics_configuration_list[0].filter.tag.key #=> String
resp.analytics_configuration_list[0].filter.tag.value #=> String
resp.analytics_configuration_list[0].filter.and.prefix #=> String
resp.analytics_configuration_list[0].filter.and.tags #=> Array
resp.analytics_configuration_list[0].filter.and.tags[0].key #=> String
resp.analytics_configuration_list[0].filter.and.tags[0].value #=> String
resp.analytics_configuration_list[0].storage_class_analysis.data_export.output_schema_version #=> String, one of "V_1"
resp.analytics_configuration_list[0].storage_class_analysis.data_export.destination.s3_bucket_destination.format #=> String, one of "CSV"
resp.analytics_configuration_list[0].storage_class_analysis.data_export.destination.s3_bucket_destination.bucket_account_id #=> String
resp.analytics_configuration_list[0].storage_class_analysis.data_export.destination.s3_bucket_destination.bucket #=> String
resp.analytics_configuration_list[0].storage_class_analysis.data_export.destination.s3_bucket_destination.prefix #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket from which analytics configurations are retrieved.

  • :continuation_token(String)

    The ContinuationToken that represents a placeholder from where this request should begin.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11739

def list_bucket_analytics_configurations(params = {}, options = {})
  req = build_request(:list_bucket_analytics_configurations, params)
  req.send_request(options)
end

#list_bucket_intelligent_tiering_configurations(params = {}) ⇒Types::ListBucketIntelligentTieringConfigurationsOutput

This operation is not supported for directory buckets.

Lists the S3 Intelligent-Tiering configuration from the specified bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to ListBucketIntelligentTieringConfigurations include:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.list_bucket_intelligent_tiering_configurations({
  bucket: "BucketName", # required
  continuation_token: "Token",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.is_truncated #=> Boolean
resp.continuation_token #=> String
resp.next_continuation_token #=> String
resp.intelligent_tiering_configuration_list #=> Array
resp.intelligent_tiering_configuration_list[0].id #=> String
resp.intelligent_tiering_configuration_list[0].filter.prefix #=> String
resp.intelligent_tiering_configuration_list[0].filter.tag.key #=> String
resp.intelligent_tiering_configuration_list[0].filter.tag.value #=> String
resp.intelligent_tiering_configuration_list[0].filter.and.prefix #=> String
resp.intelligent_tiering_configuration_list[0].filter.and.tags #=> Array
resp.intelligent_tiering_configuration_list[0].filter.and.tags[0].key #=> String
resp.intelligent_tiering_configuration_list[0].filter.and.tags[0].value #=> String
resp.intelligent_tiering_configuration_list[0].status #=> String, one of "Enabled", "Disabled"
resp.intelligent_tiering_configuration_list[0].tierings #=> Array
resp.intelligent_tiering_configuration_list[0].tierings[0].days #=> Integer
resp.intelligent_tiering_configuration_list[0].tierings[0].access_tier #=> String, one of "ARCHIVE_ACCESS", "DEEP_ARCHIVE_ACCESS"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose configuration you want to modify or retrieve.

  • :continuation_token(String)

    The ContinuationToken that represents a placeholder from where this request should begin.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11841

def list_bucket_intelligent_tiering_configurations(params = {}, options = {})
  req = build_request(:list_bucket_intelligent_tiering_configurations, params)
  req.send_request(options)
end

#list_bucket_inventory_configurations(params = {}) ⇒Types::ListBucketInventoryConfigurationsOutput

This operation is not supported for directory buckets.

Returns a list of S3 Inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.

To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.

The following operations are related to ListBucketInventoryConfigurations:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.list_bucket_inventory_configurations({
  bucket: "BucketName", # required
  continuation_token: "Token",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.continuation_token #=> String
resp.inventory_configuration_list #=> Array
resp.inventory_configuration_list[0].destination.s3_bucket_destination.account_id #=> String
resp.inventory_configuration_list[0].destination.s3_bucket_destination.bucket #=> String
resp.inventory_configuration_list[0].destination.s3_bucket_destination.format #=> String, one of "CSV", "ORC", "Parquet"
resp.inventory_configuration_list[0].destination.s3_bucket_destination.prefix #=> String
resp.inventory_configuration_list[0].destination.s3_bucket_destination.encryption.ssekms.key_id #=> String
resp.inventory_configuration_list[0].is_enabled #=> Boolean
resp.inventory_configuration_list[0].filter.prefix #=> String
resp.inventory_configuration_list[0].id #=> String
resp.inventory_configuration_list[0].included_object_versions #=> String, one of "All", "Current"
resp.inventory_configuration_list[0].optional_fields #=> Array
resp.inventory_configuration_list[0].optional_fields[0] #=> String, one of "Size", "LastModifiedDate", "StorageClass", "ETag", "IsMultipartUploaded", "ReplicationStatus", "EncryptionStatus", "ObjectLockRetainUntilDate", "ObjectLockMode", "ObjectLockLegalHoldStatus", "IntelligentTieringAccessTier", "BucketKeyStatus", "ChecksumAlgorithm", "ObjectAccessControlList", "ObjectOwner", "LifecycleExpirationDate"
resp.inventory_configuration_list[0].schedule.frequency #=> String, one of "Daily", "Weekly"
resp.is_truncated #=> Boolean
resp.next_continuation_token #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the inventory configurations to retrieve.

  • :continuation_token(String)

    The marker used to continue an inventory configuration listing that has been truncated. Use the NextContinuationToken from a previously truncated list response to continue the listing. The continuation token is an opaque value that Amazon S3 understands.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 11947

def list_bucket_inventory_configurations(params = {}, options = {})
  req = build_request(:list_bucket_inventory_configurations, params)
  req.send_request(options)
end

#list_bucket_metrics_configurations(params = {}) ⇒Types::ListBucketMetricsConfigurationsOutput

This operation is not supported for directory buckets.

Lists the metrics configurations for the bucket. The metrics configurations are only for the request metrics of the bucket and do not provide information on daily storage metrics. You can have up to 1,000 configurations per bucket.

This action supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated element in the response. If there are no more configurations to list, IsTruncated is set to false. If there are more configurations to list, IsTruncated is set to true, and there is a value in NextContinuationToken. You use the NextContinuationToken value to continue the pagination of the list by passing the value in continuation-token in the request to GET the next page.

To use this operation, you must have permissions to perform the s3:GetMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For more information about metrics configurations and CloudWatch request metrics, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to ListBucketMetricsConfigurations:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.list_bucket_metrics_configurations({
  bucket: "BucketName", # required
  continuation_token: "Token",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.is_truncated #=> Boolean
resp.continuation_token #=> String
resp.next_continuation_token #=> String
resp.metrics_configuration_list #=> Array
resp.metrics_configuration_list[0].id #=> String
resp.metrics_configuration_list[0].filter.prefix #=> String
resp.metrics_configuration_list[0].filter.tag.key #=> String
resp.metrics_configuration_list[0].filter.tag.value #=> String
resp.metrics_configuration_list[0].filter.access_point_arn #=> String
resp.metrics_configuration_list[0].filter.and.prefix #=> String
resp.metrics_configuration_list[0].filter.and.tags #=> Array
resp.metrics_configuration_list[0].filter.and.tags[0].key #=> String
resp.metrics_configuration_list[0].filter.and.tags[0].value #=> String
resp.metrics_configuration_list[0].filter.and.access_point_arn #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket containing the metrics configurations to retrieve.

  • :continuation_token(String)

    The marker that is used to continue a metrics configuration listing that has been truncated. Use the NextContinuationToken from a previously truncated list response to continue the listing. The continuation token is an opaque value that Amazon S3 understands.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12053

def list_bucket_metrics_configurations(params = {}, options = {})
  req = build_request(:list_bucket_metrics_configurations, params)
  req.send_request(options)
end

#list_buckets(params = {}) ⇒Types::ListBucketsOutput

This operation is not supported for directory buckets.

Returns a list of all buckets owned by the authenticated sender of the request. To grant IAM permission to use this operation, you must add the s3:ListAllMyBuckets policy action.

For information about Amazon S3 buckets, see Creating, configuring, and working with Amazon S3 buckets.

We strongly recommend using only paginated ListBuckets requests. Unpaginated ListBuckets requests are only supported for Amazon Web Services accounts set to the default general purpose bucket quota of 10,000. If you have an approved general purpose bucket quota above 10,000, you must send paginated ListBuckets requests to list your account's buckets. All unpaginated ListBuckets requests will be rejected for Amazon Web Services accounts with a general purpose bucket quota greater than 10,000.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
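Because the response is enumerable over pages, iteration follows continuation tokens for you (in SDK versions that define pagination for this operation); a minimal sketch:

# Each iteration transparently fetches the next page of buckets.
client.list_buckets.each do |page|
  page.buckets.each { |bucket| puts bucket.name }
end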

Examples:

Example: To list all buckets

# The following example returns all the buckets owned by the sender of this request.

resp = client.list_buckets({})

resp.to_h outputs the following:
{
  buckets: [
    {
      creation_date: Time.parse("2012-02-15T21:03:02.000Z"),
      name: "examplebucket",
    },
    {
      creation_date: Time.parse("2011-07-24T19:33:50.000Z"),
      name: "examplebucket2",
    },
    {
      creation_date: Time.parse("2010-12-17T00:56:49.000Z"),
      name: "examplebucket3",
    },
  ],
  owner: {
    display_name: "own-display-name",
    id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31",
  },
}

Request syntax with placeholder values

resp = client.list_buckets({
  max_buckets: 1,
  continuation_token: "Token",
  prefix: "Prefix",
  bucket_region: "BucketRegion",
})

Response structure

resp.buckets #=> Array
resp.buckets[0].name #=> String
resp.buckets[0].creation_date #=> Time
resp.buckets[0].bucket_region #=> String
resp.buckets[0].bucket_arn #=> String
resp.owner.display_name #=> String
resp.owner.id #=> String
resp.continuation_token #=> String
resp.prefix #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :max_buckets(Integer)

    Maximum number of buckets to be returned in the response. When the number is more than the count of buckets that are owned by an Amazon Web Services account, all the buckets are returned in the response.

  • :continuation_token(String)

    ContinuationToken indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken is obfuscated and is not a real key. You can use this ContinuationToken for pagination of the list results.

    Length Constraints: Minimum length of 0. Maximum length of 1024.

    Required: No.

    If you specify the bucket-region, prefix, or continuation-token query parameters without using max-buckets to set the maximum number of buckets returned in the response, Amazon S3 applies a default page size of 10,000 and provides a continuation token if there are more buckets.

  • :prefix(String)

    Limits the response to bucket names that begin with the specified bucket name prefix.

  • :bucket_region(String)

    Limits the response to buckets that are located in the specified Amazon Web Services Region. The Amazon Web Services Region must be expressed according to the Amazon Web Services Region code, such as us-west-2 for the US West (Oregon) Region. For a list of the valid values for all of the Amazon Web Services Regions, see Regions and Endpoints.

    Requests made to a Regional endpoint that is different from the bucket-region parameter are not supported. For example, if you want to limit the response to your buckets in Region us-west-2, the request must be made to an endpoint in Region us-west-2.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12196

def list_buckets(params = {}, options = {})
  req = build_request(:list_buckets, params)
  req.send_request(options)
end

#list_directory_buckets(params = {}) ⇒Types::ListDirectoryBucketsOutput

Returns a list of all Amazon S3 directory buckets owned by the authenticated sender of the request. For more information about directory buckets, see Directory buckets in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

You must have the s3express:ListAllMyDirectoryBuckets permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

The BucketRegion response element is not part of the ListDirectoryBuckets Response Syntax.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values

resp = client.list_directory_buckets({
  continuation_token: "DirectoryBucketToken",
  max_directory_buckets: 1,
})

Response structure

resp.buckets #=> Array
resp.buckets[0].name #=> String
resp.buckets[0].creation_date #=> Time
resp.buckets[0].bucket_region #=> String
resp.buckets[0].bucket_arn #=> String
resp.continuation_token #=> String
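Because the returned response is pageable, the SDK can follow the continuation token for you. A minimal sketch, assuming client is an Aws::S3::Client configured for the Regional endpoint as described above:

client.list_directory_buckets.each do |page|
  page.buckets.each { |bucket| puts bucket.name }
end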

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :continuation_token (String)

    ContinuationToken indicates to Amazon S3 that the list is being continued on buckets in this account with a token. ContinuationToken is obfuscated and is not a real bucket name. You can use this ContinuationToken for the pagination of the list results.

  • :max_directory_buckets (Integer)

    Maximum number of buckets to be returned in the response. When the number is greater than the count of buckets owned by the Amazon Web Services account, all the buckets are returned in the response.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12289

def list_directory_buckets(params = {}, options = {})
  req = build_request(:list_directory_buckets, params)
  req.send_request(options)
end

#list_multipart_uploads(params = {}) ⇒ Types::ListMultipartUploadsOutput

This operation lists in-progress multipart uploads in a bucket. An in-progress multipart upload is a multipart upload that has been initiated by the CreateMultipartUpload request, but has not yet been completed or aborted.

Directory buckets - If multipart uploads in a directory bucket are in progress, you can't delete the bucket until all the in-progress multipart uploads are aborted or completed. To delete these in-progress multipart uploads, use the ListMultipartUploads operation to list the in-progress multipart uploads in the bucket and use the AbortMultipartUpload operation to abort all the in-progress multipart uploads.

The ListMultipartUploads operation returns a maximum of 1,000 multipart uploads in the response. The limit of 1,000 multipart uploads is also the default value. You can further limit the number of uploads in a response by specifying the max-uploads request parameter. If there are more than 1,000 multipart uploads that satisfy your ListMultipartUploads request, the response returns an IsTruncated element with the value of true, a NextKeyMarker element, and a NextUploadIdMarker element. To list the remaining multipart uploads, you need to make subsequent ListMultipartUploads requests. In these requests, include two query parameters: key-marker and upload-id-marker. Set the value of key-marker to the NextKeyMarker value from the previous response. Similarly, set the value of upload-id-marker to the NextUploadIdMarker value from the previous response.

Directory buckets - The upload-id-marker element and the NextUploadIdMarker element aren't supported by directory buckets. To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.
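A minimal sketch of the manual pagination loop described above for general purpose buckets, assuming client is a configured Aws::S3::Client; the bucket name is illustrative:

params = { bucket: "amzn-s3-demo-bucket" }
loop do
  resp = client.list_multipart_uploads(params)
  resp.uploads.each { |u| puts "#{u.key} => #{u.upload_id}" }
  break unless resp.is_truncated
  # Feed the markers from this response into the next request.
  params[:key_marker] = resp.next_key_marker
  params[:upload_id_marker] = resp.next_upload_id_marker
end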

For more information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

Sorting of multipart uploads in response
  • General purpose bucket - In the ListMultipartUploads response, the multipart uploads are sorted based on two criteria:

    • Key-based sorting - Multipart uploads are initially sorted in ascending order based on their object keys.

    • Time-based sorting - For uploads that share the same object key, they are further sorted in ascending order based on the upload initiation time. Among uploads with the same key, the one that was initiated first will appear before the ones that were initiated later.

  • Directory bucket - In the ListMultipartUploads response, the multipart uploads aren't sorted lexicographically based on the object keys.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to ListMultipartUploads:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: List next set of multipart uploads when previous result is truncated

# The following example specifies the upload-id-marker and key-marker from the previous truncated response to retrieve
# the next set of multipart uploads.
resp = client.list_multipart_uploads({
  bucket: "examplebucket",
  key_marker: "nextkeyfrompreviousresponse",
  max_uploads: 2,
  upload_id_marker: "valuefrompreviousresponse",
})

resp.to_h outputs the following:
{
  bucket: "acl1",
  is_truncated: true,
  key_marker: "",
  max_uploads: 2,
  next_key_marker: "someobjectkey",
  next_upload_id_marker: "examplelo91lv1iwvWpvCiJWugw2xXLPAD7Z8cJyX9.WiIRgNrdG6Ldsn.9FtS63TCl1Uf5faTB.1U5Ckcbmdw--",
  upload_id_marker: "",
  uploads: [
    {
      initiated: Time.parse("2014-05-01T05:40:58.000Z"),
      initiator: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      key: "JavaFile",
      owner: {
        display_name: "mohanataws",
        id: "852b113e7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      storage_class: "STANDARD",
      upload_id: "gZ30jIqlUa.CInXklLQtSMJITdUnoZ1Y5GACB5UckOtspm5zbDMCkPF_qkfZzMiFZ6dksmcnqxJyIBvQMG9X9Q--",
    },
    {
      initiated: Time.parse("2014-05-01T05:41:27.000Z"),
      initiator: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      key: "JavaFile",
      owner: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      storage_class: "STANDARD",
      upload_id: "b7tZSqIlo91lv1iwvWpvCiJWugw2xXLPAD7Z8cJyX9.WiIRgNrdG6Ldsn.9FtS63TCl1Uf5faTB.1U5Ckcbmdw--",
    },
  ],
}

Example: To list in-progress multipart uploads on a bucket

# The following example lists in-progress multipart uploads on a specific bucket.
resp = client.list_multipart_uploads({
  bucket: "examplebucket",
})

resp.to_h outputs the following:
{
  uploads: [
    {
      initiated: Time.parse("2014-05-01T05:40:58.000Z"),
      initiator: {
        display_name: "display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      key: "JavaFile",
      owner: {
        display_name: "display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      storage_class: "STANDARD",
      upload_id: "examplelUa.CInXklLQtSMJITdUnoZ1Y5GACB5UckOtspm5zbDMCkPF_qkfZzMiFZ6dksmcnqxJyIBvQMG9X9Q--",
    },
    {
      initiated: Time.parse("2014-05-01T05:41:27.000Z"),
      initiator: {
        display_name: "display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      key: "JavaFile",
      owner: {
        display_name: "display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      storage_class: "STANDARD",
      upload_id: "examplelo91lv1iwvWpvCiJWugw2xXLPAD7Z8cJyX9.WiIRgNrdG6Ldsn.9FtS63TCl1Uf5faTB.1U5Ckcbmdw--",
    },
  ],
}

Request syntax with placeholder values

resp = client.list_multipart_uploads({
  bucket: "BucketName", # required
  delimiter: "Delimiter",
  encoding_type: "url", # accepts url
  key_marker: "KeyMarker",
  max_uploads: 1,
  prefix: "Prefix",
  upload_id_marker: "UploadIdMarker",
  expected_bucket_owner: "AccountId",
  request_payer: "requester", # accepts requester
})

Response structure

resp.bucket #=> String
resp.key_marker #=> String
resp.upload_id_marker #=> String
resp.next_key_marker #=> String
resp.prefix #=> String
resp.delimiter #=> String
resp.next_upload_id_marker #=> String
resp.max_uploads #=> Integer
resp.is_truncated #=> Boolean
resp.uploads #=> Array
resp.uploads[0].upload_id #=> String
resp.uploads[0].key #=> String
resp.uploads[0].initiated #=> Time
resp.uploads[0].storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.uploads[0].owner.display_name #=> String
resp.uploads[0].owner.id #=> String
resp.uploads[0].initiator.id #=> String
resp.uploads[0].initiator.display_name #=> String
resp.uploads[0].checksum_algorithm #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.uploads[0].checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.common_prefixes #=> Array
resp.common_prefixes[0].prefix #=> String
resp.encoding_type #=> String, one of "url"
resp.request_charged #=> String, one of "requester"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket to which the multipart upload was initiated.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :delimiter (String)

    Character you use to group keys.

    All keys that contain the same string between the prefix, if specified, and the first occurrence of the delimiter after the prefix are grouped under a single result element, CommonPrefixes. If you don't specify the prefix parameter, then the substring starts at the beginning of the key. The keys that are grouped under the CommonPrefixes result element are not returned elsewhere in the response.

    CommonPrefixes is filtered out from results if it is not lexicographically greater than the key-marker.

    Directory buckets - For directory buckets, / is the only supported delimiter.

  • :encoding_type (String)

    Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can't parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

    When using the URL encoding type, non-ASCII characters that are used in an object's key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.

  • :key_marker (String)

    Specifies the multipart upload after which listing should begin.

    • General purpose buckets - For general purpose buckets, key-marker is an object key. Together with upload-id-marker, this parameter specifies the multipart upload after which listing should begin.

      If upload-id-marker is not specified, only the keys lexicographically greater than the specified key-marker will be included in the list.

      If upload-id-marker is specified, any multipart uploads for a key equal to the key-marker might also be included, provided those multipart uploads have upload IDs lexicographically greater than the specified upload-id-marker.

    • Directory buckets - For directory buckets, key-marker is obfuscated and isn't a real object key. The upload-id-marker parameter isn't supported by directory buckets. To list the additional multipart uploads, you only need to set the value of key-marker to the NextKeyMarker value from the previous response.

      In the ListMultipartUploads response, the multipart uploads aren't sorted lexicographically based on the object keys.

  • :max_uploads (Integer)

    Sets the maximum number of multipart uploads, from 1 to 1,000, to return in the response body. 1,000 is the maximum number of uploads that can be returned in a response.

  • :prefix (String)

    Lists in-progress uploads only for those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different groupings of keys. (You can think of using prefix to make groups in the same way that you'd use a folder in a file system.)

    Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

  • :upload_id_marker (String)

    Together with key-marker, specifies the multipart upload after which listing should begin. If key-marker is not specified, the upload-id-marker parameter is ignored. Otherwise, any multipart uploads for a key equal to the key-marker might be included in the list only if they have an upload ID lexicographically greater than the specified upload-id-marker.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12742

def list_multipart_uploads(params = {}, options = {})
  req = build_request(:list_multipart_uploads, params)
  req.send_request(options)
end

#list_object_versions(params = {}) ⇒ Types::ListObjectVersionsOutput

This operation is not supported for directory buckets.

Returns metadata about all versions of the objects in a bucket. You can also use request parameters as selection criteria to return metadata about a subset of all the object versions.

To use this operation, you must have permission to perform the s3:ListBucketVersions action. Be aware of the name difference.

A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately.

To use this operation, you must have READ access to the bucket.

The following operations are related to ListObjectVersions:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: To list object versions

# The following example returns versions of an object with a specific key name prefix.
resp = client.list_object_versions({
  bucket: "examplebucket",
  prefix: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  versions: [
    {
      etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
      is_latest: true,
      key: "HappyFace.jpg",
      last_modified: Time.parse("2016-12-15T01:19:41.000Z"),
      owner: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      size: 3191,
      storage_class: "STANDARD",
      version_id: "null",
    },
    {
      etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
      is_latest: false,
      key: "HappyFace.jpg",
      last_modified: Time.parse("2016-12-13T00:58:26.000Z"),
      owner: {
        display_name: "owner-display-name",
        id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      size: 3191,
      storage_class: "STANDARD",
      version_id: "PHtexPGjH2y.zBgT8LmB7wwLI2mpbz.k",
    },
  ],
}

Request syntax with placeholder values

resp = client.list_object_versions({
  bucket: "BucketName", # required
  delimiter: "Delimiter",
  encoding_type: "url", # accepts url
  key_marker: "KeyMarker",
  max_keys: 1,
  prefix: "Prefix",
  version_id_marker: "VersionIdMarker",
  expected_bucket_owner: "AccountId",
  request_payer: "requester", # accepts requester
  optional_object_attributes: ["RestoreStatus"], # accepts RestoreStatus
})

Response structure

resp.is_truncated #=> Boolean
resp.key_marker #=> String
resp.version_id_marker #=> String
resp.next_key_marker #=> String
resp.next_version_id_marker #=> String
resp.versions #=> Array
resp.versions[0].etag #=> String
resp.versions[0].checksum_algorithm #=> Array
resp.versions[0].checksum_algorithm[0] #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.versions[0].checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.versions[0].size #=> Integer
resp.versions[0].storage_class #=> String, one of "STANDARD"
resp.versions[0].key #=> String
resp.versions[0].version_id #=> String
resp.versions[0].is_latest #=> Boolean
resp.versions[0].last_modified #=> Time
resp.versions[0].owner.display_name #=> String
resp.versions[0].owner.id #=> String
resp.versions[0].restore_status.is_restore_in_progress #=> Boolean
resp.versions[0].restore_status.restore_expiry_date #=> Time
resp.delete_markers #=> Array
resp.delete_markers[0].owner.display_name #=> String
resp.delete_markers[0].owner.id #=> String
resp.delete_markers[0].key #=> String
resp.delete_markers[0].version_id #=> String
resp.delete_markers[0].is_latest #=> Boolean
resp.delete_markers[0].last_modified #=> Time
resp.name #=> String
resp.prefix #=> String
resp.delimiter #=> String
resp.max_keys #=> Integer
resp.common_prefixes #=> Array
resp.common_prefixes[0].prefix #=> String
resp.encoding_type #=> String, one of "url"
resp.request_charged #=> String, one of "requester"
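A minimal sketch, assuming client is a configured Aws::S3::Client and illustrative bucket and prefix names: enumerate every version and delete marker, letting the pageable response follow the key and version-id markers:

client.list_object_versions(bucket: "amzn-s3-demo-bucket", prefix: "photos/").each do |page|
  page.versions.each { |v| puts "#{v.key} #{v.version_id} latest=#{v.is_latest}" }
  page.delete_markers.each { |m| puts "delete marker: #{m.key} #{m.version_id}" }
end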

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    The bucket name that contains the objects.

  • :delimiter (String)

    A delimiter is a character that you specify to group keys. All keys that contain the same string between the prefix and the first occurrence of the delimiter are grouped under a single result element in CommonPrefixes. These groups are counted as one result against the max-keys limitation. These keys are not returned elsewhere in the response.

    CommonPrefixes is filtered out from results if it is not lexicographically greater than the key-marker.

  • :encoding_type (String)

    Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can't parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

    When using the URL encoding type, non-ASCII characters that are used in an object's key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.

  • :key_marker (String)

    Specifies the key to start with when listing objects in a bucket.

  • :max_keys (Integer)

    Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. If additional keys satisfy the search criteria, but were not returned because max-keys was exceeded, the response contains <isTruncated>true</isTruncated>. To return the additional keys, see key-marker and version-id-marker.

  • :prefix (String)

    Use this parameter to select only those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different groupings of keys. (You can think of using prefix to make groups in the same way that you'd use a folder in a file system.) You can use prefix with delimiter to roll up numerous objects into a single result under CommonPrefixes.

  • :version_id_marker (String)

    Specifies the object version you want to start listing from.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :optional_object_attributes (Array<String>)

    Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 12989

def list_object_versions(params = {}, options = {})
  req = build_request(:list_object_versions, params)
  req.send_request(options)
end

#list_objects(params = {}) ⇒ Types::ListObjectsOutput

This operation is not supported for directory buckets.

Returns some or all (up to 1,000) of the objects in a bucket. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Be sure to design your application to parse the contents of the response and handle it appropriately.

This action has been revised. We recommend that you use the newer version, ListObjectsV2, when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects.

The following operations are related to ListObjects:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: To list objects in a bucket

# The following example lists two objects in a bucket.
resp = client.list_objects({
  bucket: "examplebucket",
  max_keys: 2,
})

resp.to_h outputs the following:
{
  contents: [
    {
      etag: "\"70ee1738b6b21e2c8a43f3a5ab0eee71\"",
      key: "example1.jpg",
      last_modified: Time.parse("2014-11-21T19:40:05.000Z"),
      owner: {
        display_name: "myname",
        id: "12345example25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      size: 11,
      storage_class: "STANDARD",
    },
    {
      etag: "\"9c8af9a76df052144598c115ef33e511\"",
      key: "example2.jpg",
      last_modified: Time.parse("2013-11-15T01:10:49.000Z"),
      owner: {
        display_name: "myname",
        id: "12345example25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
      },
      size: 713193,
      storage_class: "STANDARD",
    },
  ],
  next_marker: "eyJNYXJrZXIiOiBudWxsLCAiYm90b190cnVuY2F0ZV9hbW91bnQiOiAyfQ==",
}

Request syntax with placeholder values

resp = client.list_objects({
  bucket: "BucketName", # required
  delimiter: "Delimiter",
  encoding_type: "url", # accepts url
  marker: "Marker",
  max_keys: 1,
  prefix: "Prefix",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  optional_object_attributes: ["RestoreStatus"], # accepts RestoreStatus
})

Response structure

resp.is_truncated #=> Boolean
resp.marker #=> String
resp.next_marker #=> String
resp.contents #=> Array
resp.contents[0].key #=> String
resp.contents[0].last_modified #=> Time
resp.contents[0].etag #=> String
resp.contents[0].checksum_algorithm #=> Array
resp.contents[0].checksum_algorithm[0] #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.contents[0].checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.contents[0].size #=> Integer
resp.contents[0].storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.contents[0].owner.display_name #=> String
resp.contents[0].owner.id #=> String
resp.contents[0].restore_status.is_restore_in_progress #=> Boolean
resp.contents[0].restore_status.restore_expiry_date #=> Time
resp.name #=> String
resp.prefix #=> String
resp.delimiter #=> String
resp.max_keys #=> Integer
resp.common_prefixes #=> Array
resp.common_prefixes[0].prefix #=> String
resp.encoding_type #=> String, one of "url"
resp.request_charged #=> String, one of "requester"
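A minimal sketch of marker-based pagination for this legacy operation, assuming client is a configured Aws::S3::Client (prefer list_objects_v2 for new code, as noted above); the bucket name is illustrative:

params = { bucket: "amzn-s3-demo-bucket" }
loop do
  resp = client.list_objects(params)
  resp.contents.each { |object| puts "#{object.key} (#{object.size} bytes)" }
  break unless resp.is_truncated
  # next_marker is only returned when a delimiter is used; otherwise the
  # last key of the current page serves as the next marker.
  params[:marker] = resp.next_marker || resp.contents.last.key
end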

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket containing the objects.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :delimiter (String)

    A delimiter is a character that you use to group keys.

    CommonPrefixes is filtered out from results if it is not lexicographically greater than the key-marker.

  • :encoding_type (String)

    Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can't parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

    When using the URL encoding type, non-ASCII characters that are used in an object's key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.

  • :marker (String)

    Marker is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. Marker can be any key in the bucket.

  • :max_keys (Integer)

    Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.

  • :prefix (String)

    Limits the response to keys that begin with the specified prefix.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the list objects request. Bucket owners need not specify this parameter in their requests.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :optional_object_attributes (Array<String>)

    Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13232

def list_objects(params = {}, options = {})
  req = build_request(:list_objects, params)
  req.send_request(options)
end

#list_objects_v2(params = {}) ⇒ Types::ListObjectsV2Output

Returns some or all (up to 1,000) of the objects in a bucket with each request. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately. For more information about listing objects, see Listing object keys programmatically in the Amazon S3 User Guide. To get a list of your buckets, see ListBuckets.

  • General purpose buckets - For general purpose buckets, ListObjectsV2 doesn't return prefixes that are related only to in-progress multipart uploads.

  • Directory buckets - For directory buckets, the ListObjectsV2 response includes the prefixes that are related only to in-progress multipart uploads.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - To use this operation, you must have READ access to the bucket. You must have permission to perform the s3:ListBucket action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

Sorting order of returned objects
  • General purpose bucket - For general purpose buckets, ListObjectsV2 returns objects in lexicographical order based on their key names.

  • Directory bucket - For directory buckets, ListObjectsV2 does not return objects in lexicographical order.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

This section describes the latest revision of this action. We recommend that you use this revised API operation for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API operation, ListObjects.

The following operations are related to ListObjectsV2:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: To get object list

# The following example retrieves the object list. The request specifies max keys to limit the response to include
# only 2 object keys.
resp = client.list_objects_v2({
  bucket: "DOC-EXAMPLE-BUCKET",
  max_keys: 2,
})

resp.to_h outputs the following:
{
  contents: [
    {
      etag: "\"70ee1738b6b21e2c8a43f3a5ab0eee71\"",
      key: "happyface.jpg",
      last_modified: Time.parse("2014-11-21T19:40:05.000Z"),
      size: 11,
      storage_class: "STANDARD",
    },
    {
      etag: "\"becf17f89c30367a9a44495d62ed521a-1\"",
      key: "test.jpg",
      last_modified: Time.parse("2014-05-02T04:51:50.000Z"),
      size: 4192256,
      storage_class: "STANDARD",
    },
  ],
  is_truncated: true,
  key_count: 2,
  max_keys: 2,
  name: "DOC-EXAMPLE-BUCKET",
  next_continuation_token: "1w41l63U0xa8q7smH50vCxyTQqdxo69O3EmK28Bi5PcROI4wI/EyIJg==",
  prefix: "",
}

Request syntax with placeholder values

resp = client.list_objects_v2({
  bucket: "BucketName", # required
  delimiter: "Delimiter",
  encoding_type: "url", # accepts url
  max_keys: 1,
  prefix: "Prefix",
  continuation_token: "Token",
  fetch_owner: false,
  start_after: "StartAfter",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  optional_object_attributes: ["RestoreStatus"], # accepts RestoreStatus
})

Response structure

resp.is_truncated #=> Boolean
resp.contents #=> Array
resp.contents[0].key #=> String
resp.contents[0].last_modified #=> Time
resp.contents[0].etag #=> String
resp.contents[0].checksum_algorithm #=> Array
resp.contents[0].checksum_algorithm[0] #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.contents[0].checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.contents[0].size #=> Integer
resp.contents[0].storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "GLACIER", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.contents[0].owner.display_name #=> String
resp.contents[0].owner.id #=> String
resp.contents[0].restore_status.is_restore_in_progress #=> Boolean
resp.contents[0].restore_status.restore_expiry_date #=> Time
resp.name #=> String
resp.prefix #=> String
resp.delimiter #=> String
resp.max_keys #=> Integer
resp.common_prefixes #=> Array
resp.common_prefixes[0].prefix #=> String
resp.encoding_type #=> String, one of "url"
resp.key_count #=> Integer
resp.continuation_token #=> String
resp.next_continuation_token #=> String
resp.start_after #=> String
resp.request_charged #=> String, one of "requester"
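A minimal sketch of two common usage patterns, assuming client is a configured Aws::S3::Client and an illustrative bucket name: grouping keys under top-level prefixes with a delimiter, and letting the pageable response follow next_continuation_token automatically:

# Group keys by top-level "folder" using a delimiter.
resp = client.list_objects_v2(bucket: "amzn-s3-demo-bucket", delimiter: "/")
resp.common_prefixes.each { |cp| puts cp.prefix }

# Gather every key; the pageable response requests further pages as needed.
keys = client.list_objects_v2(bucket: "amzn-s3-demo-bucket").flat_map do |page|
  page.contents.map(&:key)
end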

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :delimiter (String)

    A delimiter is a character that you use to group keys.

    CommonPrefixes is filtered out from results if it is not lexicographically greater than the StartAfter value.

    • Directory buckets - For directory buckets, / is the only supported delimiter.

    • Directory buckets - When you query ListObjectsV2 with a delimiter during in-progress multipart uploads, the CommonPrefixes response parameter contains the prefixes that are associated with the in-progress multipart uploads. For more information about multipart uploads, see Multipart Upload Overview in the Amazon S3 User Guide.

  • :encoding_type (String)

    Encoding type used by Amazon S3 to encode the object keys in the response. Responses are encoded only in UTF-8. An object key can contain any Unicode character. However, the XML 1.0 parser can't parse certain characters, such as characters with an ASCII value from 0 to 10. For characters that aren't supported in XML 1.0, you can add this parameter to request that Amazon S3 encode the keys in the response. For more information about characters to avoid in object key names, see Object key naming guidelines.

    When using the URL encoding type, non-ASCII characters that are used in an object's key name will be percent-encoded according to UTF-8 code values. For example, the object test_file(3).png will appear as test_file%283%29.png.

  • :max_keys (Integer)

    Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.

  • :prefix (String)

    Limits the response to keys that begin with the specified prefix.

    Directory buckets - For directory buckets, only prefixes that end in a delimiter (/) are supported.

  • :continuation_token (String)

    ContinuationToken indicates to Amazon S3 that the list is being continued on this bucket with a token. ContinuationToken is obfuscated and is not a real key. You can use this ContinuationToken for pagination of the list results.

  • :fetch_owner (Boolean)

    The owner field is not present in ListObjectsV2 by default. If you want to return the owner field with each key in the result, then set the FetchOwner field to true.

    Directory buckets - For directory buckets, the bucket owner is returned as the object owner for all objects.

  • :start_after (String)

    StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. StartAfter can be any key in the bucket.

    This functionality is not supported for directory buckets.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the list objects request in V2 style. Bucket owners need not specify this parameter in their requests.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :optional_object_attributes (Array<String>)

    Specifies the optional fields that you want returned in the response. Fields that you do not specify are not returned.

    This functionality is not supported for directory buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13586

def list_objects_v2(params = {}, options = {})
  req = build_request(:list_objects_v2, params)
  req.send_request(options)
end

#list_parts(params = {}) ⇒ Types::ListPartsOutput

Lists the parts that have been uploaded for a specific multipart upload.

To use this operation, you must provide the upload ID in the request. You obtain this upload ID by sending the initiate multipart upload request through CreateMultipartUpload.

The ListParts request returns a maximum of 1,000 uploaded parts. The limit of 1,000 parts is also the default value. You can restrict the number of parts in a response by specifying the max-parts request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated field with the value of true, and a NextPartNumberMarker element. To list remaining uploaded parts, in subsequent ListParts requests, include the part-number-marker query string parameter and set its value to the NextPartNumberMarker field value from the previous response.

For more information on multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - For information about permissions required to use the multipart upload API, see Multipart Upload and Permissions in the Amazon S3 User Guide.

    If the upload was created using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS), you must have permission to the kms:Decrypt action for the ListParts request to succeed.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to ListParts:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Example: To list parts of a multipart upload.

# The following example lists parts uploaded for a specific multipart upload.
resp = client.list_parts({
  bucket: "examplebucket",
  key: "bigobject",
  upload_id: "example7YPBOJuoFiQ9cz4P3Pe6FIZwO4f7wN93uHsNBEw97pl5eNwzExg0LAT2dUN91cOmrEQHDsP3WA60CEg--",
})

resp.to_h outputs the following:
{
  initiator: {
    display_name: "owner-display-name",
    id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
  },
  owner: {
    display_name: "owner-display-name",
    id: "examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484be31bebcc",
  },
  parts: [
    {
      etag: "\"d8c2eafd90c266e19ab9dcacc479f8af\"",
      last_modified: Time.parse("2016-12-16T00:11:42.000Z"),
      part_number: 1,
      size: 26246026,
    },
    {
      etag: "\"d8c2eafd90c266e19ab9dcacc479f8af\"",
      last_modified: Time.parse("2016-12-16T00:15:01.000Z"),
      part_number: 2,
      size: 26246026,
    },
  ],
  storage_class: "STANDARD",
}

Request syntax with placeholder values

resp = client.list_parts({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  max_parts: 1,
  part_number_marker: 1,
  upload_id: "MultipartUploadId", # required
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
})

Response structure

resp.abort_date #=> Time
resp.abort_rule_id #=> String
resp.bucket #=> String
resp.key #=> String
resp.upload_id #=> String
resp.part_number_marker #=> Integer
resp.next_part_number_marker #=> Integer
resp.max_parts #=> Integer
resp.is_truncated #=> Boolean
resp.parts #=> Array
resp.parts[0].part_number #=> Integer
resp.parts[0].last_modified #=> Time
resp.parts[0].etag #=> String
resp.parts[0].size #=> Integer
resp.parts[0].checksum_crc32 #=> String
resp.parts[0].checksum_crc32c #=> String
resp.parts[0].checksum_crc64nvme #=> String
resp.parts[0].checksum_sha1 #=> String
resp.parts[0].checksum_sha256 #=> String
resp.initiator.id #=> String
resp.initiator.display_name #=> String
resp.owner.display_name #=> String
resp.owner.id #=> String
resp.storage_class #=> String, one of "STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA", "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE", "OUTPOSTS", "GLACIER_IR", "SNOW", "EXPRESS_ONEZONE", "FSX_OPENZFS", "FSX_ONTAP"
resp.request_charged #=> String, one of "requester"
resp.checksum_algorithm #=> String, one of "CRC32", "CRC32C", "SHA1", "SHA256", "CRC64NVME"
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
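A minimal sketch, assuming client is a configured Aws::S3::Client and that the bucket, key, and upload ID are illustrative (the upload ID would come from create_multipart_upload): total the bytes uploaded so far, letting the pageable response handle part-number-marker paging:

total = 0
client.list_parts(
  bucket: "amzn-s3-demo-bucket",
  key: "bigobject",
  upload_id: "exampleUploadId" # hypothetical upload ID
).each do |page|
  total += page.parts.sum(&:size)
end
puts "#{total} bytes uploaded so far"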

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    The name of the bucket to which the parts are being uploaded.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key (required, String)

    Object key for which the multipart upload was initiated.

  • :max_parts (Integer)

    Sets the maximum number of parts to return.

  • :part_number_marker (Integer)

    Specifies the part after which listing should begin. Only parts with higher part numbers will be listed.

  • :upload_id (required, String)

    Upload ID identifying the multipart upload whose parts are being listed.

  • :request_payer (String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner (String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :sse_customer_algorithm (String)

    The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key (String)

    The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5 (String)

    The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13919

def list_parts(params = {}, options = {})
  req = build_request(:list_parts, params)
  req.send_request(options)
end

#put_bucket_abac(params = {}) ⇒ Struct

Sets the attribute-based access control (ABAC) property of the general purpose bucket. You must have s3:PutBucketABAC permission to perform this action. When you enable ABAC, you can use tags for access control on your buckets. Additionally, when ABAC is enabled, you must use the TagResource and UntagResource actions to manage tags on your buckets. You can no longer use the PutBucketTagging and DeleteBucketTagging actions to tag your bucket. For more information, see Enabling ABAC in general purpose buckets.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_abac({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
  abac_status: { # required
    status: "Enabled", # accepts Enabled, Disabled
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :bucket (required, String)

    The name of the general purpose bucket.

  • :content_md5 (String)

    The MD5 hash of the PutBucketAbac request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm (String)

    Indicates the algorithm that you want Amazon S3 to use to create the checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :expected_bucket_owner (String)

    The Amazon Web Services account ID of the general purpose bucket's owner.

  • :abac_status (required, Types::AbacStatus)

    The ABAC status of the general purpose bucket. When ABAC is enabled for the general purpose bucket, you can use tags to manage access to the general purpose buckets as well as for cost tracking purposes. When ABAC is disabled for the general purpose buckets, you can only use tags for cost tracking purposes. For more information, see Using tags with S3 general purpose buckets.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 13994

def put_bucket_abac(params = {}, options = {})
  req = build_request(:put_bucket_abac, params)
  req.send_request(options)
end

#put_bucket_accelerate_configuration(params = {}) ⇒ Struct

This operation is not supported for directory buckets.

Sets the accelerate configuration of an existing bucket. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to Amazon S3.

To use this operation, you must have permission to perform the s3:PutAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

The Transfer Acceleration state of a bucket can be set to one of the following two values:

  • Enabled – Enables accelerated data transfers to the bucket.

  • Suspended – Disables accelerated data transfers to the bucket.

The GetBucketAccelerateConfiguration action returns the transfer acceleration state of a bucket.

After setting the Transfer Acceleration state of a bucket to Enabled, it might take up to thirty minutes before the data transfer rates to the bucket increase.

The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (".").

For more information about transfer acceleration, see Transfer Acceleration.

The following operations are related to PutBucketAccelerateConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_accelerate_configuration({
  bucket: "BucketName", # required
  accelerate_configuration: { # required
    status: "Enabled", # accepts Enabled, Suspended
  },
  expected_bucket_owner: "AccountId",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
})
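A rough usage sketch, assuming client is a configured Aws::S3::Client and an illustrative bucket name; as noted above, the state change can take up to thirty minutes to take full effect:

client.put_bucket_accelerate_configuration(
  bucket: "amzn-s3-demo-bucket",
  accelerate_configuration: { status: "Enabled" }
)
# Read the state back; returns "Enabled" or "Suspended" once applied.
puts client.get_bucket_accelerate_configuration(bucket: "amzn-s3-demo-bucket").status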

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which the accelerate configuration is set.

  • :accelerate_configuration(required,Types::AccelerateConfiguration)

    Container for setting the transfer acceleration state.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14097

def put_bucket_accelerate_configuration(params = {}, options = {})
  req = build_request(:put_bucket_accelerate_configuration, params)
  req.send_request(options)
end

#put_bucket_acl(params = {}) ⇒Struct

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

This operation is not supported for directory buckets.

Sets the permissions on an existing bucket using access control lists (ACL). For more information, see Using ACLs. To set the ACL of a bucket, you must have the WRITE_ACP permission.

You can use one of the following two ways to set a bucket's permissions:

  • Specify the ACL in the request body

  • Specify permissions using request headers

You cannot specify access permission using both the body and the request headers.

Depending on your application needs, you may choose to set the ACL on a bucket using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, then you can continue to use that approach.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. You must use policies to grant access to your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return the AccessControlListNotSupported error code. Requests to read ACLs are still supported. For more information, see Controlling object ownership in the Amazon S3 User Guide.

Permissions

You can set access permissions by using one of the following methods:

  • Specify a canned ACL with the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL.

  • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. When using these headers, you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-write header grants create, overwrite, and delete objects permission to the LogDelivery group predefined by Amazon S3 and two Amazon Web Services accounts identified by their email addresses.

    x-amz-grant-write: uri="http://acs.amazonaws.com/groups/s3/LogDelivery", id="111122223333",

You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways. For examples of how to specify these grantee values in JSON format, see the Amazon Web Services CLI example in Enabling Amazon S3 server access logging in the Amazon S3 User Guide.

  • By the person's ID:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>ID</ID><DisplayName>GranteesEmail</DisplayName></Grantee>

    DisplayName is optional and ignored in the request.

  • By URI:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>

  • By Email address:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="AmazonCustomerByEmail"><EmailAddress>Grantees@email.com</EmailAddress></Grantee>

    The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.

    Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

    • US East (N. Virginia)

    • US West (N. California)

    • US West (Oregon)

    • Asia Pacific (Singapore)

    • Asia Pacific (Sydney)

    • Asia Pacific (Tokyo)

    • Europe (Ireland)

    • South America (São Paulo)

    For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

The following operations are related to PutBucketAcl:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Put bucket acl

# The following example replaces existing ACL on a bucket. The ACL grants the bucket owner (specified using the owner ID)
# and write permission to the LogDelivery group. Because this is a replace operation, you must specify all the grants in
# your request. To incrementally add or remove ACL grants, you might use the console.
resp = client.put_bucket_acl({
  bucket: "examplebucket",
  grant_full_control: "id=examplee7a2f25102679df27bb0ae12b3f85be6f290b936c4393484",
  grant_write: "uri=http://acs.amazonaws.com/groups/s3/LogDelivery",
})

Request syntax with placeholder values

resp = client.put_bucket_acl({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read
  access_control_policy: {
    grants: [
      {
        grantee: {
          display_name: "DisplayName",
          email_address: "EmailAddress",
          id: "ID",
          type: "CanonicalUser", # required, accepts CanonicalUser, AmazonCustomerByEmail, Group
          uri: "URI",
        },
        permission: "FULL_CONTROL", # accepts FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP
      },
    ],
    owner: {
      display_name: "DisplayName",
      id: "ID",
    },
  },
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write: "GrantWrite",
  grant_write_acp: "GrantWriteACP",
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :acl(String)

    The canned ACL to apply to the bucket.

  • :access_control_policy(Types::AccessControlPolicy)

    Contains the elements that set the ACL permissions for an object per grantee.

  • :bucket(required,String)

    The bucket to which to apply the ACL.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :grant_full_control(String)

    Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.

  • :grant_read(String)

    Allows grantee to list the objects in the bucket.

  • :grant_read_acp(String)

    Allows grantee to read the bucket ACL.

  • :grant_write(String)

    Allows grantee to create new objects in the bucket.

    For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.

  • :grant_write_acp(String)

    Allows grantee to write the ACL for the applicable bucket.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14411

def put_bucket_acl(params = {}, options = {})
  req = build_request(:put_bucket_acl, params)
  req.send_request(options)
end

#put_bucket_analytics_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets an analytics configuration for the bucket (specified by the analytics configuration ID). You can have up to 1,000 analytics configurations per bucket.

You can choose to have storage class analysis export analysis reports sent to a comma-separated values (CSV) flat file. See the DataExport request element. Reports are updated daily and are based on the object filters that you configure. When selecting data export, you specify a destination bucket and an optional destination prefix where the file is written. You can export the data to a destination bucket in a different account. However, the destination bucket must be in the same Region as the bucket that you are making the PUT analytics configuration to. For more information, see Amazon S3 Analytics – Storage Class Analysis.

You must create a bucket policy on the destination bucket where the exported file is written to grant permissions to Amazon S3 to write objects to the bucket. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.

To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

PutBucketAnalyticsConfiguration has the following special errors:

    • HTTP Error: HTTP 400 Bad Request

    • Code: InvalidArgument

    • Cause: Invalid argument.

    • HTTP Error: HTTP 400 Bad Request

    • Code: TooManyConfigurations

    • Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.

    • HTTP Error: HTTP 403 Forbidden

    • Code: AccessDenied

    • Cause: You are not the owner of the specified bucket, or you do not have the s3:PutAnalyticsConfiguration bucket permission to set the configuration on the bucket.

The following operations are related to PutBucketAnalyticsConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:
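
Example: Configure storage class analysis with a CSV export

# Illustrative sketch, not an official example: the bucket names,
# configuration ID, and prefix are placeholders. Results are exported as CSV
# to a destination bucket (given by ARN) in the same Region, which must have
# a policy granting Amazon S3 permission to write to it.
resp = client.put_bucket_analytics_configuration({
  bucket: "amzn-s3-demo-bucket",
  id: "report1",
  analytics_configuration: {
    id: "report1",
    storage_class_analysis: {
      data_export: {
        output_schema_version: "V_1",
        destination: {
          s3_bucket_destination: {
            format: "CSV",
            bucket: "arn:aws:s3:::amzn-s3-demo-destination-bucket",
            prefix: "analytics/",
          },
        },
      },
    },
  },
})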

Request syntax with placeholder values

resp = client.put_bucket_analytics_configuration({
  bucket: "BucketName", # required
  id: "AnalyticsId", # required
  analytics_configuration: { # required
    id: "AnalyticsId", # required
    filter: {
      prefix: "Prefix",
      tag: {
        key: "ObjectKey", # required
        value: "Value", # required
      },
      and: {
        prefix: "Prefix",
        tags: [
          {
            key: "ObjectKey", # required
            value: "Value", # required
          },
        ],
      },
    },
    storage_class_analysis: { # required
      data_export: {
        output_schema_version: "V_1", # required, accepts V_1
        destination: { # required
          s3_bucket_destination: { # required
            format: "CSV", # required, accepts CSV
            bucket_account_id: "AccountId",
            bucket: "BucketName", # required
            prefix: "Prefix",
          },
        },
      },
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket to which an analytics configuration is stored.

  • :id(required,String)

    The ID that identifies the analytics configuration.

  • :analytics_configuration(required,Types::AnalyticsConfiguration)

    The configuration and any analyses for the analytics filter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14551

def put_bucket_analytics_configuration(params = {}, options = {})
  req = build_request(:put_bucket_analytics_configuration, params)
  req.send_request(options)
end

#put_bucket_cors(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets the cors configuration for your bucket. If the configuration exists, Amazon S3 replaces it.

To use this operation, you must be allowed to perform the s3:PutBucketCORS action. By default, the bucket owner has this permission and can grant it to others.

You set this configuration on a bucket so that the bucket can service cross-origin requests. For example, you might want to enable a request whose origin is http://www.example.com to access your Amazon S3 bucket at my.example.bucket.com by using the browser's XMLHttpRequest capability.

To enable cross-origin resource sharing (CORS) on a bucket, you add the cors subresource to the bucket. The cors subresource is an XML document in which you configure rules that identify origins and the HTTP methods that can be executed on your bucket. The document is limited to 64 KB in size.

When Amazon S3 receives a cross-origin request (or a pre-flight OPTIONS request) against a bucket, it evaluates the cors configuration on the bucket and uses the first CORSRule rule that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:

  • The request's Origin header must match AllowedOrigin elements.

  • The request method (for example, GET, PUT, HEAD, and so on) or the Access-Control-Request-Method header in case of a pre-flight OPTIONS request must be one of the AllowedMethod elements.

  • Every header specified in the Access-Control-Request-Headers request header of a pre-flight request must match an AllowedHeader element.

For more information about CORS, go to Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.

The following operations are related to PutBucketCors:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To set cors configuration on a bucket.

# The following example enables PUT, POST, and DELETE requests from www.example.com, and enables GET requests from any
# domain.
resp = client.put_bucket_cors({
  bucket: "",
  cors_configuration: {
    cors_rules: [
      {
        allowed_headers: ["*"],
        allowed_methods: ["PUT", "POST", "DELETE"],
        allowed_origins: ["http://www.example.com"],
        expose_headers: ["x-amz-server-side-encryption"],
        max_age_seconds: 3000,
      },
      {
        allowed_headers: ["Authorization"],
        allowed_methods: ["GET"],
        allowed_origins: ["*"],
        max_age_seconds: 3000,
      },
    ],
  },
  content_md5: "",
})

Request syntax with placeholder values

resp = client.put_bucket_cors({
  bucket: "BucketName", # required
  cors_configuration: { # required
    cors_rules: [ # required
      {
        id: "ID",
        allowed_headers: ["AllowedHeader"],
        allowed_methods: ["AllowedMethod"], # required
        allowed_origins: ["AllowedOrigin"], # required
        expose_headers: ["ExposeHeader"],
        max_age_seconds: 1,
      },
    ],
  },
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    Specifies the bucket impacted by the cors configuration.

  • :cors_configuration(required,Types::CORSConfiguration)

    Describes the cross-origin access configuration for objects in an Amazon S3 bucket. For more information, see Enabling Cross-Origin Resource Sharing in the Amazon S3 User Guide.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14735

def put_bucket_cors(params = {}, options = {})
  req = build_request(:put_bucket_cors, params)
  req.send_request(options)
end

#put_bucket_encryption(params = {}) ⇒Struct

This operation configures default encryption and Amazon S3 Bucket Keys for an existing bucket. You can also block encryption types using this operation.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3).

General purpose buckets

  • You can optionally configure default encryption for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS) or dual-layer server-side encryption with Amazon Web Services KMS keys (DSSE-KMS). If you specify default encryption by using SSE-KMS, you can also configure Amazon S3 Bucket Keys. For information about the bucket default encryption feature, see Amazon S3 Bucket Default Encryption in the Amazon S3 User Guide.

  • If you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, you should verify that your KMS key ID is correct. Amazon S3 doesn't validate the KMS key ID provided in PutBucketEncryption requests.

  • Directory buckets - You can optionally configure default encryption for a bucket by using server-side encryption with Key Management Service (KMS) keys (SSE-KMS).

    • We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

    • Your SSE-KMS configuration can only support 1 customer managed key per directory bucket's lifetime. The Amazon Web Services managed key (aws/s3) isn't supported.

    • S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.

    • When you specify a KMS customer managed key for encryption in your directory bucket, only use the key ID or key ARN. The key alias format of the KMS key isn't supported.

    • For directory buckets, if you use PutBucketEncryption to set your default bucket encryption to SSE-KMS, Amazon S3 validates the KMS key ID provided in PutBucketEncryption requests.

If you're specifying a customer managed KMS key, we recommend using a fully qualified KMS key ARN. If you use a KMS key alias instead, then KMS resolves the key within the requester's account. This behavior can result in data that's encrypted with a KMS key that belongs to the requester, and not the bucket owner.

Also, this action requires Amazon Web Services Signature Version 4. For more information, see Authenticating Requests (Amazon Web Services Signature Version 4).

Permissions

  • General purpose bucket permissions - The s3:PutEncryptionConfiguration permission is required in a policy. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:PutEncryptionConfiguration permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

    To set a directory bucket default encryption with SSE-KMS, you must also have the kms:GenerateDataKey and the kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the target KMS key.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to PutBucketEncryption:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:
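
Example: Set SSE-KMS default encryption with Bucket Keys

# Illustrative sketch, not an official example: the bucket name and KMS key
# ARN are placeholders. A fully qualified key ARN is used, as recommended
# above, so the key always resolves in the bucket owner's account.
resp = client.put_bucket_encryption({
  bucket: "amzn-s3-demo-bucket",
  server_side_encryption_configuration: {
    rules: [
      {
        apply_server_side_encryption_by_default: {
          sse_algorithm: "aws:kms",
          kms_master_key_id: "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        },
        bucket_key_enabled: true, # reduce KMS request costs with S3 Bucket Keys
      },
    ],
  },
})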

Request syntax with placeholder values

resp = client.put_bucket_encryption({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  server_side_encryption_configuration: { # required
    rules: [ # required
      {
        apply_server_side_encryption_by_default: {
          sse_algorithm: "AES256", # required, accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
          kms_master_key_id: "SSEKMSKeyId",
        },
        bucket_key_enabled: false,
        blocked_encryption_types: {
          encryption_type: ["NONE"], # accepts NONE, SSE-C
        },
      },
    ],
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    Specifies default encryption for a bucket using server-side encryption with different key options.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the server-side encryption configuration.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

    This functionality is not supported for directory buckets.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

    For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.

  • :server_side_encryption_configuration(required,Types::ServerSideEncryptionConfiguration)

    Specifies the default server-side-encryption configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 14978

def put_bucket_encryption(params = {}, options = {})
  req = build_request(:put_bucket_encryption, params)
  req.send_request(options)
end

#put_bucket_intelligent_tiering_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Puts an S3 Intelligent-Tiering configuration to the specified bucket. You can have up to 1,000 S3 Intelligent-Tiering configurations per bucket.

The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without performance impact or operational overhead. S3 Intelligent-Tiering delivers automatic cost savings in three low latency and high throughput access tiers. To get the lowest storage cost on data that can be accessed in minutes to hours, you can choose to activate additional archiving capabilities.

The S3 Intelligent-Tiering storage class is the ideal storage class for data with unknown, changing, or unpredictable access patterns, independent of object size or retention period. If the size of an object is less than 128 KB, it is not monitored and not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the Frequent Access tier rates in the S3 Intelligent-Tiering storage class.

For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.

Operations related to PutBucketIntelligentTieringConfiguration include:

You only need S3 Intelligent-Tiering enabled on a bucket if you want to automatically move objects stored in the S3 Intelligent-Tiering storage class to the Archive Access or Deep Archive Access tier.

PutBucketIntelligentTieringConfiguration has the following special errors:

HTTP 400 Bad Request Error

Code: InvalidArgument

Cause: Invalid Argument

HTTP 400 Bad Request Error

Code: TooManyConfigurations

Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.

HTTP 403 Forbidden Error

Cause: You are not the owner of the specified bucket, or you do not have the s3:PutIntelligentTieringConfiguration bucket permission to set the configuration on the bucket.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:
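
Example: Archive objects after 90 days without access

# Illustrative sketch, not an official example: the bucket name and
# configuration ID are placeholders. Objects that haven't been accessed for
# 90 days move to the Archive Access tier.
resp = client.put_bucket_intelligent_tiering_configuration({
  bucket: "amzn-s3-demo-bucket",
  id: "ArchiveAfter90Days",
  intelligent_tiering_configuration: {
    id: "ArchiveAfter90Days",
    status: "Enabled",
    tierings: [
      {
        days: 90,
        access_tier: "ARCHIVE_ACCESS", # accepts ARCHIVE_ACCESS, DEEP_ARCHIVE_ACCESS
      },
    ],
  },
})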

Request syntax with placeholder values

resp = client.put_bucket_intelligent_tiering_configuration({
  bucket: "BucketName", # required
  id: "IntelligentTieringId", # required
  expected_bucket_owner: "AccountId",
  intelligent_tiering_configuration: { # required
    id: "IntelligentTieringId", # required
    filter: {
      prefix: "Prefix",
      tag: {
        key: "ObjectKey", # required
        value: "Value", # required
      },
      and: {
        prefix: "Prefix",
        tags: [
          {
            key: "ObjectKey", # required
            value: "Value", # required
          },
        ],
      },
    },
    status: "Enabled", # required, accepts Enabled, Disabled
    tierings: [ # required
      {
        days: 1, # required
        access_tier: "ARCHIVE_ACCESS", # required, accepts ARCHIVE_ACCESS, DEEP_ARCHIVE_ACCESS
      },
    ],
  },
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose configuration you want to modify or retrieve.

  • :id(required,String)

    The ID used to identify the S3 Intelligent-Tiering configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :intelligent_tiering_configuration(required,Types::IntelligentTieringConfiguration)

    Container for S3 Intelligent-Tiering configuration.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15113

def put_bucket_intelligent_tiering_configuration(params = {}, options = {})
  req = build_request(:put_bucket_intelligent_tiering_configuration, params)
  req.send_request(options)
end

#put_bucket_inventory_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

This implementation of the PUT action adds an S3 Inventory configuration (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory configurations per bucket.

Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly basis, and the results are published to a flat file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory flat file is stored is called the destination bucket. The destination bucket must be in the same Amazon Web Services Region as the source bucket.

When you configure an inventory for a source bucket, you specify the destination bucket where you want the inventory to be stored, and whether to generate the inventory daily or weekly. You can also configure what object metadata to include and whether to inventory all object versions or only current versions. For more information, see Amazon S3 Inventory in the Amazon S3 User Guide.

You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.

Permissions

To use this operation, you must have permission to perform the s3:PutInventoryConfiguration action. The bucket owner has this permission by default and can grant this permission to others.

The s3:PutInventoryConfiguration permission allows a user to create an S3 Inventory report that includes all object metadata fields available and to specify the destination bucket to store the inventory. A user with read access to objects in the destination bucket can also access all object metadata fields that are available in the inventory report.

To restrict access to an inventory report, see Restricting access to an Amazon S3 Inventory report in the Amazon S3 User Guide. For more information about the metadata fields available in S3 Inventory, see Amazon S3 Inventory lists in the Amazon S3 User Guide. For more information about permissions, see Permissions related to bucket subresource operations and Identity and access management in Amazon S3 in the Amazon S3 User Guide.

PutBucketInventoryConfiguration has the following special errors:

HTTP 400 Bad Request Error

Code: InvalidArgument

Cause: Invalid Argument

HTTP 400 Bad Request Error

Code: TooManyConfigurations

Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.

HTTP 403 Forbidden Error

Cause: You are not the owner of the specified bucket, or you do not have the s3:PutInventoryConfiguration bucket permission to set the configuration on the bucket.

The following operations are related to PutBucketInventoryConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:
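
Example: Configure a daily CSV inventory report

# Illustrative sketch, not an official example: the bucket names,
# configuration ID, and prefix are placeholders. A daily inventory of all
# object versions is written to a destination bucket (given by ARN) in the
# same Region; that bucket needs a policy allowing Amazon S3 to write to it.
resp = client.put_bucket_inventory_configuration({
  bucket: "amzn-s3-demo-bucket",
  id: "DailyInventory",
  inventory_configuration: {
    id: "DailyInventory",
    is_enabled: true,
    included_object_versions: "All",
    destination: {
      s3_bucket_destination: {
        bucket: "arn:aws:s3:::amzn-s3-demo-destination-bucket",
        format: "CSV",
        prefix: "inventory/",
      },
    },
    schedule: {
      frequency: "Daily",
    },
  },
})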

Request syntax with placeholder values

resp = client.put_bucket_inventory_configuration({
  bucket: "BucketName", # required
  id: "InventoryId", # required
  inventory_configuration: { # required
    destination: { # required
      s3_bucket_destination: { # required
        account_id: "AccountId",
        bucket: "BucketName", # required
        format: "CSV", # required, accepts CSV, ORC, Parquet
        prefix: "Prefix",
        encryption: {
          sses3: {
          },
          ssekms: {
            key_id: "SSEKMSKeyId", # required
          },
        },
      },
    },
    is_enabled: false, # required
    filter: {
      prefix: "Prefix", # required
    },
    id: "InventoryId", # required
    included_object_versions: "All", # required, accepts All, Current
    optional_fields: ["Size"], # accepts Size, LastModifiedDate, StorageClass, ETag, IsMultipartUploaded, ReplicationStatus, EncryptionStatus, ObjectLockRetainUntilDate, ObjectLockMode, ObjectLockLegalHoldStatus, IntelligentTieringAccessTier, BucketKeyStatus, ChecksumAlgorithm, ObjectAccessControlList, ObjectOwner, LifecycleExpirationDate
    schedule: { # required
      frequency: "Daily", # required, accepts Daily, Weekly
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket where the inventory configuration will be stored.

  • :id(required,String)

    The ID used to identify the inventory configuration.

  • :inventory_configuration(required,Types::InventoryConfiguration)

    Specifies the inventory configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15269

def put_bucket_inventory_configuration(params = {}, options = {})
  req = build_request(:put_bucket_inventory_configuration, params)
  req.send_request(options)
end

#put_bucket_lifecycle(params = {}) ⇒Struct

This operation is not supported for directory buckets.

For an updated version of this API, see PutBucketLifecycleConfiguration. This version has been deprecated. Existing lifecycle configurations will work. For new lifecycle configurations, use the updated API.


Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management in the Amazon S3 User Guide.

By default, all Amazon S3 resources, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) are private. Only the resource owner, the Amazon Web Services account that created the resource, can access it. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, users must get the s3:PutLifecycleConfiguration permission.

You can also explicitly deny permissions. Explicit denial also supersedes any other permissions. If you want to prevent users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:

  • s3:DeleteObject

  • s3:DeleteObjectVersion

  • s3:PutLifecycleConfiguration

For more information about permissions, see Managing Access Permissions to your Amazon S3 Resources in the Amazon S3 User Guide.

For more examples of transitioning objects to storage classes such as STANDARD_IA or ONEZONE_IA, see Examples of Lifecycle Configuration.

The following operations are related to PutBucketLifecycle:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_lifecycle({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  lifecycle_configuration: {
    rules: [ # required
      {
        expiration: {
          date: Time.now,
          days: 1,
          expired_object_delete_marker: false,
        },
        id: "ID",
        prefix: "Prefix", # required
        status: "Enabled", # required, accepts Enabled, Disabled
        transition: {
          date: Time.now,
          days: 1,
          storage_class: "GLACIER", # accepts GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR
        },
        noncurrent_version_transition: {
          noncurrent_days: 1,
          storage_class: "GLACIER", # accepts GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR
          newer_noncurrent_versions: 1,
        },
        noncurrent_version_expiration: {
          noncurrent_days: 1,
          newer_noncurrent_versions: 1,
        },
        abort_incomplete_multipart_upload: {
          days_after_initiation: 1,
        },
      },
    ],
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)
  • :content_md5(String)

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :lifecycle_configuration(Types::LifecycleConfiguration)
  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15428

def put_bucket_lifecycle(params = {}, options = {})
  req = build_request(:put_bucket_lifecycle, params)
  req.send_request(options)
end

#put_bucket_lifecycle_configuration(params = {}) ⇒Types::PutBucketLifecycleConfigurationOutput

Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. Keep in mind that this will overwrite an existing lifecycle configuration, so if you want to retain any configuration details, they must be included in the new lifecycle configuration. For information about lifecycle configuration, see Managing your storage lifecycle.

Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle.

Rules

You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. An Amazon S3 Lifecycle configuration can have up to 1,000 rules. This limit is not adjustable.

Bucket lifecycle configuration supports specifying a lifecycle rule using an object key name prefix, one or more object tags, object size, or any combination of these. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility for general purpose buckets. For the related API description, see PutBucketLifecycle.

Lifecycle configurations for directory buckets only support expiring objects and cancelling multipart uploads. Expiring versioned objects, transitions, and tag filters are not supported.

A lifecycle rule consists of the following:

  • A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, object size, or any combination of these.

  • A status indicating whether the rule is in effect.

  • One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions.

For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.

Permissions

  • General purpose bucket permissions - By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the Amazon Web Services account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must have the s3:PutLifecycleConfiguration permission.

    You can also explicitly deny permissions. An explicit deny also supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:

    • s3:DeleteObject

    • s3:DeleteObjectVersion

    • s3:PutLifecycleConfiguration

  • Directory bucket permissions - You must have the s3express:PutLifecycleConfiguration permission in an IAM identity-based policy to use this operation. Cross-account access to this API operation isn't supported. The resource owner can optionally grant access permissions to others by creating a role or user for them as long as they are within the same account as the owner and resource.

    For more information about directory bucket policies and permissions, see Authorizing Regional endpoint APIs with IAM in the Amazon S3 User Guide.

    Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com.

The following operations are related to PutBucketLifecycleConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Put bucket lifecycle

# The following example replaces existing lifecycle configuration, if any, on the specified bucket.
resp = client.put_bucket_lifecycle_configuration({
  bucket: "examplebucket",
  lifecycle_configuration: {
    rules: [
      {
        expiration: {
          days: 3650,
        },
        filter: {
          prefix: "documents/",
        },
        id: "TestOnly",
        status: "Enabled",
        transitions: [
          {
            days: 365,
            storage_class: "GLACIER",
          },
        ],
      },
    ],
  },
})

Request syntax with placeholder values

resp = client.put_bucket_lifecycle_configuration({
  bucket: "BucketName", # required
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  lifecycle_configuration: {
    rules: [ # required
      {
        expiration: {
          date: Time.now,
          days: 1,
          expired_object_delete_marker: false,
        },
        id: "ID",
        prefix: "Prefix",
        filter: {
          prefix: "Prefix",
          tag: {
            key: "ObjectKey", # required
            value: "Value", # required
          },
          object_size_greater_than: 1,
          object_size_less_than: 1,
          and: {
            prefix: "Prefix",
            tags: [
              {
                key: "ObjectKey", # required
                value: "Value", # required
              },
            ],
            object_size_greater_than: 1,
            object_size_less_than: 1,
          },
        },
        status: "Enabled", # required, accepts Enabled, Disabled
        transitions: [
          {
            date: Time.now,
            days: 1,
            storage_class: "GLACIER", # accepts GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR
          },
        ],
        noncurrent_version_transitions: [
          {
            noncurrent_days: 1,
            storage_class: "GLACIER", # accepts GLACIER, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, DEEP_ARCHIVE, GLACIER_IR
            newer_noncurrent_versions: 1,
          },
        ],
        noncurrent_version_expiration: {
          noncurrent_days: 1,
          newer_noncurrent_versions: 1,
        },
        abort_incomplete_multipart_upload: {
          days_after_initiation: 1,
        },
      },
    ],
  },
  expected_bucket_owner: "AccountId",
  transition_default_minimum_object_size: "varies_by_storage_class", # accepts varies_by_storage_class, all_storage_classes_128K
})

Response structure

resp.transition_default_minimum_object_size #=> String, one of "varies_by_storage_class", "all_storage_classes_128K"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to set the configuration.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :lifecycle_configuration(Types::BucketLifecycleConfiguration)

    Container for lifecycle rules. You can add as many as 1,000 rules.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

  • :transition_default_minimum_object_size(String)

    Indicates which default minimum object size behavior is applied to the lifecycle configuration.

    This parameter applies to general purpose buckets only. It is not supported for directory bucket lifecycle configurations.

    • all_storage_classes_128K - Objects smaller than 128 KB will not transition to any storage class by default.

    • varies_by_storage_class - Objects smaller than 128 KB will transition to Glacier Flexible Retrieval or Glacier Deep Archive storage classes. By default, all other storage classes will prevent transitions smaller than 128 KB.

    To customize the minimum object size for any transition, you can add a filter that specifies a custom ObjectSizeGreaterThan or ObjectSizeLessThan in the body of your transition rule, as in the sketch below. Custom filters always take precedence over the default transition behavior.
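
    An illustrative sketch (bucket name, rule ID, and prefix are placeholders, not from the official examples) of a rule whose custom filter lowers the minimum transition size to 64 KB for one prefix:

    resp = client.put_bucket_lifecycle_configuration({
      bucket: "amzn-s3-demo-bucket",
      transition_default_minimum_object_size: "all_storage_classes_128K",
      lifecycle_configuration: {
        rules: [
          {
            id: "SmallLogObjects",
            status: "Enabled",
            filter: {
              and: {
                prefix: "logs/",
                object_size_greater_than: 65536, # custom 64 KB minimum overrides the default
              },
            },
            transitions: [
              { days: 30, storage_class: "GLACIER_IR" },
            ],
          },
        ],
      },
    })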

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15726

def put_bucket_lifecycle_configuration(params = {}, options = {})
  req = build_request(:put_bucket_lifecycle_configuration, params)
  req.send_request(options)
end

#put_bucket_logging(params = {}) ⇒Struct

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

This operation is not supported for directory buckets.

Sets the logging parameters for a bucket and specifies permissions for who can view and modify the logging parameters. All logs are saved to buckets in the same Amazon Web Services Region as the source bucket. To set the logging status of a bucket, you must be the bucket owner.

The bucket owner is automatically granted FULL_CONTROL to all logs. You use the Grantee request element to grant access to other people. The Permissions request element specifies the kind of access the grantee has to the logs.

If the target bucket for log delivery uses the bucket owner enforced setting for S3 Object Ownership, you can't use the Grantee request element to grant access to others. Permissions can only be granted using policies. For more information, see Permissions for server access log delivery in the Amazon S3 User Guide.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (by using request elements) in the following ways. For examples of how to specify these grantee values in JSON format, see the Amazon Web Services CLI example in Enabling Amazon S3 server access logging in the Amazon S3 User Guide.

  • By the person's ID:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>ID</ID><DisplayName>GranteesEmail</DisplayName></Grantee>

    DisplayName is optional and ignored in the request.

  • By Email address:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="AmazonCustomerByEmail"><EmailAddress>Grantees@email.com</EmailAddress></Grantee>

    The grantee is resolved to the CanonicalUser and, in a response to a GETObjectAcl request, appears as the CanonicalUser.

  • By URI:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>

To enable logging, you use LoggingEnabled and its children request elements. To disable logging, you use an empty BucketLoggingStatus request element (a Ruby sketch of the same follows it):

<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01"/>
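
In the Ruby SDK, a minimal sketch of the same disable operation (the bucket name is a placeholder) is to pass an empty bucket_logging_status hash:

resp = client.put_bucket_logging({
  bucket: "amzn-s3-demo-bucket",
  bucket_logging_status: {}, # no logging_enabled element, so logging is disabled
})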

For more information about server access logging, see Server Access Logging in the Amazon S3 User Guide.

For more information about creating a bucket, see CreateBucket. For more information about returning the logging status of a bucket, see GetBucketLogging.

The following operations are related to PutBucketLogging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set logging configuration for a bucket

# The following example sets logging policy on a bucket. For the Log Delivery group to deliver logs to the destination
# bucket, it needs permission for the READ_ACP action which the policy grants.
resp = client.put_bucket_logging({
  bucket: "sourcebucket",
  bucket_logging_status: {
    logging_enabled: {
      target_bucket: "targetbucket",
      target_grants: [
        {
          grantee: {
            type: "Group",
            uri: "http://acs.amazonaws.com/groups/global/AllUsers",
          },
          permission: "READ",
        },
      ],
      target_prefix: "MyBucketLogs/",
    },
  },
})

Request syntax with placeholder values

resp = client.put_bucket_logging({
  bucket: "BucketName", # required
  bucket_logging_status: { # required
    logging_enabled: {
      target_bucket: "TargetBucket", # required
      target_grants: [
        {
          grantee: {
            display_name: "DisplayName",
            email_address: "EmailAddress",
            id: "ID",
            type: "CanonicalUser", # required, accepts CanonicalUser, AmazonCustomerByEmail, Group
            uri: "URI",
          },
          permission: "FULL_CONTROL", # accepts FULL_CONTROL, READ, WRITE
        },
      ],
      target_prefix: "TargetPrefix", # required
      target_object_key_format: {
        simple_prefix: {
        },
        partitioned_prefix: {
          partition_date_source: "EventTime", # accepts EventTime, DeliveryTime
        },
      },
    },
  },
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which to set the logging parameters.

  • :bucket_logging_status(required,Types::BucketLoggingStatus)

    Container for logging status information.

  • :content_md5(String)

    The MD5 hash of the PutBucketLogging request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 15929

def put_bucket_logging(params = {}, options = {})
  req = build_request(:put_bucket_logging, params)
  req.send_request(options)
end

#put_bucket_metrics_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets a metrics configuration (specified by the metrics configuration ID) for the bucket. You can have up to 1,000 metrics configurations per bucket. If you're updating an existing metrics configuration, note that this is a full replacement of the existing metrics configuration. If you don't include the elements you want to keep, they are erased.

To use this operation, you must have permissions to perform the s3:PutMetricsConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

For information about CloudWatch request metrics for Amazon S3, see Monitoring Metrics with Amazon CloudWatch.

The following operations are related to PutBucketMetricsConfiguration:

PutBucketMetricsConfiguration has the following special error:

  • Error code:TooManyConfigurations

    • Description: You are attempting to create a new configuration buthave already reached the 1,000-configuration limit.

    • HTTP Status Code: HTTP 400 Bad Request

You must URL encode any signed header values that contain spaces. Forexample, if your header value ismy file.txt, containing two spacesaftermy, you must URL encode this value tomy%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_metrics_configuration({
  bucket: "BucketName", # required
  id: "MetricsId", # required
  metrics_configuration: { # required
    id: "MetricsId", # required
    filter: {
      prefix: "Prefix",
      tag: {
        key: "ObjectKey", # required
        value: "Value", # required
      },
      access_point_arn: "AccessPointArn",
      and: {
        prefix: "Prefix",
        tags: [
          {
            key: "ObjectKey", # required
            value: "Value", # required
          },
        ],
        access_point_arn: "AccessPointArn",
      },
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket for which the metrics configuration is set.

  • :id(required,String)

    The ID used to identify the metrics configuration. The ID has a 64-character limit and can only contain letters, numbers, periods, dashes, and underscores.

  • :metrics_configuration(required,Types::MetricsConfiguration)

    Specifies the metrics configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16037

def put_bucket_metrics_configuration(params = {}, options = {})
  req = build_request(:put_bucket_metrics_configuration, params)
  req.send_request(options)
end

#put_bucket_notification(params = {}) ⇒Struct

This operation is not supported for directory buckets.

No longer used; see the PutBucketNotificationConfiguration operation.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_notification({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  notification_configuration: { # required
    topic_configuration: {
      id: "NotificationId",
      events: ["s3:ReducedRedundancyLostObject"], # accepts s3:ReducedRedundancyLostObject, s3:ObjectCreated:*, s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Post, s3:ObjectRestore:Completed, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationNotTracked, s3:Replication:OperationMissedThreshold, s3:Replication:OperationReplicatedAfterThreshold, s3:ObjectRestore:Delete, s3:LifecycleTransition, s3:IntelligentTiering, s3:ObjectAcl:Put, s3:LifecycleExpiration:*, s3:LifecycleExpiration:Delete, s3:LifecycleExpiration:DeleteMarkerCreated, s3:ObjectTagging:*, s3:ObjectTagging:Put, s3:ObjectTagging:Delete
      event: "s3:ReducedRedundancyLostObject", # accepts the same event types as :events
      topic: "TopicArn",
    },
    queue_configuration: {
      id: "NotificationId",
      event: "s3:ReducedRedundancyLostObject", # accepts the same event types as :events
      events: ["s3:ReducedRedundancyLostObject"], # accepts s3:ReducedRedundancyLostObject, s3:ObjectCreated:*, s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Post, s3:ObjectRestore:Completed, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationNotTracked, s3:Replication:OperationMissedThreshold, s3:Replication:OperationReplicatedAfterThreshold, s3:ObjectRestore:Delete, s3:LifecycleTransition, s3:IntelligentTiering, s3:ObjectAcl:Put, s3:LifecycleExpiration:*, s3:LifecycleExpiration:Delete, s3:LifecycleExpiration:DeleteMarkerCreated, s3:ObjectTagging:*, s3:ObjectTagging:Put, s3:ObjectTagging:Delete
      queue: "QueueArn",
    },
    cloud_function_configuration: {
      id: "NotificationId",
      event: "s3:ReducedRedundancyLostObject", # accepts the same event types as :events
      events: ["s3:ReducedRedundancyLostObject"], # accepts s3:ReducedRedundancyLostObject, s3:ObjectCreated:*, s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Post, s3:ObjectRestore:Completed, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationNotTracked, s3:Replication:OperationMissedThreshold, s3:Replication:OperationReplicatedAfterThreshold, s3:ObjectRestore:Delete, s3:LifecycleTransition, s3:IntelligentTiering, s3:ObjectAcl:Put, s3:LifecycleExpiration:*, s3:LifecycleExpiration:Delete, s3:LifecycleExpiration:DeleteMarkerCreated, s3:ObjectTagging:*, s3:ObjectTagging:Put, s3:ObjectTagging:Delete
      cloud_function: "CloudFunction",
      invocation_role: "CloudFunctionInvocationRole",
    },
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket.

  • :content_md5(String)

    The MD5 hash of the PutBucketNotification request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :notification_configuration(required,Types::NotificationConfigurationDeprecated)

    The container for the configuration.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16123

def put_bucket_notification(params = {}, options = {})
  req = build_request(:put_bucket_notification, params)
  req.send_request(options)
end

#put_bucket_notification_configuration(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Enables notifications of specified events for a bucket. For more information about event notifications, see Configuring Event Notifications.

Using this API, you can replace an existing notification configuration. The configuration is an XML file that defines the event types that you want Amazon S3 to publish and the destination where you want Amazon S3 to publish an event notification when it detects an event of the specified type.

By default, your bucket has no event notifications configured. That is, the notification configuration will be an empty NotificationConfiguration.

<NotificationConfiguration>

</NotificationConfiguration>

This action replaces the existing notification configuration with the configuration you include in the request body.

After Amazon S3 receives this request, it first verifies that any Amazon Simple Notification Service (Amazon SNS) or Amazon Simple Queue Service (Amazon SQS) destination exists, and that the bucket owner has permission to publish to it by sending a test notification. In the case of Lambda destinations, Amazon S3 verifies that the Lambda function permissions grant Amazon S3 permission to invoke the function from the Amazon S3 bucket. For more information, see Configuring Notifications for Amazon S3 Events.

You can disable notifications by adding the empty NotificationConfiguration element.
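
For instance, a minimal sketch (bucket name is hypothetical): passing an empty notification_configuration removes all event notifications from the bucket.

client.put_bucket_notification_configuration({
  bucket: "amzn-s3-demo-bucket",   # hypothetical bucket name
  notification_configuration: {},  # empty configuration disables notifications
})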

For more information about the number of event notification configurations that you can create per bucket, see Amazon S3 service quotas in Amazon Web Services General Reference.

By default, only the bucket owner can configure notifications on a bucket. However, bucket owners can use a bucket policy to grant permission to other users to set this configuration with the required s3:PutBucketNotification permission.

The PUT notification is an atomic operation. For example, suppose your notification configuration includes SNS topic, SQS queue, and Lambda function configurations. When you send a PUT request with this configuration, Amazon S3 sends test messages to your SNS topic. If the message fails, the entire PUT action will fail, and Amazon S3 will not add the configuration to your bucket.

If the configuration in the request body includes only one TopicConfiguration specifying only the s3:ReducedRedundancyLostObject event type, the response will also include the x-amz-sns-test-message-id header containing the message ID of the test notification sent to the topic.

The following action is related to PutBucketNotificationConfiguration:


You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set notification configuration for a bucket

# The following example sets notification configuration on a bucket to publish the object created events to an SNS topic.
resp = client.put_bucket_notification_configuration({
  bucket: "examplebucket",
  notification_configuration: {
    topic_configurations: [
      {
        events: [
          "s3:ObjectCreated:*",
        ],
        topic_arn: "arn:aws:sns:us-west-2:123456789012:s3-notification-topic",
      },
    ],
  },
})

Request syntax with placeholder values

resp = client.put_bucket_notification_configuration({
  bucket: "BucketName", # required
  notification_configuration: { # required
    topic_configurations: [
      {
        id: "NotificationId",
        topic_arn: "TopicArn", # required
        events: ["s3:ReducedRedundancyLostObject"], # required, accepts s3:ReducedRedundancyLostObject, s3:ObjectCreated:*, s3:ObjectCreated:Put, s3:ObjectCreated:Post, s3:ObjectCreated:Copy, s3:ObjectCreated:CompleteMultipartUpload, s3:ObjectRemoved:*, s3:ObjectRemoved:Delete, s3:ObjectRemoved:DeleteMarkerCreated, s3:ObjectRestore:*, s3:ObjectRestore:Post, s3:ObjectRestore:Completed, s3:Replication:*, s3:Replication:OperationFailedReplication, s3:Replication:OperationNotTracked, s3:Replication:OperationMissedThreshold, s3:Replication:OperationReplicatedAfterThreshold, s3:ObjectRestore:Delete, s3:LifecycleTransition, s3:IntelligentTiering, s3:ObjectAcl:Put, s3:LifecycleExpiration:*, s3:LifecycleExpiration:Delete, s3:LifecycleExpiration:DeleteMarkerCreated, s3:ObjectTagging:*, s3:ObjectTagging:Put, s3:ObjectTagging:Delete
        filter: {
          key: {
            filter_rules: [
              {
                name: "prefix", # accepts prefix, suffix
                value: "FilterRuleValue",
              },
            ],
          },
        },
      },
    ],
    queue_configurations: [
      {
        id: "NotificationId",
        queue_arn: "QueueArn", # required
        events: ["s3:ReducedRedundancyLostObject"], # required, accepts the same event types listed above
        filter: {
          key: {
            filter_rules: [
              {
                name: "prefix", # accepts prefix, suffix
                value: "FilterRuleValue",
              },
            ],
          },
        },
      },
    ],
    lambda_function_configurations: [
      {
        id: "NotificationId",
        lambda_function_arn: "LambdaFunctionArn", # required
        events: ["s3:ReducedRedundancyLostObject"], # required, accepts the same event types listed above
        filter: {
          key: {
            filter_rules: [
              {
                name: "prefix", # accepts prefix, suffix
                value: "FilterRuleValue",
              },
            ],
          },
        },
      },
    ],
    event_bridge_configuration: {},
  },
  expected_bucket_owner: "AccountId",
  skip_destination_validation: false,
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket.

  • :notification_configuration(required,Types::NotificationConfiguration)

    A container for specifying the notification configuration of the bucket. If this element is empty, notifications are turned off for the bucket.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :skip_destination_validation(Boolean)

    Skips validation of Amazon SQS, Amazon SNS, and Lambda destinations. True or false value.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16311

def put_bucket_notification_configuration(params = {}, options = {})
  req = build_request(:put_bucket_notification_configuration, params)
  req.send_request(options)
end

#put_bucket_ownership_controls(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Creates or modifies OwnershipControls for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketOwnershipControls permission. For more information about Amazon S3 permissions, see Specifying permissions in a policy.

For information about Amazon S3 Object Ownership, see Using object ownership.
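
As an illustration, a minimal sketch (bucket name is hypothetical): applying the BucketOwnerEnforced rule, which disables ACLs so that the bucket owner owns every object in the bucket.

client.put_bucket_ownership_controls({
  bucket: "amzn-s3-demo-bucket", # hypothetical bucket name
  ownership_controls: {
    rules: [
      { object_ownership: "BucketOwnerEnforced" }, # or BucketOwnerPreferred, ObjectWriter
    ],
  },
})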

The following operations are related to PutBucketOwnershipControls:

  • GetBucketOwnershipControls

  • DeleteBucketOwnershipControls

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.put_bucket_ownership_controls({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  expected_bucket_owner: "AccountId",
  ownership_controls: { # required
    rules: [ # required
      {
        object_ownership: "BucketOwnerPreferred", # required, accepts BucketOwnerPreferred, ObjectWriter, BucketOwnerEnforced
      },
    ],
  },
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose OwnershipControls you want to set.

  • :content_md5(String)

    The MD5 hash of the OwnershipControls request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :ownership_controls(required,Types::OwnershipControls)

    The OwnershipControls (BucketOwnerEnforced, BucketOwnerPreferred, or ObjectWriter) that you want to apply to this Amazon S3 bucket.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16401

def put_bucket_ownership_controls(params = {}, options = {})
  req = build_request(:put_bucket_ownership_controls, params)
  req.send_request(options)
end

#put_bucket_policy(params = {}) ⇒Struct

Applies an Amazon S3 bucket policy to an Amazon S3 bucket.

Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions

If you are using an identity other than the root user of the Amazon Web Services account that owns the bucket, the calling identity must both have the PutBucketPolicy permissions on the specified bucket and belong to the bucket owner's account in order to use this operation.

If you don't have PutBucketPolicy permissions, Amazon S3 returns a 403 Access Denied error. If you have the correct permissions, but you're not using an identity that belongs to the bucket owner's account, Amazon S3 returns a 405 Method Not Allowed error.

To ensure that bucket owners don't inadvertently lock themselves out of their own buckets, the root principal in a bucket owner's Amazon Web Services account can perform the GetBucketPolicy, PutBucketPolicy, and DeleteBucketPolicy API actions, even if their bucket policy explicitly denies the root principal's access. Bucket owner root principals can only be blocked from performing these API actions by VPC endpoint policies and Amazon Web Services Organizations policies.

  • General purpose bucket permissions - The s3:PutBucketPolicy permission is required in a policy. For more information about general purpose bucket policies, see Using Bucket Policies and User Policies in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation, you must have the s3express:PutBucketPolicy permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.

Example bucket policies

General purpose buckets example bucket policies - See Bucket policy examples in the Amazon S3 User Guide.

Directory bucket example bucket policies - See Example bucket policies for S3 Express One Zone in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is s3express-control.region-code.amazonaws.com.

The following operations are related to PutBucketPolicy:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set bucket policy

# The following example sets a permission policy on a bucket.
resp = client.put_bucket_policy({
  bucket: "examplebucket",
  policy: "{\"Version\": \"2012-10-17\", \"Statement\": [{ \"Sid\": \"id-1\",\"Effect\": \"Allow\",\"Principal\": {\"AWS\": \"arn:aws:iam::123456789012:root\"}, \"Action\": [ \"s3:PutObject\",\"s3:PutObjectAcl\"], \"Resource\": [\"arn:aws:s3:::acl3/*\" ] } ]}",
})
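
Escaping a JSON document by hand is error-prone. As an alternative, a minimal sketch (bucket name, account ID, and Sid are hypothetical) that builds the policy from a Ruby hash and serializes it:

require "json"

policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Sid" => "AllowPut",                                          # hypothetical Sid
    "Effect" => "Allow",
    "Principal" => { "AWS" => "arn:aws:iam::123456789012:root" },
    "Action" => ["s3:PutObject", "s3:PutObjectAcl"],
    "Resource" => ["arn:aws:s3:::amzn-s3-demo-bucket/*"],         # hypothetical bucket
  }],
}

client.put_bucket_policy({
  bucket: "amzn-s3-demo-bucket",
  policy: JSON.generate(policy), # serialize the hash instead of hand-escaping quotes
})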

Request syntax with placeholder values

resp = client.put_bucket_policy({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  confirm_remove_self_bucket_access: false,
  policy: "Policy", # required
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket.

    Directory buckets - When you use this operation with a directory bucket, you must use path-style requests in the format https://s3express-control.region-code.amazonaws.com/bucket-name. Virtual-hosted-style requests aren't supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must also follow the format bucket-base-name--zone-id--x-s3 (for example, DOC-EXAMPLE-BUCKET--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :content_md5(String)

    The MD5 hash of the request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

    This functionality is not supported for directory buckets.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

    For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

    • CRC32

    • CRC32C

    • CRC64NVME

    • SHA1

    • SHA256

    For more information, see Checking object integrity in the Amazon S3 User Guide.

    If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 fails the request with a BadDigest error.

    For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.

  • :confirm_remove_self_bucket_access(Boolean)

    Set this parameter to true to confirm that you want to remove your permissions to change this bucket policy in the future.

    This functionality is not supported for directory buckets.

  • :policy(required,String)

    The bucket policy as a JSON document.

    For directory buckets, the only IAM action supported in the bucket policy is s3express:CreateSession.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

    For directory buckets, this header is not supported in this API operation. If you specify this header, the request fails with the HTTP status code 501 Not Implemented.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16612

def put_bucket_policy(params = {}, options = {})
  req = build_request(:put_bucket_policy, params)
  req.send_request(options)
end

#put_bucket_replication(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Creates a replication configuration or replaces an existing one. For more information, see Replication in the Amazon S3 User Guide.

Specify the replication configuration in the request body. In the replication configuration, you provide the name of the destination bucket or buckets where you want Amazon S3 to replicate objects, the IAM role that Amazon S3 can assume to replicate objects on your behalf, and other relevant information. You can invoke this request for a specific Amazon Web Services Region by using the aws:RequestedRegion condition key.

A replication configuration must include at least one rule, and can contain a maximum of 1,000. Each rule identifies a subset of objects to replicate by filtering the objects in the source bucket. To choose additional subsets of objects to replicate, add a rule for each subset.

To specify a subset of the objects in the source bucket to apply a replication rule to, add the Filter element as a child of the Rule element. You can filter objects based on an object key prefix, one or more object tags, or both. When you add the Filter element in the configuration, you must also add the following elements: DeleteMarkerReplication, Status, and Priority.
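
As a minimal sketch (bucket names, role ARN, and rule ID are hypothetical), a filtered rule that carries the required Priority, Status, and DeleteMarkerReplication elements:

client.put_bucket_replication({
  bucket: "amzn-s3-demo-source",                               # hypothetical source bucket
  replication_configuration: {
    role: "arn:aws:iam::123456789012:role/replication-role",   # hypothetical IAM role
    rules: [
      {
        id: "replicate-documents",                             # hypothetical rule ID
        priority: 1,                                           # required when Filter is present
        filter: { prefix: "documents/" },
        status: "Enabled",
        delete_marker_replication: { status: "Disabled" },     # required when Filter is present
        destination: { bucket: "arn:aws:s3:::amzn-s3-demo-destination" },
      },
    ],
  },
})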

If you are using an earlier version of the replication configuration, Amazon S3 handles replication of delete markers differently. For more information, see Backward Compatibility.

For information about enabling versioning on a bucket, see Using Versioning.

Handling Replication of Encrypted Objects

By default, Amazon S3 doesn't replicate objects that are stored at rest using server-side encryption with KMS keys. To replicate Amazon Web Services KMS-encrypted objects, add the following: SourceSelectionCriteria, SseKmsEncryptedObjects, Status, EncryptionConfiguration, and ReplicaKmsKeyID. For information about replication configuration, see Replicating Objects Created with SSE Using KMS keys.

For information on PutBucketReplication errors, see List of replication-related error codes.

Permissions

To create a PutBucketReplication request, you must have s3:PutReplicationConfiguration permissions for the bucket.

By default, a resource owner, in this case the Amazon Web Services account that created the bucket, can perform this operation. The resource owner can also grant others permissions to perform the operation. For more information about permissions, see Specifying Permissions in a Policy and Managing Access Permissions to Your Amazon S3 Resources.

To perform this operation, the user or role performing the action must have the iam:PassRole permission.

The following operations are related to PutBucketReplication:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set replication configuration on a bucket

# The following example sets replication configuration on a bucket.
resp = client.put_bucket_replication({
  bucket: "examplebucket",
  replication_configuration: {
    role: "arn:aws:iam::123456789012:role/examplerole",
    rules: [
      {
        destination: {
          bucket: "arn:aws:s3:::destinationbucket",
          storage_class: "STANDARD",
        },
        prefix: "",
        status: "Enabled",
      },
    ],
  },
})

Request syntax with placeholder values

resp = client.put_bucket_replication({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  replication_configuration: { # required
    role: "Role", # required
    rules: [ # required
      {
        id: "ID",
        priority: 1,
        prefix: "Prefix",
        filter: {
          prefix: "Prefix",
          tag: {
            key: "ObjectKey", # required
            value: "Value", # required
          },
          and: {
            prefix: "Prefix",
            tags: [
              {
                key: "ObjectKey", # required
                value: "Value", # required
              },
            ],
          },
        },
        status: "Enabled", # required, accepts Enabled, Disabled
        source_selection_criteria: {
          sse_kms_encrypted_objects: {
            status: "Enabled", # required, accepts Enabled, Disabled
          },
          replica_modifications: {
            status: "Enabled", # required, accepts Enabled, Disabled
          },
        },
        existing_object_replication: {
          status: "Enabled", # required, accepts Enabled, Disabled
        },
        destination: { # required
          bucket: "BucketName", # required
          account: "AccountId",
          storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
          access_control_translation: {
            owner: "Destination", # required, accepts Destination
          },
          encryption_configuration: {
            replica_kms_key_id: "ReplicaKmsKeyID",
          },
          replication_time: {
            status: "Enabled", # required, accepts Enabled, Disabled
            time: { # required
              minutes: 1,
            },
          },
          metrics: {
            status: "Enabled", # required, accepts Enabled, Disabled
            event_threshold: {
              minutes: 1,
            },
          },
        },
        delete_marker_replication: {
          status: "Enabled", # accepts Enabled, Disabled
        },
      },
    ],
  },
  token: "ObjectLockToken",
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The name of the bucket.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :replication_configuration(required,Types::ReplicationConfiguration)

    A container for replication rules. You can add up to 1,000 rules. The maximum size of a replication configuration is 2 MB.

  • :token(String)

    A token to allow Object Lock to be enabled for an existing bucket.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16854

def put_bucket_replication(params = {}, options = {})
  req = build_request(:put_bucket_replication, params)
  req.send_request(options)
end

#put_bucket_request_payment(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets the request payment configuration for a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner (only) to specify that the person requesting the download will be charged for the download. For more information, see Requester Pays Buckets.

The following operations are related to PutBucketRequestPayment:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set request payment configuration on a bucket.

# The following example sets request payment configuration on a bucket so that person requesting the download is charged.
resp = client.put_bucket_request_payment({
  bucket: "examplebucket",
  request_payment_configuration: {
    payer: "Requester",
  },
})

Request syntax with placeholder values

resp = client.put_bucket_request_payment({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  request_payment_configuration: { # required
    payer: "Requester", # required, accepts Requester, BucketOwner
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :request_payment_configuration(required,Types::RequestPaymentConfiguration)

    Container for Payer.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 16955

def put_bucket_request_payment(params = {}, options = {})
  req = build_request(:put_bucket_request_payment, params)
  req.send_request(options)
end

#put_bucket_tagging(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets the tags for a general purpose bucket if attribute based access control (ABAC) is not enabled for the bucket. When you enable ABAC for a general purpose bucket, you can no longer use this operation for that bucket and must use the TagResource or UntagResource operations instead.

Use tags to organize your Amazon Web Services bill to reflect your own cost structure. To do this, sign up to get your Amazon Web Services account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging and Using Cost Allocation in Amazon S3 Bucket Tags.

When this operation sets the tags for a bucket, it will overwrite any current tags the bucket already has. You cannot use this operation to add tags to an existing list of tags.

To use this operation, you must have permissions to perform the s3:PutBucketTagging action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.

PutBucketTagging has the following special errors. For more Amazon S3 errors, see Error Responses.

  • InvalidTag - The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. For more information, see Using Cost Allocation in Amazon S3 Bucket Tags.

  • MalformedXML - The XML provided does not match the schema.

  • OperationAborted - A conflicting conditional action is currently in progress against this resource. Please try again.

  • InternalError - The service was unable to apply the provided tag to the bucket.

The following operations are related to PutBucketTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set tags on a bucket

# The following example sets tags on a bucket. Any existing tags are replaced.
resp = client.put_bucket_tagging({
  bucket: "examplebucket",
  tagging: {
    tag_set: [
      {
        key: "Key1",
        value: "Value1",
      },
      {
        key: "Key2",
        value: "Value2",
      },
    ],
  },
})

Request syntax with placeholder values

resp = client.put_bucket_tagging({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  tagging: { # required
    tag_set: [ # required
      {
        key: "ObjectKey", # required
        value: "Value", # required
      },
    ],
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :tagging(required,Types::Tagging)

    Container for the TagSet and Tag elements.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17117

def put_bucket_tagging(params = {}, options = {})
  req = build_request(:put_bucket_tagging, params)
  req.send_request(options)
end

#put_bucket_versioning(params = {}) ⇒Struct

This operation is not supported for directory buckets.

When you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. While this change is propagating, you might encounter intermittent HTTP 404 NoSuchKey errors for requests to objects created or updated after enabling versioning. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE) on objects in the bucket.

Sets the versioning state of an existing bucket.

You can set the versioning state with one of the following values:

Enabled—Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.

Suspended—Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.

If the versioning state has never been set on a bucket, it has no versioning state; a GetBucketVersioning request does not return a versioning state value.

In order to enable MFA Delete, you must be the bucket owner. If you are the bucket owner and want to enable MFA Delete in the bucket versioning configuration, you must include the x-amz-mfa request header and the Status and the MfaDelete request elements in a request to set the versioning state of the bucket.
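
A minimal sketch (bucket name, device serial number, and token are hypothetical): the :mfa option supplies the x-amz-mfa header, and the configuration carries both Status and MfaDelete.

client.put_bucket_versioning({
  bucket: "amzn-s3-demo-bucket",                                         # hypothetical bucket name
  mfa: "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",   # serial number, a space, then the device code
  versioning_configuration: {
    status: "Enabled",
    mfa_delete: "Enabled",
  },
})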

If you have an object expiration lifecycle configuration in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle configuration will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle and Versioning.

The following operations are related to PutBucketVersioning:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set versioning configuration on a bucket

# The following example sets versioning configuration on bucket. The configuration enables versioning on the bucket.
resp = client.put_bucket_versioning({
  bucket: "examplebucket",
  versioning_configuration: {
    mfa_delete: "Disabled",
    status: "Enabled",
  },
})

Request syntax with placeholder values

resp = client.put_bucket_versioning({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  mfa: "MFA",
  versioning_configuration: { # required
    mfa_delete: "Enabled", # accepts Enabled, Disabled
    status: "Enabled", # accepts Enabled, Suspended
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :mfa(String)

    The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. The serial number is the number that uniquely identifies the MFA device. For physical MFA devices, this is the unique serial number that's provided with the device. For virtual MFA devices, the serial number is the device ARN. For more information, see Enabling versioning on buckets and Configuring MFA delete in the Amazon Simple Storage Service User Guide.

  • :versioning_configuration(required,Types::VersioningConfiguration)

    Container for setting the versioning state.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17273

def put_bucket_versioning(params = {}, options = {})
  req = build_request(:put_bucket_versioning, params)
  req.send_request(options)
end

#put_bucket_website(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.

This PUT action requires the S3:PutBucketWebsite permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite permission.

To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements (see the sketch after this list). Because all requests are sent to another website, you don't need to provide an index document name for the bucket.

  • WebsiteConfiguration

  • RedirectAllRequestsTo

  • HostName

  • Protocol
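
A minimal sketch (bucket and host names are hypothetical) of the redirect-all configuration described above:

client.put_bucket_website({
  bucket: "amzn-s3-demo-bucket",   # hypothetical bucket name
  website_configuration: {
    redirect_all_requests_to: {    # no index_document is needed in this case
      host_name: "www.example.com",
      protocol: "https",
    },
  },
})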

If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.

  • WebsiteConfiguration

  • IndexDocument

  • Suffix

  • ErrorDocument

  • Key

  • RoutingRules

  • RoutingRule

  • Condition

  • HttpErrorCodeReturnedEquals

  • KeyPrefixEquals

  • Redirect

  • Protocol

  • HostName

  • ReplaceKeyPrefixWith

  • ReplaceKeyWith

  • HttpRedirectCode

Amazon S3 has a limitation of 50 routing rules per website configuration. If you require more than 50 routing rules, you can use object redirect. For more information, see Configuring an Object Redirect in the Amazon S3 User Guide.

The maximum request length is limited to 128 KB.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: Set website configuration on a bucket

# The following example adds website configuration to a bucket.
resp = client.put_bucket_website({
  bucket: "examplebucket",
  content_md5: "",
  website_configuration: {
    error_document: {
      key: "error.html",
    },
    index_document: {
      suffix: "index.html",
    },
  },
})

Request syntax with placeholder values

resp = client.put_bucket_website({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  website_configuration: { # required
    error_document: {
      key: "ObjectKey", # required
    },
    index_document: {
      suffix: "Suffix", # required
    },
    redirect_all_requests_to: {
      host_name: "HostName", # required
      protocol: "http", # accepts http, https
    },
    routing_rules: [
      {
        condition: {
          http_error_code_returned_equals: "HttpErrorCodeReturnedEquals",
          key_prefix_equals: "KeyPrefixEquals",
        },
        redirect: { # required
          host_name: "HostName",
          http_redirect_code: "HttpRedirectCode",
          protocol: "http", # accepts http, https
          replace_key_prefix_with: "ReplaceKeyPrefixWith",
          replace_key_with: "ReplaceKeyWith",
        },
      },
    ],
  },
  expected_bucket_owner: "AccountId",
})

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the request when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :website_configuration(required,Types::WebsiteConfiguration)

    Container for the request.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 17461

def put_bucket_website(params = {}, options = {})
  req = build_request(:put_bucket_website, params)
  req.send_request(options)
end

#put_object(params = {}) ⇒Types::PutObjectOutput

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

Adds an object to a bucket.

  • Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket. You cannot use PutObject to only update a single piece of metadata for an existing object. You must put the entire object with updated metadata if you want to update some values.

  • If your bucket uses the bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. All objects written to the bucket by any account will be owned by the bucket owner.

  • Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. However, Amazon S3 provides features that can modify this behavior:

  • S3 Object Lock - To prevent objects from being deleted or overwritten, you can use Amazon S3 Object Lock in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • If-None-Match - Uploads the object only if the object key name does not already exist in the specified bucket. Otherwise, Amazon S3 returns a 412 Precondition Failed error. If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, retry the upload (see the sketch after this list).

    Expects the * character (asterisk).

    For more information, see Add preconditions to S3 operations with conditional requests in the Amazon S3 User Guide or RFC 7232.

    This functionality is not supported for S3 on Outposts.

  • S3 Versioning - When you enable versioning for a bucket, if Amazon S3 receives multiple write requests for the same object simultaneously, it stores all versions of the objects. For each write request that is made to the same object, Amazon S3 automatically generates a unique version ID of that object being stored in Amazon S3. You can retrieve, replace, or delete any version of the object. For more information about versioning, see Adding Objects to Versioning-Enabled Buckets in the Amazon S3 User Guide. For information about returning the versioning state of a bucket, see GetBucketVersioning.

    This functionality is not supported for directory buckets.
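
A minimal sketch of the If-None-Match behavior noted above (bucket and key are hypothetical; the rescued error class is the SDK's dynamically generated class for the PreconditionFailed error code):

begin
  client.put_object({
    bucket: "amzn-s3-demo-bucket",   # hypothetical bucket name
    key: "reports/2025-01.csv",      # hypothetical key
    body: "col1,col2\n",
    if_none_match: "*",              # upload only if the key does not already exist
  })
rescue Aws::S3::Errors::PreconditionFailed
  # the key already exists (HTTP 412); decide whether to skip or overwrite
end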

Permissions
  • General purpose bucket permissions - The following permissions are required in your policies when your PutObject request includes specific headers.

    • s3:PutObject - To successfully complete the PutObject request, you must always have the s3:PutObject permission on a bucket to add an object to it.

    • s3:PutObjectAcl - To successfully change the object's ACL with your PutObject request, you must have the s3:PutObjectAcl permission.

    • s3:PutObjectTagging - To successfully set the tag-set with your PutObject request, you must have the s3:PutObjectTagging permission.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create the session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

Data integrity with Content-MD5
  • General purpose bucket - To ensure that data is not corrupted traversing the network, use the Content-MD5 header. When you use this header, Amazon S3 checks the object against the provided MD5 value and, if they do not match, Amazon S3 returns an error. Alternatively, when the object's ETag is its MD5 digest, you can calculate the MD5 while putting the object to Amazon S3 and compare the returned ETag to the calculated MD5 value (see the sketch after these notes).

  • Directory bucket - This functionality is not supported for directory buckets.
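
A minimal sketch of the Content-MD5 check described above (bucket, key, and body are hypothetical); Ruby's standard library provides the Base64-encoded digest that the header expects:

require "digest"

body = "hello world" # hypothetical object body
client.put_object({
  bucket: "amzn-s3-demo-bucket",                 # hypothetical bucket name
  key: "greeting.txt",                           # hypothetical key
  body: body,
  content_md5: Digest::MD5.base64digest(body),   # S3 rejects the upload if the body doesn't match
})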

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

For more information about related Amazon S3 APIs, see the following:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To upload an object and specify canned ACL.

# The following example uploads an object. The request specifies an optional canned ACL (access control list) to grant
# READ access to authenticated users. If the bucket is versioning enabled, S3 returns version ID in response.
resp = client.put_object({
  acl: "authenticated-read",
  body: "filetoupload",
  bucket: "examplebucket",
  key: "exampleobject",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  version_id: "Kirh.unyZwjQ69YxcQLA8z4F5j3kJJKr",
}

Example: To upload an object

# The following example uploads an object to a versioning-enabled bucket. The source file is specified using Windows file
# syntax. S3 returns VersionId of the newly created object.
resp = client.put_object({
  body: "HappyFace.jpg",
  bucket: "examplebucket",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  version_id: "tpf3zF08nBplQK1XLOefGskR7mGDwcDk",
}

Example: To upload an object (specify optional headers)

# The following example uploads an object. The request specifies optional
# request headers to direct S3 to use a specific storage class and use
# server-side encryption.
resp = client.put_object({
  body: "HappyFace.jpg",
  bucket: "examplebucket",
  key: "HappyFace.jpg",
  server_side_encryption: "AES256",
  storage_class: "STANDARD_IA",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  server_side_encryption: "AES256",
  version_id: "CG612hodqujkf8FaaNfp8U..FIhLROcp",
}

Example: To upload an object and specify optional tags

# The following example uploads an object. The request specifies optional
# object tags. The bucket is versioned, therefore S3 returns the version ID
# of the newly created object.
resp = client.put_object({
  body: "c:\\HappyFace.jpg",
  bucket: "examplebucket",
  key: "HappyFace.jpg",
  tagging: "key1=value1&key2=value2",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  version_id: "psM2sYY4.o1501dSx8wMvnkOzSBB.V4a",
}

Example: To upload an object and specify server-side encryption and object tags

# The following example uploads an object. The request specifies the optional
# server-side encryption option. The request also specifies optional object
# tags. If the bucket is versioning-enabled, S3 returns the version ID in the response.
resp = client.put_object({
  body: "filetoupload",
  bucket: "examplebucket",
  key: "exampleobject",
  server_side_encryption: "AES256",
  tagging: "key1=value1&key2=value2",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  server_side_encryption: "AES256",
  version_id: "Ri.vC6qVlA4dEnjgRV4ZHsHoFIjqEMNt",
}

Example: To upload object and specify user-defined metadata

# The following example creates an object. The request also specifies optional
# metadata. If the bucket is versioning-enabled, S3 returns the version ID in the response.
resp = client.put_object({
  body: "filetoupload",
  bucket: "examplebucket",
  key: "exampleobject",
  metadata: {
    "metadata1" => "value1",
    "metadata2" => "value2",
  },
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  version_id: "pSKidl4pHBiNwukdbcPXAIs.sshFFOc0",
}

Example: To create an object.

# The following example creates an object. If the bucket is versioning-enabled,
# S3 returns the version ID in the response.
resp = client.put_object({
  body: "filetoupload",
  bucket: "examplebucket",
  key: "objectkey",
})

resp.to_h outputs the following:
{
  etag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
  version_id: "Bvq0EDKxOcXLJXNo_Lkz37eM3R4pfzyQ",
}

Streaming a file from disk

# upload file from disk in a single request, may not exceed 5GB
File.open('/source/file/path', 'rb') do |file|
  s3.put_object(bucket: 'bucket-name', key: 'object-key', body: file)
end

Request syntax with placeholder values

resp = client.put_object({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control
  body: source_file,
  bucket: "BucketName", # required
  cache_control: "CacheControl",
  content_disposition: "ContentDisposition",
  content_encoding: "ContentEncoding",
  content_language: "ContentLanguage",
  content_length: 1,
  content_md5: "ContentMD5",
  content_type: "ContentType",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  checksum_crc32: "ChecksumCRC32",
  checksum_crc32c: "ChecksumCRC32C",
  checksum_crc64nvme: "ChecksumCRC64NVME",
  checksum_sha1: "ChecksumSHA1",
  checksum_sha256: "ChecksumSHA256",
  expires: Time.now,
  if_match: "IfMatch",
  if_none_match: "IfNoneMatch",
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write_acp: "GrantWriteACP",
  key: "ObjectKey", # required
  write_offset_bytes: 1,
  metadata: {
    "MetadataKey" => "MetadataValue",
  },
  server_side_encryption: "AES256", # accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
  storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
  website_redirect_location: "WebsiteRedirectLocation",
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  ssekms_key_id: "SSEKMSKeyId",
  ssekms_encryption_context: "SSEKMSEncryptionContext",
  bucket_key_enabled: false,
  request_payer: "requester", # accepts requester
  tagging: "TaggingHeader",
  object_lock_mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
  object_lock_retain_until_date: Time.now,
  object_lock_legal_hold_status: "ON", # accepts ON, OFF
  expected_bucket_owner: "AccountId",
})

Response structure

resp.expiration #=> String
resp.etag #=> String
resp.checksum_crc32 #=> String
resp.checksum_crc32c #=> String
resp.checksum_crc64nvme #=> String
resp.checksum_sha1 #=> String
resp.checksum_sha256 #=> String
resp.checksum_type #=> String, one of "COMPOSITE", "FULL_OBJECT"
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.version_id #=> String
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.ssekms_encryption_context #=> String
resp.bucket_key_enabled #=> Boolean
resp.size #=> Integer
resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :acl(String)

    The canned ACL to apply to the object. For more information, see Canned ACL in the Amazon S3 User Guide.

    When adding a new object, you can use headers to grant ACL-based permissions to individual Amazon Web Services accounts or to predefined groups defined by Amazon S3. These permissions are then added to the ACL on the object. By default, all objects are private. Only the owner has full access control. For more information, see Access Control List (ACL) Overview and Managing ACLs Using the REST API in the Amazon S3 User Guide.

    If the bucket that you're uploading objects to uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. Buckets that use this setting only accept PUT requests that don't specify an ACL or PUT requests that specify bucket owner full control ACLs, such as the bucket-owner-full-control canned ACL or an equivalent form of this ACL expressed in the XML format. PUT requests that contain other ACLs (for example, custom grants to certain Amazon Web Services accounts) fail and return a 400 error with the error code AccessControlListNotSupported. For more information, see Controlling ownership of objects and disabling ACLs in the Amazon S3 User Guide.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :body(String,StringIO,File)

    Object data.

  • :bucket(required,String)

    The bucket name to which the PUT action was initiated.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :cache_control(String)

    Can be used to specify caching behavior along the request/reply chain. For more information, see http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.

  • :content_disposition(String)

    Specifies presentational information for the object. For more information, see https://www.rfc-editor.org/rfc/rfc6266#section-4.

  • :content_encoding(String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#field.content-encoding.

  • :content_language(String)

    The language the content is in.

  • :content_length(Integer)

    Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-length.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the Content-MD5 mechanism as an end-to-end integrity check. For more information about REST request authentication, see REST Authentication.

    The Content-MD5 or x-amz-sdk-checksum-algorithm header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :content_type(String)

    A standard MIME type describing the format of the contents. For more information, see https://www.rfc-editor.org/rfc/rfc9110.html#name-content-type.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum-algorithm or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request.

    For the x-amz-checksum-algorithm header, replace algorithm with the supported algorithm from the following list:

    • CRC32

    • CRC32C

    • CRC64NVME

    • SHA1

    • SHA256

    For more information, see Checking object integrity in the Amazon S3 User Guide.

    If the individual checksum value you provide through x-amz-checksum-algorithm doesn't match the checksum algorithm you set through x-amz-sdk-checksum-algorithm, Amazon S3 fails the request with a BadDigest error.

    The Content-MD5 or x-amz-sdk-checksum-algorithm header is required for any request to upload an object with a retention period configured using Amazon S3 Object Lock. For more information, see Uploading objects to an Object Lock enabled bucket in the Amazon S3 User Guide.

    For directory buckets, when you use Amazon Web Services SDKs, CRC32 is the default checksum algorithm that's used for performance.
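    As a minimal sketch (bucket and key are placeholders): when you pass :checksum_algorithm, the SDK computes the matching x-amz-checksum-* header for you, and S3 echoes the checksum back in the response.

    resp = client.put_object(
      bucket: 'amzn-s3-demo-bucket',
      key: 'exampleobject',
      body: 'filetoupload',
      checksum_algorithm: 'SHA256' # SDK calculates and sends the SHA-256 checksum
    )
    resp.checksum_sha256 #=> Base64 encoded SHA-256 digest of the object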

  • :checksum_crc32(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc32c(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc64nvme(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME checksum of the object. The CRC64NVME checksum is always a full object checksum. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha1(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha256(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :expires(Time,DateTime,Date,Integer,String)

    The date and time at which the object is no longer cacheable. For more information, see https://www.rfc-editor.org/rfc/rfc7234#section-5.3.

  • :if_match(String)

    Uploads the object only if the ETag (entity tag) value provided during the WRITE operation matches the ETag of the object in S3. If the ETag values do not match, the operation returns a 412 Precondition Failed error.

    If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should fetch the object's ETag and retry the upload.

    Expects the ETag value as a string.

    For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.

  • :if_none_match(String)

    Uploads the object only if the object key name does not already exist in the bucket specified. Otherwise, Amazon S3 returns a 412 Precondition Failed error.

    If a conflicting operation occurs during the upload, S3 returns a 409 ConditionalRequestConflict response. On a 409 failure, you should retry the upload.

    Expects the '*' (asterisk) character.

    For more information about conditional requests, see RFC 7232, or Conditional requests in the Amazon S3 User Guide.
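    A minimal sketch of a create-only upload with this parameter; the error class names below follow the SDK's usual Aws::S3::Errors naming for service error codes, and the handling policy is illustrative:

    begin
      client.put_object(
        bucket: 'amzn-s3-demo-bucket',
        key: 'exampleobject',
        body: 'filetoupload',
        if_none_match: '*' # succeed only if the key does not already exist
      )
    rescue Aws::S3::Errors::PreconditionFailed
      # 412: an object already exists at this key
    rescue Aws::S3::Errors::ConditionalRequestConflict
      # 409: a conflicting conditional write is in progress; retry the upload
    end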

  • :grant_full_control(String)

    Gives the grantee READ, READ_ACP, and WRITE_ACP permissions on the object.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read(String)

    Allows grantee to read the object data and its metadata.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read_acp(String)

    Allows grantee to read the object ACL.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :grant_write_acp(String)

    Allows grantee to write the ACL for the applicable object.

    • This functionality is not supported for directory buckets.

    • This functionality is not supported for Amazon S3 on Outposts.

  • :key(required,String)

    Object key for which the PUT action was initiated.

  • :write_offset_bytes(Integer)

    Specifies the offset for appending data to existing objects in bytes. The offset must be equal to the size of the existing object being appended to. If no object exists, setting this header to 0 will create a new object.

    This functionality is only supported for objects in the Amazon S3 Express One Zone storage class in directory buckets.
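    A hedged sketch of appending to an object in a directory bucket (all names are placeholders); each append must set the offset to the current object size:

    bucket = 'amzn-s3-demo-bucket--usw2-az1--x-s3' # directory bucket (placeholder)
    key = 'logs/app.log'

    client.put_object(bucket: bucket, key: key, body: 'first chunk', write_offset_bytes: 0)

    # The next offset must equal the size of the existing object.
    size = client.head_object(bucket: bucket, key: key).content_length
    client.put_object(bucket: bucket, key: key, body: 'second chunk', write_offset_bytes: size)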

  • :metadata(Hash<String,String>)

    A map of metadata to store with the object in S3.

  • :server_side_encryption(String)

    The server-side encryption algorithm to use when you store this object in Amazon S3 or Amazon FSx.

    • General purpose buckets - You have four mutually exclusive options to protect data using server-side encryption in Amazon S3, depending on how you choose to manage the encryption keys. Specifically, the encryption key options are Amazon S3 managed keys (SSE-S3), Amazon Web Services KMS keys (SSE-KMS or DSSE-KMS), and customer-provided keys (SSE-C). Amazon S3 encrypts data with server-side encryption by using Amazon S3 managed keys (SSE-S3) by default. You can optionally tell Amazon S3 to encrypt data at rest by using server-side encryption with other key options. For more information, see Using Server-Side Encryption in the Amazon S3 User Guide.

    • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your CreateSession requests or PUT object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide. For more information about the encryption overriding behaviors in directory buckets, see Specifying server-side encryption with KMS for new object uploads.

      In the Zonal endpoint API calls (except CopyObject and UploadPartCopy) using the REST API, the encryption request headers must match the encryption settings that are specified in the CreateSession request. You can't override the values of the encryption settings (x-amz-server-side-encryption, x-amz-server-side-encryption-aws-kms-key-id, x-amz-server-side-encryption-context, and x-amz-server-side-encryption-bucket-key-enabled) that are specified in the CreateSession request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the CreateSession request to protect new objects in the directory bucket.

      When you use the CLI or the Amazon Web Services SDKs, for CreateSession, the session token refreshes automatically to avoid service interruptions when a session expires. The CLI or the Amazon Web Services SDKs use the bucket's default encryption configuration for the CreateSession request. It's not supported to override the encryption settings values in the CreateSession request. So in the Zonal endpoint API calls (except CopyObject and UploadPartCopy), the encryption request headers must match the default encryption configuration of the directory bucket.

    • S3 access points for Amazon FSx - When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server-side encryption option is aws:fsx. All Amazon FSx file systems have encryption configured by default and are encrypted at rest. Data is automatically encrypted before being written to the file system, and automatically decrypted as it is read. These processes are handled transparently by Amazon FSx.

  • :storage_class(String)

    By default, Amazon S3 uses the STANDARD Storage Class to store newly created objects. The STANDARD storage class provides high durability and high availability. Depending on performance needs, you can specify a different Storage Class. For more information, see Storage Classes in the Amazon S3 User Guide.

    • Directory buckets only support EXPRESS_ONEZONE (the S3 Express One Zone storage class) in Availability Zones and ONEZONE_IA (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.

    • Amazon S3 on Outposts only uses the OUTPOSTS Storage Class.

  • :website_redirect_location(String)

    If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata. For information about object metadata, see Object Key and Metadata in the Amazon S3 User Guide.

    In the following example, the request header sets the redirect to an object (anotherPage.html) in the same bucket:

    x-amz-website-redirect-location: /anotherPage.html

    In the following example, the request header sets the object redirect to another website:

    x-amz-website-redirect-location: http://www.example.com/

    For more information about website hosting in Amazon S3, see Hosting Websites on Amazon S3 and How to Configure Website Page Redirects in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported for directory buckets.

  • :ssekms_key_id(String)

    Specifies the KMS key ID (Key ID, Key ARN, or Key Alias) to use for object encryption. If the KMS key doesn't exist in the same account that's issuing the command, you must use the full Key ARN, not the Key ID.

    General purpose buckets - If you specify x-amz-server-side-encryption with aws:kms or aws:kms:dsse, this header specifies the ID (Key ID, Key ARN, or Key Alias) of the KMS key to use. If you specify x-amz-server-side-encryption:aws:kms or x-amz-server-side-encryption:aws:kms:dsse, but do not provide x-amz-server-side-encryption-aws-kms-key-id, Amazon S3 uses the Amazon Web Services managed key (aws/s3) to protect the data.

    Directory buckets - To encrypt data using SSE-KMS, it's recommended to specify the x-amz-server-side-encryption header to aws:kms. Then, the x-amz-server-side-encryption-aws-kms-key-id header implicitly uses the bucket's default KMS customer managed key ID. If you want to explicitly set the x-amz-server-side-encryption-aws-kms-key-id header, it must match the bucket's default customer managed key (using key ID or ARN, not alias). Your SSE-KMS configuration can only support 1 customer managed key per directory bucket's lifetime. The Amazon Web Services managed key (aws/s3) isn't supported. Incorrect key specification results in an HTTP 400 Bad Request error.

  • :ssekms_encryption_context(String)

    Specifies the Amazon Web Services KMS Encryption Context as an additional encryption context to use for object encryption. The value of this header is a Base64 encoded string of a UTF-8 encoded JSON, which contains the encryption context as key-value pairs. This value is stored as object metadata and automatically gets passed on to Amazon Web Services KMS for future GetObject operations on this object.

    General purpose buckets - This value must be explicitly added during CopyObject operations if you want an additional encryption context for your object. For more information, see Encryption context in the Amazon S3 User Guide.

    Directory buckets - You can optionally provide an explicit encryption context value. The value must match the default encryption context - the bucket Amazon Resource Name (ARN). An additional encryption context value is not supported.
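    Because the header value is Base64-encoded JSON, you can build it from a Ruby hash. A minimal sketch for a general purpose bucket; the context key and all names are illustrative:

    require 'base64'
    require 'json'

    context = Base64.strict_encode64(JSON.generate('project' => 'blue'))

    client.put_object(
      bucket: 'amzn-s3-demo-bucket',
      key: 'exampleobject',
      body: 'filetoupload',
      server_side_encryption: 'aws:kms',
      ssekms_encryption_context: context
    )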

  • :bucket_key_enabled(Boolean)

    Specifies whether Amazon S3 should use an S3 Bucket Key for object encryption with server-side encryption using Key Management Service (KMS) keys (SSE-KMS).

    General purpose buckets - Setting this header to true causes Amazon S3 to use an S3 Bucket Key for object encryption with SSE-KMS. Also, specifying this header with a PUT action doesn't affect bucket-level settings for S3 Bucket Key.

    Directory buckets - S3 Bucket Keys are always enabled for GET and PUT operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through CopyObject, UploadPartCopy, the Copy operation in Batch Operations, or the import jobs. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :tagging(String)

    The tag-set for the object. The tag-set must be encoded as URL Query parameters. (For example, "Key1=Value1")

    This functionality is not supported for directory buckets.
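    Because the tag-set must be URL Query encoded, URI.encode_www_form is a convenient way to build it. A minimal sketch with illustrative tags and names:

    require 'uri'

    tagging = URI.encode_www_form('key1' => 'value1', 'key2' => 'value2')
    #=> "key1=value1&key2=value2"

    client.put_object(
      bucket: 'amzn-s3-demo-bucket',
      key: 'exampleobject',
      body: 'filetoupload',
      tagging: tagging
    )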

  • :object_lock_mode(String)

    The Object Lock mode that you want to apply to this object.

    This functionality is not supported for directory buckets.

  • :object_lock_retain_until_date(Time,DateTime,Date,Integer,String)

    The date and time when you want this object's Object Lock to expire. Must be formatted as a timestamp parameter.

    This functionality is not supported for directory buckets.

  • :object_lock_legal_hold_status(String)

    Specifies whether a legal hold will be applied to this object. For more information about S3 Object Lock, see Object Lock in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18506

def put_object(params = {}, options = {})
  req = build_request(:put_object, params)
  req.send_request(options)
end

#put_object_acl(params = {}) ⇒Types::PutObjectAclOutput

End of support notice: As of October 1, 2025, Amazon S3 has discontinued support for Email Grantee Access Control Lists (ACLs). If you attempt to use an Email Grantee ACL in a request after October 1, 2025, the request will receive an HTTP 405 (Method Not Allowed) error.

This change affects the following Amazon Web Services Regions: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), and South America (São Paulo).

This operation is not supported for directory buckets.

Uses the acl subresource to set the access control list (ACL) permissions for a new or existing object in an S3 bucket. You must have the WRITE_ACP permission to set the ACL of an object. For more information, see What permissions can I grant? in the Amazon S3 User Guide.

This functionality is not supported for Amazon S3 on Outposts.

Depending on your application needs, you can choose to set the ACL on an object using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, you can continue to use that approach. For more information, see Access Control List (ACL) Overview in the Amazon S3 User Guide.

If your bucket uses the bucket owner enforced setting for S3 Object Ownership, ACLs are disabled and no longer affect permissions. You must use policies to grant access to your bucket and the objects in it. Requests to set ACLs or update ACLs fail and return the AccessControlListNotSupported error code. Requests to read ACLs are still supported. For more information, see Controlling object ownership in the Amazon S3 User Guide.

Permissions

You can set access permissions using one of the following methods:

  • Specify a canned ACL with the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL.

  • Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. When using these headers, you specify explicit access permissions and grantees (Amazon Web Services accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.

    You specify each grantee as a type=value pair, where the type is one of the following:

    • id – if the value specified is the canonical user ID of an Amazon Web Services account

    • uri – if you are granting permissions to a predefined group

    • emailAddress – if the value specified is the email address of an Amazon Web Services account

      Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

      • US East (N. Virginia)

      • US West (N. California)

      • US West (Oregon)

      • Asia Pacific (Singapore)

      • Asia Pacific (Sydney)

      • Asia Pacific (Tokyo)

      • Europe (Ireland)

      • South America (São Paulo)

      For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

      For example, the following x-amz-grant-read header grants list objects permission to the two Amazon Web Services accounts identified by their email addresses.

    x-amz-grant-read: emailAddress="xyz@amazon.com",emailAddress="abc@amazon.com"

You can use either a canned ACL or specify access permissions explicitly. You cannot do both.

Grantee Values

You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways. For examples of how to specify these grantee values in JSON format, see the Amazon Web Services CLI example in Enabling Amazon S3 server access logging in the Amazon S3 User Guide.

  • By the person's ID:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser"><ID>ID</ID><DisplayName>GranteesEmail</DisplayName></Grantee>

    DisplayName is optional and ignored in the request.

  • By URI:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group"><URI>http://acs.amazonaws.com/groups/global/AuthenticatedUsers</URI></Grantee>

  • By Email address:

    <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="AmazonCustomerByEmail"><EmailAddress>Grantees@email.com</EmailAddress></Grantee>

    The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.

    Using email addresses to specify a grantee is only supported in the following Amazon Web Services Regions:

    • US East (N. Virginia)

    • US West (N. California)

    • US West (Oregon)

    • Asia Pacific (Singapore)

    • Asia Pacific (Sydney)

    • Asia Pacific (Tokyo)

    • Europe (Ireland)

    • South America (São Paulo)

    For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the Amazon Web Services General Reference.

Versioning

The ACL of an object is set at the object version level. By default, PUT sets the ACL of the current version of an object. To set the ACL of a different version, use the versionId subresource.
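A minimal sketch of setting the ACL of a non-current version, assuming a versioning-enabled bucket and a known version ID (both placeholders):

client.put_object_acl(
  bucket: 'examplebucket',
  key: 'HappyFace.jpg',
  version_id: '3HL4kqtJvjVBH40Nrjfkd', # placeholder version ID
  acl: 'private' # changes the ACL of that version only
)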

The following operations are related toPutObjectAcl:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To grant permissions using object ACL

# The following example adds grants to an object ACL. The first permission
# grants user1 and user2 FULL_CONTROL and the AllUsers group READ permission.
resp = client.put_object_acl({
  access_control_policy: {
  },
  bucket: "examplebucket",
  grant_full_control: "emailaddress=user1@example.com,emailaddress=user2@example.com",
  grant_read: "uri=http://acs.amazonaws.com/groups/global/AllUsers",
  key: "HappyFace.jpg",
})

resp.to_h outputs the following:
{
}

Request syntax with placeholder values

resp = client.put_object_acl({
  acl: "private", # accepts private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control
  access_control_policy: {
    grants: [
      {
        grantee: {
          display_name: "DisplayName",
          email_address: "EmailAddress",
          id: "ID",
          type: "CanonicalUser", # required, accepts CanonicalUser, AmazonCustomerByEmail, Group
          uri: "URI",
        },
        permission: "FULL_CONTROL", # accepts FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP
      },
    ],
    owner: {
      display_name: "DisplayName",
      id: "ID",
    },
  },
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  grant_full_control: "GrantFullControl",
  grant_read: "GrantRead",
  grant_read_acp: "GrantReadACP",
  grant_write: "GrantWrite",
  grant_write_acp: "GrantWriteACP",
  key: "ObjectKey", # required
  request_payer: "requester", # accepts requester
  version_id: "ObjectVersionId",
  expected_bucket_owner: "AccountId",
})

Response structure

resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :acl(String)

    The canned ACL to apply to the object. For more information, see Canned ACL.

  • :access_control_policy(Types::AccessControlPolicy)

    Contains the elements that set the ACL permissions for an object per grantee.

  • :bucket(required,String)

    The bucket name that contains the object to which you want to attach the ACL.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the data. This header must be used as a message integrity check to verify that the request body was not corrupted in transit. For more information, go to RFC 1864.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :grant_full_control(String)

    Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.

    This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read(String)

    Allows grantee to list the objects in the bucket.

    This functionality is not supported for Amazon S3 on Outposts.

  • :grant_read_acp(String)

    Allows grantee to read the bucket ACL.

    This functionality is not supported for Amazon S3 on Outposts.

  • :grant_write(String)

    Allows grantee to create new objects in the bucket.

    For the bucket and object owners of existing objects, also allows deletions and overwrites of those objects.

  • :grant_write_acp(String)

    Allows grantee to write the ACL for the applicable bucket.

    This functionality is not supported for Amazon S3 on Outposts.

  • :key(required,String)

    Key for which the PUT action was initiated.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :version_id(String)

    Version ID used to reference a specific version of the object.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 18897

def put_object_acl(params = {}, options = {})
  req = build_request(:put_object_acl, params)
  req.send_request(options)
end

#put_object_legal_hold(params = {}) ⇒Types::PutObjectLegalHoldOutput

This operation is not supported for directory buckets.

Applies a legal hold configuration to the specified object. For more information, see Locking Objects.

This functionality is not supported for Amazon S3 on Outposts.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

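Example: To place and remove a legal hold (an illustrative sketch; the bucket, key, and handling are assumptions, not from the upstream examples)

# Place the hold; the object version cannot be deleted while the hold
# is in place.
client.put_object_legal_hold(
  bucket: 'examplebucket',
  key: 'HappyFace.jpg',
  legal_hold: { status: 'ON' }
)

# Later, lift the hold.
client.put_object_legal_hold(
  bucket: 'examplebucket',
  key: 'HappyFace.jpg',
  legal_hold: { status: 'OFF' }
)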
Request syntax with placeholder values

resp = client.put_object_legal_hold({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  legal_hold: {
    status: "ON", # accepts ON, OFF
  },
  request_payer: "requester", # accepts requester
  version_id: "ObjectVersionId",
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Response structure

resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object that you want to place a legal hold on.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :key(required,String)

    The key name for the object that you want to place a legal hold on.

  • :legal_hold(Types::ObjectLockLegalHold)

    Container element for the legal hold configuration you want to apply to the specified object.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :version_id(String)

    The version ID of the object that you want to place a legal hold on.

  • :content_md5(String)

    The MD5 hash for the request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19022

def put_object_legal_hold(params = {}, options = {})
  req = build_request(:put_object_legal_hold, params)
  req.send_request(options)
end

#put_object_lock_configuration(params = {}) ⇒Types::PutObjectLockConfigurationOutput

This operation is not supported for directory buckets.

Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.

  • The DefaultRetention settings require both a mode and a period.

  • The DefaultRetention period can be either Days or Years but you must select one. You cannot specify Days and Years at the same time.

  • You can enable Object Lock for new or existing buckets. For more information, see Configuring Object Lock.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

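Example: To set a default retention rule (an illustrative sketch; the bucket name and period are assumptions, and the bucket must already have Object Lock enabled)

client.put_object_lock_configuration(
  bucket: 'examplebucket',
  object_lock_configuration: {
    object_lock_enabled: 'Enabled',
    rule: {
      default_retention: {
        mode: 'GOVERNANCE', # principals with s3:BypassGovernanceRetention can override
        days: 30            # specify days or years, never both
      }
    }
  }
)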
Request syntax with placeholder values

resp = client.put_object_lock_configuration({
  bucket: "BucketName", # required
  object_lock_configuration: {
    object_lock_enabled: "Enabled", # accepts Enabled
    rule: {
      default_retention: {
        mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
        days: 1,
        years: 1,
      },
    },
  },
  request_payer: "requester", # accepts requester
  token: "ObjectLockToken",
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Response structure

resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket whose Object Lock configuration you want to create or replace.

  • :object_lock_configuration(Types::ObjectLockConfiguration)

    The Object Lock configuration that you want to apply to the specified bucket.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :token(String)

    A token to allow Object Lock to be enabled for an existing bucket.

  • :content_md5(String)

    The MD5 hash for the request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19145

def put_object_lock_configuration(params = {}, options = {})
  req = build_request(:put_object_lock_configuration, params)
  req.send_request(options)
end

#put_object_retention(params = {}) ⇒Types::PutObjectRetentionOutput

This operation is not supported for directory buckets.

Places an Object Retention configuration on an object. For more information, see Locking Objects. Users or accounts require the s3:PutObjectRetention permission in order to place an Object Retention configuration on objects. Bypassing a Governance Retention configuration requires the s3:BypassGovernanceRetention permission.

This functionality is not supported for Amazon S3 on Outposts.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

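Example: To place a retention period on an object (an illustrative sketch; the names and the one-year period are assumptions)

client.put_object_retention(
  bucket: 'examplebucket',
  key: 'HappyFace.jpg',
  retention: {
    mode: 'GOVERNANCE', # GOVERNANCE can be bypassed with s3:BypassGovernanceRetention
    retain_until_date: Time.now + (365 * 86_400) # roughly one year from now
  }
)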
Request syntax with placeholder values

resp = client.put_object_retention({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  retention: {
    mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
    retain_until_date: Time.now,
  },
  request_payer: "requester", # accepts requester
  version_id: "ObjectVersionId",
  bypass_governance_retention: false,
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Response structure

resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name that contains the object you want to apply this Object Retention configuration to.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

  • :key(required,String)

    The key name for the object that you want to apply this Object Retention configuration to.

  • :retention(Types::ObjectLockRetention)

    The container element for the Object Retention configuration.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :version_id(String)

    The version ID for the object that you want to apply this Object Retention configuration to.

  • :bypass_governance_retention(Boolean)

    Indicates whether this action should bypass Governance-mode restrictions.

  • :content_md5(String)

    The MD5 hash for the request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19280

def put_object_retention(params = {}, options = {})
  req = build_request(:put_object_retention, params)
  req.send_request(options)
end

#put_object_tagging(params = {}) ⇒Types::PutObjectTaggingOutput

This operation is not supported for directory buckets.

Sets the supplied tag-set to an object that already exists in a bucket. A tag is a key-value pair. For more information, see Object Tagging.

You can associate tags with an object by sending a PUT request against the tagging subresource that is associated with the object. You can retrieve tags by sending a GET request. For more information, see GetObjectTagging.

For tagging-related restrictions related to characters and encodings, see Tag Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per object.

To use this operation, you must have permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others.

To put tags of any other version, use the versionId query parameter. You also need permission for the s3:PutObjectVersionTagging action.

PutObjectTagging has the following special errors. For more Amazon S3 errors, see Error Responses.

  • InvalidTag - The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. For more information, see Object Tagging.

  • MalformedXML - The XML provided does not match the schema.

  • OperationAborted - A conflicting conditional action is currently in progress against this resource. Please try again.

  • InternalError - The service was unable to apply the provided tag to the object.

The following operations are related to PutObjectTagging:

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To add tags to an existing object

# The following example adds tags to an existing object.
resp = client.put_object_tagging({
  bucket: "examplebucket",
  key: "HappyFace.jpg",
  tagging: {
    tag_set: [
      {
        key: "Key3",
        value: "Value3",
      },
      {
        key: "Key4",
        value: "Value4",
      },
    ],
  },
})

resp.to_h outputs the following:
{
  version_id: "null",
}

Request syntax with placeholder values

resp = client.put_object_tagging({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  tagging: { # required
    tag_set: [ # required
      {
        key: "ObjectKey", # required
        value: "Value", # required
      },
    ],
  },
  expected_bucket_owner: "AccountId",
  request_payer: "requester", # accepts requester
})

Response structure

resp.version_id #=> String

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key(required,String)

    Name of the object key.

  • :version_id(String)

    The versionId of the object that the tag-set will be added to.

  • :content_md5(String)

    The MD5 hash for the request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :tagging(required,Types::Tagging)

    Container for the TagSet and Tag elements.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the tagging object request. Bucket owners need not specify this parameter in their requests.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19473

def put_object_tagging(params = {}, options = {})
  req = build_request(:put_object_tagging, params)
  req.send_request(options)
end

#put_public_access_block(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Creates or modifies the PublicAccessBlock configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.

When Amazon S3 evaluates the PublicAccessBlock configuration for a bucket or an object, it checks the PublicAccessBlock configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock configurations are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.

For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of "Public".

The following operations are related to PutPublicAccessBlock:

You must URL encode any signed header values that contain spaces. Forexample, if your header value ismy file.txt, containing two spacesaftermy, you must URL encode this value tomy%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.put_public_access_block({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  public_access_block_configuration: { # required
    block_public_acls: false,
    ignore_public_acls: false,
    block_public_policy: false,
    restrict_public_buckets: false,
  },
  expected_bucket_owner: "AccountId",
})
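For instance, a common hardening step is to enable all four settings, which blocks and ignores all public ACLs and policies; this sketch uses a hypothetical bucket name.

# Sketch: block all public access on a bucket.
client.put_public_access_block({
  bucket: "amzn-s3-demo-bucket", # hypothetical
  public_access_block_configuration: {
    block_public_acls: true,
    ignore_public_acls: true,
    block_public_policy: true,
    restrict_public_buckets: true,
  },
})

Amazon S3 still applies the most restrictive combination of these bucket-level settings and the account-level configuration.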

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The name of the Amazon S3 bucket whose PublicAccessBlock configuration you want to set.

  • :content_md5(String)

    The MD5 hash of the PutPublicAccessBlock request body.

    For requests made using the Amazon Web Services Command Line Interface (CLI) or Amazon Web Services SDKs, this field is calculated automatically.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :public_access_block_configuration(required,Types::PublicAccessBlockConfiguration)

    The PublicAccessBlock configuration that you want to apply to this Amazon S3 bucket. You can enable the configuration options in any combination. For more information about when Amazon S3 considers a bucket or object public, see The Meaning of "Public" in the Amazon S3 User Guide.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19586

def put_public_access_block(params = {}, options = {})
  req = build_request(:put_public_access_block, params)
  req.send_request(options)
end

#rename_object(params = {}) ⇒Struct

Renames an existing object in a directory bucket that uses the S3 Express One Zone storage class. You can use RenameObject by specifying an existing object's name as the source and the new name of the object as the destination within the same directory bucket.

RenameObject is only supported for objects stored in the S3 Express One Zone storage class.

To prevent overwriting an object, you can use the If-None-Match conditional header.

  • If-None-Match - Renames the object only if an object with the specified name does not already exist in the directory bucket. If you don't want to overwrite an existing object, you can add the If-None-Match conditional header with the value '*' in the RenameObject request. Amazon S3 then returns a 412 Precondition Failed error if the object with the specified name already exists. For more information, see RFC 7232.

Permissions

To grant access to the RenameObject operation on a directory bucket, we recommend that you use the CreateSession operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the directory bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs will create and manage your session, including refreshing the session token automatically to avoid service interruptions when a session expires. In your bucket policy, you can specify the s3express:SessionMode condition key to control who can create a ReadWrite or ReadOnly session. A ReadWrite session is required for executing all the Zonal endpoint API operations, including RenameObject. For more information about authorization, see CreateSession. To learn more about Zonal endpoint API operations, see Authorizing Zonal endpoint API operations with CreateSession in the Amazon S3 User Guide.

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.rename_object({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  rename_source: "RenameSource", # required
  destination_if_match: "IfMatch",
  destination_if_none_match: "IfNoneMatch",
  destination_if_modified_since: Time.now,
  destination_if_unmodified_since: Time.now,
  source_if_match: "RenameSourceIfMatch",
  source_if_none_match: "RenameSourceIfNoneMatch",
  source_if_modified_since: Time.now,
  source_if_unmodified_since: Time.now,
  client_token: "ClientToken",
})
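As a sketch, a rename that refuses to overwrite an existing destination could look like the following. All names are hypothetical, and the rename_source value here assumes a /bucket/key form (URL encoded as required), which is an assumption rather than documented syntax.

# Sketch: rename within a directory bucket, failing if the new key exists.
client.rename_object({
  bucket: "amzn-s3-demo-bucket--usw2-az1--x-s3",                          # hypothetical
  key: "logs/2025-01-final.csv",                                          # new object key
  rename_source: "/amzn-s3-demo-bucket--usw2-az1--x-s3/logs/2025-01.csv", # assumed format
  destination_if_none_match: "*", # return 412 Precondition Failed on conflict
})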

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The bucket name of the directory bucket containing the object.

    You must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Availability Zone. Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

  • :key(required,String)

    Key name of the object to rename.

  • :rename_source(required,String)

    Specifies the source for the rename operation. The value must be URL encoded.

  • :destination_if_match(String)

    Renames the object only if the ETag (entity tag) value provided during the operation matches the ETag of the object in S3. The If-Match header field makes the request method conditional on ETags. If the ETag values do not match, the operation returns a 412 Precondition Failed error.

    Expects the ETag value as a string.

  • :destination_if_none_match(String)

    Renames the object only if the destination does not already exist in the specified directory bucket. If the object does exist when you send a request with If-None-Match:*, the S3 API will return a 412 Precondition Failed error, preventing an overwrite. The If-None-Match header prevents overwrites of existing data by validating that there's not an object with the same key name already in your directory bucket.

    Expects the * character (asterisk).

  • :destination_if_modified_since(Time,DateTime,Date,Integer,String)

    Renames the object if the destination exists and if it has been modified since the specified time.

  • :destination_if_unmodified_since(Time,DateTime,Date,Integer,String)

    Renames the object if it hasn't been modified since the specified time.

  • :source_if_match(String)

    Renames the object if the source exists and if its entity tag (ETag) matches the specified ETag.

  • :source_if_none_match(String)

    Renames the object if the source exists and if its entity tag (ETag) is different from the specified ETag. If an asterisk (*) character is provided, the operation will fail and return a 412 Precondition Failed error.

  • :source_if_modified_since(Time,DateTime,Date,Integer,String)

    Renames the object if the source exists and if it has been modified since the specified time.

  • :source_if_unmodified_since(Time,DateTime,Date,Integer,String)

    Renames the object if the source exists and hasn't been modified since the specified time.

  • :client_token(String)

    A unique string with a max of 64 ASCII characters in the ASCII range of 33 - 126.

    RenameObject supports idempotency using a client token. To make an idempotent API request using RenameObject, specify a client token in the request. You should not reuse the same client token for other API requests. If you retry a request that completed successfully using the same client token and the same parameters, the retry succeeds without performing any further actions. If you retry a successful request using the same client token, but one or more of the parameters are different, the retry fails and an IdempotentParameterMismatch error is returned.

    A suitable default value is auto-generated. You should normally not need to pass this option.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 19763

def rename_object(params = {}, options = {})
  req = build_request(:rename_object, params)
  req.send_request(options)
end

#restore_object(params = {}) ⇒Types::RestoreObjectOutput

This operation is not supported for directory buckets.

Restores an archived copy of an object back into Amazon S3.

This functionality is not supported for Amazon S3 on Outposts.

This action performs the following types of requests:

  • restore an archive - Restore an archived object

For more information about the S3 structure in the request body, see the following:

Permissions

To use this operation, you must have permissions to perform the s3:RestoreObject action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon S3 User Guide.

Restoring objects

Objects that you archive to the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage class, and S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the S3 Glacier Flexible Retrieval or S3 Glacier Deep Archive storage classes, you must first initiate a restore request, and then wait until a temporary copy of the object is available. If you want a permanent copy of the object, create a copy of it in the Amazon S3 Standard storage class in your S3 bucket. To access an archived object, you must restore the object for the duration (number of days) that you specify. For objects in the Archive Access or Deep Archive Access tiers of S3 Intelligent-Tiering, you must first initiate a restore request, and then wait until the object is moved into the Frequent Access tier.

To restore a specific object version, you can provide a version ID. If you don't provide a version ID, Amazon S3 restores the current version.

When restoring an archived object, you can specify one of the following data access tier options in the Tier element of the request body:

  • Expedited - Expedited retrievals allow you to quickly access your data stored in the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive tier when occasional urgent requests for restoring archives are required. For all but the largest archived objects (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. Expedited retrievals and provisioned capacity are not available for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier.

  • Standard - Standard retrievals allow you to access any of your archived objects within several hours. This is the default option for retrieval requests that do not specify the retrieval option. Standard retrievals typically finish within 3–5 hours for objects stored in the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive tier. They typically finish within 12 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored in S3 Intelligent-Tiering.

  • Bulk - Bulk retrievals are free for objects stored in the S3 Glacier Flexible Retrieval and S3 Intelligent-Tiering storage classes, enabling you to retrieve large amounts, even petabytes, of data at no cost. Bulk retrievals typically finish within 5–12 hours for objects stored in the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive tier. Bulk retrievals are also the lowest-cost retrieval option when restoring objects from S3 Glacier Deep Archive. They typically finish within 48 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier.

For more information about archive retrieval options and provisioned capacity for Expedited data access, see Restoring Archived Objects in the Amazon S3 User Guide.
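As a sketch, a low-cost Bulk restore of a Glacier-archived object for seven days (bucket, key, and day count are hypothetical examples) would be:

# Sketch: Bulk restore for seven days.
client.restore_object({
  bucket: "amzn-s3-demo-bucket", # hypothetical
  key: "archive/2019-backup.tar",
  restore_request: {
    days: 7,
    glacier_job_parameters: { tier: "Bulk" },
  },
})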

You can use Amazon S3 restore speed upgrade to change the restore speed to a faster speed while it is in progress. For more information, see Upgrading the speed of an in-progress restore in the Amazon S3 User Guide.

To get the status of object restoration, you can send a HEAD request. Operations return the x-amz-restore header, which provides information about the restoration status, in the response. You can use Amazon S3 event notifications to notify you when a restore is initiated or completed. For more information, see Configuring Amazon S3 Event Notifications in the Amazon S3 User Guide.
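A sketch of polling that status with a HEAD request (names hypothetical); the SDK exposes the x-amz-restore header as #restore on the response:

# Sketch: check restoration status.
resp = client.head_object(bucket: "amzn-s3-demo-bucket", key: "archive/2019-backup.tar")
resp.restore #=> e.g. 'ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"'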

After restoring an archived object, you can update the restoration period by reissuing the request with a new period. Amazon S3 updates the restoration period relative to the current time and charges only for the request; there are no data transfer charges. You cannot update the restoration period when Amazon S3 is actively processing your current restore request for the object.

If your bucket has a lifecycle configuration with a rule that includes an expiration action, the object expiration overrides the life span that you specify in a restore request. For example, if you restore an object copy for 10 days, but the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management in the Amazon S3 User Guide.

Responses

A successful action returns either the 200 OK or 202 Accepted status code.

  • If the object is not previously restored, then Amazon S3 returns 202 Accepted in the response.

  • If the object is previously restored, Amazon S3 returns 200 OK in the response.

  • Special errors:

    • Code: RestoreAlreadyInProgress

    • Cause: Object restore is already in progress.

    • HTTP Status Code: 409 Conflict

    • SOAP Fault Code Prefix: Client

    • Code: GlacierExpeditedRetrievalNotAvailable

    • Cause: Expedited retrievals are currently not available. Try again later. (Returned if there is insufficient capacity to process the Expedited request. This error applies only to Expedited retrievals and not to S3 Standard or Bulk retrievals.)

    • HTTP Status Code: 503

    • SOAP Fault Code Prefix: N/A

The following operations are related to RestoreObject:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To restore an archived object

# The following example restores for one day an archived copy of an object back into an Amazon S3 bucket.

resp = client.restore_object({
  bucket: "examplebucket",
  key: "archivedobjectkey",
  restore_request: {
    days: 1,
    glacier_job_parameters: {
      tier: "Expedited",
    },
  },
})

resp.to_h outputs the following:
{
}

Request syntax with placeholder values

resp = client.restore_object({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  version_id: "ObjectVersionId",
  restore_request: {
    days: 1,
    glacier_job_parameters: {
      tier: "Standard", # required, accepts Standard, Bulk, Expedited
    },
    type: "SELECT", # accepts SELECT
    tier: "Standard", # accepts Standard, Bulk, Expedited
    description: "Description",
    select_parameters: {
      input_serialization: { # required
        csv: {
          file_header_info: "USE", # accepts USE, IGNORE, NONE
          comments: "Comments",
          quote_escape_character: "QuoteEscapeCharacter",
          record_delimiter: "RecordDelimiter",
          field_delimiter: "FieldDelimiter",
          quote_character: "QuoteCharacter",
          allow_quoted_record_delimiter: false,
        },
        compression_type: "NONE", # accepts NONE, GZIP, BZIP2
        json: {
          type: "DOCUMENT", # accepts DOCUMENT, LINES
        },
        parquet: {
        },
      },
      expression_type: "SQL", # required, accepts SQL
      expression: "Expression", # required
      output_serialization: { # required
        csv: {
          quote_fields: "ALWAYS", # accepts ALWAYS, ASNEEDED
          quote_escape_character: "QuoteEscapeCharacter",
          record_delimiter: "RecordDelimiter",
          field_delimiter: "FieldDelimiter",
          quote_character: "QuoteCharacter",
        },
        json: {
          record_delimiter: "RecordDelimiter",
        },
      },
    },
    output_location: {
      s3: {
        bucket_name: "BucketName", # required
        prefix: "LocationPrefix", # required
        encryption: {
          encryption_type: "AES256", # required, accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
          kms_key_id: "SSEKMSKeyId",
          kms_context: "KMSContext",
        },
        canned_acl: "private", # accepts private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control
        access_control_list: [
          {
            grantee: {
              display_name: "DisplayName",
              email_address: "EmailAddress",
              id: "ID",
              type: "CanonicalUser", # required, accepts CanonicalUser, AmazonCustomerByEmail, Group
              uri: "URI",
            },
            permission: "FULL_CONTROL", # accepts FULL_CONTROL, WRITE, WRITE_ACP, READ, READ_ACP
          },
        ],
        tagging: {
          tag_set: [ # required
            {
              key: "ObjectKey", # required
              value: "Value", # required
            },
          ],
        },
        user_metadata: [
          {
            name: "MetadataKey",
            value: "MetadataValue",
          },
        ],
        storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
      },
    },
  },
  request_payer: "requester", # accepts requester
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  expected_bucket_owner: "AccountId",
})

Response structure

resp.request_charged #=> String, one of "requester"
resp.restore_output_path #=> String

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The bucket name containing the object to restore.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :key(required,String)

    Object key for which the action was initiated.

  • :version_id(String)

    VersionId used to reference a specific version of the object.

  • :restore_request(Types::RestoreRequest)

    Container for restore job parameters.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20151

def restore_object(params = {}, options = {})
  req = build_request(:restore_object, params)
  req.send_request(options)
end

#select_object_content(params = {}) ⇒Types::SelectObjectContentOutput

This operation is not supported for directory buckets.

This action filters the contents of an Amazon S3 object based on a simple structured query language (SQL) statement. In the request, along with the SQL expression, you must also specify a data serialization format (JSON, CSV, or Apache Parquet) of the object. Amazon S3 uses this format to parse object data into records, and returns only records that match the specified SQL expression. You must also specify the data serialization format for the response.

This functionality is not supported for Amazon S3 on Outposts.

For more information about Amazon S3 Select, see Selecting Content from Objects and SELECT Command in the Amazon S3 User Guide.

Permissions

You must have the s3:GetObject permission for this operation. Amazon S3 Select does not support anonymous access. For more information about permissions, see Specifying Permissions in a Policy in the Amazon S3 User Guide.

Object Data Formats

You can use Amazon S3 Select to query objects that have the following format properties:

  • CSV, JSON, and Parquet - Objects must be in CSV, JSON, or Parquet format.

  • UTF-8 - UTF-8 is the only encoding type Amazon S3 Select supports.

  • GZIP or BZIP2 - CSV and JSON files can be compressed using GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that Amazon S3 Select supports for CSV and JSON files. Amazon S3 Select supports columnar compression for Parquet using GZIP or Snappy. Amazon S3 Select does not support whole-object compression for Parquet objects.

  • Server-side encryption - Amazon S3 Select supports querying objects that are protected with server-side encryption.

    For objects that are encrypted with customer-provided encryption keys (SSE-C), you must use HTTPS, and you must use the headers that are documented in the GetObject. For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon S3 User Guide.

    For objects that are encrypted with Amazon S3 managed keys (SSE-S3) and Amazon Web Services KMS keys (SSE-KMS), server-side encryption is handled transparently, so you don't need to specify anything. For more information about server-side encryption, including SSE-S3 and SSE-KMS, see Protecting Data Using Server-Side Encryption in the Amazon S3 User Guide.

Working with the Response Body

Because the response size is unknown, Amazon S3 Select streams the response as a series of messages and includes a Transfer-Encoding header with chunked as its value in the response. For more information, see Appendix: SelectObjectContent Response.

GetObject Support

The SelectObjectContent action does not support the following GetObject functionality. For more information, see GetObject.

  • Range: Although you can specify a scan range for an Amazon S3 Select request (see SelectObjectContentRequest - ScanRange in the request parameters), you cannot specify the range of bytes of an object to return.

  • The GLACIER, DEEP_ARCHIVE, and REDUCED_REDUNDANCY storage classes, or the ARCHIVE_ACCESS and DEEP_ARCHIVE_ACCESS access tiers of the INTELLIGENT_TIERING storage class: You cannot query objects in the GLACIER, DEEP_ARCHIVE, or REDUCED_REDUNDANCY storage classes, nor objects in the ARCHIVE_ACCESS or DEEP_ARCHIVE_ACCESS access tiers of the INTELLIGENT_TIERING storage class. For more information about storage classes, see Using Amazon S3 storage classes in the Amazon S3 User Guide.

Special Errors

For a list of special errors for this operation, see List of SELECT Object Content Error Codes.

The following operations are related to SelectObjectContent:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

EventStream Operation Example

# You can process the event once it arrives immediately, or wait until the
# full response is complete and iterate through the eventstream enumerator.

# To interact with event immediately, you need to register select_object_content
# with callbacks. Callbacks can be registered for specific events or for all
# events, including error events.
# Callbacks can be passed into the `:event_stream_handler` option or within a
# block statement attached to the #select_object_content call directly. Hybrid
# pattern of both is also supported.
# `:event_stream_handler` option takes in either a Proc object or
# Aws::S3::EventStreams::SelectObjectContentEventStream object.

# Usage pattern a): Callbacks with a block attached to #select_object_content
# Example for registering callbacks for all event types and an error event
client.select_object_content(
  # params input
) do |stream|
  stream.on_error_event do |event|
    # catch unmodeled error event in the stream
    raise event
    # => Aws::Errors::EventError
    # event.event_type => :error
    # event.error_code => String
    # event.error_message => String
  end

  stream.on_event do |event|
    # process all events arrive
    puts event.event_type
    # ...
  end
end

# Usage pattern b): Pass in `:event_stream_handler` for #select_object_content

#  1) Create a Aws::S3::EventStreams::SelectObjectContentEventStream object
#  Example for registering callbacks with specific events
handler = Aws::S3::EventStreams::SelectObjectContentEventStream.new
handler.on_records_event do |event|
  event # => Aws::S3::Types::Records
end
handler.on_stats_event do |event|
  event # => Aws::S3::Types::Stats
end
handler.on_progress_event do |event|
  event # => Aws::S3::Types::Progress
end
handler.on_cont_event do |event|
  event # => Aws::S3::Types::Cont
end
handler.on_end_event do |event|
  event # => Aws::S3::Types::End
end

client.select_object_content(
  # params inputs
  event_stream_handler: handler
)

#  2) Use a Ruby Proc object
#  Example for registering callbacks with specific events
handler = Proc.new do |stream|
  stream.on_records_event do |event|
    event # => Aws::S3::Types::Records
  end
  stream.on_stats_event do |event|
    event # => Aws::S3::Types::Stats
  end
  stream.on_progress_event do |event|
    event # => Aws::S3::Types::Progress
  end
  stream.on_cont_event do |event|
    event # => Aws::S3::Types::Cont
  end
  stream.on_end_event do |event|
    event # => Aws::S3::Types::End
  end
end

client.select_object_content(
  # params inputs
  event_stream_handler: handler
)

#  Usage pattern c): Hybrid pattern of a) and b)
handler = Aws::S3::EventStreams::SelectObjectContentEventStream.new
handler.on_records_event do |event|
  event # => Aws::S3::Types::Records
end
handler.on_stats_event do |event|
  event # => Aws::S3::Types::Stats
end
handler.on_progress_event do |event|
  event # => Aws::S3::Types::Progress
end
handler.on_cont_event do |event|
  event # => Aws::S3::Types::Cont
end
handler.on_end_event do |event|
  event # => Aws::S3::Types::End
end

client.select_object_content(
  # params input
  event_stream_handler: handler
) do |stream|
  stream.on_error_event do |event|
    # catch unmodeled error event in the stream
    raise event
    # => Aws::Errors::EventError
    # event.event_type => :error
    # event.error_code => String
    # event.error_message => String
  end
end

# You can also iterate through events after the response complete.
# Events are available at
resp.payload # => Enumerator

# For parameter input example, please refer to following request syntax.

Request syntax with placeholder values

resp = client.select_object_content({
  bucket: "BucketName", # required
  key: "ObjectKey", # required
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  expression: "Expression", # required
  expression_type: "SQL", # required, accepts SQL
  request_progress: {
    enabled: false,
  },
  input_serialization: { # required
    csv: {
      file_header_info: "USE", # accepts USE, IGNORE, NONE
      comments: "Comments",
      quote_escape_character: "QuoteEscapeCharacter",
      record_delimiter: "RecordDelimiter",
      field_delimiter: "FieldDelimiter",
      quote_character: "QuoteCharacter",
      allow_quoted_record_delimiter: false,
    },
    compression_type: "NONE", # accepts NONE, GZIP, BZIP2
    json: {
      type: "DOCUMENT", # accepts DOCUMENT, LINES
    },
    parquet: {
    },
  },
  output_serialization: { # required
    csv: {
      quote_fields: "ALWAYS", # accepts ALWAYS, ASNEEDED
      quote_escape_character: "QuoteEscapeCharacter",
      record_delimiter: "RecordDelimiter",
      field_delimiter: "FieldDelimiter",
      quote_character: "QuoteCharacter",
    },
    json: {
      record_delimiter: "RecordDelimiter",
    },
  },
  scan_range: {
    start: 1,
    end: 1,
  },
  expected_bucket_owner: "AccountId",
})

Response structure

# All events are available at resp.payload:
resp.payload #=> Enumerator
resp.payload.event_types #=> [:records, :stats, :progress, :cont, :end]

# For :records event available at #on_records_event callback and response eventstream enumerator:
event.payload #=> IO

# For :stats event available at #on_stats_event callback and response eventstream enumerator:
event.details.bytes_scanned #=> Integer
event.details.bytes_processed #=> Integer
event.details.bytes_returned #=> Integer

# For :progress event available at #on_progress_event callback and response eventstream enumerator:
event.details.bytes_scanned #=> Integer
event.details.bytes_processed #=> Integer
event.details.bytes_returned #=> Integer

# For :cont event available at #on_cont_event callback and response eventstream enumerator:
#=> EmptyStruct

# For :end event available at #on_end_event callback and response eventstream enumerator:
#=> EmptyStruct
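As a compact usage sketch (hypothetical bucket and key, a CSV object with a header row), the block form prints matching records as they stream:

# Sketch: run a SQL query against a CSV object and print records as they arrive.
client.select_object_content({
  bucket: "amzn-s3-demo-bucket", # hypothetical
  key: "data.csv",               # hypothetical
  expression: "SELECT s.name FROM S3Object s WHERE s.city = 'Seattle'",
  expression_type: "SQL",
  input_serialization: {
    csv: { file_header_info: "USE" },
    compression_type: "NONE",
  },
  output_serialization: { csv: {} },
}) do |stream|
  stream.on_records_event do |event|
    print event.payload.read # records arrive as an IO payload
  end
end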

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The S3 bucket.

  • :key(required,String)

    The object key.

  • :sse_customer_algorithm(String)

    The server-side encryption (SSE) algorithm used to encrypt the object. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

  • :sse_customer_key(String)

    The server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

  • :sse_customer_key_md5(String)

    The MD5 server-side encryption (SSE) customer managed key. This parameter is needed only when the object was created using a checksum algorithm. For more information, see Protecting data using SSE-C keys in the Amazon S3 User Guide.

  • :expression(required,String)

    The expression that is used to query the object.

  • :expression_type(required,String)

    The type of the provided expression (for example, SQL).

  • :request_progress(Types::RequestProgress)

    Specifies if periodic request progress information should be enabled.

  • :input_serialization(required,Types::InputSerialization)

    Describes the format of the data in the object that is being queried.

  • :output_serialization(required,Types::OutputSerialization)

    Describes the format of the data that you want Amazon S3 to return in the response.

  • :scan_range(Types::ScanRange)

    Specifies the byte range of the object to get the records from. A record is processed when its first byte is contained by the range. This parameter is optional, but when specified, it must not be empty. See RFC 2616, Section 14.35.1 about how to specify the start and end of the range.

    ScanRange may be used in the following ways:

    • <scanrange><start>50</start><end>100</end></scanrange> - process only the records starting between the bytes 50 and 100 (inclusive, counting from zero)

    • <scanrange><start>50</start></scanrange> - process only the records starting after the byte 50

    • <scanrange><end>50</end></scanrange> - process only the records within the last 50 bytes of the file.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Yields:

  • (event_stream_handler)

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20558

def select_object_content(params = {}, options = {}, &block)
  params = params.dup
  event_stream_handler = case handler = params.delete(:event_stream_handler)
    when EventStreams::SelectObjectContentEventStream then handler
    when Proc then EventStreams::SelectObjectContentEventStream.new.tap(&handler)
    when nil then EventStreams::SelectObjectContentEventStream.new
    else
      msg = "expected :event_stream_handler to be a block or " \
            "instance of Aws::S3::EventStreams::SelectObjectContentEventStream" \
            ", got `#{handler.inspect}` instead"
      raise ArgumentError, msg
    end

  yield(event_stream_handler) if block_given?

  req = build_request(:select_object_content, params)
  req.context[:event_stream_handler] = event_stream_handler
  req.handlers.add(Aws::Binary::DecodeHandler, priority: 95)
  req.send_request(options, &block)
end

#update_bucket_metadata_inventory_table_configuration(params = {}) ⇒Struct

Enables or disables a live inventory table for an S3 Metadata configuration on a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

Permissions

To use this operation, you must have the following permissions. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

If you want to encrypt your inventory table with server-side encryption with Key Management Service (KMS) keys (SSE-KMS), you need additional permissions in your KMS key policy. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

  • s3:UpdateBucketMetadataInventoryTableConfiguration

  • s3tables:CreateTableBucket

  • s3tables:CreateNamespace

  • s3tables:GetTable

  • s3tables:CreateTable

  • s3tables:PutTablePolicy

  • s3tables:PutTableEncryption

  • kms:DescribeKey

The following operations are related to UpdateBucketMetadataInventoryTableConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.update_bucket_metadata_inventory_table_configuration({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  inventory_table_configuration: { # required
    configuration_state: "ENABLED", # required, accepts ENABLED, DISABLED
    encryption_configuration: {
      sse_algorithm: "aws:kms", # required, accepts aws:kms, AES256
      kms_key_arn: "KmsKeyArn",
    },
  },
  expected_bucket_owner: "AccountId",
})
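For instance, enabling the inventory table with SSE-S3 (AES256) table encryption might look like this sketch; the bucket name is a hypothetical placeholder.

# Sketch: enable the live inventory table for a bucket's metadata configuration.
client.update_bucket_metadata_inventory_table_configuration({
  bucket: "amzn-s3-demo-bucket", # hypothetical
  inventory_table_configuration: {
    configuration_state: "ENABLED",
    encryption_configuration: { sse_algorithm: "AES256" },
  },
})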

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that corresponds to the metadata configuration that you want to enable or disable an inventory table for.

  • :content_md5(String)

    The Content-MD5 header for the inventory table configuration.

  • :checksum_algorithm(String)

    The checksum algorithm to use with your inventory table configuration.

  • :inventory_table_configuration(required,Types::InventoryTableConfigurationUpdates)

    The contents of your inventory table configuration.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that corresponds to the metadata table configuration that you want to enable or disable an inventory table for.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20679

def update_bucket_metadata_inventory_table_configuration(params = {}, options = {})
  req = build_request(:update_bucket_metadata_inventory_table_configuration, params)
  req.send_request(options)
end

#update_bucket_metadata_journal_table_configuration(params = {}) ⇒Struct

Enables or disables journal table record expiration for an S3 Metadata configuration on a general purpose bucket. For more information, see Accelerating data discovery with S3 Metadata in the Amazon S3 User Guide.

Permissions

To use this operation, you must have the s3:UpdateBucketMetadataJournalTableConfiguration permission. For more information, see Setting up permissions for configuring metadata tables in the Amazon S3 User Guide.

The following operations are related to UpdateBucketMetadataJournalTableConfiguration:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.update_bucket_metadata_journal_table_configuration({
  bucket: "BucketName", # required
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  journal_table_configuration: { # required
    record_expiration: { # required
      expiration: "ENABLED", # required, accepts ENABLED, DISABLED
      days: 1,
    },
  },
  expected_bucket_owner: "AccountId",
})
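As a sketch, enabling record expiration so journal records are deleted after seven days (bucket name hypothetical, and the day count is an arbitrary example):

# Sketch: expire journal table records after 7 days.
client.update_bucket_metadata_journal_table_configuration({
  bucket: "amzn-s3-demo-bucket", # hypothetical
  journal_table_configuration: {
    record_expiration: {
      expiration: "ENABLED",
      days: 7, # arbitrary example value
    },
  },
})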

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :bucket(required,String)

    The general purpose bucket that corresponds to the metadata configuration that you want to enable or disable journal table record expiration for.

  • :content_md5(String)

    The Content-MD5 header for the journal table configuration.

  • :checksum_algorithm(String)

    The checksum algorithm to use with your journal table configuration.

  • :journal_table_configuration(required,Types::JournalTableConfigurationUpdates)

    The contents of your journal table configuration.

  • :expected_bucket_owner(String)

    The expected owner of the general purpose bucket that corresponds to the metadata table configuration that you want to enable or disable journal table record expiration for.

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 20760

def update_bucket_metadata_journal_table_configuration(params = {}, options = {})
  req = build_request(:update_bucket_metadata_journal_table_configuration, params)
  req.send_request(options)
end

#upload_part(params = {}) ⇒Types::UploadPartOutput

Uploads a part in a multipart upload.

In this operation, you provide new data as a part of an object in your request. However, you have an option to specify your existing Amazon S3 object as a data source for the part you are uploading. To upload a part from an existing object, you use the UploadPartCopy operation.

You must initiate a multipart upload (see CreateMultipartUpload) before you can upload any part. In response to your initiate request, Amazon S3 returns an upload ID, a unique identifier that you must include in your upload part request.

Part numbers can be any number from 1 to 10,000, inclusive. A part number uniquely identifies a part and also defines its position within the object being created. If you upload a new part using the same part number that was used with a previous part, the previously uploaded part is overwritten.

For information about maximum and minimum part sizes and other multipart upload specifications, see Multipart upload limits in the Amazon S3 User Guide.

After you initiate a multipart upload and upload one or more parts, you must either complete or abort the multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort the multipart upload does Amazon S3 free up the parts storage and stop charging you for it.
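A sketch of that lifecycle, tying UploadPart to CreateMultipartUpload and CompleteMultipartUpload (bucket, key, and file name are hypothetical), with an abort on failure so the parts stop accruing storage charges:

# Sketch: create -> upload parts -> complete, aborting on error.
upload = client.create_multipart_upload(bucket: "amzn-s3-demo-bucket", key: "big-object")
begin
  parts = []
  File.open("big-file.bin", "rb") do |file|
    part_number = 1
    while (chunk = file.read(5 * 1024 * 1024)) # 5 MiB minimum for all but the last part
      part = client.upload_part(
        bucket: "amzn-s3-demo-bucket",
        key: "big-object",
        upload_id: upload.upload_id,
        part_number: part_number,
        body: chunk
      )
      parts << { etag: part.etag, part_number: part_number }
      part_number += 1
    end
  end
  client.complete_multipart_upload(
    bucket: "amzn-s3-demo-bucket",
    key: "big-object",
    upload_id: upload.upload_id,
    multipart_upload: { parts: parts }
  )
rescue StandardError
  client.abort_multipart_upload(
    bucket: "amzn-s3-demo-bucket",
    key: "big-object",
    upload_id: upload.upload_id
  )
  raise
end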

For more information on multipart uploads, go to Multipart Upload Overview in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information about endpoints in Availability Zones, see Regional and Zonal endpoints for directory buckets in Availability Zones in the Amazon S3 User Guide. For more information about endpoints in Local Zones, see Concepts for directory buckets in Local Zones in the Amazon S3 User Guide.

Permissions
  • General purpose bucket permissions - To perform a multipart upload with encryption using a Key Management Service key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey actions on the key. The requester must also have permissions for the kms:GenerateDataKey action for the CreateMultipartUpload API. Then, the requester needs permissions for the kms:Decrypt action on the UploadPart and UploadPartCopy APIs.

    These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information about KMS permissions, see Protecting data using server-side encryption with KMS in the Amazon S3 User Guide. For information about the permissions required to use the multipart upload API, see Multipart upload and permissions and Multipart upload API and permissions in the Amazon S3 User Guide.

  • Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the CreateSession API operation for session-based authorization. Specifically, you grant the s3express:CreateSession permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession API call to generate a new session token for use. The Amazon Web Services CLI and SDKs create the session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see CreateSession.

    If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

Data integrity

General purpose bucket - To ensure that data is not corrupted traversing the network, specify the Content-MD5 header in the upload part request. Amazon S3 checks the part data against the provided MD5 value. If they do not match, Amazon S3 returns an error. If the upload request is signed with Signature Version 4, then Amazon Web Services S3 uses the x-amz-content-sha256 header as a checksum instead of Content-MD5. For more information, see Authenticating Requests: Using the Authorization Header (Amazon Web Services Signature Version 4).

Directory buckets - MD5 is not supported by directory buckets. You can use checksum algorithms to check object integrity.

Encryption
  • General purpose bucket - Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You have mutually exclusive options to protect data using server-side encryption in Amazon S3, depending on how you choose to manage the encryption keys. Specifically, the encryption key options are Amazon S3 managed keys (SSE-S3), Amazon Web Services KMS keys (SSE-KMS), and Customer-Provided Keys (SSE-C). Amazon S3 encrypts data with server-side encryption using Amazon S3 managed keys (SSE-S3) by default. You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption with other key options. The option you use depends on whether you want to use KMS keys (SSE-KMS) or provide your own encryption key (SSE-C).

    Server-side encryption is supported by the S3 Multipart Upload operations. Unless you are using a customer-provided encryption key (SSE-C), you don't need to specify the encryption parameters in each UploadPart request. Instead, you only need to specify the server-side encryption parameters in the initial Initiate Multipart request. For more information, see CreateMultipartUpload.

    If you have server-side encryption with customer-provided keys (SSE-C) blocked for your general purpose bucket, you will get an HTTP 403 Access Denied error when you specify the SSE-C request headers while writing new data to your bucket. For more information, see Blocking or unblocking SSE-C for a general purpose bucket.

    If you request server-side encryption using a customer-provided encryption key (SSE-C) in your initiate multipart upload request, you must provide identical encryption information in each part upload using the following request headers.

    • x-amz-server-side-encryption-customer-algorithm

    • x-amz-server-side-encryption-customer-key

    • x-amz-server-side-encryption-customer-key-MD5

    For more information, see Using Server-Side Encryption in the Amazon S3 User Guide.

  • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms).

Special errors
  • Error Code:NoSuchUpload

    • Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.

    • HTTP Status Code: 404 Not Found

    • SOAP Fault Code Prefix: Client

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to UploadPart:

You must URL encode any signed header values that contain spaces. For example, if your header value is my file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Example: To upload a part

# The following example uploads part 1 of a multipart upload. The example specifies a file name for the part data.
# The Upload ID is the same as that returned by the initiate multipart upload.

resp = client.upload_part({
  body: "fileToUpload",
  bucket: "examplebucket",
  key: "examplelargeobject",
  part_number: 1,
  upload_id: "xadcOB_7YPBOJuoFiQ9cz4P3Pe6FIZwO4f7wN93uHsNBEw97pl5eNwzExg0LAT2dUN91cOmrEQHDsP3WA60CEg--",
})

resp.to_h outputs the following:
{
  etag: "\"d8c2eafd90c266e19ab9dcacc479f8af\"",
}

Request syntax with placeholder values

resp = client.upload_part({
  body: source_file,
  bucket: "BucketName", # required
  content_length: 1,
  content_md5: "ContentMD5",
  checksum_algorithm: "CRC32", # accepts CRC32, CRC32C, SHA1, SHA256, CRC64NVME
  checksum_crc32: "ChecksumCRC32",
  checksum_crc32c: "ChecksumCRC32C",
  checksum_crc64nvme: "ChecksumCRC64NVME",
  checksum_sha1: "ChecksumSHA1",
  checksum_sha256: "ChecksumSHA256",
  key: "ObjectKey", # required
  part_number: 1, # required
  upload_id: "MultipartUploadId", # required
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
})

Response structure

resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.etag #=> String
resp.checksum_crc32 #=> String
resp.checksum_crc32c #=> String
resp.checksum_crc64nvme #=> String
resp.checksum_sha1 #=> String
resp.checksum_sha256 #=> String
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.bucket_key_enabled #=> Boolean
resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})


Options Hash (params):

  • :body(String,StringIO,File)

    Object data.

  • :bucket(required,String)

    The name of the bucket to which the multipart upload was initiated.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :content_length(Integer)

    Size of the body in bytes. This parameter is useful when the size of the body cannot be determined automatically.

  • :content_md5(String)

    The Base64 encoded 128-bit MD5 digest of the part data. This parameter is auto-populated when using the command from the CLI. This parameter is required if object lock parameters are specified.

    This functionality is not supported for directory buckets.

  • :checksum_algorithm(String)

    Indicates the algorithm used to create the checksum for the object when you use the SDK. This header will not provide any additional functionality if you don't use the SDK. When you send this header, there must be a corresponding x-amz-checksum or x-amz-trailer header sent. Otherwise, Amazon S3 fails the request with the HTTP status code 400 Bad Request. For more information, see Checking object integrity in the Amazon S3 User Guide.

    If you provide an individual checksum, Amazon S3 ignores any provided ChecksumAlgorithm parameter.

    This checksum algorithm must be the same for all parts, and it must match the checksum value supplied in the CreateMultipartUpload request.

  • :checksum_crc32(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc32c(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 32-bit CRC32C checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_crc64nvme(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME checksum of the part. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha1(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 160-bit SHA1 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha256(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 256-bit SHA256 digest of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :key(required,String)

    Object key for which the multipart upload was initiated.

  • :part_number(required,Integer)

    Part number of part being uploaded. This is a positive integer between 1 and 10,000.

  • :upload_id(required,String)

    Upload ID identifying the multipart upload whose part is being uploaded.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported for directory buckets.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

    This functionality is not supported for directory buckets.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported for directory buckets.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected bucket owner. If the account ID that you provide does not match the actual owner of the bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 21245

def upload_part(params = {}, options = {})
  req = build_request(:upload_part, params)
  req.send_request(options)
end

#upload_part_copy(params = {}) ⇒Types::UploadPartCopyOutput

Uploads a part by copying data from an existing object as data source. To specify the data source, you add the request header x-amz-copy-source in your request. To specify a byte range, you add the request header x-amz-copy-source-range in your request.

For information about maximum and minimum part sizes and other multipart upload specifications, see Multipart upload limits in the Amazon S3 User Guide.

Instead of copying data from an existing object as part data, you might use the UploadPart action to upload new data as a part of an object in your request.

You must initiate a multipart upload before you can upload any part. In response to your initiate request, Amazon S3 returns the upload ID, a unique identifier that you must include in your upload part request.

For conceptual information about multipart uploads, see Uploading Objects Using Multipart Upload in the Amazon S3 User Guide. For information about copying objects using a single atomic action vs. a multipart upload, see Operations on Objects in the Amazon S3 User Guide.

Directory buckets - For directory buckets, you must make requestsfor this API operation to the Zonal endpoint. These endpoints supportvirtual-hosted-style requests in the formathttps://amzn-s3-demo-bucket.s3express-zone-id.region-code.amazonaws.com/key-name. Path-style requests are not supported. For more information aboutendpoints in Availability Zones, seeRegional and Zonal endpoints fordirectory buckets in Availability Zones in theAmazon S3 UserGuide. For more information about endpoints in Local Zones, seeConcepts for directory buckets in Local Zones in theAmazon S3User Guide.

Authentication and authorization

All UploadPartCopy requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz- prefix, including x-amz-copy-source, must be signed. For more information, see REST Authentication.

Directory buckets - You must use IAM credentials to authenticate and authorize your access to the UploadPartCopy API operation, instead of using the temporary security credentials through the CreateSession API operation.

Amazon Web Services CLI or SDKs handle authentication and authorization on your behalf.

Permissions

You must have READ access to the source object and WRITE access to the destination bucket.

  • General purpose bucket permissions - You must have the permissions in a policy based on the bucket types of your source bucket and destination bucket in an UploadPartCopy operation.

    • If the source object is in a general purpose bucket, you must have the s3:GetObject permission to read the source object that is being copied.

    • If the destination bucket is a general purpose bucket, you must have the s3:PutObject permission to write the object copy to the destination bucket.

    • To perform a multipart upload with encryption using a Key Management Service key, the requester must have permission to the kms:Decrypt and kms:GenerateDataKey actions on the key. The requester must also have permissions for the kms:GenerateDataKey action for the CreateMultipartUpload API. Then, the requester needs permissions for the kms:Decrypt action on the UploadPart and UploadPartCopy APIs. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload. For more information about KMS permissions, see Protecting data using server-side encryption with KMS in the Amazon S3 User Guide. For information about the permissions required to use the multipart upload API, see Multipart upload and permissions and Multipart upload API and permissions in the Amazon S3 User Guide.

  • Directory bucket permissions - You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination bucket types in an UploadPartCopy operation.

    • If the source object that you want to copy is in a directory bucket, you must have the s3express:CreateSession permission in the Action element of a policy to read the object. By default, the session is in the ReadWrite mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode condition key to ReadOnly on the copy source bucket.

    • If the copy destination is a directory bucket, you must have the s3express:CreateSession permission in the Action element of a policy to write the object to the destination. The s3express:SessionMode condition key cannot be set to ReadOnly on the copy destination. If the object is encrypted with SSE-KMS, you must also have the kms:GenerateDataKey and kms:Decrypt permissions in IAM identity-based policies and KMS key policies for the KMS key.

    For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the Amazon S3 User Guide.

Encryption

  • General purpose buckets - For information about using server-side encryption with customer-provided encryption keys with the UploadPartCopy operation, see CopyObject and UploadPart.

    If you have server-side encryption with customer-provided keys (SSE-C) blocked for your general purpose bucket, you will get an HTTP 403 Access Denied error when you specify the SSE-C request headers while writing new data to your bucket. For more information, see Blocking or unblocking SSE-C for a general purpose bucket.

  • Directory buckets - For directory buckets, there are only two supported options for server-side encryption: server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256) and server-side encryption with KMS keys (SSE-KMS) (aws:kms). For more information, see Protecting data with server-side encryption in the Amazon S3 User Guide.

    For directory buckets, when you perform a CreateMultipartUpload operation and an UploadPartCopy operation, the request headers you provide in the CreateMultipartUpload request must match the default encryption configuration of the destination bucket.

    S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through UploadPartCopy. In this case, Amazon S3 makes a call to KMS every time a copy request is made for a KMS-encrypted object.

Special errors
  • Error Code:NoSuchUpload

    • Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.

    • HTTP Status Code: 404 Not Found

  • Error Code:InvalidRequest

    • Description: The specified copy source is not supported as a byte-range copy source.

    • HTTP Status Code: 400 Bad Request

HTTP Host header syntax

Directory buckets - The HTTP Host header syntax is Bucket-name.s3express-zone-id.region-code.amazonaws.com.

The following operations are related to UploadPartCopy:

  • CreateMultipartUpload

  • UploadPart

  • CompleteMultipartUpload

  • AbortMultipartUpload

  • ListParts

  • ListMultipartUploads

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.
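
As a quick illustration of that encoding (a sketch only; the SDK performs its own request signing, which is not shown here), Ruby's standard library can produce the %20 form:

require "erb"

# Two spaces after "my" become two %20 sequences.
ERB::Util.url_encode("my  file.txt") #=> "my%20%20file.txt"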

Examples:

Example: To upload a part by copying data from an existing object as data source

# The following example uploads a part of a multipart upload by copying data
# from an existing object as data source.

resp = client.upload_part_copy({
  bucket: "examplebucket",
  copy_source: "/bucketname/sourceobjectkey",
  key: "examplelargeobject",
  part_number: 1,
  upload_id: "exampleuoh_10OhKhT7YukE9bjzTPRiuaCotmZM_pFngJFir9OZNrSr5cWa3cq3LZSUsfjI4FI7PkP91We7Nrw--",
})

# resp.to_h outputs the following:
{
  copy_part_result: {
    etag: "\"b0c6f0e7e054ab8fa2536a2677f8734d\"",
    last_modified: Time.parse("2016-12-29T21:24:43.000Z"),
  },
}

Example: To upload a part by copying byte range from an existing object as data source

# The following example uploads a part of a multipart upload by copying a
# specified byte range from an existing object as data source.

resp = client.upload_part_copy({
  bucket: "examplebucket",
  copy_source: "/bucketname/sourceobjectkey",
  copy_source_range: "bytes=1-100000",
  key: "examplelargeobject",
  part_number: 2,
  upload_id: "exampleuoh_10OhKhT7YukE9bjzTPRiuaCotmZM_pFngJFir9OZNrSr5cWa3cq3LZSUsfjI4FI7PkP91We7Nrw--",
})

# resp.to_h outputs the following:
{
  copy_part_result: {
    etag: "\"65d16d19e65a7508a51f043180edcc36\"",
    last_modified: Time.parse("2016-12-29T21:44:28.000Z"),
  },
}
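
Building on these examples, here is a hedged sketch of a complete ranged multipart copy: the source object's size comes from #head_object, and each 100 MB range is copied with #upload_part_copy before the upload is completed. The bucket and key names are the placeholders from the examples above, and error handling is omitted.

require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-east-1")

# Size of the source object determines how many ranged parts we copy.
source_size = client.head_object(bucket: "bucketname", key: "sourceobjectkey").content_length
part_size = 100 * 1024 * 1024 # parts other than the last must be at least 5 MB

create = client.create_multipart_upload(bucket: "examplebucket", key: "examplelargeobject")
parts = []

(0...source_size).step(part_size).with_index(1) do |offset, part_number|
  last_byte = [offset + part_size, source_size].min - 1
  resp = client.upload_part_copy(
    bucket: "examplebucket",
    key: "examplelargeobject",
    upload_id: create.upload_id,
    part_number: part_number,
    copy_source: "/bucketname/sourceobjectkey",
    copy_source_range: "bytes=#{offset}-#{last_byte}"
  )
  parts << { etag: resp.copy_part_result.etag, part_number: part_number }
end

client.complete_multipart_upload(
  bucket: "examplebucket",
  key: "examplelargeobject",
  upload_id: create.upload_id,
  multipart_upload: { parts: parts }
)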

Request syntax with placeholder values

resp = client.upload_part_copy({
  bucket: "BucketName", # required
  copy_source: "CopySource", # required
  copy_source_if_match: "CopySourceIfMatch",
  copy_source_if_modified_since: Time.now,
  copy_source_if_none_match: "CopySourceIfNoneMatch",
  copy_source_if_unmodified_since: Time.now,
  copy_source_range: "CopySourceRange",
  key: "ObjectKey", # required
  part_number: 1, # required
  upload_id: "MultipartUploadId", # required
  sse_customer_algorithm: "SSECustomerAlgorithm",
  sse_customer_key: "SSECustomerKey",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  copy_source_sse_customer_algorithm: "CopySourceSSECustomerAlgorithm",
  copy_source_sse_customer_key: "CopySourceSSECustomerKey",
  copy_source_sse_customer_key_md5: "CopySourceSSECustomerKeyMD5",
  request_payer: "requester", # accepts requester
  expected_bucket_owner: "AccountId",
  expected_source_bucket_owner: "AccountId",
})

Response structure

resp.copy_source_version_id #=> String
resp.copy_part_result.etag #=> String
resp.copy_part_result.last_modified #=> Time
resp.copy_part_result.checksum_crc32 #=> String
resp.copy_part_result.checksum_crc32c #=> String
resp.copy_part_result.checksum_crc64nvme #=> String
resp.copy_part_result.checksum_sha1 #=> String
resp.copy_part_result.checksum_sha256 #=> String
resp.server_side_encryption #=> String, one of "AES256", "aws:fsx", "aws:kms", "aws:kms:dsse"
resp.sse_customer_algorithm #=> String
resp.sse_customer_key_md5 #=> String
resp.ssekms_key_id #=> String
resp.bucket_key_enabled #=> Boolean
resp.request_charged #=> String, one of "requester"

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :bucket(required,String)

    The bucket name.

    Directory buckets - When you use this operation with a directory bucket, you must use virtual-hosted-style requests in the format Bucket-name.s3express-zone-id.region-code.amazonaws.com. Path-style requests are not supported. Directory bucket names must be unique in the chosen Zone (Availability Zone or Local Zone). Bucket names must follow the format bucket-base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3). For information about bucket naming restrictions, see Directory bucket naming rules in the Amazon S3 User Guide.

    Copying objects across different Amazon Web Services Regions isn't supported when the source or destination bucket is in Amazon Web Services Local Zones. The source and destination buckets must have the same parent Amazon Web Services Region. Otherwise, you get an HTTP 400 Bad Request error with the error code InvalidRequest.

    Access points - When you use this action with an access point for general purpose buckets, you must provide the alias of the access point in place of the bucket name or specify the access point ARN. When you use this action with an access point for directory buckets, you must provide the access point name in place of the bucket name. When using the access point ARN, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this action with an access point through the Amazon Web Services SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using access points in the Amazon S3 User Guide.

    Object Lambda access points are not supported by directory buckets.

    S3 on Outposts - When you use this action with S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When you use this action with S3 on Outposts, the destination bucket must be the Outposts access point ARN or the access point alias. For more information about S3 on Outposts, see What is S3 on Outposts? in the Amazon S3 User Guide.

  • :copy_source(required,String)

    Specifies the source object for the copy operation. You specify the value in one of two formats, depending on whether you want to access the source object through an access point:

    • For objects not accessed through an access point, specify the name of the source bucket and key of the source object, separated by a slash (/). For example, to copy the object reports/january.pdf from the bucket awsexamplebucket, use awsexamplebucket/reports/january.pdf. The value must be URL-encoded.

    • For objects accessed through access points, specify the Amazon Resource Name (ARN) of the object as accessed through the access point, in the format arn:aws:s3:<Region>:<account-id>:accesspoint/<access-point-name>/object/<key>. For example, to copy the object reports/january.pdf through access point my-access-point owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3:us-west-2:123456789012:accesspoint/my-access-point/object/reports/january.pdf. The value must be URL-encoded.

      • Amazon S3 supports copy operations using Access points only when the source and destination buckets are in the same Amazon Web Services Region.

      • Access points are not supported by directory buckets.

      Alternatively, for objects accessed through Amazon S3 on Outposts, specify the ARN of the object as accessed in the format arn:aws:s3-outposts:<Region>:<account-id>:outpost/<outpost-id>/object/<key>. For example, to copy the object reports/january.pdf through outpost my-outpost owned by account 123456789012 in Region us-west-2, use the URL encoding of arn:aws:s3-outposts:us-west-2:123456789012:outpost/my-outpost/object/reports/january.pdf. The value must be URL-encoded.

    If your bucket has versioning enabled, you could have multiple versions of the same object. By default, x-amz-copy-source identifies the current version of the source object to copy. To copy a specific version of the source object, append ?versionId=<version-id> to the x-amz-copy-source request header (for example, x-amz-copy-source: /awsexamplebucket/reports/january.pdf?versionId=QUpfdndhfd8438MNFDN93jdnJFkdmqnh893).

    If the current version is a delete marker and you don't specify a versionId in the x-amz-copy-source request header, Amazon S3 returns a 404 Not Found error, because the object does not exist. If you specify versionId in the x-amz-copy-source and the versionId is a delete marker, Amazon S3 returns an HTTP 400 Bad Request error, because you are not allowed to specify a delete marker as a version for the x-amz-copy-source.

    Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.

  • :copy_source_if_match(String)

    Copies the object if its entity tag (ETag) matches the specified tag.

    If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

    x-amz-copy-source-if-match condition evaluates to true, and

    x-amz-copy-source-if-unmodified-since condition evaluates to false,

    Amazon S3 returns 200 OK and copies the data.

  • :copy_source_if_modified_since(Time,DateTime,Date,Integer,String)

    Copies the object if it has been modified since the specified time.

    If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

    x-amz-copy-source-if-none-match condition evaluates to false, and

    x-amz-copy-source-if-modified-since condition evaluates to true,

    Amazon S3 returns the 412 Precondition Failed response code.

  • :copy_source_if_none_match(String)

    Copies the object if its entity tag (ETag) is different from the specified ETag.

    If both of the x-amz-copy-source-if-none-match and x-amz-copy-source-if-modified-since headers are present in the request as follows:

    x-amz-copy-source-if-none-match condition evaluates to false, and

    x-amz-copy-source-if-modified-since condition evaluates to true,

    Amazon S3 returns the 412 Precondition Failed response code.

  • :copy_source_if_unmodified_since(Time,DateTime,Date,Integer,String)

    Copies the object if it hasn't been modified since the specified time.

    If both of the x-amz-copy-source-if-match and x-amz-copy-source-if-unmodified-since headers are present in the request as follows:

    x-amz-copy-source-if-match condition evaluates to true, and

    x-amz-copy-source-if-unmodified-since condition evaluates to false,

    Amazon S3 returns 200 OK and copies the data.

  • :copy_source_range(String)

    The range of bytes to copy from the source object. The range value must use the form bytes=first-last, where first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first 10 bytes of the source. You can copy a range only if the source object is greater than 5 MB.

  • :key(required,String)

    Object key for which the multipart upload was initiated.

  • :part_number(required,Integer)

    Part number of the part being copied. This is a positive integer between 1 and 10,000.

  • :upload_id(required,String)

    Upload ID identifying the multipart upload whose part is being copied.

  • :sse_customer_algorithm(String)

    Specifies the algorithm to use when encrypting the object (for example, AES256).

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header. This must be the same encryption key specified in the initiate multipart upload request.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported when the destination bucket is a directory bucket.

  • :copy_source_sse_customer_algorithm(String)

    Specifies the algorithm to use when decrypting the source object (for example, AES256).

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key(String)

    Specifies the customer-provided encryption key for Amazon S3 to use to decrypt the source object. The encryption key provided in this header must be one that was used when the source object was created.

    This functionality is not supported when the source object is in a directory bucket.

  • :copy_source_sse_customer_key_md5(String)

    Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure that the encryption key was transmitted without error.

    This functionality is not supported when the source object is in a directory bucket.

  • :request_payer(String)

    Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. If either the source or destination S3 bucket has Requester Pays enabled, the requester will pay for corresponding charges to copy the object. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requester Pays Buckets in the Amazon S3 User Guide.

    This functionality is not supported for directory buckets.

  • :expected_bucket_owner(String)

    The account ID of the expected destination bucket owner. If the account ID that you provide does not match the actual owner of the destination bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

  • :expected_source_bucket_owner(String)

    The account ID of the expected source bucket owner. If the account ID that you provide does not match the actual owner of the source bucket, the request fails with the HTTP status code 403 Forbidden (access denied).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 21840

def upload_part_copy(params = {}, options = {})
  req = build_request(:upload_part_copy, params)
  req.send_request(options)
end

#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒Boolean

Polls an API operation until a resource enters a desired state.

Basic Usage

A waiter will call an API operation until:

  • It is successful
  • It enters a terminal state
  • It makes the maximum number of attempts

In between attempts, the waiter will sleep.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You can pass configuration as the final arguments hash.

# poll for ~25 seconds
client.wait_until(waiter_name, params, {
  max_attempts: 5,
  delay: 5,
})

Callbacks

You can be notified before each polling attempt and before each delay. If you throw :success or :failure from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(waiter_name, params, {
  # disable max attempts
  max_attempts: nil,
  # poll for 1 hour, instead of a number of attempts
  before_wait: -> (attempts, response) do
    throw :failure if Time.now - started_at > 3600
  end
})

Handling Errors

When a waiter is unsuccessful, it will raise an error. All of the failure errors extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

Valid Waiters

The following table lists the valid waiter names, the operations they call, and the default :delay and :max_attempts values.

waiter_name          params          :delay    :max_attempts
bucket_exists        #head_bucket    5         20
bucket_not_exists    #head_bucket    5         20
object_exists        #head_object    5         20
object_not_exists    #head_object    5         20
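
For example, a minimal sketch that blocks until a freshly written object is visible, using the object_exists defaults from the table above (the bucket and key names are placeholders):

# Polls #head_object up to 20 times, sleeping 5 seconds between attempts.
client.wait_until(:object_exists, bucket: "amzn-s3-demo-bucket", key: "my-key")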

Parameters:

  • waiter_name(Symbol)
  • params(Hash)(defaults to:{})

    ({})

  • options(Hash)(defaults to:{})

    ({})

Options Hash (options):

  • :max_attempts(Integer)
  • :delay(Integer)
  • :before_attempt(Proc)
  • :before_wait(Proc)

Yields:

  • (w.waiter)

Returns:

  • (Boolean)

    Returnstrue if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an error is encountered while polling for a resource that is not expected.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 22381

def wait_until(waiter_name, params = {}, options = {})
  w = waiter(waiter_name, options)
  yield(w.waiter) if block_given? # deprecated
  w.wait(params)
end

#write_get_object_response(params = {}) ⇒Struct

This operation is not supported for directory buckets.

Passes transformed objects to a GetObject operation when using Object Lambda access points. For information about Object Lambda access points, see Transforming objects with Object Lambda access points in the Amazon S3 User Guide.

This operation supports metadata that can be returned by GetObject, in addition to RequestRoute, RequestToken, StatusCode, ErrorCode, and ErrorMessage. The GetObject response metadata is supported so that the WriteGetObjectResponse caller, typically a Lambda function, can provide the same metadata when it internally invokes GetObject. When WriteGetObjectResponse is called by a customer-owned Lambda function, the metadata returned to the end user GetObject call might differ from what Amazon S3 would normally return.

You can include any number of metadata headers. When including a metadata header, it should be prefaced with x-amz-meta. For example, x-amz-meta-my-custom-header: MyCustomValue. The primary use case for this is to forward GetObject metadata.

Amazon Web Services provides some prebuilt Lambda functions that you can use with S3 Object Lambda to detect and redact personally identifiable information (PII) and decompress S3 objects. These Lambda functions are available in the Amazon Web Services Serverless Application Repository, and can be selected through the Amazon Web Services Management Console when you create your Object Lambda access point.

Example 1: PII Access Control - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service using machine learning to find insights and relationships in text. It automatically detects personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers from documents in your Amazon S3 bucket.

Example 2: PII Redaction - This Lambda function uses Amazon Comprehend, a natural language processing (NLP) service using machine learning to find insights and relationships in text. It automatically redacts personally identifiable information (PII) such as names, addresses, dates, credit card numbers, and social security numbers from documents in your Amazon S3 bucket.

Example 3: Decompression - The Lambda function S3ObjectLambdaDecompression is equipped to decompress objects stored in S3 in one of six compressed file formats, including bzip2, gzip, snappy, zlib, zstandard, and ZIP.

For information on how to view and use these functions, see Using Amazon Web Services built Lambda functions in the Amazon S3 User Guide.

You must URL encode any signed header values that contain spaces. For example, if your header value is my  file.txt, containing two spaces after my, you must URL encode this value to my%20%20file.txt.

Examples:

Request syntax with placeholder values

resp = client.write_get_object_response({
  request_route: "RequestRoute", # required
  request_token: "RequestToken", # required
  body: source_file,
  status_code: 1,
  error_code: "ErrorCode",
  error_message: "ErrorMessage",
  accept_ranges: "AcceptRanges",
  cache_control: "CacheControl",
  content_disposition: "ContentDisposition",
  content_encoding: "ContentEncoding",
  content_language: "ContentLanguage",
  content_length: 1,
  content_range: "ContentRange",
  content_type: "ContentType",
  checksum_crc32: "ChecksumCRC32",
  checksum_crc32c: "ChecksumCRC32C",
  checksum_crc64nvme: "ChecksumCRC64NVME",
  checksum_sha1: "ChecksumSHA1",
  checksum_sha256: "ChecksumSHA256",
  delete_marker: false,
  etag: "ETag",
  expires: Time.now,
  expiration: "Expiration",
  last_modified: Time.now,
  missing_meta: 1,
  metadata: {
    "MetadataKey" => "MetadataValue",
  },
  object_lock_mode: "GOVERNANCE", # accepts GOVERNANCE, COMPLIANCE
  object_lock_legal_hold_status: "ON", # accepts ON, OFF
  object_lock_retain_until_date: Time.now,
  parts_count: 1,
  replication_status: "COMPLETE", # accepts COMPLETE, PENDING, FAILED, REPLICA, COMPLETED
  request_charged: "requester", # accepts requester
  restore: "Restore",
  server_side_encryption: "AES256", # accepts AES256, aws:fsx, aws:kms, aws:kms:dsse
  sse_customer_algorithm: "SSECustomerAlgorithm",
  ssekms_key_id: "SSEKMSKeyId",
  sse_customer_key_md5: "SSECustomerKeyMD5",
  storage_class: "STANDARD", # accepts STANDARD, REDUCED_REDUNDANCY, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE, OUTPOSTS, GLACIER_IR, SNOW, EXPRESS_ONEZONE, FSX_OPENZFS, FSX_ONTAP
  tag_count: 1,
  version_id: "ObjectVersionId",
  bucket_key_enabled: false,
})
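
To place this call in context, here is a minimal sketch of an S3 Object Lambda handler. The getObjectContext field names (inputS3Url, outputRoute, outputToken) come from the Object Lambda event; the upcasing transformation and the handler wiring are assumptions for illustration only.

require "aws-sdk-s3"
require "net/http"

def handler(event:, context:)
  ctx = event["getObjectContext"]

  # Fetch the original object through the presigned URL supplied by S3.
  original = Net::HTTP.get(URI(ctx["inputS3Url"]))

  # Hypothetical transformation: upcase the object body.
  transformed = original.upcase

  # Return the transformed bytes to the waiting GetObject caller.
  Aws::S3::Client.new.write_get_object_response(
    request_route: ctx["outputRoute"],
    request_token: ctx["outputToken"],
    body: transformed
  )

  { status_code: 200 }
end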

Parameters:

  • params(Hash)(defaults to:{})

    ({})

Options Hash (params):

  • :request_route(required,String)

    Route prefix to the HTTP URL generated.

  • :request_token(required,String)

    A single-use encrypted token that maps WriteGetObjectResponse to the end user GetObject request.

  • :body(String,IO)

    The object data.

  • :status_code(Integer)

    The integer status code for an HTTP response of a corresponding GetObject request. The following is a list of status codes.

    • 200 - OK

    • 206 - Partial Content

    • 304 - Not Modified

    • 400 - Bad Request

    • 401 - Unauthorized

    • 403 - Forbidden

    • 404 - Not Found

    • 405 - Method Not Allowed

    • 409 - Conflict

    • 411 - Length Required

    • 412 - Precondition Failed

    • 416 - Range Not Satisfiable

    • 500 - Internal Server Error

    • 503 - Service Unavailable

  • :error_code(String)

    A string that uniquely identifies an error condition. Returned in the <Code> tag of the error XML response for a corresponding GetObject call. Cannot be used with a successful StatusCode header or when the transformed object is provided in the body. All error codes from S3 are sentence-cased. The regular expression (regex) value is "^[A-Z][a-zA-Z]+$".

  • :error_message(String)

    Contains a generic description of the error condition. Returned in the <Message> tag of the error XML response for a corresponding GetObject call. Cannot be used with a successful StatusCode header or when the transformed object is provided in the body.

  • :accept_ranges(String)

    Indicates that a range of bytes was specified.

  • :cache_control(String)

    Specifies caching behavior along the request/reply chain.

  • :content_disposition(String)

    Specifies presentational information for the object.

  • :content_encoding(String)

    Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.

  • :content_language(String)

    The language the content is in.

  • :content_length(Integer)

    The size of the content body in bytes.

  • :content_range(String)

    The portion of the object returned in the response.

  • :content_type(String)

    A standard MIME type describing the format of the object data.

  • :checksum_crc32(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 32-bit CRC32 checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.

    Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.

  • :checksum_crc32c(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 32-bit CRC32C checksum of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.

    Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.

  • :checksum_crc64nvme(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the Base64 encoded, 64-bit CRC64NVME checksum of the part. For more information, see Checking object integrity in the Amazon S3 User Guide.

  • :checksum_sha1(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 160-bit SHA1 digest of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.

    Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.

  • :checksum_sha256(String)

    This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This specifies the Base64 encoded, 256-bit SHA256 digest of the object returned by the Object Lambda function. This may not match the checksum for the object stored in Amazon S3. Amazon S3 will perform validation of the checksum values only when the original GetObject request required checksum validation. For more information about checksums, see Checking object integrity in the Amazon S3 User Guide.

    Only one checksum header can be specified at a time. If you supply multiple checksum headers, this request will fail.

  • :delete_marker(Boolean)

    Specifies whether an object stored in Amazon S3 is (true) or is not (false) a delete marker. To learn more about delete markers, see Working with delete markers.

  • :etag(String)

    An opaque identifier assigned by a web server to a specific version of a resource found at a URL.

  • :expires(Time,DateTime,Date,Integer,String)

    The date and time at which the object is no longer cacheable.

  • :expiration(String)

    If the object expiration is configured (see PUT Bucket lifecycle), the response includes this header. It includes the expiry-date and rule-id key-value pairs that provide the object expiration information. The value of the rule-id is URL-encoded.

  • :last_modified(Time,DateTime,Date,Integer,String)

    The date and time that the object was last modified.

  • :missing_meta(Integer)

    Set to the number of metadata entries not returned in x-amz-meta headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.

  • :metadata(Hash<String,String>)

    A map of metadata to store with the object in S3.

  • :object_lock_mode(String)

    Indicates whether an object stored in Amazon S3 has Object Lock enabled. For more information about S3 Object Lock, see Object Lock.

  • :object_lock_legal_hold_status(String)

    Indicates whether an object stored in Amazon S3 has an active legal hold.

  • :object_lock_retain_until_date(Time,DateTime,Date,Integer,String)

    The date and time when Object Lock is configured to expire.

  • :parts_count(Integer)

    The count of parts this object has.

  • :replication_status(String)

    Indicates whether the request involves a bucket that is either a source or a destination in a replication rule. For more information about S3 Replication, see Replication.

  • :request_charged(String)

    If present, indicates that the requester was successfully charged for the request. For more information, see Using Requester Pays buckets for storage transfers and usage in the Amazon Simple Storage Service user guide.

    This functionality is not supported for directory buckets.

  • :restore(String)

    Provides information about the object restoration operation and the expiration time of the restored object copy.

  • :server_side_encryption(String)

    The server-side encryption algorithm used when storing the requested object in Amazon S3 or Amazon FSx.

    When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server-side encryption option is aws:fsx.

  • :sse_customer_algorithm(String)

    Encryption algorithm used if server-side encryption with a customer-provided encryption key was specified for the object stored in Amazon S3.

  • :ssekms_key_id(String)

    If present, specifies the ID (Key ID, Key ARN, or Key Alias) of the Amazon Web Services Key Management Service (Amazon Web Services KMS) symmetric encryption customer managed key that was used for the object stored in Amazon S3.

  • :sse_customer_key_md5(String)

    128-bit MD5 digest of the customer-provided encryption key used in Amazon S3 to encrypt data stored in S3. For more information, see Protecting data using server-side encryption with customer-provided encryption keys (SSE-C).

  • :storage_class(String)

    Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.

    For more information, seeStorage Classes.

  • :tag_count(Integer)

    The number of tags, if any, on the object.

  • :version_id(String)

    An ID used to reference a specific version of the object.

  • :bucket_key_enabled(Boolean)

    Indicates whether the object stored in Amazon S3 uses an S3 bucket key for server-side encryption with Amazon Web Services KMS (SSE-KMS).

Returns:

See Also:

# File 'gems/aws-sdk-s3/lib/aws-sdk-s3/client.rb', line 22263

def write_get_object_response(params = {}, options = {})
  req = build_request(:write_get_object_response, params)
  req.send_request(options)
end
Generated on Wed Dec 17 19:33:24 2025 by yard 0.9.38 (ruby-3.4.3).
