Amazon S3 Storage Providers

The S3 backend can be used with a number of different providers.

Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

Once you have made a remote (see the provider specific section above) you can use it like this:

See all buckets

rclone lsd remote:

Make a new bucket

rclone mkdir remote:bucket

List the contents of a bucket

rclone ls remote:bucket

Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

rclone sync --interactive /home/local/directory remote:bucket

Configuration

Here is an example of making an s3 configuration for the AWS S3 provider. Most of this applies to the other providers as well; any differences are described below.

First run

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / DigitalOcean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ "ap-east-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / Asia Pacific (Hong Kong)
   \ "ap-east-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier Flexible Retrieval storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
 9 / Glacier Instant Retrieval storage class
   \ "GLACIER_IR"
storage_class> 1
Remote config
Configuration complete.
Options:
- type: s3
- provider: AWS
- env_auth: false
- access_key_id: XXX
- secret_access_key: YYY
- region: us-east-1
- endpoint:
- location_constraint:
- acl: private
- server_side_encryption:
- storage_class:
Keep this "remote" remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>

Modification times and hashes

Modification times

The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch, accurate to 1 ns.
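
For example (an illustration of the format rather than output captured from S3), a file last modified at 2020-01-01 00:00:00 UTC would be stored with X-Amz-Meta-Mtime: 1577836800.000000000.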

If the modification time needs to be updated rclone will attempt to perform a server-side copy to update the modification time if the object can be copied in a single part. In the case the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.

Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.

Hashes

For small objects which weren't uploaded as multipart uploads (objects sized below --s3-upload-cutoff if uploaded with rclone) rclone uses the ETag: header as an MD5 checksum.

However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the ETag header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata X-Amz-Meta-Md5chksum which is a base64 encoded MD5 hash (in the same format as is required for Content-MD5). You can use base64 -d and hexdump to check this value manually:

echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump

or you can use rclone check to verify the hashes are OK.
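
For example, to check a local directory against a bucket (a minimal sketch; substitute your own paths):

rclone check /home/local/directory remote:bucket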

For large objects, calculating this hash can take some time so the addition of this hash can be disabled with --s3-disable-checksum. This will mean that these objects do not have an MD5 checksum.

Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.

Reducing costs

Avoiding HEAD requests to read the modification time

By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).

The modification time is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient on S3 because it requires an extra API call to retrieve the metadata.

The extra API calls can be avoided when syncing (using rclone sync or rclone copy) in a few different ways, each with its own tradeoffs.

  • --size-only
    • Only checks the size of files.
    • Uses no extra transactions.
    • If the file doesn't change size then rclone won't detect it has changed.
    • rclone sync --size-only /path/to/source s3:bucket
  • --checksum
    • Checks the size and MD5 checksum of files.
    • Uses no extra transactions.
    • The most accurate detection of changes possible.
    • Will cause the source to read an MD5 checksum which, if it is a local disk, will cause lots of disk activity.
    • If the source and destination are both S3 this is the recommended flag to use for maximum efficiency.
    • rclone sync --checksum /path/to/source s3:bucket
  • --update --use-server-modtime
    • Uses no extra transactions.
    • Modification time becomes the time the object was uploaded.
    • For many operations this is sufficient to determine if it needs uploading.
    • Using --update along with --use-server-modtime avoids the extra API call and uploads files whose local modification time is newer than the time it was last uploaded.
    • Files created with timestamps in the past will be missed by the sync.
    • rclone sync --update --use-server-modtime /path/to/source s3:bucket

These flags can and should be used in combination with --fast-list - see below.

If using rclone mount or any command using the VFS (e.g. rclone serve) then you might want to consider using the VFS flag --no-modtime which will stop rclone reading the modification time for every object. You could also use --use-server-modtime if you are happy with the modification times of the objects being the time of upload.
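
For example, a minimal sketch of a mount that skips modification time reads (/mnt/bucket is a hypothetical mountpoint):

rclone mount --no-modtime remote:bucket /mnt/bucket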

Avoiding GET requests to read directory listings

Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the --fast-list flag will read all info about the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.

rclone sync --fast-list --checksum /path/to/source s3:bucket

--fast-list trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list on a sync of a million objects will use roughly 1 GiB of RAM.

If you are only copying a small number of files into a big repository then using --no-traverse is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age and --no-traverse to copy only recent files, e.g.

rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

You'd then do a full rclone sync less often.

Note that --fast-list isn't required in the top-up sync.
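
A sketch of the combined pattern, assuming the top-up runs frequently (say hourly) and the full sync rarely (say weekly):

rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
rclone sync --fast-list --checksum /path/to/source s3:bucket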

Avoiding HEAD requests after PUT

By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.

You can disable this with the --s3-no-head option - see there for more details.

Setting this flag increases the chance for undetected upload failures.
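
For example, a minimal sketch of an upload that skips the post-upload HEAD check:

rclone copy --s3-no-head /path/to/source s3:bucket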

Increasing performance

Using server-side copy

If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.

For rclone to use server-side copy, you must use the same remote for the source and destination.

rclone copy s3:source-bucket s3:destination-bucket

When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.

Increasing the rate of API requests

You can increase the rate of API requests to S3 by increasing the parallelism using the --transfers and --checkers options.

Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you can significantly increase the number of transfers and checkers.

For example, with AWS S3, you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.

rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket

You will need to experiment with these values to find the optimal settings foryour setup.

Data integrity

Rclone does its best to verify every part of an upload or download to the s3 provider using various hashes.

Every HTTP transaction to/from the provider has an X-Amz-Content-Sha256 or a Content-Md5 header to guard against corruption of the HTTP body. The HTTP header is protected by the signature passed in the Authorization header.

All communications with the provider are done over HTTPS for encryption and additional error protection.

Single part uploads

  • Rclone uploads single part uploads with a Content-Md5 using the MD5 hash read from the source. The provider checks this is correct on receipt of the data.

  • Rclone then does a HEAD request (disable with --s3-no-head) to read the ETag back which is the MD5 of the file and checks that with what it sent.

Note that if the source does not have an MD5 then the single part uploads will not have hash protection. In this case it is recommended to use --s3-upload-cutoff 0 so all files are uploaded as multipart uploads.
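
For example, a minimal sketch forcing multipart uploads for all files:

rclone copy --s3-upload-cutoff 0 /path/to/source s3:bucket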

Multipart uploads

For files above --s3-upload-cutoff rclone splits the file into multiple parts for upload.

  • Each part is protected with both an X-Amz-Content-Sha256 and a Content-Md5

When rclone has finished the upload of all the parts it then completes the upload by sending:

  • The MD5 hash of each part
  • The number of parts
  • This info is all protected with an X-Amz-Content-Sha256

The provider checks the MD5 for all the parts it has received against what rclone sends and if it is good it returns OK.

Rclone then does a HEAD request (disable with --s3-no-head) and checks the ETag is what it expects (in this case it should be the MD5 sum of all the MD5 sums of all the parts with the number of parts on the end).

If the source has an MD5 sum then rclone will attach the X-Amz-Meta-Md5chksum with it, as the ETag for a multipart upload can't easily be checked against the file since the chunk size must be known in order to calculate it.

Downloads

Rclone checks the MD5 hash of the data downloaded against either the ETag or the X-Amz-Meta-Md5chksum metadata (if present) which rclone uploads with multipart uploads.

Further checking

At each stage rclone and the provider are sending and checking hashes of everything. Rclone deliberately HEADs each object after upload to check it arrived safely for extra security. (You can disable this with --s3-no-head).

If you require further assurance that your data is intact you can use rclone check to check the hashes locally vs the remote.

And if you are feeling ultimately paranoid use rclone check --download which will download the files and check them against the local copies. (Note that this doesn't use disk to do this - it streams them in memory).

Versions

When bucket versioning is enabled (this can be done with rclone with the rclone backend versioning command), when rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available.

Old versions of files, where available, are visible using the --s3-versions flag.

It is also possible to view a bucket as it was at a certain point in time, using the --s3-version-at flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
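
For example, a sketch listing a bucket as it was at a given time (the timestamp is illustrative; the flag accepts the usual rclone time formats):

rclone ls --s3-version-at "2023-07-10 15:04:05" s3:bucket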

If you wish to remove all the old versions then you can use the rclone backend cleanup-hidden remote:bucket command which will delete all the old hidden versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. rclone backend cleanup-hidden remote:bucket/path/to/stuff.

When you purge a bucket, the current and the old versions will be deleted, then the bucket will be deleted.

However delete will cause the current versions of the files to become hidden old versions.

Here is a session showing the listing and retrieval of an old version followed by a cleanup of the old versions.

Show current version and all the versions with the --s3-versions flag.

$ rclone -q ls s3:cleanup-test
        9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt

Retrieve an old version

$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

Clean up all the old versions and show that they've gone.

$ rclone -q backend cleanup-hidden s3:cleanup-test
$ rclone -q ls s3:cleanup-test
        9 one.txt
$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt

Versions naming caveat

When using the --s3-versions flag rclone relies on the file name to work out whether the objects are versions or not. Version names are created by inserting a timestamp between the file name and its extension.

        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt

If there are real files present with the same names as versions, then the behaviour of --s3-versions can be unpredictable.

Cleanup

If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the --interactive/-i or --dry-run flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.
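
For example, a minimal sketch previewing the cleanup without removing anything:

rclone cleanup --dry-run s3:bucket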

Restricted filename characters

S3 allows any valid UTF-8 string as a key.

Invalid UTF-8 bytes will be replaced, as they can't be used in XML.

The following characters are replaced since these are problematic when dealing with the REST API:

Character  Value  Replacement
NUL        0x00   ␀
/          0x2F   ／

The encoding will also encode these file names as they don't seem to work with the SDK properly:

File name  Replacement
.          ．
..         ．．

Multipart uploads

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB.

Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.

rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).

The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency.

Multipart uploads will use extra memory equal to: --transfers × --s3-upload-concurrency × --s3-chunk-size. Single part uploads do not use extra memory.
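
For example, with the defaults at the time of writing (--transfers 4, --s3-upload-concurrency 4, --s3-chunk-size 5Mi) that is 4 × 4 × 5 MiB = 80 MiB of extra memory.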

Single part transfers can be faster than multipart transfers or slowerdepending on your latency from S3 - the more latency, the more likelysingle part transfers will be faster.

Increasing --s3-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --s3-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
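
A minimal sketch applying those suggestions:

rclone copy --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source s3:bucket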

Buckets and Regions

With Amazon S3 you can list buckets (rclone lsd) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

Authentication

There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

The different authentication methods are tried in this order:

  • Directly in the rclone configuration file (env_auth = false in the config file):
    • access_key_id and secret_access_key are required.
    • session_token can be optionally set when using AWS STS.
  • Runtime configuration (env_auth = true in the config file):
    • Export the following environment variables before running rclone:
      • Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
      • Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
      • Session Token: AWS_SESSION_TOKEN (optional)
    • Or, use a named profile:
      • Profile files are standard files used by AWS CLI tools
      • By default it will use the profile in your home directory (e.g. ~/.aws/credentials on unix based systems) and the "default" profile; to change this, set these environment variables or config keys (see the sketch after this list):
        • AWS_SHARED_CREDENTIALS_FILE to control which file, or the shared_credentials_file config key.
        • AWS_PROFILE to control which profile to use, or the profile config key.
    • Or, run rclone in an ECS task with an IAM role (AWS only).
    • Or, run rclone on an EC2 instance with an IAM role (AWS only).
    • Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).
    • Or, use process credentials to read config from an external program.
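
A minimal sketch of the named profile approach, assuming a remote configured with env_auth = true (myprofile is a hypothetical profile name):

export AWS_SHARED_CREDENTIALS_FILE=$HOME/.aws/credentials
export AWS_PROFILE=myprofile
rclone lsd remote: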

With env_auth = true rclone (which uses the AWS SDK for Go v2) should support all authentication methods that the aws CLI tool does and the other AWS SDKs.

If none of these options actually ends up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see the anonymous access section for more info).

S3 Permissions

When using the sync subcommand of rclone the following minimum permissions are required to be available on the bucket being written to:

  • ListBucket
  • DeleteObject
  • GetObject
  • PutObject
  • PutObjectACL
  • CreateBucket (unless usings3-no-check-bucket)

When using the lsd subcommand, the ListAllMyBuckets permission is required.

Example policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Notes on above:

  1. This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
  2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
  3. When using s3-no-check-bucket and the bucket already exists, the "arn:aws:s3:::BUCKET_NAME" doesn't have to be included.

For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.

Key Management System (KMS)

If you are using server-side encryption with KMS then you must make sure rclone is configured with server_side_encryption = aws:kms otherwise you will find you can't transfer small objects - these will create checksum errors.
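
A minimal sketch of the relevant entries in the rclone config file (the remote name is illustrative):

[remote]
type = s3
provider = AWS
server_side_encryption = aws:kms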

Glacier and Glacier Deep Archive

You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to restore the object(s) in question before accessing object contents. The restore section below shows how to do this with rclone.

Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.

Object-lock enabled S3 bucket

According to AWS's documentation on S3 Object Lock:

If you configure a default retention period on a bucket, requests to uploadobjects in such a bucket must include the Content-MD5 header.

As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.

Standard options

Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).

--s3-provider

Choose your S3 provider.

Properties:

  • Config: provider
  • Env Var: RCLONE_S3_PROVIDER
  • Type: string
  • Required: false
  • Examples:
    • "AWS"
      • Amazon Web Services (AWS) S3
    • "Alibaba"
      • Alibaba Cloud Object Storage System (OSS) formerly Aliyun
    • "ArvanCloud"
      • Arvan Cloud Object Storage (AOS)
    • "Ceph"
      • Ceph Object Storage
    • "ChinaMobile"
      • China Mobile Ecloud Elastic Object Storage (EOS)
    • "Cloudflare"
      • Cloudflare R2 Storage
    • "Cubbit"
      • Cubbit DS3 Object Storage
    • "DigitalOcean"
      • DigitalOcean Spaces
    • "Dreamhost"
      • Dreamhost DreamObjects
    • "Exaba"
      • Exaba Object Storage
    • "FileLu"
      • FileLu S5 (S3-Compatible Object Storage)
    • "FlashBlade"
      • Pure Storage FlashBlade Object Storage
    • "GCS"
      • Google Cloud Storage
    • "Hetzner"
      • Hetzner Object Storage
    • "HuaweiOBS"
      • Huawei Object Storage Service
    • "IBMCOS"
      • IBM COS S3
    • "IDrive"
      • IDrive e2
    • "Intercolo"
      • Intercolo Object Storage
    • "IONOS"
      • IONOS Cloud
    • "Leviia"
      • Leviia Object Storage
    • "Liara"
      • Liara Object Storage
    • "Linode"
      • Linode Object Storage
    • "LyveCloud"
      • Seagate Lyve Cloud
    • "Magalu"
      • Magalu Object Storage
    • "Mega"
      • MEGA S4 Object Storage
    • "Minio"
      • Minio Object Storage
    • "Netease"
      • Netease Object Storage (NOS)
    • "Outscale"
      • OUTSCALE Object Storage (OOS)
    • "OVHcloud"
      • OVHcloud Object Storage
    • "Petabox"
      • Petabox Object Storage
    • "Qiniu"
      • Qiniu Object Storage (Kodo)
    • "Rabata"
      • Rabata Cloud Storage
    • "RackCorp"
      • RackCorp Object Storage
    • "Rclone"
      • Rclone S3 Server
    • "Scaleway"
      • Scaleway Object Storage
    • "SeaweedFS"
      • SeaweedFS S3
    • "Selectel"
      • Selectel Object Storage
    • "Servercore"
      • Servercore Object Storage
    • "SpectraLogic"
      • Spectra Logic Black Pearl
    • "StackPath"
      • StackPath Object Storage
    • "Storj"
      • Storj (S3 Compatible Gateway)
    • "Synology"
      • Synology C2 Object Storage
    • "TencentCOS"
      • Tencent Cloud Object Storage (COS)
    • "Wasabi"
      • Wasabi Object Storage
    • "Zata"
      • Zata (S3 compatible Gateway)
    • "Other"
      • Any other S3 compatible provider

--s3-env-auth

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).

Only applies if access_key_id and secret_access_key are blank.

Properties:

  • Config: env_auth
  • Env Var: RCLONE_S3_ENV_AUTH
  • Type: bool
  • Default: false
  • Examples:
    • "false"
      • Enter AWS credentials in the next step.
    • "true"
      • Get AWS credentials from the environment (env vars or IAM).

--s3-access-key-id

AWS Access Key ID.

Leave blank for anonymous access or runtime credentials.

Properties:

  • Config: access_key_id
  • Env Var: RCLONE_S3_ACCESS_KEY_ID
  • Type: string
  • Required: false

--s3-secret-access-key

AWS Secret Access Key (password).

Leave blank for anonymous access or runtime credentials.

Properties:

  • Config: secret_access_key
  • Env Var: RCLONE_S3_SECRET_ACCESS_KEY
  • Type: string
  • Required: false

--s3-region

Region to connect to.

Leave blank if you are using an S3 clone and you don't have a region.

Properties:

  • Config: region
  • Env Var: RCLONE_S3_REGION
  • Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
  • Type: string
  • Required: false
  • Examples:
    • "us-east-1"
      • The default endpoint - a good choice if you are unsure.
      • US Region, Northern Virginia, or Pacific Northwest.
      • Leave location constraint empty.
      • Provider: AWS
    • "us-east-2"
      • US East (Ohio) Region.
      • Needs location constraint us-east-2.
      • Provider: AWS
    • "us-west-1"
      • US West (Northern California) Region.
      • Needs location constraint us-west-1.
      • Provider: AWS
    • "us-west-2"
      • US West (Oregon) Region.
      • Needs location constraint us-west-2.
      • Provider: AWS
    • "ca-central-1"
      • Canada (Central) Region.
      • Needs location constraint ca-central-1.
      • Provider: AWS
    • "eu-west-1"
      • EU (Ireland) Region.
      • Needs location constraint EU or eu-west-1.
      • Provider: AWS
    • "eu-west-2"
      • EU (London) Region.
      • Needs location constraint eu-west-2.
      • Provider: AWS
    • "eu-west-3"
      • EU (Paris) Region.
      • Needs location constraint eu-west-3.
      • Provider: AWS
    • "eu-north-1"
      • EU (Stockholm) Region.
      • Needs location constraint eu-north-1.
      • Provider: AWS
    • "eu-south-1"
      • EU (Milan) Region.
      • Needs location constraint eu-south-1.
      • Provider: AWS
    • "eu-central-1"
      • EU (Frankfurt) Region.
      • Needs location constraint eu-central-1.
      • Provider: AWS
    • "ap-southeast-1"
      • Asia Pacific (Singapore) Region.
      • Needs location constraint ap-southeast-1.
      • Provider: AWS
    • "ap-southeast-2"
      • Asia Pacific (Sydney) Region.
      • Needs location constraint ap-southeast-2.
      • Provider: AWS
    • "ap-northeast-1"
      • Asia Pacific (Tokyo) Region.
      • Needs location constraint ap-northeast-1.
      • Provider: AWS
    • "ap-northeast-2"
      • Asia Pacific (Seoul).
      • Needs location constraint ap-northeast-2.
      • Provider: AWS
    • "ap-northeast-3"
      • Asia Pacific (Osaka-Local).
      • Needs location constraint ap-northeast-3.
      • Provider: AWS
    • "ap-south-1"
      • Asia Pacific (Mumbai).
      • Needs location constraint ap-south-1.
      • Provider: AWS
    • "ap-east-1"
      • Asia Pacific (Hong Kong) Region.
      • Needs location constraint ap-east-1.
      • Provider: AWS
    • "sa-east-1"
      • South America (Sao Paulo) Region.
      • Needs location constraint sa-east-1.
      • Provider: AWS
    • "il-central-1"
      • Israel (Tel Aviv) Region.
      • Needs location constraint il-central-1.
      • Provider: AWS
    • "me-south-1"
      • Middle East (Bahrain) Region.
      • Needs location constraint me-south-1.
      • Provider: AWS
    • "af-south-1"
      • Africa (Cape Town) Region.
      • Needs location constraint af-south-1.
      • Provider: AWS
    • "cn-north-1"
      • China (Beijing) Region.
      • Needs location constraint cn-north-1.
      • Provider: AWS
    • "cn-northwest-1"
      • China (Ningxia) Region.
      • Needs location constraint cn-northwest-1.
      • Provider: AWS
    • "us-gov-east-1"
      • AWS GovCloud (US-East) Region.
      • Needs location constraint us-gov-east-1.
      • Provider: AWS
    • "us-gov-west-1"
      • AWS GovCloud (US) Region.
      • Needs location constraint us-gov-west-1.
      • Provider: AWS
    • ""
      • Use this if unsure.
      • Will use v4 signatures and an empty region.
      • Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other
    • "other-v2-signature"
      • Use this only if v4 signatures don't work.
      • E.g. pre Jewel/v10 CEPH.
      • Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other
    • "auto"
      • R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
      • Provider: Cloudflare
    • "eu-west-1"
      • Europe West
      • Provider: Cubbit
    • "global"
      • Global
      • Provider: FileLu
    • "us-east"
      • North America (US-East)
      • Provider: FileLu
    • "eu-central"
      • Europe (EU-Central)
      • Provider: FileLu
    • "ap-southeast"
      • Asia Pacific (AP-Southeast)
      • Provider: FileLu
    • "me-central"
      • Middle East (ME-Central)
      • Provider: FileLu
    • "hel1"
      • Helsinki
      • Provider: Hetzner
    • "fsn1"
      • Falkenstein
      • Provider: Hetzner
    • "nbg1"
      • Nuremberg
      • Provider: Hetzner
    • "af-south-1"
      • AF-Johannesburg
      • Provider: HuaweiOBS
    • "ap-southeast-2"
      • AP-Bangkok
      • Provider: HuaweiOBS
    • "ap-southeast-3"
      • AP-Singapore
      • Provider: HuaweiOBS
    • "cn-east-3"
      • CN East-Shanghai1
      • Provider: HuaweiOBS
    • "cn-east-2"
      • CN East-Shanghai2
      • Provider: HuaweiOBS
    • "cn-north-1"
      • CN North-Beijing1
      • Provider: HuaweiOBS
    • "cn-north-4"
      • CN North-Beijing4
      • Provider: HuaweiOBS
    • "cn-south-1"
      • CN South-Guangzhou
      • Provider: HuaweiOBS
    • "ap-southeast-1"
      • CN-Hong Kong
      • Provider: HuaweiOBS
    • "sa-argentina-1"
      • LA-Buenos Aires1
      • Provider: HuaweiOBS
    • "sa-peru-1"
      • LA-Lima1
      • Provider: HuaweiOBS
    • "na-mexico-1"
      • LA-Mexico City1
      • Provider: HuaweiOBS
    • "sa-chile-1"
      • LA-Santiago2
      • Provider: HuaweiOBS
    • "sa-brazil-1"
      • LA-Sao Paulo1
      • Provider: HuaweiOBS
    • "ru-northwest-2"
      • RU-Moscow2
      • Provider: HuaweiOBS
    • "de-fra"
      • Frankfurt, Germany
      • Provider: Intercolo
    • "de"
      • Frankfurt, Germany
      • Provider: IONOS,OVHcloud
    • "eu-central-2"
      • Berlin, Germany
      • Provider: IONOS
    • "eu-south-2"
      • Logrono, Spain
      • Provider: IONOS
    • "eu-west-2"
      • Paris, France
      • Provider: Outscale
    • "us-east-2"
      • New Jersey, USA
      • Provider: Outscale
    • "us-west-1"
      • California, USA
      • Provider: Outscale
    • "cloudgouv-eu-west-1"
      • SecNumCloud, Paris, France
      • Provider: Outscale
    • "ap-northeast-1"
      • Tokyo, Japan
      • Provider: Outscale
    • "gra"
      • Gravelines, France
      • Provider: OVHcloud
    • "rbx"
      • Roubaix, France
      • Provider: OVHcloud
    • "sbg"
      • Strasbourg, France
      • Provider: OVHcloud
    • "eu-west-par"
      • Paris, France (3AZ)
      • Provider: OVHcloud
    • "uk"
      • London, United Kingdom
      • Provider: OVHcloud
    • "waw"
      • Warsaw, Poland
      • Provider: OVHcloud
    • "bhs"
      • Beauharnois, Canada
      • Provider: OVHcloud
    • "ca-east-tor"
      • Toronto, Canada
      • Provider: OVHcloud
    • "sgp"
      • Singapore
      • Provider: OVHcloud
    • "ap-southeast-syd"
      • Sydney, Australia
      • Provider: OVHcloud
    • "ap-south-mum"
      • Mumbai, India
      • Provider: OVHcloud
    • "us-east-va"
      • Vint Hill, Virginia, USA
      • Provider: OVHcloud
    • "us-west-or"
      • Hillsboro, Oregon, USA
      • Provider: OVHcloud
    • "rbx-archive"
      • Roubaix, France (Cold Archive)
      • Provider: OVHcloud
    • "us-east-1"
      • US East (N. Virginia)
      • Provider: Petabox,Rabata
    • "eu-central-1"
      • Europe (Frankfurt)
      • Provider: Petabox
    • "ap-southeast-1"
      • Asia Pacific (Singapore)
      • Provider: Petabox
    • "me-south-1"
      • Middle East (Bahrain)
      • Provider: Petabox
    • "sa-east-1"
      • South America (São Paulo)
      • Provider: Petabox
    • "cn-east-1"
      • The default endpoint - a good choice if you are unsure.
      • East China Region 1.
      • Needs location constraint cn-east-1.
      • Provider: Qiniu
    • "cn-east-2"
      • East China Region 2.
      • Needs location constraint cn-east-2.
      • Provider: Qiniu
    • "cn-north-1"
      • North China Region 1.
      • Needs location constraint cn-north-1.
      • Provider: Qiniu
    • "cn-south-1"
      • South China Region 1.
      • Needs location constraint cn-south-1.
      • Provider: Qiniu
    • "us-north-1"
      • North America Region.
      • Needs location constraint us-north-1.
      • Provider: Qiniu
    • "ap-southeast-1"
      • Southeast Asia Region 1.
      • Needs location constraint ap-southeast-1.
      • Provider: Qiniu
    • "ap-northeast-1"
      • Northeast Asia Region 1.
      • Needs location constraint ap-northeast-1.
      • Provider: Qiniu
    • "eu-west-1"
      • EU (Ireland)
      • Provider: Rabata
    • "eu-west-2"
      • EU (London)
      • Provider: Rabata
    • "global"
      • Global CDN (All locations) Region
      • Provider: RackCorp
    • "au"
      • Australia (All states)
      • Provider: RackCorp
    • "au-nsw"
      • NSW (Australia) Region
      • Provider: RackCorp
    • "au-qld"
      • QLD (Australia) Region
      • Provider: RackCorp
    • "au-vic"
      • VIC (Australia) Region
      • Provider: RackCorp
    • "au-wa"
      • Perth (Australia) Region
      • Provider: RackCorp
    • "ph"
      • Manila (Philippines) Region
      • Provider: RackCorp
    • "th"
      • Bangkok (Thailand) Region
      • Provider: RackCorp
    • "hk"
      • HK (Hong Kong) Region
      • Provider: RackCorp
    • "mn"
      • Ulaanbaatar (Mongolia) Region
      • Provider: RackCorp
    • "kg"
      • Bishkek (Kyrgyzstan) Region
      • Provider: RackCorp
    • "id"
      • Jakarta (Indonesia) Region
      • Provider: RackCorp
    • "jp"
      • Tokyo (Japan) Region
      • Provider: RackCorp
    • "sg"
      • SG (Singapore) Region
      • Provider: RackCorp
    • "de"
      • Frankfurt (Germany) Region
      • Provider: RackCorp
    • "us"
      • USA (AnyCast) Region
      • Provider: RackCorp
    • "us-east-1"
      • New York (USA) Region
      • Provider: RackCorp
    • "us-west-1"
      • Fremont (USA) Region
      • Provider: RackCorp
    • "nz"
      • Auckland (New Zealand) Region
      • Provider: RackCorp
    • "nl-ams"
      • Amsterdam, The Netherlands
      • Provider: Scaleway
    • "fr-par"
      • Paris, France
      • Provider: Scaleway
    • "pl-waw"
      • Warsaw, Poland
      • Provider: Scaleway
    • "ru-1"
      • St. Petersburg
      • Provider: Selectel,Servercore
    • "gis-1"
      • Moscow
      • Provider: Servercore
    • "ru-7"
      • Moscow
      • Provider: Servercore
    • "uz-2"
      • Tashkent, Uzbekistan
      • Provider: Servercore
    • "kz-1"
      • Almaty, Kazakhstan
      • Provider: Servercore
    • "eu-001"
      • Europe Region 1
      • Provider: Synology
    • "eu-002"
      • Europe Region 2
      • Provider: Synology
    • "us-001"
      • US Region 1
      • Provider: Synology
    • "us-002"
      • US Region 2
      • Provider: Synology
    • "tw-001"
      • Asia (Taiwan)
      • Provider: Synology
    • "us-east-1"
      • Indore, Madhya Pradesh, India
      • Provider: Zata

--s3-endpoint

Endpoint for S3 API.

Required when using an S3 clone.

Properties:

  • Config: endpoint
  • Env Var: RCLONE_S3_ENDPOINT
  • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
  • Type: string
  • Required: false
  • Examples:
    • "oss-accelerate.aliyuncs.com"
      • Global Accelerate
      • Provider: Alibaba
    • "oss-accelerate-overseas.aliyuncs.com"
      • Global Accelerate (outside mainland China)
      • Provider: Alibaba
    • "oss-cn-hangzhou.aliyuncs.com"
      • East China 1 (Hangzhou)
      • Provider: Alibaba
    • "oss-cn-shanghai.aliyuncs.com"
      • East China 2 (Shanghai)
      • Provider: Alibaba
    • "oss-cn-qingdao.aliyuncs.com"
      • North China 1 (Qingdao)
      • Provider: Alibaba
    • "oss-cn-beijing.aliyuncs.com"
      • North China 2 (Beijing)
      • Provider: Alibaba
    • "oss-cn-zhangjiakou.aliyuncs.com"
      • North China 3 (Zhangjiakou)
      • Provider: Alibaba
    • "oss-cn-huhehaote.aliyuncs.com"
      • North China 5 (Hohhot)
      • Provider: Alibaba
    • "oss-cn-wulanchabu.aliyuncs.com"
      • North China 6 (Ulanqab)
      • Provider: Alibaba
    • "oss-cn-shenzhen.aliyuncs.com"
      • South China 1 (Shenzhen)
      • Provider: Alibaba
    • "oss-cn-heyuan.aliyuncs.com"
      • South China 2 (Heyuan)
      • Provider: Alibaba
    • "oss-cn-guangzhou.aliyuncs.com"
      • South China 3 (Guangzhou)
      • Provider: Alibaba
    • "oss-cn-chengdu.aliyuncs.com"
      • West China 1 (Chengdu)
      • Provider: Alibaba
    • "oss-cn-hongkong.aliyuncs.com"
      • Hong Kong (Hong Kong)
      • Provider: Alibaba
    • "oss-us-west-1.aliyuncs.com"
      • US West 1 (Silicon Valley)
      • Provider: Alibaba
    • "oss-us-east-1.aliyuncs.com"
      • US East 1 (Virginia)
      • Provider: Alibaba
    • "oss-ap-southeast-1.aliyuncs.com"
      • Southeast Asia Southeast 1 (Singapore)
      • Provider: Alibaba
    • "oss-ap-southeast-2.aliyuncs.com"
      • Asia Pacific Southeast 2 (Sydney)
      • Provider: Alibaba
    • "oss-ap-southeast-3.aliyuncs.com"
      • Southeast Asia Southeast 3 (Kuala Lumpur)
      • Provider: Alibaba
    • "oss-ap-southeast-5.aliyuncs.com"
      • Asia Pacific Southeast 5 (Jakarta)
      • Provider: Alibaba
    • "oss-ap-northeast-1.aliyuncs.com"
      • Asia Pacific Northeast 1 (Japan)
      • Provider: Alibaba
    • "oss-ap-south-1.aliyuncs.com"
      • Asia Pacific South 1 (Mumbai)
      • Provider: Alibaba
    • "oss-eu-central-1.aliyuncs.com"
      • Central Europe 1 (Frankfurt)
      • Provider: Alibaba
    • "oss-eu-west-1.aliyuncs.com"
      • West Europe (London)
      • Provider: Alibaba
    • "oss-me-east-1.aliyuncs.com"
      • Middle East 1 (Dubai)
      • Provider: Alibaba
    • "s3.ir-thr-at1.arvanstorage.ir"
      • The default endpoint - a good choice if you are unsure.
      • Tehran Iran (Simin)
      • Provider: ArvanCloud
    • "s3.ir-tbz-sh1.arvanstorage.ir"
      • Tabriz Iran (Shahriar)
      • Provider: ArvanCloud
    • "eos-wuxi-1.cmecloud.cn"
      • The default endpoint - a good choice if you are unsure.
      • East China (Suzhou)
      • Provider: ChinaMobile
    • "eos-jinan-1.cmecloud.cn"
      • East China (Jinan)
      • Provider: ChinaMobile
    • "eos-ningbo-1.cmecloud.cn"
      • East China (Hangzhou)
      • Provider: ChinaMobile
    • "eos-shanghai-1.cmecloud.cn"
      • East China (Shanghai-1)
      • Provider: ChinaMobile
    • "eos-zhengzhou-1.cmecloud.cn"
      • Central China (Zhengzhou)
      • Provider: ChinaMobile
    • "eos-hunan-1.cmecloud.cn"
      • Central China (Changsha-1)
      • Provider: ChinaMobile
    • "eos-zhuzhou-1.cmecloud.cn"
      • Central China (Changsha-2)
      • Provider: ChinaMobile
    • "eos-guangzhou-1.cmecloud.cn"
      • South China (Guangzhou-2)
      • Provider: ChinaMobile
    • "eos-dongguan-1.cmecloud.cn"
      • South China (Guangzhou-3)
      • Provider: ChinaMobile
    • "eos-beijing-1.cmecloud.cn"
      • North China (Beijing-1)
      • Provider: ChinaMobile
    • "eos-beijing-2.cmecloud.cn"
      • North China (Beijing-2)
      • Provider: ChinaMobile
    • "eos-beijing-4.cmecloud.cn"
      • North China (Beijing-3)
      • Provider: ChinaMobile
    • "eos-huhehaote-1.cmecloud.cn"
      • North China (Huhehaote)
      • Provider: ChinaMobile
    • "eos-chengdu-1.cmecloud.cn"
      • Southwest China (Chengdu)
      • Provider: ChinaMobile
    • "eos-chongqing-1.cmecloud.cn"
      • Southwest China (Chongqing)
      • Provider: ChinaMobile
    • "eos-guiyang-1.cmecloud.cn"
      • Southwest China (Guiyang)
      • Provider: ChinaMobile
    • "eos-xian-1.cmecloud.cn"
      • Northwest China (Xian)
      • Provider: ChinaMobile
    • "eos-yunnan.cmecloud.cn"
      • Yunnan China (Kunming)
      • Provider: ChinaMobile
    • "eos-yunnan-2.cmecloud.cn"
      • Yunnan China (Kunming-2)
      • Provider: ChinaMobile
    • "eos-tianjin-1.cmecloud.cn"
      • Tianjin China (Tianjin)
      • Provider: ChinaMobile
    • "eos-jilin-1.cmecloud.cn"
      • Jilin China (Changchun)
      • Provider: ChinaMobile
    • "eos-hubei-1.cmecloud.cn"
      • Hubei China (Xiangyan)
      • Provider: ChinaMobile
    • "eos-jiangxi-1.cmecloud.cn"
      • Jiangxi China (Nanchang)
      • Provider: ChinaMobile
    • "eos-gansu-1.cmecloud.cn"
      • Gansu China (Lanzhou)
      • Provider: ChinaMobile
    • "eos-shanxi-1.cmecloud.cn"
      • Shanxi China (Taiyuan)
      • Provider: ChinaMobile
    • "eos-liaoning-1.cmecloud.cn"
      • Liaoning China (Shenyang)
      • Provider: ChinaMobile
    • "eos-hebei-1.cmecloud.cn"
      • Hebei China (Shijiazhuang)
      • Provider: ChinaMobile
    • "eos-fujian-1.cmecloud.cn"
      • Fujian China (Xiamen)
      • Provider: ChinaMobile
    • "eos-guangxi-1.cmecloud.cn"
      • Guangxi China (Nanning)
      • Provider: ChinaMobile
    • "eos-anhui-1.cmecloud.cn"
      • Anhui China (Huainan)
      • Provider: ChinaMobile
    • "s3.cubbit.eu"
      • Cubbit DS3 Object Storage endpoint
      • Provider: Cubbit
    • "syd1.digitaloceanspaces.com"
      • DigitalOcean Spaces Sydney 1
      • Provider: DigitalOcean
    • "sfo3.digitaloceanspaces.com"
      • DigitalOcean Spaces San Francisco 3
      • Provider: DigitalOcean
    • "sfo2.digitaloceanspaces.com"
      • DigitalOcean Spaces San Francisco 2
      • Provider: DigitalOcean
    • "fra1.digitaloceanspaces.com"
      • DigitalOcean Spaces Frankfurt 1
      • Provider: DigitalOcean
    • "nyc3.digitaloceanspaces.com"
      • DigitalOcean Spaces New York 3
      • Provider: DigitalOcean
    • "ams3.digitaloceanspaces.com"
      • DigitalOcean Spaces Amsterdam 3
      • Provider: DigitalOcean
    • "sgp1.digitaloceanspaces.com"
      • DigitalOcean Spaces Singapore 1
      • Provider: DigitalOcean
    • "lon1.digitaloceanspaces.com"
      • DigitalOcean Spaces London 1
      • Provider: DigitalOcean
    • "tor1.digitaloceanspaces.com"
      • DigitalOcean Spaces Toronto 1
      • Provider: DigitalOcean
    • "blr1.digitaloceanspaces.com"
      • DigitalOcean Spaces Bangalore 1
      • Provider: DigitalOcean
    • "objects-us-east-1.dream.io"
      • Dream Objects endpoint
      • Provider: Dreamhost
    • "s5lu.com"
      • Global FileLu S5 endpoint
      • Provider: FileLu
    • "us.s5lu.com"
      • North America (US-East) region endpoint
      • Provider: FileLu
    • "eu.s5lu.com"
      • Europe (EU-Central) region endpoint
      • Provider: FileLu
    • "ap.s5lu.com"
      • Asia Pacific (AP-Southeast) region endpoint
      • Provider: FileLu
    • "me.s5lu.com"
      • Middle East (ME-Central) region endpoint
      • Provider: FileLu
    • "https://storage.googleapis.com"
      • Google Cloud Storage endpoint
      • Provider: GCS
    • "hel1.your-objectstorage.com"
      • Helsinki
      • Provider: Hetzner
    • "fsn1.your-objectstorage.com"
      • Falkenstein
      • Provider: Hetzner
    • "nbg1.your-objectstorage.com"
      • Nuremberg
      • Provider: Hetzner
    • "obs.af-south-1.myhuaweicloud.com"
      • AF-Johannesburg
      • Provider: HuaweiOBS
    • "obs.ap-southeast-2.myhuaweicloud.com"
      • AP-Bangkok
      • Provider: HuaweiOBS
    • "obs.ap-southeast-3.myhuaweicloud.com"
      • AP-Singapore
      • Provider: HuaweiOBS
    • "obs.cn-east-3.myhuaweicloud.com"
      • CN East-Shanghai1
      • Provider: HuaweiOBS
    • "obs.cn-east-2.myhuaweicloud.com"
      • CN East-Shanghai2
      • Provider: HuaweiOBS
    • "obs.cn-north-1.myhuaweicloud.com"
      • CN North-Beijing1
      • Provider: HuaweiOBS
    • "obs.cn-north-4.myhuaweicloud.com"
      • CN North-Beijing4
      • Provider: HuaweiOBS
    • "obs.cn-south-1.myhuaweicloud.com"
      • CN South-Guangzhou
      • Provider: HuaweiOBS
    • "obs.ap-southeast-1.myhuaweicloud.com"
      • CN-Hong Kong
      • Provider: HuaweiOBS
    • "obs.sa-argentina-1.myhuaweicloud.com"
      • LA-Buenos Aires1
      • Provider: HuaweiOBS
    • "obs.sa-peru-1.myhuaweicloud.com"
      • LA-Lima1
      • Provider: HuaweiOBS
    • "obs.na-mexico-1.myhuaweicloud.com"
      • LA-Mexico City1
      • Provider: HuaweiOBS
    • "obs.sa-chile-1.myhuaweicloud.com"
      • LA-Santiago2
      • Provider: HuaweiOBS
    • "obs.sa-brazil-1.myhuaweicloud.com"
      • LA-Sao Paulo1
      • Provider: HuaweiOBS
    • "obs.ru-northwest-2.myhuaweicloud.com"
      • RU-Moscow2
      • Provider: HuaweiOBS
    • "s3.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Endpoint
      • Provider: IBMCOS
    • "s3.dal.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Dallas Endpoint
      • Provider: IBMCOS
    • "s3.wdc.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Washington DC Endpoint
      • Provider: IBMCOS
    • "s3.sjc.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region San Jose Endpoint
      • Provider: IBMCOS
    • "s3.private.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Private Endpoint
      • Provider: IBMCOS
    • "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Dallas Private Endpoint
      • Provider: IBMCOS
    • "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region Washington DC Private Endpoint
      • Provider: IBMCOS
    • "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
      • US Cross Region San Jose Private Endpoint
      • Provider: IBMCOS
    • "s3.us-east.cloud-object-storage.appdomain.cloud"
      • US Region East Endpoint
      • Provider: IBMCOS
    • "s3.private.us-east.cloud-object-storage.appdomain.cloud"
      • US Region East Private Endpoint
      • Provider: IBMCOS
    • "s3.us-south.cloud-object-storage.appdomain.cloud"
      • US Region South Endpoint
      • Provider: IBMCOS
    • "s3.private.us-south.cloud-object-storage.appdomain.cloud"
      • US Region South Private Endpoint
      • Provider: IBMCOS
    • "s3.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Endpoint
      • Provider: IBMCOS
    • "s3.fra.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Frankfurt Endpoint
      • Provider: IBMCOS
    • "s3.mil.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Milan Endpoint
      • Provider: IBMCOS
    • "s3.ams.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Amsterdam Endpoint
      • Provider: IBMCOS
    • "s3.private.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Private Endpoint
      • Provider: IBMCOS
    • "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Frankfurt Private Endpoint
      • Provider: IBMCOS
    • "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Milan Private Endpoint
      • Provider: IBMCOS
    • "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
      • EU Cross Region Amsterdam Private Endpoint
      • Provider: IBMCOS
    • "s3.eu-gb.cloud-object-storage.appdomain.cloud"
      • Great Britain Endpoint
      • Provider: IBMCOS
    • "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
      • Great Britain Private Endpoint
      • Provider: IBMCOS
    • "s3.eu-de.cloud-object-storage.appdomain.cloud"
      • EU Region DE Endpoint
      • Provider: IBMCOS
    • "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
      • EU Region DE Private Endpoint
      • Provider: IBMCOS
    • "s3.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Endpoint
      • Provider: IBMCOS
    • "s3.tok.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Tokyo Endpoint
      • Provider: IBMCOS
    • "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Hong Kong Endpoint
      • Provider: IBMCOS
    • "s3.seo.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Seoul Endpoint
      • Provider: IBMCOS
    • "s3.private.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Private Endpoint
      • Provider: IBMCOS
    • "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Tokyo Private Endpoint
      • Provider: IBMCOS
    • "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Hong Kong Private Endpoint
      • Provider: IBMCOS
    • "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
      • APAC Cross Regional Seoul Private Endpoint
      • Provider: IBMCOS
    • "s3.jp-tok.cloud-object-storage.appdomain.cloud"
      • APAC Region Japan Endpoint
      • Provider: IBMCOS
    • "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
      • APAC Region Japan Private Endpoint
      • Provider: IBMCOS
    • "s3.au-syd.cloud-object-storage.appdomain.cloud"
      • APAC Region Australia Endpoint
      • Provider: IBMCOS
    • "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
      • APAC Region Australia Private Endpoint
      • Provider: IBMCOS
    • "s3.ams03.cloud-object-storage.appdomain.cloud"
      • Amsterdam Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.ams03.cloud-object-storage.appdomain.cloud"
      • Amsterdam Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.che01.cloud-object-storage.appdomain.cloud"
      • Chennai Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.che01.cloud-object-storage.appdomain.cloud"
      • Chennai Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.mel01.cloud-object-storage.appdomain.cloud"
      • Melbourne Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.mel01.cloud-object-storage.appdomain.cloud"
      • Melbourne Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.osl01.cloud-object-storage.appdomain.cloud"
      • Oslo Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.osl01.cloud-object-storage.appdomain.cloud"
      • Oslo Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.tor01.cloud-object-storage.appdomain.cloud"
      • Toronto Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.tor01.cloud-object-storage.appdomain.cloud"
      • Toronto Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.seo01.cloud-object-storage.appdomain.cloud"
      • Seoul Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.seo01.cloud-object-storage.appdomain.cloud"
      • Seoul Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.mon01.cloud-object-storage.appdomain.cloud"
      • Montreal Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.mon01.cloud-object-storage.appdomain.cloud"
      • Montreal Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.mex01.cloud-object-storage.appdomain.cloud"
      • Mexico Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.mex01.cloud-object-storage.appdomain.cloud"
      • Mexico Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.sjc04.cloud-object-storage.appdomain.cloud"
      • San Jose Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
      • San Jose Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.mil01.cloud-object-storage.appdomain.cloud"
      • Milan Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.mil01.cloud-object-storage.appdomain.cloud"
      • Milan Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.hkg02.cloud-object-storage.appdomain.cloud"
      • Hong Kong Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
      • Hong Kong Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.par01.cloud-object-storage.appdomain.cloud"
      • Paris Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.par01.cloud-object-storage.appdomain.cloud"
      • Paris Single Site Private Endpoint
      • Provider: IBMCOS
    • "s3.sng01.cloud-object-storage.appdomain.cloud"
      • Singapore Single Site Endpoint
      • Provider: IBMCOS
    • "s3.private.sng01.cloud-object-storage.appdomain.cloud"
      • Singapore Single Site Private Endpoint
      • Provider: IBMCOS
    • "de-fra.i3storage.com"
      • Frankfurt, Germany
      • Provider: Intercolo
    • "s3-eu-central-1.ionoscloud.com"
      • Frankfurt, Germany
      • Provider: IONOS
    • "s3-eu-central-2.ionoscloud.com"
      • Berlin, Germany
      • Provider: IONOS
    • "s3-eu-south-2.ionoscloud.com"
      • Logrono, Spain
      • Provider: IONOS
    • "s3.leviia.com"
      • The default endpoint
      • Leviia
      • Provider: Leviia
    • "storage.iran.liara.space"
      • The default endpoint
      • Iran
      • Provider: Liara
    • "nl-ams-1.linodeobjects.com"
      • Amsterdam (Netherlands), nl-ams-1
      • Provider: Linode
    • "us-southeast-1.linodeobjects.com"
      • Atlanta, GA (USA), us-southeast-1
      • Provider: Linode
    • "in-maa-1.linodeobjects.com"
      • Chennai (India), in-maa-1
      • Provider: Linode
    • "us-ord-1.linodeobjects.com"
      • Chicago, IL (USA), us-ord-1
      • Provider: Linode
    • "eu-central-1.linodeobjects.com"
      • Frankfurt (Germany), eu-central-1
      • Provider: Linode
    • "id-cgk-1.linodeobjects.com"
      • Jakarta (Indonesia), id-cgk-1
      • Provider: Linode
    • "gb-lon-1.linodeobjects.com"
      • London 2 (Great Britain), gb-lon-1
      • Provider: Linode
    • "us-lax-1.linodeobjects.com"
      • Los Angeles, CA (USA), us-lax-1
      • Provider: Linode
    • "es-mad-1.linodeobjects.com"
      • Madrid (Spain), es-mad-1
      • Provider: Linode
    • "au-mel-1.linodeobjects.com"
      • Melbourne (Australia), au-mel-1
      • Provider: Linode
    • "us-mia-1.linodeobjects.com"
      • Miami, FL (USA), us-mia-1
      • Provider: Linode
    • "it-mil-1.linodeobjects.com"
      • Milan (Italy), it-mil-1
      • Provider: Linode
    • "us-east-1.linodeobjects.com"
      • Newark, NJ (USA), us-east-1
      • Provider: Linode
    • "jp-osa-1.linodeobjects.com"
      • Osaka (Japan), jp-osa-1
      • Provider: Linode
    • "fr-par-1.linodeobjects.com"
      • Paris (France), fr-par-1
      • Provider: Linode
    • "br-gru-1.linodeobjects.com"
      • São Paulo (Brazil), br-gru-1
      • Provider: Linode
    • "us-sea-1.linodeobjects.com"
      • Seattle, WA (USA), us-sea-1
      • Provider: Linode
    • "ap-south-1.linodeobjects.com"
      • Singapore, ap-south-1
      • Provider: Linode
    • "sg-sin-1.linodeobjects.com"
      • Singapore 2, sg-sin-1
      • Provider: Linode
    • "se-sto-1.linodeobjects.com"
      • Stockholm (Sweden), se-sto-1
      • Provider: Linode
    • "us-iad-1.linodeobjects.com"
      • Washington, DC, (USA), us-iad-1
      • Provider: Linode
    • "s3.us-west-1.{account_name}.lyve.seagate.com"
      • US West 1 - California
      • Provider: LyveCloud
    • "s3.eu-west-1.{account_name}.lyve.seagate.com"
      • EU West 1 - Ireland
      • Provider: LyveCloud
    • "br-se1.magaluobjects.com"
      • São Paulo, SP (BR), br-se1
      • Provider: Magalu
    • "br-ne1.magaluobjects.com"
      • Fortaleza, CE (BR), br-ne1
      • Provider: Magalu
    • "s3.eu-central-1.s4.mega.io"
      • Mega S4 eu-central-1 (Amsterdam)
      • Provider: Mega
    • "s3.eu-central-2.s4.mega.io"
      • Mega S4 eu-central-2 (Bettembourg)
      • Provider: Mega
    • "s3.ca-central-1.s4.mega.io"
      • Mega S4 ca-central-1 (Montreal)
      • Provider: Mega
    • "s3.ca-west-1.s4.mega.io"
      • Mega S4 ca-west-1 (Vancouver)
      • Provider: Mega
    • "oos.eu-west-2.outscale.com"
      • Outscale EU West 2 (Paris)
      • Provider: Outscale
    • "oos.us-east-2.outscale.com"
      • Outscale US East 2 (New Jersey)
      • Provider: Outscale
    • "oos.us-west-1.outscale.com"
      • Outscale US West 1 (California)
      • Provider: Outscale
    • "oos.cloudgouv-eu-west-1.outscale.com"
      • Outscale SecNumCloud (Paris)
      • Provider: Outscale
    • "oos.ap-northeast-1.outscale.com"
      • Outscale AP Northeast 1 (Japan)
      • Provider: Outscale
    • "s3.gra.io.cloud.ovh.net"
      • OVHcloud Gravelines, France
      • Provider: OVHcloud
    • "s3.rbx.io.cloud.ovh.net"
      • OVHcloud Roubaix, France
      • Provider: OVHcloud
    • "s3.sbg.io.cloud.ovh.net"
      • OVHcloud Strasbourg, France
      • Provider: OVHcloud
    • "s3.eu-west-par.io.cloud.ovh.net"
      • OVHcloud Paris, France (3AZ)
      • Provider: OVHcloud
    • "s3.de.io.cloud.ovh.net"
      • OVHcloud Frankfurt, Germany
      • Provider: OVHcloud
    • "s3.uk.io.cloud.ovh.net"
      • OVHcloud London, United Kingdom
      • Provider: OVHcloud
    • "s3.waw.io.cloud.ovh.net"
      • OVHcloud Warsaw, Poland
      • Provider: OVHcloud
    • "s3.bhs.io.cloud.ovh.net"
      • OVHcloud Beauharnois, Canada
      • Provider: OVHcloud
    • "s3.ca-east-tor.io.cloud.ovh.net"
      • OVHcloud Toronto, Canada
      • Provider: OVHcloud
    • "s3.sgp.io.cloud.ovh.net"
      • OVHcloud Singapore
      • Provider: OVHcloud
    • "s3.ap-southeast-syd.io.cloud.ovh.net"
      • OVHcloud Sydney, Australia
      • Provider: OVHcloud
    • "s3.ap-south-mum.io.cloud.ovh.net"
      • OVHcloud Mumbai, India
      • Provider: OVHcloud
    • "s3.us-east-va.io.cloud.ovh.us"
      • OVHcloud Vint Hill, Virginia, USA
      • Provider: OVHcloud
    • "s3.us-west-or.io.cloud.ovh.us"
      • OVHcloud Hillsboro, Oregon, USA
      • Provider: OVHcloud
    • "s3.rbx-archive.io.cloud.ovh.net"
      • OVHcloud Roubaix, France (Cold Archive)
      • Provider: OVHcloud
    • "s3.petabox.io"
      • US East (N. Virginia)
      • Provider: Petabox
    • "s3.us-east-1.petabox.io"
      • US East (N. Virginia)
      • Provider: Petabox
    • "s3.eu-central-1.petabox.io"
      • Europe (Frankfurt)
      • Provider: Petabox
    • "s3.ap-southeast-1.petabox.io"
      • Asia Pacific (Singapore)
      • Provider: Petabox
    • "s3.me-south-1.petabox.io"
      • Middle East (Bahrain)
      • Provider: Petabox
    • "s3.sa-east-1.petabox.io"
      • South America (São Paulo)
      • Provider: Petabox
    • "s3-cn-east-1.qiniucs.com"
      • East China Endpoint 1
      • Provider: Qiniu
    • "s3-cn-east-2.qiniucs.com"
      • East China Endpoint 2
      • Provider: Qiniu
    • "s3-cn-north-1.qiniucs.com"
      • North China Endpoint 1
      • Provider: Qiniu
    • "s3-cn-south-1.qiniucs.com"
      • South China Endpoint 1
      • Provider: Qiniu
    • "s3-us-north-1.qiniucs.com"
      • North America Endpoint 1
      • Provider: Qiniu
    • "s3-ap-southeast-1.qiniucs.com"
      • Southeast Asia Endpoint 1
      • Provider: Qiniu
    • "s3-ap-northeast-1.qiniucs.com"
      • Northeast Asia Endpoint 1
      • Provider: Qiniu
    • "s3.us-east-1.rabata.io"
      • US East (N. Virginia)
      • Provider: Rabata
    • "s3.eu-west-1.rabata.io"
      • EU West (Ireland)
      • Provider: Rabata
    • "s3.eu-west-2.rabata.io"
      • EU West (London)
      • Provider: Rabata
    • "s3.rackcorp.com"
      • Global (AnyCast) Endpoint
      • Provider: RackCorp
    • "au.s3.rackcorp.com"
      • Australia (Anycast) Endpoint
      • Provider: RackCorp
    • "au-nsw.s3.rackcorp.com"
      • Sydney (Australia) Endpoint
      • Provider: RackCorp
    • "au-qld.s3.rackcorp.com"
      • Brisbane (Australia) Endpoint
      • Provider: RackCorp
    • "au-vic.s3.rackcorp.com"
      • Melbourne (Australia) Endpoint
      • Provider: RackCorp
    • "au-wa.s3.rackcorp.com"
      • Perth (Australia) Endpoint
      • Provider: RackCorp
    • "ph.s3.rackcorp.com"
      • Manila (Philippines) Endpoint
      • Provider: RackCorp
    • "th.s3.rackcorp.com"
      • Bangkok (Thailand) Endpoint
      • Provider: RackCorp
    • "hk.s3.rackcorp.com"
      • HK (Hong Kong) Endpoint
      • Provider: RackCorp
    • "mn.s3.rackcorp.com"
      • Ulaanbaatar (Mongolia) Endpoint
      • Provider: RackCorp
    • "kg.s3.rackcorp.com"
      • Bishkek (Kyrgyzstan) Endpoint
      • Provider: RackCorp
    • "id.s3.rackcorp.com"
      • Jakarta (Indonesia) Endpoint
      • Provider: RackCorp
    • "jp.s3.rackcorp.com"
      • Tokyo (Japan) Endpoint
      • Provider: RackCorp
    • "sg.s3.rackcorp.com"
      • SG (Singapore) Endpoint
      • Provider: RackCorp
    • "de.s3.rackcorp.com"
      • Frankfurt (Germany) Endpoint
      • Provider: RackCorp
    • "us.s3.rackcorp.com"
      • USA (AnyCast) Endpoint
      • Provider: RackCorp
    • "us-east-1.s3.rackcorp.com"
      • New York (USA) Endpoint
      • Provider: RackCorp
    • "us-west-1.s3.rackcorp.com"
      • Fremont (USA) Endpoint
      • Provider: RackCorp
    • "nz.s3.rackcorp.com"
      • Auckland (New Zealand) Endpoint
      • Provider: RackCorp
    • "s3.nl-ams.scw.cloud"
      • Amsterdam Endpoint
      • Provider: Scaleway
    • "s3.fr-par.scw.cloud"
      • Paris Endpoint
      • Provider: Scaleway
    • "s3.pl-waw.scw.cloud"
      • Warsaw Endpoint
      • Provider: Scaleway
    • "localhost:8333"
      • SeaweedFS S3 localhost
      • Provider: SeaweedFS
    • "s3.ru-1.storage.selcloud.ru"
      • Saint Petersburg
      • Provider: Selectel,Servercore
    • "s3.gis-1.storage.selcloud.ru"
      • Moscow
      • Provider: Servercore
    • "s3.ru-7.storage.selcloud.ru"
      • Moscow
      • Provider: Servercore
    • "s3.uz-2.srvstorage.uz"
      • Tashkent, Uzbekistan
      • Provider: Servercore
    • "s3.kz-1.srvstorage.kz"
      • Almaty, Kazakhstan
      • Provider: Servercore
    • "s3.us-east-2.stackpathstorage.com"
      • US East Endpoint
      • Provider: StackPath
    • "s3.us-west-1.stackpathstorage.com"
      • US West Endpoint
      • Provider: StackPath
    • "s3.eu-central-1.stackpathstorage.com"
      • EU Endpoint
      • Provider: StackPath
    • "gateway.storjshare.io"
      • Global Hosted Gateway
      • Provider: Storj
    • "eu-001.s3.synologyc2.net"
      • EU Endpoint 1
      • Provider: Synology
    • "eu-002.s3.synologyc2.net"
      • EU Endpoint 2
      • Provider: Synology
    • "us-001.s3.synologyc2.net"
      • US Endpoint 1
      • Provider: Synology
    • "us-002.s3.synologyc2.net"
      • US Endpoint 2
      • Provider: Synology
    • "tw-001.s3.synologyc2.net"
      • TW Endpoint 1
      • Provider: Synology
    • "cos.ap-beijing.myqcloud.com"
      • Beijing Region
      • Provider: TencentCOS
    • "cos.ap-nanjing.myqcloud.com"
      • Nanjing Region
      • Provider: TencentCOS
    • "cos.ap-shanghai.myqcloud.com"
      • Shanghai Region
      • Provider: TencentCOS
    • "cos.ap-guangzhou.myqcloud.com"
      • Guangzhou Region
      • Provider: TencentCOS
    • "cos.ap-chengdu.myqcloud.com"
      • Chengdu Region
      • Provider: TencentCOS
    • "cos.ap-chongqing.myqcloud.com"
      • Chongqing Region
      • Provider: TencentCOS
    • "cos.ap-hongkong.myqcloud.com"
      • Hong Kong (China) Region
      • Provider: TencentCOS
    • "cos.ap-singapore.myqcloud.com"
      • Singapore Region
      • Provider: TencentCOS
    • "cos.ap-mumbai.myqcloud.com"
      • Mumbai Region
      • Provider: TencentCOS
    • "cos.ap-seoul.myqcloud.com"
      • Seoul Region
      • Provider: TencentCOS
    • "cos.ap-bangkok.myqcloud.com"
      • Bangkok Region
      • Provider: TencentCOS
    • "cos.ap-tokyo.myqcloud.com"
      • Tokyo Region
      • Provider: TencentCOS
    • "cos.na-siliconvalley.myqcloud.com"
      • Silicon Valley Region
      • Provider: TencentCOS
    • "cos.na-ashburn.myqcloud.com"
      • Virginia Region
      • Provider: TencentCOS
    • "cos.na-toronto.myqcloud.com"
      • Toronto Region
      • Provider: TencentCOS
    • "cos.eu-frankfurt.myqcloud.com"
      • Frankfurt Region
      • Provider: TencentCOS
    • "cos.eu-moscow.myqcloud.com"
      • Moscow Region
      • Provider: TencentCOS
    • "cos.accelerate.myqcloud.com"
      • Use Tencent COS Accelerate Endpoint
      • Provider: TencentCOS
    • "s3.wasabisys.com"
      • Wasabi US East 1 (N. Virginia)
      • Provider: Wasabi
    • "s3.us-east-2.wasabisys.com"
      • Wasabi US East 2 (N. Virginia)
      • Provider: Wasabi
    • "s3.us-central-1.wasabisys.com"
      • Wasabi US Central 1 (Texas)
      • Provider: Wasabi
    • "s3.us-west-1.wasabisys.com"
      • Wasabi US West 1 (Oregon)
      • Provider: Wasabi
    • "s3.ca-central-1.wasabisys.com"
      • Wasabi CA Central 1 (Toronto)
      • Provider: Wasabi
    • "s3.eu-central-1.wasabisys.com"
      • Wasabi EU Central 1 (Amsterdam)
      • Provider: Wasabi
    • "s3.eu-central-2.wasabisys.com"
      • Wasabi EU Central 2 (Frankfurt)
      • Provider: Wasabi
    • "s3.eu-west-1.wasabisys.com"
      • Wasabi EU West 1 (London)
      • Provider: Wasabi
    • "s3.eu-west-2.wasabisys.com"
      • Wasabi EU West 2 (Paris)
      • Provider: Wasabi
    • "s3.eu-south-1.wasabisys.com"
      • Wasabi EU South 1 (Milan)
      • Provider: Wasabi
    • "s3.ap-northeast-1.wasabisys.com"
      • Wasabi AP Northeast 1 (Tokyo) endpoint
      • Provider: Wasabi
    • "s3.ap-northeast-2.wasabisys.com"
      • Wasabi AP Northeast 2 (Osaka) endpoint
      • Provider: Wasabi
    • "s3.ap-southeast-1.wasabisys.com"
      • Wasabi AP Southeast 1 (Singapore)
      • Provider: Wasabi
    • "s3.ap-southeast-2.wasabisys.com"
      • Wasabi AP Southeast 2 (Sydney)
      • Provider: Wasabi
    • "idr01.zata.ai"
      • South Asia Endpoint
      • Provider: Zata

--s3-location-constraint

Location constraint - must be set to match the Region.

Leave blank if not sure. Used when creating buckets only.

Properties:

  • Config: location_constraint
  • Env Var: RCLONE_S3_LOCATION_CONSTRAINT
  • Provider: AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other
  • Type: string
  • Required: false
  • Examples:
    • ""
      • Empty for US Region, Northern Virginia, or Pacific Northwest
      • Provider: AWS
    • "us-east-2"
      • US East (Ohio) Region
      • Provider: AWS
    • "us-west-1"
      • US West (Northern California) Region
      • Provider: AWS
    • "us-west-2"
      • US West (Oregon) Region
      • Provider: AWS
    • "ca-central-1"
      • Canada (Central) Region
      • Provider: AWS
    • "eu-west-1"
      • EU (Ireland) Region
      • Provider: AWS
    • "eu-west-2"
      • EU (London) Region
      • Provider: AWS
    • "eu-west-3"
      • EU (Paris) Region
      • Provider: AWS
    • "eu-north-1"
      • EU (Stockholm) Region
      • Provider: AWS
    • "eu-south-1"
      • EU (Milan) Region
      • Provider: AWS
    • "EU"
      • EU Region
      • Provider: AWS
    • "ap-southeast-1"
      • Asia Pacific (Singapore) Region
      • Provider: AWS
    • "ap-southeast-2"
      • Asia Pacific (Sydney) Region
      • Provider: AWS
    • "ap-northeast-1"
      • Asia Pacific (Tokyo) Region
      • Provider: AWS
    • "ap-northeast-2"
      • Asia Pacific (Seoul) Region
      • Provider: AWS
    • "ap-northeast-3"
      • Asia Pacific (Osaka-Local) Region
      • Provider: AWS
    • "ap-south-1"
      • Asia Pacific (Mumbai) Region
      • Provider: AWS
    • "ap-east-1"
      • Asia Pacific (Hong Kong) Region
      • Provider: AWS
    • "sa-east-1"
      • South America (Sao Paulo) Region
      • Provider: AWS
    • "il-central-1"
      • Israel (Tel Aviv) Region
      • Provider: AWS
    • "me-south-1"
      • Middle East (Bahrain) Region
      • Provider: AWS
    • "af-south-1"
      • Africa (Cape Town) Region
      • Provider: AWS
    • "cn-north-1"
      • China (Beijing) Region
      • Provider: AWS
    • "cn-northwest-1"
      • China (Ningxia) Region
      • Provider: AWS
    • "us-gov-east-1"
      • AWS GovCloud (US-East) Region
      • Provider: AWS
    • "us-gov-west-1"
      • AWS GovCloud (US) Region
      • Provider: AWS
    • "ir-thr-at1"
      • Tehran Iran (Simin)
      • Provider: ArvanCloud
    • "ir-tbz-sh1"
      • Tabriz Iran (Shahriar)
      • Provider: ArvanCloud
    • "wuxi1"
      • East China (Suzhou)
      • Provider: ChinaMobile
    • "jinan1"
      • East China (Jinan)
      • Provider: ChinaMobile
    • "ningbo1"
      • East China (Hangzhou)
      • Provider: ChinaMobile
    • "shanghai1"
      • East China (Shanghai-1)
      • Provider: ChinaMobile
    • "zhengzhou1"
      • Central China (Zhengzhou)
      • Provider: ChinaMobile
    • "hunan1"
      • Central China (Changsha-1)
      • Provider: ChinaMobile
    • "zhuzhou1"
      • Central China (Changsha-2)
      • Provider: ChinaMobile
    • "guangzhou1"
      • South China (Guangzhou-2)
      • Provider: ChinaMobile
    • "dongguan1"
      • South China (Guangzhou-3)
      • Provider: ChinaMobile
    • "beijing1"
      • North China (Beijing-1)
      • Provider: ChinaMobile
    • "beijing2"
      • North China (Beijing-2)
      • Provider: ChinaMobile
    • "beijing4"
      • North China (Beijing-3)
      • Provider: ChinaMobile
    • "huhehaote1"
      • North China (Huhehaote)
      • Provider: ChinaMobile
    • "chengdu1"
      • Southwest China (Chengdu)
      • Provider: ChinaMobile
    • "chongqing1"
      • Southwest China (Chongqing)
      • Provider: ChinaMobile
    • "guiyang1"
      • Southwest China (Guiyang)
      • Provider: ChinaMobile
    • "xian1"
      • Northwest China (Xian)
      • Provider: ChinaMobile
    • "yunnan"
      • Yunnan China (Kunming)
      • Provider: ChinaMobile
    • "yunnan2"
      • Yunnan China (Kunming-2)
      • Provider: ChinaMobile
    • "tianjin1"
      • Tianjin China (Tianjin)
      • Provider: ChinaMobile
    • "jilin1"
      • Jilin China (Changchun)
      • Provider: ChinaMobile
    • "hubei1"
      • Hubei China (Xiangyan)
      • Provider: ChinaMobile
    • "jiangxi1"
      • Jiangxi China (Nanchang)
      • Provider: ChinaMobile
    • "gansu1"
      • Gansu China (Lanzhou)
      • Provider: ChinaMobile
    • "shanxi1"
      • Shanxi China (Taiyuan)
      • Provider: ChinaMobile
    • "liaoning1"
      • Liaoning China (Shenyang)
      • Provider: ChinaMobile
    • "hebei1"
      • Hebei China (Shijiazhuang)
      • Provider: ChinaMobile
    • "fujian1"
      • Fujian China (Xiamen)
      • Provider: ChinaMobile
    • "guangxi1"
      • Guangxi China (Nanning)
      • Provider: ChinaMobile
    • "anhui1"
      • Anhui China (Huainan)
      • Provider: ChinaMobile
    • "us-standard"
      • US Cross Region Standard
      • Provider: IBMCOS
    • "us-vault"
      • US Cross Region Vault
      • Provider: IBMCOS
    • "us-cold"
      • US Cross Region Cold
      • Provider: IBMCOS
    • "us-flex"
      • US Cross Region Flex
      • Provider: IBMCOS
    • "us-east-standard"
      • US East Region Standard
      • Provider: IBMCOS
    • "us-east-vault"
      • US East Region Vault
      • Provider: IBMCOS
    • "us-east-cold"
      • US East Region Cold
      • Provider: IBMCOS
    • "us-east-flex"
      • US East Region Flex
      • Provider: IBMCOS
    • "us-south-standard"
      • US South Region Standard
      • Provider: IBMCOS
    • "us-south-vault"
      • US South Region Vault
      • Provider: IBMCOS
    • "us-south-cold"
      • US South Region Cold
      • Provider: IBMCOS
    • "us-south-flex"
      • US South Region Flex
      • Provider: IBMCOS
    • "eu-standard"
      • EU Cross Region Standard
      • Provider: IBMCOS
    • "eu-vault"
      • EU Cross Region Vault
      • Provider: IBMCOS
    • "eu-cold"
      • EU Cross Region Cold
      • Provider: IBMCOS
    • "eu-flex"
      • EU Cross Region Flex
      • Provider: IBMCOS
    • "eu-gb-standard"
      • Great Britain Standard
      • Provider: IBMCOS
    • "eu-gb-vault"
      • Great Britain Vault
      • Provider: IBMCOS
    • "eu-gb-cold"
      • Great Britain Cold
      • Provider: IBMCOS
    • "eu-gb-flex"
      • Great Britain Flex
      • Provider: IBMCOS
    • "ap-standard"
      • APAC Standard
      • Provider: IBMCOS
    • "ap-vault"
      • APAC Vault
      • Provider: IBMCOS
    • "ap-cold"
      • APAC Cold
      • Provider: IBMCOS
    • "ap-flex"
      • APAC Flex
      • Provider: IBMCOS
    • "mel01-standard"
      • Melbourne Standard
      • Provider: IBMCOS
    • "mel01-vault"
      • Melbourne Vault
      • Provider: IBMCOS
    • "mel01-cold"
      • Melbourne Cold
      • Provider: IBMCOS
    • "mel01-flex"
      • Melbourne Flex
      • Provider: IBMCOS
    • "tor01-standard"
      • Toronto Standard
      • Provider: IBMCOS
    • "tor01-vault"
      • Toronto Vault
      • Provider: IBMCOS
    • "tor01-cold"
      • Toronto Cold
      • Provider: IBMCOS
    • "tor01-flex"
      • Toronto Flex
      • Provider: IBMCOS
    • "cn-east-1"
      • East China Region 1
      • Provider: Qiniu
    • "cn-east-2"
      • East China Region 2
      • Provider: Qiniu
    • "cn-north-1"
      • North China Region 1
      • Provider: Qiniu
    • "cn-south-1"
      • South China Region 1
      • Provider: Qiniu
    • "us-north-1"
      • North America Region 1
      • Provider: Qiniu
    • "ap-southeast-1"
      • Southeast Asia Region 1
      • Provider: Qiniu
    • "ap-northeast-1"
      • Northeast Asia Region 1
      • Provider: Qiniu
    • "us-east-1"
      • US East (N. Virginia)
      • Provider: Rabata
    • "eu-west-1"
      • EU (Ireland)
      • Provider: Rabata
    • "eu-west-2"
      • EU (London)
      • Provider: Rabata
    • "global"
      • Global CDN Region
      • Provider: RackCorp
    • "au"
      • Australia (All locations)
      • Provider: RackCorp
    • "au-nsw"
      • NSW (Australia) Region
      • Provider: RackCorp
    • "au-qld"
      • QLD (Australia) Region
      • Provider: RackCorp
    • "au-vic"
      • VIC (Australia) Region
      • Provider: RackCorp
    • "au-wa"
      • Perth (Australia) Region
      • Provider: RackCorp
    • "ph"
      • Manila (Philippines) Region
      • Provider: RackCorp
    • "th"
      • Bangkok (Thailand) Region
      • Provider: RackCorp
    • "hk"
      • HK (Hong Kong) Region
      • Provider: RackCorp
    • "mn"
      • Ulaanbaatar (Mongolia) Region
      • Provider: RackCorp
    • "kg"
      • Bishkek (Kyrgyzstan) Region
      • Provider: RackCorp
    • "id"
      • Jakarta (Indonesia) Region
      • Provider: RackCorp
    • "jp"
      • Tokyo (Japan) Region
      • Provider: RackCorp
    • "sg"
      • SG (Singapore) Region
      • Provider: RackCorp
    • "de"
      • Frankfurt (Germany) Region
      • Provider: RackCorp
    • "us"
      • USA (AnyCast) Region
      • Provider: RackCorp
    • "us-east-1"
      • New York (USA) Region
      • Provider: RackCorp
    • "us-west-1"
      • Fremont (USA) Region
      • Provider: RackCorp
    • "nz"
      • Auckland (New Zealand) Region
      • Provider: RackCorp

--s3-acl

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server-side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.

If the acl is an empty string then no X-Amz-Acl: header is added and the default (private) will be used.
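
For example, to grant the bucket owner full control of objects copied into a bucket owned by another account (a common cross-account pattern), the ACL can be supplied on the command line or via the environment, assuming a remote named remote::

rclone copy --s3-acl bucket-owner-full-control /path/to/files remote:bucket

RCLONE_S3_ACL=bucket-owner-full-control rclone copy /path/to/files remote:bucket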

Properties:

  • Config: acl
  • Env Var: RCLONE_S3_ACL
  • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
  • Type: string
  • Required: false
  • Examples:
    • "private"
      • Owner gets FULL_CONTROL.
      • No one else has access rights (default).
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
    • "public-read"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ access.
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
    • "public-read-write"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ and WRITE access.
      • Granting this on a bucket is generally not recommended.
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
    • "authenticated-read"
      • Owner gets FULL_CONTROL.
      • The AuthenticatedUsers group gets READ access.
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
    • "bucket-owner-read"
      • Object owner gets FULL_CONTROL.
      • Bucket owner gets READ access.
      • If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
    • "bucket-owner-full-control"
      • Both the object owner and the bucket owner get FULL_CONTROL over the object.
      • If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
      • Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
    • "private"
      • Owner gets FULL_CONTROL.
      • No one else has access rights (default).
      • This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.
      • Provider: IBMCOS
    • "public-read"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ access.
      • This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS.
      • Provider: IBMCOS
    • "public-read-write"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ and WRITE access.
      • This acl is available on IBM Cloud (Infra), On-Premise IBM COS.
      • Provider: IBMCOS
    • "authenticated-read"
      • Owner gets FULL_CONTROL.
      • The AuthenticatedUsers group gets READ access.
      • Not supported on Buckets.
      • This acl is available on IBM Cloud (Infra) and On-Premise IBM COS.
      • Provider: IBMCOS
    • "default"
      • Owner gets FULL_CONTROL.
      • No one else has access rights (default).
      • Provider: TencentCOS

--s3-server-side-encryption

The server-side encryption algorithm used when storing this object in S3.

Properties:

  • Config: server_side_encryption
  • Env Var: RCLONE_S3_SERVER_SIDE_ENCRYPTION
  • Provider: AWS,Ceph,ChinaMobile,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None
      • Provider: AWS,Ceph,ChinaMobile,Minio
    • "AES256"
      • AES256
      • Provider: AWS,Ceph,ChinaMobile,Minio
    • "aws:kms"
      • aws:kms
      • Provider: AWS,Ceph,Minio

--s3-sse-kms-key-id

If using KMS ID you must provide the ARN of the key.

Properties:

  • Config: sse_kms_key_id
  • Env Var: RCLONE_S3_SSE_KMS_KEY_ID
  • Provider: AWS,Ceph,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None
    • "arn:aws:kms:us-east-1:*"
      • arn:aws:kms:*
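
A minimal sketch of enabling SSE-KMS: this option is typically combined with server_side_encryption = aws:kms in the config file. The key ARN below is a hypothetical placeholder, not a real key:

[remote]
type = s3
provider = AWS
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555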

--s3-storage-class

The storage class to use when storing new objects in S3.

Properties:

  • Config: storage_class
  • Env Var: RCLONE_S3_STORAGE_CLASS
  • Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS
  • Type: string
  • Required: false
  • Examples:
    • ""
      • Default
      • Provider: AWS,Alibaba,ChinaMobile,TencentCOS
    • "STANDARD"
      • Standard storage class
      • Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS
    • "REDUCED_REDUNDANCY"
      • Reduced redundancy storage class
      • Provider: AWS
    • "STANDARD_IA"
      • Standard Infrequent Access storage class
      • Provider: AWS
    • "ONEZONE_IA"
      • One Zone Infrequent Access storage class
      • Provider: AWS
    • "GLACIER"
      • Glacier Flexible Retrieval storage class
      • Provider: AWS
    • "DEEP_ARCHIVE"
      • Glacier Deep Archive storage class
      • Provider: AWS
    • "INTELLIGENT_TIERING"
      • Intelligent-Tiering storage class
      • Provider: AWS
    • "GLACIER_IR"
      • Glacier Instant Retrieval storage class
      • Provider: AWS,Magalu
    • "GLACIER"
      • Archive storage mode
      • Provider: Alibaba,ChinaMobile,Qiniu
    • "STANDARD_IA"
      • Infrequent access storage mode
      • Provider: Alibaba,ChinaMobile,TencentCOS
    • "LINE"
      • Infrequent access storage mode
      • Provider: Qiniu
    • "DEEP_ARCHIVE"
      • Deep archive storage mode
      • Provider: Qiniu
    • ""
      • Default.
      • Provider: Scaleway
    • "STANDARD"
      • The Standard class for any upload.
      • Suitable for on-demand content like streaming or CDN.
      • Available in all regions.
      • Provider: Scaleway
    • "GLACIER"
      • Archived storage.
      • Prices are lower, but it needs to be restored first to be accessed.
      • Available in FR-PAR and NL-AMS regions.
      • Provider: Scaleway
    • "ONEZONE_IA"
      • One Zone - Infrequent Access.
      • A good choice for storing secondary backup copies or easily re-creatable data.
      • Available in the FR-PAR region only.
      • Provider: Scaleway
    • "ARCHIVE"
      • Archive storage mode
      • Provider: TencentCOS
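
For example, to upload backups straight into an archive tier, assuming a remote named remote: pointing at AWS:

rclone copy --s3-storage-class DEEP_ARCHIVE /path/to/backup remote:bucket/backup

The same can be set persistently with storage_class = DEEP_ARCHIVE in the config file, or via RCLONE_S3_STORAGE_CLASS.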

--s3-ibm-api-key

IBM API Key to be used to obtain IAM token

Properties:

  • Config: ibm_api_key
  • Env Var: RCLONE_S3_IBM_API_KEY
  • Provider: IBMCOS
  • Type: string
  • Required: false

--s3-ibm-resource-instance-id

IBM service instance id

Properties:

  • Config: ibm_resource_instance_id
  • Env Var: RCLONE_S3_IBM_RESOURCE_INSTANCE_ID
  • Provider: IBMCOS
  • Type: string
  • Required: false

Advanced options

Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).

--s3-bucket-acl

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.

If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl: header is added and the default (private) will be used.

Properties:

  • Config: bucket_acl
  • Env Var: RCLONE_S3_BUCKET_ACL
  • Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
  • Type: string
  • Required: false
  • Examples:
    • "private"
      • Owner gets FULL_CONTROL.
      • No one else has access rights (default).
    • "public-read"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ access.
    • "public-read-write"
      • Owner gets FULL_CONTROL.
      • The AllUsers group gets READ and WRITE access.
      • Granting this on a bucket is generally not recommended.
    • "authenticated-read"
      • Owner gets FULL_CONTROL.
      • The AuthenticatedUsers group gets READ access.

--s3-requester-pays

Enables requester pays option when interacting with S3 bucket.

Properties:

  • Config: requester_pays
  • Env Var: RCLONE_S3_REQUESTER_PAYS
  • Provider: AWS
  • Type: bool
  • Default: false

--s3-sse-customer-algorithm

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.

Properties:

  • Config: sse_customer_algorithm
  • Env Var: RCLONE_S3_SSE_CUSTOMER_ALGORITHM
  • Provider: AWS,Ceph,ChinaMobile,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None
    • "AES256"
      • AES256

--s3-sse-customer-key

To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.

Alternatively you can provide --sse-customer-key-base64.

Properties:

  • Config: sse_customer_key
  • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY
  • Provider: AWS,Ceph,ChinaMobile,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None

--s3-sse-customer-key-base64

If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.

Alternatively you can provide --sse-customer-key.

Properties:

  • Config: sse_customer_key_base64
  • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_BASE64
  • Provider: AWS,Ceph,ChinaMobile,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None

--s3-sse-customer-key-md5

If using SSE-C you may provide the secret encryption key MD5 checksum (optional).

If you leave it blank, this is calculated automatically from the sse_customer_key provided.

Properties:

  • Config: sse_customer_key_md5
  • Env Var: RCLONE_S3_SSE_CUSTOMER_KEY_MD5
  • Provider: AWS,Ceph,ChinaMobile,Minio
  • Type: string
  • Required: false
  • Examples:
    • ""
      • None
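
A minimal SSE-C sketch, assuming a remote named remote: and a locally kept key file (the file name is illustrative). The same key and flags must be supplied again to read the data back:

openssl rand -base64 32 > sse-c.key

rclone copy --s3-sse-customer-algorithm AES256 --s3-sse-customer-key-base64 "$(cat sse-c.key)" /path/to/files remote:bucket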

--s3-upload-cutoff

Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.

Properties:

  • Config: upload_cutoff
  • Env Var: RCLONE_S3_UPLOAD_CUTOFF
  • Type: SizeSuffix
  • Default: 200Mi

--s3-chunk-size

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size.

Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer.

If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.

Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.

Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.

Increasing the chunk size decreases the accuracy of the progress statistics displayed with the "-P" flag. Rclone treats a chunk as sent when it's buffered by the AWS SDK, when in fact it may still be uploading. A bigger chunk size means a bigger AWS SDK buffer and progress reporting that deviates further from the truth.
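
As a worked example: with the default 5 MiB chunk size the streaming limit is 10,000 x 5 MiB = 48.8 GiB; raising the chunk size to 64 MiB lifts it to roughly 625 GiB, at the cost of more memory per transfer. A hedged sketch, assuming a remote named remote::

rclone rcat --s3-chunk-size 64M remote:bucket/big-stream.bin < big-stream.bin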

Properties:

  • Config: chunk_size
  • Env Var: RCLONE_S3_CHUNK_SIZE
  • Type: SizeSuffix
  • Default: 5Mi

--s3-max-upload-parts

Maximum number of parts in a multipart upload.

This option defines the maximum number of multipart chunks to use when doing a multipart upload.

This can be useful if a service does not support the AWS S3 specification of 10,000 chunks.

Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this limit on the number of chunks.

Properties:

  • Config: max_upload_parts
  • Env Var: RCLONE_S3_MAX_UPLOAD_PARTS
  • Type: int
  • Default: 10000

--s3-copy-cutoff

Cutoff for switching to multipart copy.

Any files larger than this that need to be server-side copied will be copied in chunks of this size.

The minimum is 0 and the maximum is 5 GiB.

Properties:

  • Config: copy_cutoff
  • Env Var: RCLONE_S3_COPY_CUTOFF
  • Type: SizeSuffix
  • Default: 4.656Gi

--s3-disable-checksum

Don't store MD5 checksum with object metadata.

Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.

Properties:

  • Config: disable_checksum
  • Env Var: RCLONE_S3_DISABLE_CHECKSUM
  • Type: bool
  • Default: false

--s3-shared-credentials-file

Path to the shared credentials file.

If env_auth = true then rclone can use a shared credentials file.

If this variable is empty rclone will look for the "AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty it will default to the current user's home directory.

Linux/OSX: "$HOME/.aws/credentials"
Windows:   "%USERPROFILE%\.aws\credentials"

Properties:

  • Config: shared_credentials_file
  • Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
  • Type: string
  • Required: false

--s3-profile

Profile to use in the shared credentials file.

If env_auth = true then rclone can use a shared credentials file. This variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or "default" if that environment variable is also not set.
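
A minimal sketch, assuming env_auth = true and a shared credentials file containing a hypothetical profile named "backup":

# ~/.aws/credentials
[default]
aws_access_key_id = XXX
aws_secret_access_key = YYY

[backup]
aws_access_key_id = XXX
aws_secret_access_key = YYY

Then select the profile at runtime:

rclone lsd --s3-profile backup remote: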

Properties:

  • Config: profile
  • Env Var: RCLONE_S3_PROFILE
  • Type: string
  • Required: false

--s3-session-token

An AWS session token.

Properties:

  • Config: session_token
  • Env Var: RCLONE_S3_SESSION_TOKEN
  • Type: string
  • Required: false

--s3-upload-concurrency

Concurrency for multipart uploads and copies.

This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.

If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.

Properties:

  • Config: upload_concurrency
  • Env Var: RCLONE_S3_UPLOAD_CONCURRENCY
  • Type: int
  • Default: 4

--s3-force-path-style

If true use path style access; if false use virtual hosted style.

If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.

Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.

Note that if your bucket isn't a valid DNS name, i.e. has '.' or '_' in it, you'll need to set this to true.
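
To illustrate the difference, the same object is addressed like this under each style (bucket and key are illustrative):

Path style:           https://s3.us-east-1.amazonaws.com/mybucket/path/to/object
Virtual hosted style: https://mybucket.s3.us-east-1.amazonaws.com/path/to/object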

Properties:

  • Config: force_path_style
  • Env Var: RCLONE_S3_FORCE_PATH_STYLE
  • Type: bool
  • Default: true

--s3-v2-auth

If true use v2 authentication.

If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication.

Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.

Properties:

  • Config: v2_auth
  • Env Var: RCLONE_S3_V2_AUTH
  • Type: bool
  • Default: false

--s3-use-dual-stack

If true use AWS S3 dual-stack endpoint (IPv6 support).

See AWS Docs on Dualstack Endpoints

Properties:

  • Config: use_dual_stack
  • Env Var: RCLONE_S3_USE_DUAL_STACK
  • Type: bool
  • Default: false

--s3-use-accelerate-endpoint

If true use the AWS S3 accelerated endpoint.

See: AWS S3 Transfer acceleration

Properties:

  • Config: use_accelerate_endpoint
  • Env Var: RCLONE_S3_USE_ACCELERATE_ENDPOINT
  • Provider: AWS
  • Type: bool
  • Default: false

--s3-use-arn-region

If true, enables arn region support for the service.

Properties:

  • Config: use_arn_region
  • Env Var: RCLONE_S3_USE_ARN_REGION
  • Type: bool
  • Default: false

--s3-leave-parts-on-error

If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.

It should be set to true for resuming uploads across different sessions.

WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.

Properties:

  • Config: leave_parts_on_error
  • Env Var: RCLONE_S3_LEAVE_PARTS_ON_ERROR
  • Provider: AWS
  • Type: bool
  • Default: false

--s3-list-chunk

Size of listing chunk (response list for each ListObject S3 request).

This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if more than that is requested. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the "rgw list buckets max chunk" option.

Properties:

  • Config: list_chunk
  • Env Var: RCLONE_S3_LIST_CHUNK
  • Type: int
  • Default: 1000

--s3-list-version

Version of ListObjects to use: 1,2 or 0 for auto.

When S3 originally launched it only provided the ListObjects call to enumerate objects in a bucket.

However in May 2016 the ListObjectsV2 call was introduced. This is much higher performance and should be used if at all possible.

If set to the default, 0, rclone will guess according to the provider set which list objects method to call. If it guesses wrong, then it may be set manually here.

Properties:

  • Config: list_version
  • Env Var: RCLONE_S3_LIST_VERSION
  • Type: int
  • Default: 0

--s3-list-url-encode

Whether to url encode listings: true/false/unset

Some providers support URL encoding listings and where this is available this is more reliable when using control characters in file names. If this is set to unset (the default) then rclone will choose according to the provider setting what to apply, but you can override rclone's choice here.

Properties:

  • Config: list_url_encode
  • Env Var: RCLONE_S3_LIST_URL_ENCODE
  • Type: Tristate
  • Default: unset

--s3-no-check-bucket

If set, don't attempt to check the bucket exists or create it.

This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.

It can also be needed if the user you are using does not have bucket creation permissions. Before v1.52.0 this would have passed silently due to a bug.

Properties:

  • Config: no_check_bucket
  • Env Var: RCLONE_S3_NO_CHECK_BUCKET
  • Type: bool
  • Default: false

--s3-no-head

If set, don't HEAD uploaded objects to check integrity.

This can be useful when trying to minimise the number of transactions rclone does.

Setting it means that if rclone receives a 200 OK message after uploading an object with PUT then it will assume that it got uploaded properly.

In particular it will assume:

  • the metadata, including modtime, storage class and content type was as uploaded
  • the size was as uploaded

It reads the following items from the response for a single part PUT:

  • the MD5SUM
  • The uploaded date

For multipart uploads these items aren't read.

If a source object of unknown length is uploaded then rclone will do a HEAD request.

Setting this flag increases the chance for undetected upload failures, in particular an incorrect size, so it isn't recommended for normal operation. In practice the chance of an undetected upload failure is very small even with this flag.

Properties:

  • Config: no_head
  • Env Var: RCLONE_S3_NO_HEAD
  • Type: bool
  • Default: false

--s3-no-head-object

If set, do not do HEAD before GET when getting objects.

Properties:

  • Config: no_head_object
  • Env Var: RCLONE_S3_NO_HEAD_OBJECT
  • Type: bool
  • Default: false

--s3-encoding

The encoding for the backend.

See the encoding section in the overview for more info.

Properties:

  • Config: encoding
  • Env Var: RCLONE_S3_ENCODING
  • Type: Encoding
  • Default: Slash,InvalidUtf8,Dot

--s3-memory-pool-flush-time

How often internal memory buffer pools will be flushed. (no longer used)

Properties:

  • Config: memory_pool_flush_time
  • Env Var: RCLONE_S3_MEMORY_POOL_FLUSH_TIME
  • Type: Duration
  • Default: 1m0s

--s3-memory-pool-use-mmap

Whether to use mmap buffers in internal memory pool. (no longer used)

Properties:

  • Config: memory_pool_use_mmap
  • Env Var: RCLONE_S3_MEMORY_POOL_USE_MMAP
  • Type: bool
  • Default: false

--s3-disable-http2

Disable usage of http2 for S3 backends.

There is currently an unsolved issue with the s3 (specifically minio) backend and HTTP/2. HTTP/2 is enabled by default for the s3 backend but can be disabled here. When the issue is solved this flag will be removed.

See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rclone/issues/3631

Properties:

  • Config: disable_http2
  • Env Var: RCLONE_S3_DISABLE_HTTP2
  • Type: bool
  • Default: false

--s3-download-url

Custom endpoint for downloads. This is usually set to a CloudFront CDN URL as AWS S3 offers cheaper egress for data downloaded through the CloudFront network.
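
For example, with a CloudFront distribution in front of the bucket (the domain below is a hypothetical placeholder), downloads are routed through the CDN while uploads still go direct to S3:

rclone copy --s3-download-url https://d111111abcdef8.cloudfront.net remote:bucket/path /local/path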

Properties:

  • Config: download_url
  • Env Var: RCLONE_S3_DOWNLOAD_URL
  • Type: string
  • Required: false

--s3-directory-markers

Upload an empty object with a trailing slash when a new directory is created

Empty folders are unsupported for bucket-based remotes; this option creates an empty object ending with "/" to persist the folder.

Properties:

  • Config: directory_markers
  • Env Var: RCLONE_S3_DIRECTORY_MARKERS
  • Type: bool
  • Default: false

--s3-use-multipart-etag

Whether to use ETag in multipart uploads for verification

This should be true, false or left unset to use the default for the provider.

Properties:

  • Config: use_multipart_etag
  • Env Var: RCLONE_S3_USE_MULTIPART_ETAG
  • Type: Tristate
  • Default: unset

--s3-use-unsigned-payload

Whether to use an unsigned payload in PutObject

Rclone has to avoid the AWS SDK seeking the body when calling PutObject. The AWS provider can add checksums in the trailer to avoid seeking but other providers can't.

This should be true, false or left unset to use the default for the provider.

Properties:

  • Config: use_unsigned_payload
  • Env Var: RCLONE_S3_USE_UNSIGNED_PAYLOAD
  • Type: Tristate
  • Default: unset

--s3-use-presigned-request

Whether to use a presigned request or PutObject for single part uploads

If this is false rclone will use PutObject from the AWS SDK to upload an object.

Versions of rclone < 1.59 use presigned requests to upload a single part object and setting this flag to true will re-enable that functionality. This shouldn't be necessary except in exceptional circumstances or for testing.

Properties:

  • Config: use_presigned_request
  • Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST
  • Type: bool
  • Default: false

--s3-use-data-integrity-protections

If true use AWS S3 data integrity protections.

See AWS Docs on Data Integrity Protections

Properties:

  • Config: use_data_integrity_protections
  • Env Var: RCLONE_S3_USE_DATA_INTEGRITY_PROTECTIONS
  • Type: Tristate
  • Default: unset

--s3-versions

Include old versions in directory listings.

Properties:

  • Config: versions
  • Env Var: RCLONE_S3_VERSIONS
  • Type: bool
  • Default: false

--s3-version-at

Show file versions as they were at the specified time.

The parameter should be a date, "2006-01-02", datetime "2006-01-02 15:04:05" or a duration for that long ago, e.g. "100d" or "1h".

Note that when using this no file write operations are permitted, so you can't upload files or delete them.

See the time option docs for valid formats.
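
For example, assuming a versioned bucket on a remote named remote:, to list all current and old versions and then view the bucket as it was 100 days ago:

rclone ls --s3-versions remote:bucket

rclone ls --s3-version-at 100d remote:bucket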

Properties:

  • Config: version_at
  • Env Var: RCLONE_S3_VERSION_AT
  • Type: Time
  • Default: off

--s3-version-deleted

Show deleted file markers when using versions.

This shows deleted file markers in the listing when using versions. These will appear as 0 size files. The only operation which can be performed on them is deletion.

Deleting a delete marker will reveal the previous version.

Deleted files will always show with a timestamp.

Properties:

  • Config: version_deleted
  • Env Var: RCLONE_S3_VERSION_DELETED
  • Type: bool
  • Default: false

--s3-decompress

If set this will decompress gzip encoded objects.

It is possible to upload objects to S3 with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.

If this flag is set then rclone will decompress these files with "Content-Encoding: gzip" as they are received. This means that rclone can't check the size and hash but the file contents will be decompressed.

Properties:

  • Config: decompress
  • Env Var: RCLONE_S3_DECOMPRESS
  • Type: bool
  • Default: false

--s3-might-gzip

Set this if the backend might gzip objects.

Normally providers will not alter objects when they are downloaded. If an object was not uploaded with Content-Encoding: gzip then it won't be set on download.

However some providers may gzip objects even if they weren't uploaded with Content-Encoding: gzip (e.g. Cloudflare).

A symptom of this would be receiving errors like

ERROR corrupted on transfer: sizes differ NNN vs MMM

If you set this flag and rclone downloads an object with Content-Encoding: gzip set and chunked transfer encoding, then rclone will decompress the object on the fly.

If this is set to unset (the default) then rclone will choose according to the provider setting what to apply, but you can override rclone's choice here.

Properties:

  • Config: might_gzip
  • Env Var: RCLONE_S3_MIGHT_GZIP
  • Type: Tristate
  • Default: unset

--s3-use-accept-encoding-gzip

Whether to send the Accept-Encoding: gzip header.

By default, rclone will append Accept-Encoding: gzip to the request to download compressed objects whenever possible.

However some providers such as Google Cloud Storage may alter the HTTP headers, breaking the signature of the request.

A symptom of this would be receiving errors like

SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.

In this case, you might want to try disabling this option.

Properties:

  • Config: use_accept_encoding_gzip
  • Env Var: RCLONE_S3_USE_ACCEPT_ENCODING_GZIP
  • Type: Tristate
  • Default: unset

--s3-no-system-metadata

Suppress setting and reading of system metadata

Properties:

  • Config: no_system_metadata
  • Env Var: RCLONE_S3_NO_SYSTEM_METADATA
  • Type: bool
  • Default: false

--s3-sts-endpoint

Endpoint for STS (deprecated).

Leave blank if using AWS to use the default endpoint for the region.

Properties:

  • Config: sts_endpoint
  • Env Var: RCLONE_S3_STS_ENDPOINT
  • Provider: AWS
  • Type: string
  • Required: false

--s3-use-already-exists

Set if rclone should report BucketAlreadyExists errors on bucket creation.

At some point during the evolution of the s3 protocol, AWS started returning an AlreadyOwnedByYou error when attempting to create a bucket that the user already owned, rather than a BucketAlreadyExists error.

Unfortunately exactly what has been implemented by s3 clones is a little inconsistent, some return AlreadyOwnedByYou, some return BucketAlreadyExists and some return no error at all.

This is important to rclone because it ensures the bucket exists by creating it on quite a lot of operations (unless --s3-no-check-bucket is used).

If rclone knows the provider can return AlreadyOwnedByYou or returns no error then it can report BucketAlreadyExists errors when the user attempts to create a bucket not owned by them. Otherwise rclone ignores the BucketAlreadyExists error which can lead to confusion.

This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.

Properties:

  • Config: use_already_exists
  • Env Var: RCLONE_S3_USE_ALREADY_EXISTS
  • Type: Tristate
  • Default: unset

--s3-use-multipart-uploads

Set if rclone should use multipart uploads.

You can change this if you want to disable the use of multipart uploads. This shouldn't be necessary in normal operation.

This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.

Properties:

  • Config: use_multipart_uploads
  • Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
  • Type: Tristate
  • Default: unset

--s3-use-x-id

Set if rclone should add x-id URL parameters.

You can change this if you want to disable the AWS SDK from adding x-id URL parameters.

This shouldn't be necessary in normal operation.

This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.

Properties:

  • Config: use_x_id
  • Env Var: RCLONE_S3_USE_X_ID
  • Type: Tristate
  • Default: unset

--s3-sign-accept-encoding

Set if rclone should include Accept-Encoding as part of the signature.

You can change this if you want to stop rclone including Accept-Encoding as part of the signature.

This shouldn't be necessary in normal operation.

This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.

Properties:

  • Config: sign_accept_encoding
  • Env Var: RCLONE_S3_SIGN_ACCEPT_ENCODING
  • Type: Tristate
  • Default: unset

--s3-directory-bucket

Set to use AWS Directory Buckets

If you are using an AWS Directory Bucket then set this flag.

This will ensure no Content-Md5 headers are sent and ensure ETag headers are not interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects whether single or multipart uploaded.

This also sets no_check_bucket = true.

Note that Directory Buckets do not support:

  • Versioning
  • Content-Encoding: gzip

Rclone limitations with Directory Buckets:

  • rclone does not support creating Directory Buckets with rclone mkdir
  • ... or removing them with rclone rmdir yet
  • Directory Buckets do not appear when doing rclone lsf at the top level.
  • Rclone can't remove auto created directories yet. In theory this should work with directory_markers = true but it doesn't.
  • Directories don't seem to appear in recursive (ListR) listings.

Properties:

  • Config: directory_bucket
  • Env Var: RCLONE_S3_DIRECTORY_BUCKET
  • Provider: AWS
  • Type: bool
  • Default: false

--s3-sdk-log-mode

Set to debug the SDK

This can be set to a comma separated list of the following functions:

  • Signing
  • Retries
  • Request
  • RequestWithBody
  • Response
  • ResponseWithBody
  • DeprecatedUsage
  • RequestEventMessage
  • ResponseEventMessage

Use Off to disable and All to set all log levels. You will need to use -vv to see the debug level logs.
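
For example, to log request and response summaries while listing a bucket (assuming a remote named remote:):

rclone lsf -vv --s3-sdk-log-mode Request,Response remote:bucket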

Properties:

  • Config: sdk_log_mode
  • Env Var: RCLONE_S3_SDK_LOG_MODE
  • Type: Bits
  • Default: Off

--s3-description

Description of the remote.

Properties:

  • Config: description
  • Env Var: RCLONE_S3_DESCRIPTION
  • Type: string
  • Required: false

Metadata

User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.

Here are the possible system metadata items for the s3 backend.

Name                | Help                                                          | Type     | Example                             | Read Only
btime               | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | Y
cache-control       | Cache-Control header                                          | string   | no-cache                            | N
content-disposition | Content-Disposition header                                    | string   | inline                              | N
content-encoding    | Content-Encoding header                                       | string   | gzip                                | N
content-language    | Content-Language header                                       | string   | en-US                               | N
content-type        | Content-Type header                                           | string   | text/plain                          | N
mtime               | Time of last modification, read from rclone metadata          | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N
tier                | Tier of the object                                            | string   | GLACIER                             | Y

See the metadata docs for more info.
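
A hedged sketch of working with this metadata, assuming a remote named remote:: the -M/--metadata flag enables metadata, --metadata-set adds items on upload, and lsjson reads them back:

rclone copy -M --metadata-set content-type=text/plain --metadata-set cache-control=no-cache /path/to/files remote:bucket

rclone lsjson -M --stat remote:bucket/file.txt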

Backend commands

Here are the commands specific to the s3 backend.

Run them with:

rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See the backend command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

restore

Restore objects from GLACIER or INTELLIGENT-TIERING archive tier.

rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.

Usage examples:

rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY

This flag also obeys the filters. Test first with the --interactive/-i or --dry-run flags.

rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1

All the objects shown will be marked for restore, then:

rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1

It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not.

[{"Status":"OK","Remote":"test.txt"},{"Status":"OK","Remote":"test/file4.txt"}]

Options:

  • "description": The optional description for the job.
  • "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING storage.
  • "priority": Priority of restore: Standard|Expedited|Bulk
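
Like the other backend commands, restore can also be driven over the remote control API against a running rclone (started with --rc), following the same pattern as the set command shown later:

rclone rc backend/command command=restore fs=s3:bucket/path -o priority=Standard -o lifetime=1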

restore-status

Show the status for objects being restored from GLACIER or INTELLIGENT-TIERING.

rclone backend restore-status remote: [options] [<arguments>+]

This command can be used to show the status for objects being restored from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.

Usage examples:

rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory

This command does not obey the filters.

It returns a list of status dictionaries:

[
    {
        "Remote": "file.txt",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "GLACIER"
    },
    {
        "Remote": "test.pdf",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": false,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "DEEP_ARCHIVE"
    },
    {
        "Remote": "test.gz",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "null"
        },
        "StorageClass": "INTELLIGENT_TIERING"
    }
]

Options:

  • "all": If set then show all objects, not just ones with restore status.

list-multipart-uploads

List the unfinished multipart uploads.

rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

Usage examples:

rclone backend list-multipart-uploads s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and path.

{
    "rclone": [
        {
            "Initiated": "2020-06-26T14:20:36Z",
            "Initiator": {
                "DisplayName": "XXX",
                "ID": "arn:aws:iam::XXX:user/XXX"
            },
            "Key": "KEY",
            "Owner": {
                "DisplayName": null,
                "ID": "XXX"
            },
            "StorageClass": "STANDARD",
            "UploadId": "XXX"
        }
    ],
    "rclone-1000files": [],
    "rclone-dst": []
}

cleanup

Remove unfinished multipart uploads.

rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

Usage examples:

rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

Options:

  • "max-age": Max age of upload to delete.
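
For example, to preview and then remove uploads more than two days old:

rclone --dry-run backend cleanup -o max-age=2d s3:bucket   # see what would be removed
rclone backend cleanup -o max-age=2d s3:bucket             # then remove it for real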

cleanup-hidden

Remove old versions of files.

rclone backend cleanup-hidden remote: [options] [<arguments>+]

This command removes any old hidden versions of files on a versions enabled bucket.

Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

Usage example:

rclone backend cleanup-hidden s3:bucket/path/to/dir

versioning

Set/get versioning support for a bucket.

rclone backend versioning remote: [options] [<arguments>+]

This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied.

Usage examples:

rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended

It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned".

set

Set command for updating the config parameters.

rclone backend set remote: [options] [<arguments>+]

This set command can be used to update the config parameters for a running s3 backend.

Usage examples:

rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X

The option keys are named as they are in the config file.

This rebuilds the connection to the s3 backend when it is called with the new parameters. Only new parameters need be passed as the values will default to those currently in use.

It doesn't return anything.

Anonymous access to public buckets

If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

[anons3]
type=s3
provider=AWS

Then use it as normal with the name of the public bucket, e.g.

rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

You can also do this entirely on the command line:

rclone lsd :s3,provider=AWS:1000genomes
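
For example, to download an object from the public bucket (the object path here is purely illustrative):

rclone copy anons3:1000genomes/path/to/object /tmp/1000genomes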

Providers

AWS S3

This is the provider used as the main example and described in the configuration section above.

AWS Directory Buckets

From rclone v1.69, Directory Buckets are supported.

You will need to set the directory_bucket = true config parameter or use --s3-directory-bucket.

Note that rclone cannot yet:

  • Create directory buckets
  • List directory buckets

See the --s3-directory-bucket flag for more info.

AWS Snowball Edge

AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.

To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service'.

If using rclone pre v1.59 be sure to set upload_cutoff = 0 otherwise you will run into authentication header issues as the snowball device does not support query parameter based authentication.

With rclone v1.59 or later setting upload_cutoff should not be necessary.

For example:

[snowball]
type=s3
provider=Other
access_key_id=YOUR_ACCESS_KEY
secret_access_key=YOUR_SECRET_KEY
endpoint=http://[IP of Snowball]:8080
upload_cutoff=0

Alibaba OSS

Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> ossType of storage to configure.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ "s3"[snip]Storage> s3Choose your S3 provider.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Amazon Web Services (AWS) S3   \ "AWS" 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun   \ "Alibaba" 3 / Ceph Object Storage   \ "Ceph"[snip]provider> AlibabaGet AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Enter a boolean value (true or false). Press Enter for the default ("false").Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step   \ "false" 2 / Get AWS credentials from the environment (env vars or IAM)   \ "true"env_auth> 1AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").access_key_id> accesskeyidAWS Secret Access Key (password)Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").secret_access_key> secretaccesskeyEndpoint for OSS API.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / East China 1 (Hangzhou)   \ "oss-cn-hangzhou.aliyuncs.com" 2 / East China 2 (Shanghai)   \ "oss-cn-shanghai.aliyuncs.com" 3 / North China 1 (Qingdao)   \ "oss-cn-qingdao.aliyuncs.com"[snip]endpoint> 1Canned ACL used when creating buckets and storing or copying objects.Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default).   \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.   \ "public-read"   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.[snip]acl> 1The storage class to use when storing new objects in OSS.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Default   \ "" 2 / Standard storage class   \ "STANDARD" 3 / Archive storage mode.   \ "GLACIER" 4 / Infrequent access storage mode.   \ "STANDARD_IA"storage_class> 1Edit advanced config? (y/n)y) Yesn) Noy/n> nRemote config--------------------[oss]type = s3provider = Alibabaenv_auth = falseaccess_key_id = accesskeyidsecret_access_key = secretaccesskeyendpoint = oss-cn-hangzhou.aliyuncs.comacl = privatestorage_class = Standard--------------------y) Yes this is OKe) Edit this remoted) Delete this remotey/e/d> y

ArvanCloud

ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. Files like profile images in an app, images sent by users or scanned documents can be stored securely and easily in ArvanCloud's Object Storage service.

ArvanCloud provides an S3 interface which can be configured for use with rclone like this.

No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nname> ArvanCloudType of storage to configure.Choose a number from below, or type in your own value[snip]XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)   \ "s3"[snip]Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step   \ "false" 2 / Get AWS credentials from the environment (env vars or IAM)   \ "true"env_auth> 1AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> YOURACCESSKEYAWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> YOURSECRETACCESSKEYRegion to connect to.Choose a number from below, or type in your own value   / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia, or Pacific Northwest.   | Leave location constraint empty.   \ "us-east-1"[snip]region>Endpoint for S3 API.Leave blank if using ArvanCloud to use the default endpoint for the region.Specify if using an S3 clone such as Ceph.endpoint> s3.arvanstorage.comLocation constraint - must be set to match the Region. Used when creating buckets only.Choose a number from below, or type in your own value 1 / Empty for Iran-Tehran Region.   \ ""[snip]location_constraint>Canned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default).   \ "private"[snip]acl>The server-side encryption algorithm used when storing this object in S3.Choose a number from below, or type in your own value 1 / None   \ "" 2 / AES256   \ "AES256"server_side_encryption>The storage class to use when storing objects in S3.Choose a number from below, or type in your own value 1 / Default   \ "" 2 / Standard storage class   \ "STANDARD"storage_class>Remote config--------------------[ArvanCloud]env_auth = falseaccess_key_id = YOURACCESSKEYsecret_access_key = YOURSECRETACCESSKEYregion = ir-thr-at1endpoint = s3.arvanstorage.comlocation_constraint =acl =server_side_encryption =storage_class =--------------------y) Yes this is OKe) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[ArvanCloud]
type=s3
provider=ArvanCloud
env_auth=false
access_key_id=YOURACCESSKEY
secret_access_key=YOURSECRETACCESSKEY
region=
endpoint=s3.arvanstorage.com
location_constraint=
acl=
server_side_encryption=
storage_class=

Ceph

Ceph is an open-source, unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[ceph]
type=s3
provider=Ceph
env_auth=false
access_key_id=XXX
secret_access_key=YYY
region=
endpoint=https://ceph.endpoint.example.com
location_constraint=
acl=
server_side_encryption=
storage_class=

If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before v1.59 then you may need to supply the parameter --s3-upload-cutoff 0 or put this in the config file as upload_cutoff 0 to work around a bug which causes uploading of small files to fail.

Note also that Ceph sometimes puts / in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the / escaped as \/. Make sure you only write / in the secret access key.

E.g. the dump from Ceph looks something like this (irrelevant keys removed).

{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}

Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.
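
If you are reading the key on the command line, a JSON-aware tool such as jq will unescape it for you; a sketch, assuming the dump came from radosgw-admin for user id xxx:

radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'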

China Mobile Ecloud Elastic Object Storage (EOS)

Here is an example of making a China Mobile Ecloud Elastic Object Storage (EOS) configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> ChinaMobileOption Storage.Type of storage to configure.Choose a number from below, or type in your own value. ...XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3) ...Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty. ... 4 / China Mobile Ecloud Elastic Object Storage (EOS)   \ (ChinaMobile) ...provider> ChinaMobileOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> accesskeyidOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> secretaccesskeyOption endpoint.Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.Choose a number from below, or type in your own value.Press Enter to leave empty.   / The default endpoint - a good choice if you are unsure. 1 | East China (Suzhou)   \ (eos-wuxi-1.cmecloud.cn) 2 / East China (Jinan)   \ (eos-jinan-1.cmecloud.cn) 3 / East China (Hangzhou)   \ (eos-ningbo-1.cmecloud.cn) 4 / East China (Shanghai-1)   \ (eos-shanghai-1.cmecloud.cn) 5 / Central China (Zhengzhou)   \ (eos-zhengzhou-1.cmecloud.cn) 6 / Central China (Changsha-1)   \ (eos-hunan-1.cmecloud.cn) 7 / Central China (Changsha-2)   \ (eos-zhuzhou-1.cmecloud.cn) 8 / South China (Guangzhou-2)   \ (eos-guangzhou-1.cmecloud.cn) 9 / South China (Guangzhou-3)   \ (eos-dongguan-1.cmecloud.cn)10 / North China (Beijing-1)   \ (eos-beijing-1.cmecloud.cn)11 / North China (Beijing-2)   \ (eos-beijing-2.cmecloud.cn)12 / North China (Beijing-3)   \ (eos-beijing-4.cmecloud.cn)13 / North China (Huhehaote)   \ (eos-huhehaote-1.cmecloud.cn)14 / Southwest China (Chengdu)   \ (eos-chengdu-1.cmecloud.cn)15 / Southwest China (Chongqing)   \ (eos-chongqing-1.cmecloud.cn)16 / Southwest China (Guiyang)   \ (eos-guiyang-1.cmecloud.cn)17 / Nouthwest China (Xian)   \ (eos-xian-1.cmecloud.cn)18 / Yunnan China (Kunming)   \ (eos-yunnan.cmecloud.cn)19 / Yunnan China (Kunming-2)   \ (eos-yunnan-2.cmecloud.cn)20 / Tianjin China (Tianjin)   \ (eos-tianjin-1.cmecloud.cn)21 / Jilin China (Changchun)   \ (eos-jilin-1.cmecloud.cn)22 / Hubei China (Xiangyan)   \ (eos-hubei-1.cmecloud.cn)23 / Jiangxi China (Nanchang)   \ (eos-jiangxi-1.cmecloud.cn)24 / Gansu China (Lanzhou)   \ (eos-gansu-1.cmecloud.cn)25 / Shanxi China (Taiyuan)   \ (eos-shanxi-1.cmecloud.cn)26 / Liaoning China (Shenyang)   \ (eos-liaoning-1.cmecloud.cn)27 / Hebei China (Shijiazhuang)   \ (eos-hebei-1.cmecloud.cn)28 / Fujian China (Xiamen)   \ (eos-fujian-1.cmecloud.cn)29 / Guangxi China (Nanning)   \ (eos-guangxi-1.cmecloud.cn)30 / Anhui China (Huainan)   \ (eos-anhui-1.cmecloud.cn)endpoint> 1Option location_constraint.Location constraint - must match endpoint.Used when creating buckets only.Choose a number from below, or type in your own value.Press Enter to leave empty. 
1 / East China (Suzhou)   \ (wuxi1) 2 / East China (Jinan)   \ (jinan1) 3 / East China (Hangzhou)   \ (ningbo1) 4 / East China (Shanghai-1)   \ (shanghai1) 5 / Central China (Zhengzhou)   \ (zhengzhou1) 6 / Central China (Changsha-1)   \ (hunan1) 7 / Central China (Changsha-2)   \ (zhuzhou1) 8 / South China (Guangzhou-2)   \ (guangzhou1) 9 / South China (Guangzhou-3)   \ (dongguan1)10 / North China (Beijing-1)   \ (beijing1)11 / North China (Beijing-2)   \ (beijing2)12 / North China (Beijing-3)   \ (beijing4)13 / North China (Huhehaote)   \ (huhehaote1)14 / Southwest China (Chengdu)   \ (chengdu1)15 / Southwest China (Chongqing)   \ (chongqing1)16 / Southwest China (Guiyang)   \ (guiyang1)17 / Nouthwest China (Xian)   \ (xian1)18 / Yunnan China (Kunming)   \ (yunnan)19 / Yunnan China (Kunming-2)   \ (yunnan2)20 / Tianjin China (Tianjin)   \ (tianjin1)21 / Jilin China (Changchun)   \ (jilin1)22 / Hubei China (Xiangyan)   \ (hubei1)23 / Jiangxi China (Nanchang)   \ (jiangxi1)24 / Gansu China (Lanzhou)   \ (gansu1)25 / Shanxi China (Taiyuan)   \ (shanxi1)26 / Liaoning China (Shenyang)   \ (liaoning1)27 / Hebei China (Shijiazhuang)   \ (hebei1)28 / Fujian China (Xiamen)   \ (fujian1)29 / Guangxi China (Nanning)   \ (guangxi1)30 / Anhui China (Huainan)   \ (anhui1)location_constraint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)   / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access.   | Granting this on a bucket is generally not recommended.   \ (public-read-write)   / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access.   \ (authenticated-read)   / Object owner gets FULL_CONTROL.acl> privateOption server_side_encryption.The server-side encryption algorithm used when storing this object in S3.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / None   \ () 2 / AES256   \ (AES256)server_side_encryption>Option storage_class.The storage class to use when storing new objects in ChinaMobile.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Default   \ () 2 / Standard storage class   \ (STANDARD) 3 / Archive storage mode   \ (GLACIER) 4 / Infrequent access storage mode   \ (STANDARD_IA)storage_class>Edit advanced config?y) Yesn) No (default)y/n> n--------------------[ChinaMobile]type = s3provider = ChinaMobileaccess_key_id = accesskeyidsecret_access_key = secretaccesskeyendpoint = eos-wuxi-1.cmecloud.cnlocation_constraint = wuxi1acl = private--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

Cloudflare R2

Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.

Here is an example of making a Cloudflare R2 configuration. First run:

rclone config

This will guide you through an interactive setup process.

Note that all buckets are private, and all are stored in the same "auto" region. It is necessary to use Cloudflare workers to share the content of a bucket publicly.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> r2Option Storage.Type of storage to configure.Choose a number from below, or type in your own value....XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Magalu, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi   \ (s3)...Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty....XX / Cloudflare R2 Storage   \ (Cloudflare)...provider> CloudflareOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency.   \ (auto)region> 1Option endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.comEdit advanced config?y) Yesn) No (default)y/n> n--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave your config looking something like:

[r2]
type=s3
provider=Cloudflare
access_key_id=ACCESS_KEY
secret_access_key=SECRET_ACCESS_KEY
region=auto
endpoint=https://ACCOUNT_ID.r2.cloudflarestorage.com
acl=private

Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.

For R2 tokens with the "Object Read & Write" permission, you may also need to add no_check_bucket = true for object uploads to work correctly.

Note that Cloudflare decompresses files uploaded with Content-Encoding: gzip by default, which is a deviation from what AWS does. If this is causing a problem then upload the files with --header-upload "Cache-Control: no-transform".

A consequence of this is that Content-Encoding: gzip will never appear in the metadata on Cloudflare.
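
Putting the workaround above into a command, an upload that stops Cloudflare transforming the content might look like:

rclone copy --header-upload "Cache-Control: no-transform" /path/to/files r2:bucket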

Cubbit DS3

Cubbit Object Storage is a geo-distributed cloud object storage platform.

To connect to Cubbit DS3 you will need an access key and secret key pair. You can follow this guide to retrieve these keys. They will be needed when prompted by rclone config.

The default region corresponds to eu-west-1 and the endpoint has to be specified as s3.cubbit.eu.

Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:

name> cubbit-ds3 (or any name you like)
Storage> s3
provider> Cubbit
env_auth> false
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region> eu-west-1 (or leave empty)
endpoint> s3.cubbit.eu
acl>

The resulting configuration file should look like:

[cubbit-ds3]
type=s3
provider=Cubbit
access_key_id=ACCESS_KEY
secret_access_key=SECRET_KEY
region=eu-west-1
endpoint=s3.cubbit.eu

You can then start using Cubbit DS3 with rclone. For example, to create a new bucket and copy files into it, you can run:

rclone mkdir cubbit-ds3:my-bucket
rclone copy /path/to/files cubbit-ds3:my-bucket

DigitalOcean Spaces

Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the Applications & API page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key.

When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings.

Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:

Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>

The resulting configuration file should look like:

[spaces]
type=s3
provider=DigitalOcean
env_auth=false
access_key_id=YOUR_ACCESS_KEY
secret_access_key=YOUR_SECRET_KEY
region=
endpoint=nyc3.digitaloceanspaces.com
location_constraint=
acl=
server_side_encryption=
storage_class=

Once configured, you can create a new Space and begin copying files. For example:

rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space

Dreamhost

Dreamhost DreamObjects is an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

[dreamobjects]
type=s3
provider=DreamHost
env_auth=false
access_key_id=your_access_key
secret_access_key=your_secret_key
region=
endpoint=objects-us-west-1.dream.io
location_constraint=
acl=private
server_side_encryption=
storage_class=

Exaba

Exaba is an on-premises, S3-compatible storage platform for service providers and large enterprises. It is quick to deploy, with dynamic node management, tenant accounting, and a built-in key management system. It delivers secure, high-performance data storage with flexible, usage-based pricing.

A container version exaba/exaba is free for end-users and on that page you can find instructions on how to set it up. You will need to log into the admin first (on port 9006 by default) to set up the container, then you can use the service on port 9000.

You can also join the exaba support slack if you need more help.

An rclone config walkthrough might look like this but details may vary depending on exactly how you have set up the container.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> exabaOption Storage.Type of storage to configure.Storage> s3Option provider.Choose your S3 provider.provider> ExabaOption env_auth.env_auth> 1Option access_key_id.access_key_id> XXXOption secret_access_key.secret_access_key> YYYOption region.region>Option endpoint.endpoint> http://127.0.0.1:9000/Option location_constraint.location_constraint>Option acl.acl>Edit advanced config?y) Yesn) No (default)y/n> n

And the config generated will end up looking like this:

[exaba]
type=s3
provider=Exaba
access_key_id=XXX
secret_access_key=XXX
endpoint=http://127.0.0.1:9000/

Google Cloud Storage

Google Cloud Storage is an S3-interoperable object storage service from Google Cloud Platform.

To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an HMAC key.

[gs]
type=s3
provider=GCS
access_key_id=your_access_key
secret_access_key=your_secret_key
endpoint=https://storage.googleapis.com

Note that --s3-versions does not work with GCS when it needs to do directory paging. Rclone will return the error:

s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker

This is Google bug #312292516.

Hetzner Object Storage

Here is an example of making a Hetzner Object Storage configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> my-hetznerOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Hetzner Object Storage   \ (Hetzner)[snip]provider> HetznerOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_KEYOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Helsinki   \ (hel1) 2 / Falkenstein   \ (fsn1) 3 / Nuremberg   \ (nbg1)region>Option endpoint.Endpoint for Hetzner Object StorageChoose a number from below, or type in your own value.Press Enter to leave empty. 1 / Helsinki   \ (hel1.your-objectstorage.com) 2 / Falkenstein   \ (fsn1.your-objectstorage.com) 3 / Nuremberg   \ (nbg1.your-objectstorage.com)endpoint>Option location_constraint.Location constraint - must be set to match the Region.Leave blank if not sure. Used when creating buckets only.Enter a value. Press Enter to leave empty.location_constraint>Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)acl>Edit advanced config?y) Yesn) No (default)y/n>Configuration complete.Options:- type: s3- provider: Hetzner- access_key_id: ACCESS_KEY- secret_access_key: SECRET_KEYKeep this "my-hetzner" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d>Current remotes:Name                 Type====                 ====my-hetzner           s3e) Edit existing remoten) New remoted) Delete remoter) Rename remotec) Copy remotes) Set configuration passwordq) Quit confige/n/d/r/c/s/q>

This will leave the config file looking like this.

[my-hetzner]
type=s3
provider=Hetzner
access_key_id=ACCESS_KEY
secret_access_key=SECRET_KEY
region=hel1
endpoint=hel1.your-objectstorage.com
acl=private

Huawei OBS

Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.

OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.

[obs]
type=s3
provider=HuaweiOBS
access_key_id=your-access-key-id
secret_access_key=your-secret-access-key
region=af-south-1
endpoint=obs.af-south-1.myhuaweicloud.com
acl=private

Alternatively, you can configure via the interactive command line:

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> obsOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip] 9 / Huawei Object Storage Service   \ (HuaweiOBS)[snip]provider> 9Option env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> your-access-key-idOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> your-secret-access-keyOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / AF-Johannesburg   \ (af-south-1) 2 / AP-Bangkok   \ (ap-southeast-2)[snip]region> 1Option endpoint.Endpoint for OBS API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / AF-Johannesburg   \ (obs.af-south-1.myhuaweicloud.com) 2 / AP-Bangkok   \ (obs.ap-southeast-2.myhuaweicloud.com)[snip]endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)[snip]acl> 1Edit advanced config?y) Yesn) No (default)y/n>--------------------[obs]type = s3provider = HuaweiOBSaccess_key_id = your-access-key-idsecret_access_key = your-secret-access-keyregion = af-south-1endpoint = obs.af-south-1.myhuaweicloud.comacl = private--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name                 Type====                 ====obs                  s3e) Edit existing remoten) New remoted) Delete remoter) Rename remotec) Copy remotes) Set configuration passwordq) Quit confige/n/d/r/c/s/q> q

IBM COS (S3)

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: http://www.ibm.com/cloud/object-storage

To configure access to IBM COS S3, follow the steps below:

  1. Run rclone config and select n for a new remote.

    2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaultsNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> n
  2. Enter the name for the configuration

    name> <YOUR NAME>
  3. Select "s3" storage.

    Choose a number from below, or type in your own value[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ "s3"[snip]Storage> s3
  4. Select IBM COS as the S3 Storage Provider.

    Choose the S3 provider.Choose a number from below, or type in your own value   1 / Choose this option to configure Storage to AWS S3     \ "AWS"   2 / Choose this option to configure Storage to Ceph Systems     \ "Ceph"   3 /  Choose this option to configure Storage to Dreamhost     \ "Dreamhost"   4 / Choose this option to the configure Storage to IBM COS S3     \ "IBMCOS"   5 / Choose this option to the configure Storage to Minio     \ "Minio"   Provider>4
  5. Enter the Access Key and Secret.

    AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> <>AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> <>
  6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.

    Endpoint for IBM COS S3 API.Specify if using an IBM COS On Premise.Choose a number from below, or type in your own value 1 / US Cross Region Endpoint   \ "s3-api.us-geo.objectstorage.softlayer.net" 2 / US Cross Region Dallas Endpoint   \ "s3-api.dal.us-geo.objectstorage.softlayer.net" 3 / US Cross Region Washington DC Endpoint   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" 4 / US Cross Region San Jose Endpoint   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" 5 / US Cross Region Private Endpoint   \ "s3-api.us-geo.objectstorage.service.networklayer.com" 6 / US Cross Region Dallas Private Endpoint   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" 7 / US Cross Region Washington DC Private Endpoint   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" 8 / US Cross Region San Jose Private Endpoint   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" 9 / US Region East Endpoint   \ "s3.us-east.objectstorage.softlayer.net"10 / US Region East Private Endpoint   \ "s3.us-east.objectstorage.service.networklayer.com"11 / US Region South Endpoint[snip]34 / Toronto Single Site Private Endpoint   \ "s3.tor01.objectstorage.service.networklayer.com"endpoint>1
  7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list; hit enter.

     1 / US Cross Region Standard   \ "us-standard" 2 / US Cross Region Vault   \ "us-vault" 3 / US Cross Region Cold   \ "us-cold" 4 / US Cross Region Flex   \ "us-flex" 5 / US East Region Standard   \ "us-east-standard" 6 / US East Region Vault   \ "us-east-vault" 7 / US East Region Cold   \ "us-east-cold" 8 / US East Region Flex   \ "us-east-flex" 9 / US South Region Standard   \ "us-south-standard"10 / US South Region Vault   \ "us-south-vault"[snip]32 / Toronto Flex   \ "tor01-flex"location_constraint>1
  8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.

    Canned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value      1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS      \ "private"      2  / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS      \ "public-read"      3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS      \ "public-read-write"      4  / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS      \ "authenticated-read"acl> 1
  9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this

    [xxx]
    type=s3
    Provider=IBMCOS
    access_key_id=xxx
    secret_access_key=yyy
    endpoint=s3-api.us-geo.objectstorage.softlayer.net
    location_constraint=us-standard
    acl=private
  10. Execute rclone commands

1) Create a bucket.
   rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
   rclone lsd IBM-COS-XREGION:
   -1 2017-11-08 21:16:22        -1 test
   -1 2018-02-14 20:16:39        -1 newbucket
3) List contents of a bucket.
   rclone ls IBM-COS-XREGION:newbucket
   18685952 test.exe
4) Copy a file from local to remote.
   rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
   rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
   rclone delete IBM-COS-XREGION:newbucket/file.txt

IBM IAM authentication

If using IBM IAM authentication with an IBM API key you need to fill in these additional parameters:

  1. Select false for env_auth

  2. Leave access_key_id and secret_access_key blank

  3. Paste your ibm_api_key

    Option ibm_api_key.IBM API Key to be used to obtain IAM tokenEnter a value of type string. Press Enter for the default (1).ibm_api_key>
  4. Paste your ibm_resource_instance_id

    Option ibm_resource_instance_id.IBM service instance idEnter a value of type string. Press Enter for the default (2).ibm_resource_instance_id>
  5. In advanced settings type true for v2_auth

    Option v2_auth.If true use v2 authentication.If this is false (the default) then rclone will use v4 authentication.If it is set then rclone will use v2 authentication.Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.Enter a boolean value (true or false). Press Enter for the default (true).v2_auth>
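
The resulting config might end up looking something like this sketch (the endpoint is illustrative - use the one for your COS instance and region):

[ibmcos]
type=s3
provider=IBMCOS
endpoint=s3.us-east.cloud-object-storage.appdomain.cloud
ibm_api_key=YOUR_IBM_API_KEY
ibm_resource_instance_id=YOUR_RESOURCE_INSTANCE_ID
v2_auth=true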

IDrive e2

Here is an example of making an IDrive e2 configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> e2Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / IDrive e2   \ (IDrive)[snip]provider> IDriveOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_KEYOption acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)   / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access.   | Granting this on a bucket is generally not recommended.   \ (public-read-write)   / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access.   \ (authenticated-read)   / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access.   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-read)   / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-full-control)acl>Edit advanced config?y) Yesn) No (default)y/n>Configuration complete.Options:- type: s3- provider: IDrive- access_key_id: YOUR_ACCESS_KEY- secret_access_key: YOUR_SECRET_KEY- endpoint: q9d9.la12.idrivee2-5.comKeep this "e2" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

Intercolo Object Storage

Intercolo Object Storage offers GDPR-compliant, transparently priced, S3-compatible cloud storage hosted in Frankfurt, Germany.

Here's an example of making a configuration for Intercolo.

First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> intercoloOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] xx / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]xx / Intercolo Object Storage   \ (Intercolo)[snip]provider> IntercoloOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> falseOption access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany   \ (de-fra)region> 1Option endpoint.Endpoint for Intercolo Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany   \ (de-fra.i3storage.com)endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private) [snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Intercolo- access_key_id: ACCESS_KEY- secret_access_key: SECRET_KEY- region: de-fra- endpoint: de-fra.i3storage.comKeep this "intercolo" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[intercolo]
type=s3
provider=Intercolo
access_key_id=ACCESS_KEY
secret_access_key=SECRET_KEY
region=de-fra
endpoint=de-fra.i3storage.com

IONOS Cloud

IONOS S3 Object Storage is a service offered by IONOS for storing and accessing unstructured data. To connect to the service, you will need an access key and a secret key. These can be found in the Data Center Designer, by selecting Manager resources > Object Storage Key Manager.

Here is an example of a configuration. First, run rclone config. This will walk you through an interactive setup process. Type n to add the new remote, and then enter a name:

Enter name for new remote.name> ionos-fra

Type s3 to choose the connection type:

Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3

Type IONOS:

Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / IONOS Cloud   \ (IONOS)[snip]provider> IONOS

Press Enter to choose the default option, "Enter AWS credentials in the next step":

Option env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>

Enter your Access Key and Secret key. These can be retrieved in the Data Center Designer; click on the menu "Manager resources" / "Object Storage Key Manager".

Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_KEY

Choose the region where your bucket is located:

Option region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany   \ (de) 2 / Berlin, Germany   \ (eu-central-2) 3 / Logrono, Spain   \ (eu-south-2)region> 2

Choose the endpoint from the same region:

Option endpoint.Endpoint for IONOS S3 Object Storage.Specify the endpoint from the same region.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany   \ (s3-eu-central-1.ionoscloud.com) 2 / Berlin, Germany   \ (s3-eu-central-2.ionoscloud.com) 3 / Logrono, Spain   \ (s3-eu-south-2.ionoscloud.com)endpoint> 1

Press Enter to choose the default option or choose the desired ACL setting:

Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL.[snip]acl>

Press Enter to skip the advanced config:

Edit advanced config?y) Yesn) No (default)y/n>

Press Enter to save the configuration, and then q to quit the configuration process:

Configuration complete.Options:- type: s3- provider: IONOS- access_key_id: YOUR_ACCESS_KEY- secret_access_key: YOUR_SECRET_KEY- endpoint: s3-eu-central-1.ionoscloud.comKeep this "ionos-fra" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

Done! Now you can try some commands (for macOS, use ./rclone instead of rclone).

  1. Create a bucket (the name must be unique within the whole IONOS S3)

    rclone mkdir ionos-fra:my-bucket
  2. List available buckets

    rclone lsd ionos-fra:
  3. Copy a file from local to remote

    rclone copy /Users/file.txt ionos-fra:my-bucket
  4. List contents of a bucket

    rclone ls ionos-fra:my-bucket
  5. Copy a file from remote to local

    rclone copy ionos-fra:my-bucket/file.txt .

Leviia Cloud Object Storage

Leviia Object Storage lets you back up and secure your data in a 100% French cloud, independent of GAFAM.

To configure access to Leviia, follow the steps below:

  1. Run rclone config and select n for a new remote.

    rclone configNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> n
  2. Give the name of the configuration. For example, name it 'leviia'.

    name> leviia
  3. Select s3 storage.

    Choose a number from below, or type in your own value[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3
  4. Select the Leviia provider.

    Choose a number from below, or type in your own value1 / Amazon Web Services (AWS) S3   \ "AWS"[snip]15 / Leviia Object Storage   \ (Leviia)[snip]provider> Leviia
  5. Enter your Leviia SecretId and SecretKey.

    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Enter a boolean value (true or false). Press Enter for the default ("false").Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step   \ "false" 2 / Get AWS credentials from the environment (env vars or IAM)   \ "true"env_auth> 1AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").access_key_id> ZnIx.xxxxxxxxxxxxxxxAWS Secret Access Key (password)Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").secret_access_key> xxxxxxxxxxx
  6. Select the endpoint for Leviia.

       / The default endpoint 1 | Leviia.   \ (s3.leviia.com)[snip]endpoint> 1
  7. Choose acl.

    Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)[snip]acl> 1Edit advanced config? (y/n)y) Yesn) No (default)y/n> nRemote config--------------------[leviia]- type: s3- provider: Leviia- access_key_id: ZnIx.xxxxxxx- secret_access_key: xxxxxxxx- endpoint: s3.leviia.com- acl: private--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name                 Type====                 ====leviia                s3

Liara

Here is an example of making a Liara Object Storage configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nname> LiaraType of storage to configure.Choose a number from below, or type in your own value[snip]XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)   \ "s3"[snip]Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step   \ "false" 2 / Get AWS credentials from the environment (env vars or IAM)   \ "true"env_auth> 1AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> YOURACCESSKEYAWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> YOURSECRETACCESSKEYRegion to connect to.Choose a number from below, or type in your own value   / The default endpoint 1 | US Region, Northern Virginia, or Pacific Northwest.   | Leave location constraint empty.   \ "us-east-1"[snip]region>Endpoint for S3 API.Leave blank if using Liara to use the default endpoint for the region.Specify if using an S3 clone such as Ceph.endpoint> storage.iran.liara.spaceCanned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default).   \ "private"[snip]acl>The server-side encryption algorithm used when storing this object in S3.Choose a number from below, or type in your own value 1 / None   \ "" 2 / AES256   \ "AES256"server_side_encryption>The storage class to use when storing objects in S3.Choose a number from below, or type in your own value 1 / Default   \ "" 2 / Standard storage class   \ "STANDARD"storage_class>Remote config--------------------[Liara]env_auth = falseaccess_key_id = YOURACCESSKEYsecret_access_key = YOURSECRETACCESSKEYendpoint = storage.iran.liara.spacelocation_constraint =acl =server_side_encryption =storage_class =--------------------y) Yes this is OKe) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
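
So once set up, for example, to list the contents of a bucket (my-bucket is a placeholder):

rclone ls Liara:my-bucket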

Linode

Here is an example of making a Linode Object Storage configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> linodeOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Linode Object Storage   \ (Linode)[snip]provider> LinodeOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for Linode Object Storage API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Amsterdam (Netherlands), nl-ams-1   \ (nl-ams-1.linodeobjects.com) 2 / Atlanta, GA (USA), us-southeast-1   \ (us-southeast-1.linodeobjects.com) 3 / Chennai (India), in-maa-1   \ (in-maa-1.linodeobjects.com) 4 / Chicago, IL (USA), us-ord-1   \ (us-ord-1.linodeobjects.com) 5 / Frankfurt (Germany), eu-central-1   \ (eu-central-1.linodeobjects.com) 6 / Jakarta (Indonesia), id-cgk-1   \ (id-cgk-1.linodeobjects.com) 7 / London 2 (Great Britain), gb-lon-1   \ (gb-lon-1.linodeobjects.com) 8 / Los Angeles, CA (USA), us-lax-1   \ (us-lax-1.linodeobjects.com) 9 / Madrid (Spain), es-mad-1   \ (es-mad-1.linodeobjects.com)10 / Melbourne (Australia), au-mel-1   \ (au-mel-1.linodeobjects.com)11 / Miami, FL (USA), us-mia-1   \ (us-mia-1.linodeobjects.com)12 / Milan (Italy), it-mil-1   \ (it-mil-1.linodeobjects.com)13 / Newark, NJ (USA), us-east-1   \ (us-east-1.linodeobjects.com)14 / Osaka (Japan), jp-osa-1   \ (jp-osa-1.linodeobjects.com)15 / Paris (France), fr-par-1   \ (fr-par-1.linodeobjects.com)16 / São Paulo (Brazil), br-gru-1   \ (br-gru-1.linodeobjects.com)17 / Seattle, WA (USA), us-sea-1   \ (us-sea-1.linodeobjects.com)18 / Singapore, ap-south-1   \ (ap-south-1.linodeobjects.com)19 / Singapore 2, sg-sin-1   \ (sg-sin-1.linodeobjects.com)20 / Stockholm (Sweden), se-sto-1   \ (se-sto-1.linodeobjects.com)21 / Washington, DC, (USA), us-iad-1   \ (us-iad-1.linodeobjects.com)endpoint> 5Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   
\ (private)[snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Linode- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: eu-central-1.linodeobjects.comKeep this "linode" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com
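
For example, to make a bucket and copy files into it (my-bucket is a placeholder name):

rclone mkdir linode:my-bucket
rclone copy /path/to/files linode:my-bucket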

Magalu

Here is an example of making a Magalu Object Storage configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> magaluOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...Magalu, ...and others   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Magalu Object Storage   \ (Magalu)[snip]provider> MagaluOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for Magalu Object Storage API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / São Paulo, SP (BR), br-se1   \ (br-se1.magaluobjects.com) 2 / Fortaleza, CE (BR), br-ne1   \ (br-ne1.magaluobjects.com)endpoint> 2Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)[snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: magalu- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: br-ne1.magaluobjects.comKeep this "magalu" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
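
So once set up, for example, to list your buckets:

rclone lsd magalu: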

MEGA S4

MEGA S4 Object Storage is an S3-compatible object storage system. It has a single pricing tier with no additional charges for data transfers or API requests, and it is included in existing Pro plans.

Here is an example of making a configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> megas4Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / MEGA S4 Object Storage   \ (Mega)[snip]provider> MegaOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Mega S4 eu-central-1 (Amsterdam)   \ (s3.eu-central-1.s4.mega.io) 2 / Mega S4 eu-central-2 (Bettembourg)   \ (s3.eu-central-2.s4.mega.io) 3 / Mega S4 ca-central-1 (Montreal)   \ (s3.ca-central-1.s4.mega.io) 4 / Mega S4 ca-west-1 (Vancouver)   \ (s3.ca-west-1.s4.mega.io)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Mega- access_key_id: XXX- secret_access_key: XXX- endpoint: s3.eu-central-1.s4.mega.ioKeep this "megas4" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[megas4]
type = s3
provider = Mega
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.eu-central-1.s4.mega.io
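
So once set up, for example, to copy files into a bucket (my-bucket is a placeholder):

rclone copy /path/to/files megas4:my-bucket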

Minio

Minio is an object storage server built for cloud application developers and DevOps.

It is very easy to install and provides an S3-compatible server which can be used by rclone.

To use it, install Minio following the instructions here.

When it configures itself, Minio will print something like this:

Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide

These details need to go into rclone config like this. Note that it is important to put the region in as stated above.

env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>

Which makes the config file look like this:

[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =

So once set up, for example, to copy files into a bucket:

rclone copy /path/to/files minio:bucket

Netease NOS

For Netease NOS, configure as per the configurator rclone config, setting the provider to Netease. This will automatically set force_path_style = false, which is necessary for it to run properly.
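
As a rough sketch, the resulting config file entry looks like those of the other providers above; the access keys and endpoint here are placeholders, and you should replace the endpoint with the NOS endpoint for your region:

[nos]
type = s3
provider = Netease
access_key_id = XXX
secret_access_key = XXX
endpoint = YOUR_NOS_ENDPOINT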

Outscale

OUTSCALE Object Storage (OOS)is an enterprise-grade, S3-compatible storage service provided by OUTSCALE,a brand of Dassault Systèmes. For more information about OOS, see theofficial documentation.

Here is an example of an OOS configuration that you can paste into your rclone configuration file:

[outscale]
type = s3
provider = Outscale
env_auth = false
access_key_id = ABCDEFGHIJ0123456789
secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = eu-west-2
endpoint = oos.eu-west-2.outscale.com
acl = private

You can also run rclone config to go through the interactive setup process:

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> n
Enter name for new remote.name> outscale
Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others   \ (s3)[snip]Storage> s3
Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / OUTSCALE Object Storage (OOS)   \ (Outscale)[snip]provider> Outscale
Option env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>
Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ABCDEFGHIJ0123456789
Option secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Option region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Paris, France   \ (eu-west-2) 2 / New Jersey, USA   \ (us-east-2) 3 / California, USA   \ (us-west-1) 4 / SecNumCloud, Paris, France   \ (cloudgouv-eu-west-1) 5 / Tokyo, Japan   \ (ap-northeast-1)region> 1
Option endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Outscale EU West 2 (Paris)   \ (oos.eu-west-2.outscale.com) 2 / Outscale US East 2 (New Jersey)   \ (oos.us-east-2.outscale.com) 3 / Outscale US West 1 (California)   \ (oos.us-west-1.outscale.com) 4 / Outscale SecNumCloud (Paris)   \ (oos.cloudgouv-eu-west-1.outscale.com) 5 / Outscale AP Northeast 1 (Japan)   \ (oos.ap-northeast-1.outscale.com)endpoint> 1
Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)[snip]acl> 1
Edit advanced config?y) Yesn) No (default)y/n> n
Configuration complete.Options:- type: s3- provider: Outscale- access_key_id: ABCDEFGHIJ0123456789- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX- endpoint: oos.eu-west-2.outscale.comKeep this "outscale" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y
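
Once the remote is saved you can, for example, list your buckets:

rclone lsd outscale: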

OVHcloud

OVHcloud Object Storage is an S3-compatible general-purpose object storage platform available in all OVHcloud regions. To use the platform, you will need an access key and a secret key. To learn more about it and how to interact with the platform, take a look at the documentation.

Here is an example of making an OVHcloud Object Storage configuration with rclone config:

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> ovhcloud-rbxOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[...] XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others   \ (s3)[...]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[...]XX / OVHcloud Object Storage   \ (OVHcloud)[...]provider> OVHcloudOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> my_accessOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> my_secretOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Gravelines, France   \ (gra) 2 / Roubaix, France   \ (rbx) 3 / Strasbourg, France   \ (sbg) 4 / Paris, France (3AZ)   \ (eu-west-par) 5 / Frankfurt, Germany   \ (de) 6 / London, United Kingdom   \ (uk) 7 / Warsaw, Poland   \ (waw) 8 / Beauharnois, Canada   \ (bhs) 9 / Toronto, Canada   \ (ca-east-tor)10 / Singapore   \ (sgp)11 / Sydney, Australia   \ (ap-southeast-syd)12 / Mumbai, India   \ (ap-south-mum)13 / Vint Hill, Virginia, USA   \ (us-east-va)14 / Hillsboro, Oregon, USA   \ (us-west-or)15 / Roubaix, France (Cold Archive)   \ (rbx-archive)region> 2Option endpoint.Endpoint for OVHcloud Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 
1 / OVHcloud Gravelines, France   \ (s3.gra.io.cloud.ovh.net) 2 / OVHcloud Roubaix, France   \ (s3.rbx.io.cloud.ovh.net) 3 / OVHcloud Strasbourg, France   \ (s3.sbg.io.cloud.ovh.net) 4 / OVHcloud Paris, France (3AZ)   \ (s3.eu-west-par.io.cloud.ovh.net) 5 / OVHcloud Frankfurt, Germany   \ (s3.de.io.cloud.ovh.net) 6 / OVHcloud London, United Kingdom   \ (s3.uk.io.cloud.ovh.net) 7 / OVHcloud Warsaw, Poland   \ (s3.waw.io.cloud.ovh.net) 8 / OVHcloud Beauharnois, Canada   \ (s3.bhs.io.cloud.ovh.net) 9 / OVHcloud Toronto, Canada   \ (s3.ca-east-tor.io.cloud.ovh.net)10 / OVHcloud Singapore   \ (s3.sgp.io.cloud.ovh.net)11 / OVHcloud Sydney, Australia   \ (s3.ap-southeast-syd.io.cloud.ovh.net)12 / OVHcloud Mumbai, India   \ (s3.ap-south-mum.io.cloud.ovh.net)13 / OVHcloud Vint Hill, Virginia, USA   \ (s3.us-east-va.io.cloud.ovh.us)14 / OVHcloud Hillsboro, Oregon, USA   \ (s3.us-west-or.io.cloud.ovh.us)15 / OVHcloud Roubaix, France (Cold Archive)   \ (s3.rbx-archive.io.cloud.ovh.net)endpoint> 2Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)   / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access.   | Granting this on a bucket is generally not recommended.   \ (public-read-write)   / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access.   \ (authenticated-read)   / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access.   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-read)   / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-full-control)acl> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: OVHcloud- access_key_id: my_access- secret_access_key: my_secret- region: rbx- endpoint: s3.rbx.io.cloud.ovh.net- acl: privateKeep this "ovhcloud-rbx" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

Your configuration file should now look like this:

[ovhcloud-rbx]
type = s3
provider = OVHcloud
access_key_id = my_access
secret_access_key = my_secret
region = rbx
endpoint = s3.rbx.io.cloud.ovh.net
acl = private
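
So once set up, for example, to make a bucket and copy files into it (my-bucket is a placeholder):

rclone mkdir ovhcloud-rbx:my-bucket
rclone copy /path/to/files ovhcloud-rbx:my-bucket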

Petabox

Here is an example of making a Petabox configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nEnter name for new remote.name> My Petabox StorageOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ "s3"[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Petabox Object Storage   \ (Petabox)[snip]provider> PetaboxOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_ACCESS_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia)   \ (us-east-1) 2 / Europe (Frankfurt)   \ (eu-central-1) 3 / Asia Pacific (Singapore)   \ (ap-southeast-1) 4 / Middle East (Bahrain)   \ (me-south-1) 5 / South America (São Paulo)   \ (sa-east-1)region> 1Option endpoint.Endpoint for Petabox S3 Object Storage.Specify the endpoint from the same region.Choose a number from below, or type in your own value. 1 / US East (N. Virginia)   \ (s3.petabox.io) 2 / US East (N. Virginia)   \ (s3.us-east-1.petabox.io) 3 / Europe (Frankfurt)   \ (s3.eu-central-1.petabox.io) 4 / Asia Pacific (Singapore)   \ (s3.ap-southeast-1.petabox.io) 5 / Middle East (Bahrain)   \ (s3.me-south-1.petabox.io) 6 / South America (São Paulo)   \ (s3.sa-east-1.petabox.io)endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)   / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access.   | Granting this on a bucket is generally not recommended.   \ (public-read-write)   / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access.   \ (authenticated-read)   / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access.   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-read)   / Both the object owner and the bucket owner get FULL_CONTROL over the object. 
6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.   \ (bucket-owner-full-control)acl> 1Edit advanced config?y) Yesn) No (default)y/n> NoConfiguration complete.Options:- type: s3- provider: Petabox- access_key_id: YOUR_ACCESS_KEY_ID- secret_access_key: YOUR_SECRET_ACCESS_KEY- region: us-east-1- endpoint: s3.petabox.ioKeep this "My Petabox Storage" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[My Petabox Storage]
type = s3
provider = Petabox
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
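
Note that because this remote name contains spaces, it must be quoted on the command line, for example to list your buckets:

rclone lsd "My Petabox Storage:"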

Pure Storage FlashBlade

Pure Storage FlashBlade is a high-performance S3-compatible object store.

FlashBlade supports most modern S3 features including:

  • ListObjectsV2
  • Multipart uploads with AWS-compatible ETags
  • Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support (Purity//FB 4.4.2+)
  • Object versioning and lifecycle management
  • Virtual hosted-style requests (requires DNS configuration)

To configure rclone for Pure Storage FlashBlade:

First run:

rclone config

This will guide you through an interactive setup process:

No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> flashbladeOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip] 9 / Pure Storage FlashBlade Object Storage   \ (FlashBlade)[snip]provider> FlashBladeOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://s3.flashblade.example.comEdit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: FlashBlade- access_key_id: ACCESS_KEY_ID- secret_access_key: SECRET_ACCESS_KEY- endpoint: https://s3.flashblade.example.comKeep this "flashblade" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This results in the following configuration being stored in ~/.config/rclone/rclone.conf:

[flashblade]
type = s3
provider = FlashBlade
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
endpoint = https://s3.flashblade.example.com

Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a FlashBlade data VIP. For example, if your endpoint is https://s3.flashblade.example.com, then bucket-name.s3.flashblade.example.com should also resolve to the data VIP.
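
If you have DNS set up as above and want rclone to send virtual hosted-style requests, this can be done by setting force_path_style = false in the remote. A sketch building on the example configuration (rclone otherwise defaults to path-style requests):

[flashblade]
type = s3
provider = FlashBlade
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
endpoint = https://s3.flashblade.example.com
force_path_style = false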

Qiniu Cloud Object Storage (Kodo)

Qiniu Cloud Object Storage (Kodo) is an object storage service built on Qiniu's independently developed core technology. Proven by extensive customer use, it holds a leading position in its market and can be widely applied to mass data management.

To configure access to Qiniu Kodo, follow the steps below:

  1. Run rclone config and select n for a new remote.

    rclone configNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> n
  2. Give the name of the configuration. For example, name it 'qiniu'.

    name> qiniu
  3. Select s3 storage.

    Choose a number from below, or type in your own value[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3
  4. Select the Qiniu provider.

    Choose a number from below, or type in your own value1 / Amazon Web Services (AWS) S3   \ "AWS"[snip]22 / Qiniu Object Storage (Kodo)   \ (Qiniu)[snip]provider> Qiniu
  5. Enter your access key ID and secret access key for Qiniu Kodo.

    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Enter a boolean value (true or false). Press Enter for the default ("false").Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step   \ "false" 2 / Get AWS credentials from the environment (env vars or IAM)   \ "true"env_auth> 1AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").access_key_id> AKIDxxxxxxxxxxAWS Secret Access Key (password)Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").secret_access_key> xxxxxxxxxxx
  6. Select the endpoint for Qiniu Kodo. These are the standard endpoints for the different regions.

       / The default endpoint - a good choice if you are unsure. 1 | East China Region 1.   | Needs location constraint cn-east-1.   \ (cn-east-1)   / East China Region 2. 2 | Needs location constraint cn-east-2.   \ (cn-east-2)   / North China Region 1. 3 | Needs location constraint cn-north-1.   \ (cn-north-1)   / South China Region 1. 4 | Needs location constraint cn-south-1.   \ (cn-south-1)   / North America Region. 5 | Needs location constraint us-north-1.   \ (us-north-1)   / Southeast Asia Region 1. 6 | Needs location constraint ap-southeast-1.   \ (ap-southeast-1)   / Northeast Asia Region 1. 7 | Needs location constraint ap-northeast-1.   \ (ap-northeast-1)[snip]endpoint> 1Option endpoint.Endpoint for Qiniu Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / East China Endpoint 1   \ (s3-cn-east-1.qiniucs.com) 2 / East China Endpoint 2   \ (s3-cn-east-2.qiniucs.com) 3 / North China Endpoint 1   \ (s3-cn-north-1.qiniucs.com) 4 / South China Endpoint 1   \ (s3-cn-south-1.qiniucs.com) 5 / North America Endpoint 1   \ (s3-us-north-1.qiniucs.com) 6 / Southeast Asia Endpoint 1   \ (s3-ap-southeast-1.qiniucs.com) 7 / Northeast Asia Endpoint 1   \ (s3-ap-northeast-1.qiniucs.com)endpoint> 1Option location_constraint.Location constraint - must be set to match the Region.Used when creating buckets only.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / East China Region 1   \ (cn-east-1) 2 / East China Region 2   \ (cn-east-2) 3 / North China Region 1   \ (cn-north-1) 4 / South China Region 1   \ (cn-south-1) 5 / North America Region 1   \ (us-north-1) 6 / Southeast Asia Region 1   \ (ap-southeast-1) 7 / Northeast Asia Region 1   \ (ap-northeast-1)location_constraint> 1
  7. Choose acl and storage class.

    Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)   / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access.   \ (public-read)[snip]acl> 2The storage class to use when storing new objects in Tencent COS.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Standard storage class   \ (STANDARD) 2 / Infrequent access storage mode   \ (LINE) 3 / Archive storage mode   \ (GLACIER) 4 / Deep archive storage mode   \ (DEEP_ARCHIVE)[snip]storage_class> 1Edit advanced config? (y/n)y) Yesn) No (default)y/n> nRemote config--------------------[qiniu]- type: s3- provider: Qiniu- access_key_id: xxx- secret_access_key: xxx- region: cn-east-1- endpoint: s3-cn-east-1.qiniucs.com- location_constraint: cn-east-1- acl: public-read- storage_class: STANDARD--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name                 Type====                 ====qiniu                s3
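
Once configured, you can, for example, copy files into a bucket (my-bucket is a placeholder name):

    rclone copy /path/to/files qiniu:my-bucket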

FileLu S5

FileLu S5 Object Storage is an S3-compatible object storage system. It provides multiple region options (Global, US-East, EU-Central, AP-Southeast, and ME-Central) while using a single endpoint (s5lu.com). FileLu S5 is designed for scalability, security, and simplicity, with predictable pricing and no hidden charges for data transfers or API requests.

Here is an example of making a configuration. First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> s5luOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / FileLu S5 Object Storage   \ (FileLu)[snip]provider> FileLuOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Global   \ (global) 2 / North America (US-East)   \ (us-east) 3 / Europe (EU-Central)   \ (eu-central) 4 / Asia Pacific (AP-Southeast)   \ (ap-southeast) 5 / Middle East (ME-Central)   \ (me-central)region> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: FileLu- access_key_id: XXX- secret_access_key: XXX- endpoint: s5lu.comKeep this "s5lu" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

This will leave the config file looking like this.

[s5lu]
type = s3
provider = FileLu
access_key_id = XXX
secret_access_key = XXX
endpoint = s5lu.com
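
So once set up, for example, to copy files into a bucket (my-bucket is a placeholder):

rclone copy /path/to/files s5lu:my-bucket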

Rabata

Rabata is an S3-compatible secure cloud storage service that offers flat, transparent pricing (no API request fees) while supporting standard S3 APIs. It is suitable for backup, application storage, media workflows, and archive use cases.

Server-side copy is not implemented in Rabata, which also means that the modification time of objects cannot be updated.

Rclone config:

rclone configNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> RabataOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Rabata Cloud Storage   \ (Rabata)[snip]provider> RabataOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia)   \ (us-east-1) 2 / EU (Ireland)   \ (eu-west-1) 3 / EU (London)   \ (eu-west-2)region> 3Option endpoint.Endpoint for Rabata Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia)   \ (s3.us-east-1.rabata.io) 2 / EU West (Ireland)   \ (s3.eu-west-1.rabata.io) 3 / EU West (London)   \ (s3.eu-west-2.rabata.io)endpoint> 3Option location_constraint.location where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia)   \ (us-east-1) 2 / EU (Ireland)   \ (eu-west-1) 3 / EU (London)   \ (eu-west-2)location_constraint> 3Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Rabata- access_key_id: ACCESS_KEY_ID- secret_access_key: SECRET_ACCESS_KEY- region: eu-west-2- endpoint: s3.eu-west-2.rabata.io- location_constraint: eu-west-2Keep this "rabata" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name                 Type====                 ====rabata               s3
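
With the remote saved you can, for example, copy files into a bucket (my-bucket is a placeholder):

rclone copy /path/to/files Rabata:my-bucket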

RackCorp

RackCorp Object Storage is an S3-compatible object storage platform from your friendly cloud provider RackCorp. The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty.

Before you can use RackCorp Object Storage, you'll need to sign up for an account on our portal. Next you can create an access key, a secret key and buckets, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config asks for your access_key_id and secret_access_key.

Your config should end up looking a bit like this:

[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
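
With that in place you can, for example, list the contents of a bucket (my-bucket is a placeholder):

rclone ls RCS3-demo-config:my-bucket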

Rclone Serve S3

Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.

For example, to serve remote:path over S3, run the server like this:

rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path

This will be compatible with an rclone remote which is defined like this:

[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false

Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
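
While the server is running, you can then use the serves3 remote like any other S3 remote, for example to list what is being served:

rclone lsd serves3: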

Scaleway

Scaleway's Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through the API, the CLI, or any S3-compatible tool.

Scaleway provides an S3 interface which can be configured for use with rclone like this:

[scaleway]
type = s3
provider = Scaleway
env_auth = false
endpoint = s3.nl-ams.scw.cloud
access_key_id = SCWXXXXXXXXXXXXXX
secret_access_key = 1111111-2222-3333-44444-55555555555555
region = nl-ams
location_constraint = nl-ams
acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M

Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back afterwards; you will need to restore them to the "STANDARD" storage class first before being able to read them (see the "restore" section above).
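
As a sketch, uploading straight to Glacier from the command line and later queueing a restore might look like this (bucket and path names are placeholders; restore uses the standard rclone S3 backend command with its lifetime option in days):

rclone copy --s3-storage-class GLACIER /path/to/archive scaleway:my-bucket
rclone backend restore -o lifetime=1 scaleway:my-bucket/archive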

Seagate Lyve Cloud

Seagate Lyve Cloud is an S3-compatible object storage platform from Seagate intended for enterprise use.

Here is a config run-through for a remote called remote - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.

$ rclone configNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> remote

Choose the s3 backend

Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...   \ (s3)[snip]Storage> s3

Choose LyveCloud as the S3 provider

Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Seagate Lyve Cloud   \ (LyveCloud)[snip]provider> LyveCloud

Take the default (just press Enter) to enter the access key and secret in the config file.

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth>
AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXX
AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YYY

Leave region blank

Region to connect to.Leave blank if you are using an S3 clone and you don't have a region.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Use this if unsure. 1 | Will use v4 signatures and an empty region.   \ ()   / Use this only if v4 signatures don't work. 2 | E.g. pre Jewel/v10 CEPH.   \ (other-v2-signature)region>

Enter your Lyve Cloud endpoint. This field cannot be left empty.

Endpoint for Lyve Cloud S3 API.Required when using an S3 clone.Please type in your LyveCloud endpoint.Examples:- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)- s3.eu-west-1.{account_name}.lyve.seagate.com (EU West 1 - Ireland)Enter a value.endpoint> s3.us-west-1.global.lyve.seagate.com

Leave location constraint blank

Location constraint - must be set to match the Region.Leave blank if not sure. Used when creating buckets only.Enter a value. Press Enter to leave empty.location_constraint>

Choose default ACL (private).

Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty.   / Owner gets FULL_CONTROL. 1 | No one else has access rights (default).   \ (private)[snip]acl>

And the config file should end up looking like this:

[remote]
type = s3
provider = LyveCloud
access_key_id = XXX
secret_access_key = YYY
endpoint = s3.us-east-1.lyvecloud.seagate.com
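
So once set up, for example, to copy files into a bucket (my-bucket is a placeholder):

rclone copy /path/to/files remote:my-bucket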

SeaweedFS

SeaweedFS is a distributed storage system for blobs, objects, files, and data lakes, with O(1) disk seeks and a scalable file metadata store. It has an S3-compatible object storage interface. SeaweedFS can also act as a gateway to a remote S3-compatible object store, caching data and metadata with asynchronous write-back for fast local speed and minimal access cost.

Assuming SeaweedFS is configured with weed shell as follows:

> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
  "identities": [
    {
      "name": "me",
      "credentials": [
        {
          "accessKey": "any",
          "secretKey": "any"
        }
      ],
      "actions": [
        "Read:foo",
        "Write:foo",
        "List:foo",
        "Tagging:foo",
        "Admin:foo"
      ]
    }
  ]
}

To use rclone with SeaweedFS, the above configuration should end up with something like this in your config:

[seaweedfs_s3]
type = s3
provider = SeaweedFS
access_key_id = any
secret_access_key = any
endpoint = localhost:8333

So once set up, for example, to copy files into a bucket:

rclone copy /path/to/files seaweedfs_s3:foo

Selectel

Selectel Cloud Storage is an S3-compatible storage system which features triple-redundancy storage, automatic scaling, high availability and a comprehensive IAM system.

Selectel have a section on their website for configuring rclone which shows how to make the right API keys.

From rclone v1.69 Selectel is a supported provider - please choose the Selectel provider type.

Note that you should use "vHosted" access for the buckets (which isthe recommended default), not "path style".

You can use rclone config to make a new provider like this:

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> selectelOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Selectel Object Storage   \ (Selectel)[snip]provider> SelectelOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / St. Petersburg   \ (ru-1)region> 1Option endpoint.Endpoint for Selectel Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Saint Petersburg   \ (s3.ru-1.storage.selcloud.ru)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Selectel- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- region: ru-1- endpoint: s3.ru-1.storage.selcloud.ruKeep this "selectel" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

And your config should end up looking like this:

[selectel]
type = s3
provider = Selectel
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = ru-1
endpoint = s3.ru-1.storage.selcloud.ru
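
Once configured you can, for example, list your buckets:

rclone lsd selectel: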

Servercore

Servercore Object Storage is an S3-compatible object storage system that provides scalable and secure storage solutions for businesses of all sizes.

rclone config example:

No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> servercoreOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Servercore Object Storage   \ (Servercore)[snip]provider> ServercoreOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your is data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / St. Petersburg   \ (ru-1) 2 / Moscow   \ (gis-1) 3 / Moscow   \ (ru-7) 4 / Tashkent, Uzbekistan   \ (uz-2) 5 / Almaty, Kazakhstan   \ (kz-1)region> 1Option endpoint.Endpoint for Servercore Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Saint Petersburg   \ (s3.ru-1.storage.selcloud.ru) 2 / Moscow   \ (s3.gis-1.storage.selcloud.ru) 3 / Moscow   \ (s3.ru-7.storage.selcloud.ru) 4 / Tashkent, Uzbekistan   \ (s3.uz-2.srvstorage.uz) 5 / Almaty, Kazakhstan   \ (s3.kz-1.srvstorage.kz)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Servercore- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- region: ru-1- endpoint: s3.ru-1.storage.selcloud.ruKeep this "servercore" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y
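
And your config should end up looking like this:

[servercore]
type = s3
provider = Servercore
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = ru-1
endpoint = s3.ru-1.storage.selcloud.ru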

Spectra Logic

Spectra Logic is an on-prem S3-compatible object storage gateway that exposes local object storage and policy-tiers data to Spectra tape and public clouds under a single namespace for backup and archiving.

The S3-compatible gateway is configured using rclone config with a type of s3 and a provider name of SpectraLogic. Here is an example run of the configurator.

No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> spectralogicOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ...   \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / SpectraLogic BlackPearl   \ (SpectraLogic)[snip]provider> SpectraLogicOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://bp.example.comEdit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: SpectraLogic- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: https://bp.example.comKeep this "spectratest" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> y

And your config should end up looking like this:

[spectratest]
type = s3
provider = SpectraLogic
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = https://bp.example.com
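
So once set up, for example, to copy files into a bucket (my-bucket is a placeholder):

rclone copy /path/to/files spectratest:my-bucket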

Storj

Storj is a decentralized cloud storage service which can be used through its native protocol or an S3-compatible gateway.

The S3-compatible gateway is configured using rclone config with a type of s3 and a provider name of Storj. Here is an example run of the configurator.

Type of storage to configure.Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step.   \ (false) 2 / Get AWS credentials from the environment (env vars or IAM).   \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXX (as shown when creating the access grant)Option secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXX (as shown when creating the access grant)Option endpoint.Endpoint of the Shared Gateway.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / EU1 Shared Gateway   \ (gateway.eu1.storjshare.io) 2 / US1 Shared Gateway   \ (gateway.us1.storjshare.io) 3 / Asia-Pacific Shared Gateway   \ (gateway.ap1.storjshare.io)endpoint> 1 (as shown when creating the access grant)Edit advanced config?y) Yesn) No (default)y/n> n

Note that s3 credentials are generated when you create an access grant.
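
Putting those answers together, the resulting config file entry should look something like the following (the remote name storj-gateway is illustrative, and the keys are the ones shown when creating the access grant):

[storj-gateway]
type = s3
provider = Storj
access_key_id = XXXX
secret_access_key = XXXX
endpoint = gateway.eu1.storjshare.io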

Backend quirks

  • --chunk-size is forced to be 64 MiB or greater. This will use more memory than the default of 5 MiB.
  • Server side copy is disabled as it isn't currently supported in the gateway.
  • GetTier and SetTier are not supported.

Backend bugs

Due to issue #39, uploading multipart files via the S3 gateway causes them to lose their metadata. For rclone's purposes this means that the modification time is not stored, nor is any MD5SUM (if one is available from the source).

This has the following consequences:

  • Using rclone rcat will fail as the metadata doesn't match after upload
  • Uploading files with rclone mount will fail for the same reason
    • This can be worked around by using --vfs-cache-mode writes or --vfs-cache-mode full or setting --s3-upload-cutoff large
  • Files uploaded via a multipart upload won't have their modtimes stored
    • This will mean that rclone sync will likely keep trying to upload files bigger than --s3-upload-cutoff
    • This can be worked around with --checksum or --size-only or setting --s3-upload-cutoff large
    • The maximum value for --s3-upload-cutoff is 5 GiB though

One general purpose workaround is to set --s3-upload-cutoff 5G. This means that rclone will upload files smaller than 5 GiB as single parts. Note that this can be set in the config file with upload_cutoff = 5G or configured in the advanced settings. If you regularly transfer files larger than 5G then using --checksum or --size-only in rclone sync is the recommended workaround.
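
For example, assuming a remote named remote and a bucket named bucket (both illustrative), the two workarounds look like this on the command line:

rclone sync --s3-upload-cutoff 5G /home/local/directory remote:bucket
rclone sync --checksum /home/local/directory remote:bucket

The first keeps uploads below the cutoff as single parts so their metadata survives; the second stops rclone sync from endlessly retrying files whose modtimes were lost.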

Comparison with the native protocol

Use the native protocol to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1 GB upload will result in 2.68 GB of data being uploaded to storage nodes across the network.

Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1 GB upload will result in only 1 GB of data being uploaded to storage nodes across the network.

For a more detailed comparison please check the documentation of the storj backend.

Synology C2 Object Storage

Synology C2 Object Storage provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.

The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of Synology. Here is an example run of the configurator.

First run:

rclone config

This will guide you through an interactive setup process.

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> syno
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
24 / Synology C2 Object Storage
   \ (Synology)
provider> Synology
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Region where your data is stored.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Europe Region 1
   \ (eu-001)
 2 / Europe Region 2
   \ (eu-002)
 3 / US Region 1
   \ (us-001)
 4 / US Region 2
   \ (us-002)
 5 / Asia (Taiwan)
   \ (tw-001)
region> 1
Option endpoint.
Endpoint for Synology C2 Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / EU Endpoint 1
   \ (eu-001.s3.synologyc2.net)
 2 / US Endpoint 1
   \ (us-001.s3.synologyc2.net)
 3 / TW Endpoint 1
   \ (tw-001.s3.synologyc2.net)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
Edit advanced config? (y/n)
y) Yes
n) No
y/n> y
Option no_check_bucket.
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucket
creation permissions. Before v1.52.0 this would have passed silently
due to a bug.
Enter a boolean value (true or false). Press Enter for the default (true).
no_check_bucket> true
Configuration complete.
Options:
- type: s3
- provider: Synology
- region: eu-001
- endpoint: eu-001.s3.synologyc2.net
- no_check_bucket: true
Keep this "syno" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
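
Based on the answers above, the config file entry should end up looking something like this (the keys are the ones entered at the prompts):

[syno]
type = s3
provider = Synology
access_key_id = accesskeyid
secret_access_key = secretaccesskey
region = eu-001
endpoint = eu-001.s3.synologyc2.net
no_check_bucket = true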

Tencent COS

Tencent Cloud Object Storage (COS) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, convenient, low-latency and low-cost, and scales to massive amounts of data.

To configure access to Tencent COS, follow the steps below:

  1. Run rclone config and select n for a new remote.

    rclone config
    No remotes found, make a new one?
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
  2. Give the name of the configuration. For example, name it 'cos'.

    name> cos
  3. Select s3 storage.

    Choose a number from below, or type in your own value
    [snip]
    XX / Amazon S3 Compliant Storage Providers including AWS, ...
       \ "s3"
    [snip]
    Storage> s3
  4. Select the TencentCOS provider.

    Choose a number from below, or type in your own value
    1 / Amazon Web Services (AWS) S3
       \ "AWS"
    [snip]
    11 / Tencent Cloud Object Storage (COS)
       \ "TencentCOS"
    [snip]
    provider> TencentCOS
  5. Enter your Tencent Cloud SecretId and SecretKey.

    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    Only applies if access_key_id and secret_access_key is blank.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
     1 / Enter AWS credentials in the next step
       \ "false"
     2 / Get AWS credentials from the environment (env vars or IAM)
       \ "true"
    env_auth> 1
    AWS Access Key ID.
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    access_key_id> AKIDxxxxxxxxxx
    AWS Secret Access Key (password)
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    secret_access_key> xxxxxxxxxxx
  6. Select an endpoint for Tencent COS. These are the standard endpoints for the different regions.

     1 / Beijing Region.
       \ "cos.ap-beijing.myqcloud.com"
     2 / Nanjing Region.
       \ "cos.ap-nanjing.myqcloud.com"
     3 / Shanghai Region.
       \ "cos.ap-shanghai.myqcloud.com"
     4 / Guangzhou Region.
       \ "cos.ap-guangzhou.myqcloud.com"
    [snip]
    endpoint> 4
  7. Choose acl and storage class.

    Note that this ACL is applied when server-side copying objects as S3
    doesn't copy the ACL from the source but rather writes a fresh one.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Owner gets Full_CONTROL. No one else has access rights (default).
       \ "default"
    [snip]
    acl> 1
    The storage class to use when storing new objects in Tencent COS.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Default
       \ ""
    [snip]
    storage_class> 1
    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n> n
    Remote config
    --------------------
    [cos]
    type = s3
    provider = TencentCOS
    env_auth = false
    access_key_id = xxx
    secret_access_key = xxx
    endpoint = cos.ap-guangzhou.myqcloud.com
    acl = default
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d> y
    Current remotes:

    Name                 Type
    ====                 ====
    cos                  s3
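
You can then check the new remote works, for example by listing your buckets and copying in some files (the bucket name is illustrative):

rclone lsd cos:
rclone copy /home/local/directory cos:test-bucket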

Wasabi

Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use withrclone like this.

No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y

This will leave the config file looking like this.

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
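
If you prefer not to touch the config file at all, the same remote can be sketched with rclone's per-remote environment variables, where any option can be set as RCLONE_CONFIG_<NAME>_<OPTION>. The remote name mywasabi below is illustrative:

export RCLONE_CONFIG_MYWASABI_TYPE=s3
export RCLONE_CONFIG_MYWASABI_PROVIDER=Wasabi
export RCLONE_CONFIG_MYWASABI_ACCESS_KEY_ID=YOURACCESSKEY
export RCLONE_CONFIG_MYWASABI_SECRET_ACCESS_KEY=YOURSECRETACCESSKEY
export RCLONE_CONFIG_MYWASABI_ENDPOINT=s3.wasabisys.com

rclone lsd mywasabi: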

Zata Object Storage

Zata Object Storage provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs.

First run:

rclone config
This will guide you through an interactive setup process:

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
Enter name for new remote.
name> my zata storage
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
XX / Zata (S3 compatible Gateway)
   \ (Zata)
provider> Zata
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> "your key"
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> "your secret key"
Option region.
Region to connect to.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Indore, Madhya Pradesh, India
   \ (us-east-1)
region> 1
Option endpoint.
Endpoint for Zata Object Storage.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / South Asia Endpoint
   \ (idr01.zata.ai)
endpoint> 1
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added and
the default (private) will be used.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-read)
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ (bucket-owner-full-control)
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n>
Configuration complete.
Options:
- type: s3
- provider: Zata
- access_key_id: xxx
- secret_access_key: xxx
- region: us-east-1
- endpoint: idr01.zata.ai
Keep this "my zata storage" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>

This will leave the config file looking like this.

[my zata storage]
type = s3
provider = Zata
access_key_id = xxx
secret_access_key = xxx
region = us-east-1
endpoint = idr01.zata.ai
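
Note that because this example remote name contains spaces, it must be quoted on the command line, for example (the bucket name is illustrative):

rclone lsd "my zata storage":
rclone copy /home/local/directory "my zata storage":bucket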

Memory usage

The most common cause of rclone using lots of memory is a single directory with millions of files in it. Although s3 doesn't really have the concept of directories, rclone does the sync on a directory by directory basis to be compatible with normal filing systems.

Rclone loads each directory into memory as rclone objects. Each rclone object takes 0.5k-1k of memory, so approximately 1 GB per 1,000,000 files, and the sync for that directory does not begin until it is entirely loaded in memory. So the sync can take a long time to start for large directories.

To sync a directory with 100,000,000 files in it you would need approximately 100 GB of memory. At some point the amount of memory becomes difficult to provide so there is a workaround for this which involves a bit of scripting.
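
A minimal sketch of that scripting workaround is shown below (the paths and remote name are illustrative, and this does copies rather than a true sync, so it won't delete extraneous destination files): list all the source files up front, split the listing into manageable chunks, then transfer each chunk with --files-from so rclone never has to hold a huge directory in memory at once.

# Recursively list just the files in the source (streamed, low memory).
rclone lsf --files-only -R /home/local/directory > all-files.txt

# Split the listing into chunks of 1,000,000 paths each.
split -l 1000000 all-files.txt chunk-

# Transfer each chunk; --files-from limits what rclone loads at once.
for f in chunk-*; do
    rclone copy --files-from "$f" /home/local/directory remote:bucket
done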

At some point rclone will gain a sync mode which is effectively this workaround but built in to rclone.

Limitations

rclone about is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

See List of backends that do not support rclone about and rclone about.
