The S3 backend can be used with a number of different providers:
Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Once you have made a remote (see the provider specific section above) you can use it like this:
See all buckets
```
rclone lsd remote:
```

Make a new bucket

```
rclone mkdir remote:bucket
```

List the contents of a bucket

```
rclone ls remote:bucket
```

Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

```
rclone sync --interactive /home/local/directory remote:bucket
```

Here is an example of making an s3 configuration for the AWS S3 provider. Most of this applies to the other providers as well; any differences are described below.
First run
```
rclone config
```

This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / DigitalOcean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / Asia Pacific (Hong Kong) Region
14 | Needs location constraint ap-east-1.
   \ "ap-east-1"
   / South America (Sao Paulo) Region
15 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia, or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / Asia Pacific (Hong Kong)
   \ "ap-east-1"
15 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier Flexible Retrieval storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"
 8 / Intelligent-Tiering storage class
   \ "INTELLIGENT_TIERING"
 9 / Glacier Instant Retrieval storage class
   \ "GLACIER_IR"
storage_class> 1
Remote config
Configuration complete.
Options:
- type: s3
- provider: AWS
- env_auth: false
- access_key_id: XXX
- secret_access_key: YYY
- region: us-east-1
- endpoint:
- location_constraint:
- acl: private
- server_side_encryption:
- storage_class:
Keep this "remote" remote?
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

The modified time is stored as metadata on the object as `X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated, rclone will attempt to perform a server-side copy to update it, provided the object can be copied in a single part. If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive storage, the object will be re-uploaded rather than copied.
Note that reading this from the object takes an additional `HEAD` request as the metadata isn't returned in object listings.
For small objects which weren't uploaded as multipart uploads (objects sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses the `ETag:` header as an MD5 checksum.
However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the `ETag` header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
```
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
```

or you can use `rclone check` to verify the hashes are OK.
For large objects, calculating this hash can take some time so the addition of this hash can be disabled with `--s3-disable-checksum`. This will mean that these objects do not have an MD5 checksum.
Note that reading this from the object takes an additional `HEAD` request as the metadata isn't returned in object listings.
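As an illustration (the source path and bucket name here are placeholders), a one-off copy of large files with the extra MD5 metadata disabled might look like this:

```
rclone copy --s3-disable-checksum /path/to/large/files remote:bucket/backup
```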
By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).
The modification time is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient on S3 because it requires an extra API call to retrieve the metadata.
The extra API calls can be avoided when syncing (using `rclone sync` or `rclone copy`) in a few different ways, each with its own tradeoffs.
`--size-only`

```
rclone sync --size-only /path/to/source s3:bucket
```

`--checksum`

```
rclone sync --checksum /path/to/source s3:bucket
```

`--update --use-server-modtime`

Using `--update` along with `--use-server-modtime` avoids the extra API call and uploads files whose local modification time is newer than the time it was last uploaded.

```
rclone sync --update --use-server-modtime /path/to/source s3:bucket
```

These flags can and should be used in combination with `--fast-list` - see below.
If using `rclone mount` or any command using the VFS (eg `rclone serve`) then you might want to consider using the VFS flag `--no-modtime` which will stop rclone reading the modification time for every object. You could also use `--use-server-modtime` if you are happy with the modification times of the objects being the time of upload.
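For example (the mount point and bucket name are placeholders), a mount that avoids the per-object modification time lookups might look something like:

```
rclone mount --no-modtime remote:bucket /mnt/bucket
```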
Rclone's default directory traversal is to process each directory individually. This takes one API call per directory. Using the `--fast-list` flag will read all info about the objects into memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.

```
rclone sync --fast-list --checksum /path/to/source s3:bucket
```

`--fast-list` trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using `--fast-list` on a sync of a million objects will use roughly 1 GiB of RAM.
If you are only copying a small number of files into a big repository then using `--no-traverse` is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using `--max-age` and `--no-traverse` to copy only recent files, eg

```
rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
```

You'd then do a full `rclone sync` less often.
Note that `--fast-list` isn't required in the top-up sync.
By default, rclone will HEAD every object it uploads. It does this tocheck the object got uploaded correctly.
You can disable this with the `--s3-no-head` option - see there for more details.
Setting this flag increases the chance for undetected upload failures.
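As a sketch (paths and bucket name are placeholders), skipping the post-upload HEAD check looks like this:

```
rclone copy --s3-no-head /path/to/source remote:bucket
```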
If you are copying objects between S3 buckets in the same region, you shoulduse server-side copy. This is much faster than downloading and re-uploadingthe objects, as no data is transferred.
For rclone to use server-side copy, you must use the same remote for thesource and destination.
```
rclone copy s3:source-bucket s3:destination-bucket
```

When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.
You can increase the rate of API requests to S3 by increasing the parallelism using the `--transfers` and `--checkers` options.
Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you can significantly increase the number of transfers and checkers.
For example, with AWS S3 you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.

```
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
```

You will need to experiment with these values to find the optimal settings for your setup.
Rclone does its best to verify every part of an upload or download tothe s3 provider using various hashes.
Every HTTP transaction to/from the provider has an `X-Amz-Content-Sha256` or a `Content-Md5` header to guard against corruption of the HTTP body. The HTTP header is protected by the signature passed in the `Authorization` header.
All communications with the provider are done over HTTPS for encryption and additional error protection.
Rclone uploads single part uploads with a `Content-Md5` using the MD5 hash read from the source. The provider checks this is correct on receipt of the data.
Rclone then does a HEAD request (disable with `--s3-no-head`) to read the `ETag` back which is the MD5 of the file and checks that with what it sent.
Note that if the source does not have an MD5 then the single part uploads will not have hash protection. In this case it is recommended to use `--s3-upload-cutoff 0` so all files are uploaded as multipart uploads.
For files above `--s3-upload-cutoff` rclone splits the file into multiple parts for upload.
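For example (paths and bucket name are placeholders), forcing every file to be uploaded as a multipart upload:

```
rclone copy --s3-upload-cutoff 0 /path/to/source remote:bucket
```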
Each part is protected by an `X-Amz-Content-Sha256` and a `Content-Md5`. When rclone has finished the upload of all the parts it then completes the upload by sending:
the MD5 hashes of the parts, protected by an `X-Amz-Content-Sha256`. The provider checks the MD5 for all the parts it has received against what rclone sends and if it is good it returns OK.
Rclone then does a HEAD request (disable with `--s3-no-head`) and checks the ETag is what it expects (in this case it should be the MD5 sum of all the MD5 sums of all the parts with the number of parts on the end).
If the source has an MD5 sum then rclone will attach the `X-Amz-Meta-Md5chksum` with it, as the `ETag` for a multipart upload can't easily be checked against the file since the chunk size must be known in order to calculate it.
Rclone checks the MD5 hash of the data downloaded against either the `ETag` or the `X-Amz-Meta-Md5chksum` metadata (if present) which rclone uploads with multipart uploads.
At each stage rclone and the provider are sending and checking hashes of everything. Rclone deliberately HEADs each object after upload to check it arrived safely for extra security. (You can disable this with `--s3-no-head`.)
If you require further assurance that your data is intact you can use `rclone check` to check the hashes locally vs the remote.
And if you are feeling ultimately paranoid use `rclone check --download` which will download the files and check them against the local copies. (Note that this doesn't use disk to do this - it streams them in memory.)
When bucket versioning is enabled (this can be done with rclone with the `rclone backend versioning` command), then when rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available.
Old versions of files, where available, are visible using the `--s3-versions` flag.
It is also possible to view a bucket as it was at a certain point in time, using the `--s3-version-at` flag. This will show the file versions as they were at that time, showing files that have been deleted afterwards, and hiding files that were created since.
If you wish to remove all the old versions then you can use the `rclone backend cleanup-hidden remote:bucket` command which will delete all the old hidden versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. `rclone backend cleanup-hidden remote:bucket/path/to/stuff`.
When you `purge` a bucket, the current and the old versions will be deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to become hidden old versions.
Here is a session showing the listing and retrieval of an old version followed by a `cleanup` of the old versions.
Show current version and all the versions with the `--s3-versions` flag.
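For example (the bucket name and date are placeholders), to list a bucket as it was at a particular point in time, or as it was 24 hours ago:

```
rclone ls --s3-version-at "2023-01-01" remote:bucket
rclone ls --s3-version-at 24h remote:bucket
```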
```
$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
       16 one-v2016-07-04-141003-000.txt
       15 one-v2016-07-02-155621-000.txt
```

Retrieve an old version

```
$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp

$ ls -l /tmp/one-v2016-07-04-141003-000.txt
-rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt
```

Clean up all the old versions and show that they've gone.

```
$ rclone -q backend cleanup-hidden s3:cleanup-test

$ rclone -q ls s3:cleanup-test
        9 one.txt

$ rclone -q --s3-versions ls s3:cleanup-test
        9 one.txt
```

When using the `--s3-versions` flag rclone relies on the file name to work out whether the objects are versions or not. Versions' names are created by inserting a timestamp between the file name and its extension.

```
        9 file.txt
        8 file-v2023-07-17-161032-000.txt
       16 file-v2023-06-15-141003-000.txt
```

If there are real files present with the same names as versions, then the behaviour of `--s3-versions` can be unpredictable.
If you run `rclone cleanup s3:bucket` then it will remove all pending multipart uploads older than 24 hours. You can use the `--interactive`/`-i` or `--dry-run` flag to see exactly what it will do. If you want more control over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h` to expire all uploads older than one hour. You can use `rclone backend list-multipart-uploads s3:bucket` to see the pending multipart uploads.
S3 allows any valid UTF-8 string as a key.
Invalid UTF-8 bytes will be replaced, as they can't be used in XML.
The following characters are replaced since these are problematic whendealing with the REST API:
| Character | Value | Replacement |
|---|---|---|
| NUL | 0x00 | ␀ |
| / | 0x2F | ／ |
The encoding will also encode these file names as they don't seem to work with the SDK properly:
| File name | Replacement |
|---|---|
| . | ． |
| .. | ．． |
rclone supports multipart uploads with S3 which means that it canupload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the point specified by `--s3-upload-cutoff`. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by `--s3-chunk-size` and the number of chunks uploaded concurrently is specified by `--s3-upload-concurrency`.
Multipart uploads will use extra memory equal to: `--transfers` × `--s3-upload-concurrency` × `--s3-chunk-size`. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slowerdepending on your latency from S3 - the more latency, the more likelysingle part transfers will be faster.
Increasing `--s3-upload-concurrency` will increase throughput (8 would be a sensible value) and increasing `--s3-chunk-size` also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
With Amazon S3 you can list buckets (`rclone lsd`) using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, `incorrect region, the bucket is not in 'XXX' region`.
There are a number of ways to supply `rclone` with a set of AWS credentials, with and without using the environment.
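As a rough sketch of tuning these together (the values and paths are only illustrative, not recommendations for every provider):

```
# Extra memory is roughly --transfers × --s3-upload-concurrency × --s3-chunk-size,
# e.g. 4 × 8 × 16M ≈ 512 MiB of upload buffers.
rclone copy --transfers 4 --s3-upload-concurrency 8 --s3-chunk-size 16M /path/to/source remote:bucket
```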
The different authentication methods are tried in this order:
- Directly in the rclone configuration file (`env_auth = false` in the config file):
  - `access_key_id` and `secret_access_key` are required.
  - `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
  - Export the following environment variables before running `rclone`:
    - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
    - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
    - Session Token: `AWS_SESSION_TOKEN` (optional)
  - Or, use a named profile:
    - By default it will use the profile file in your home directory (e.g. `~/.aws/credentials` on unix based systems) and the "default" profile; to change this, set these environment variables or config keys:
      - `AWS_SHARED_CREDENTIALS_FILE` to control which file, or the `shared_credentials_file` config key.
      - `AWS_PROFILE` to control which profile to use, or the `profile` config key.
  - Or, run `rclone` in an ECS task with an IAM role (AWS only).
  - Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
  - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).

With `env_auth = true` rclone (which uses the SDK for Go v2) should support all authentication methods that the aws CLI tool does and the other AWS SDKs.
If none of these options actually end up providing rclone with AWS credentials then S3 interaction will be non-authenticated (see the anonymous access section for more info).
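As a minimal sketch of the runtime credentials route (the key values and bucket name are placeholders), assuming the remote is configured with `env_auth = true`:

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote:
# or entirely on the command line with a connection string remote
rclone lsd :s3,provider=AWS,env_auth=true:mybucket
```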
When using the `sync` subcommand of `rclone` the following minimum permissions are required to be available on the bucket being written to:

- `ListBucket`
- `DeleteObject`
- `GetObject`
- `PutObject`
- `PutObjectACL`
- `CreateBucket` (unless using `s3-no-check-bucket`)

When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
Example policy:
{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::USER_SID:user/USER_NAME"},"Action":["s3:ListBucket","s3:DeleteObject","s3:GetObject","s3:PutObject","s3:PutObjectAcl"],"Resource":["arn:aws:s3:::BUCKET_NAME/*","arn:aws:s3:::BUCKET_NAME"]},{"Effect":"Allow","Action":"s3:ListAllMyBuckets","Resource":"arn:aws:s3:::*"}]}Notes on above:
- This policy assumes that a user called `USER_NAME` has been created.
- When using `s3-no-check-bucket` and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.

For reference, here's an Ansible script that will generate one or more buckets that will work with `rclone sync`.
If you are using server-side encryption with KMS then you must make sure rclone is configured with `server_side_encryption = aws:kms` otherwise you will find you can't transfer small objects - these will create checksum errors.
You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.
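For example (the key ARN, paths and bucket name are placeholders), KMS encryption can also be requested per command with the backend flags:

```
rclone copy /path/to/source remote:bucket \
  --s3-server-side-encryption aws:kms \
  --s3-sse-kms-key-id arn:aws:kms:us-east-1:123456789012:key/your-key-id
```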
```
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
```

In this case you need to restore the object(s) in question before accessing object contents. The restore section below shows how to do this with rclone.
Note that rclone only speaks the S3 API; it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.
According to AWS's documentation on S3 Object Lock:
If you configure a default retention period on a bucket, requests to uploadobjects in such a bucket must include the Content-MD5 header.
As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
Choose your S3 provider.
Properties:
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Properties:
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Properties:
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Properties:
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Properties:
Endpoint for S3 API.
Required when using an S3 clone.
Properties:
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Properties:
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visithttps://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.
If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.
Properties:
The server-side encryption algorithm used when storing this object in S3.
Properties:
If using KMS ID you must provide the ARN of Key.
Properties:
The storage class to use when storing new objects in S3.
Properties:
IBM API Key to be used to obtain IAM token
Properties:
IBM service instance id
Properties:
Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
Canned ACL used when creating buckets.
For more info visithttps://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.
If the "acl" and "bucket_acl" are empty strings then no X-Amz-Acl: header is added and the default (private) will be used.
Properties:
Enables requester pays option when interacting with S3 bucket.
Properties:
If using SSE-C, the server-side encryption algorithm used when storing this object in S3.
Properties:
To use SSE-C you may provide the secret encryption key used to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key-base64.
Properties:
If using SSE-C you must provide the secret encryption key encoded in base64 format to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key.
Properties:
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
Properties:
Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
Properties:
Chunk size to use for uploading.
When uploading files larger than upload_cutoff or files with unknownsize (e.g. from "rclone rcat" or uploaded with "rclone mount" or googlephotos or google docs) they will be uploaded as multipart uploadsusing this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are bufferedin memory per transfer.
If you are transferring large files over high-speed links and you haveenough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading alarge file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
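For example (the source and object names are placeholders), streaming an object of unknown size with a larger chunk size raises this limit - with 64 MiB chunks the 10,000 part cap allows roughly 625 GiB:

```
cat /path/to/stream | rclone rcat --s3-chunk-size 64M remote:bucket/stream.bin
```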
Increasing the chunk size decreases the accuracy of the progressstatistics displayed with "-P" flag. Rclone treats chunk as sent whenit's buffered by the AWS SDK, when in fact it may still be uploading.A bigger chunk size means a bigger AWS SDK buffer and progressreporting more deviating from the truth.
Properties:
Maximum number of parts in a multipart upload.
This option defines the maximum number of multipart chunks to usewhen doing a multipart upload.
This can be useful if a service does not support the AWS S3specification of 10,000 chunks.
Rclone will automatically increase the chunk size when uploading alarge file of a known size to stay below this number of chunks limit.
Properties:
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will becopied in chunks of this size.
The minimum is 0 and the maximum is 5 GiB.
Properties:
Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input beforeuploading it so it can add it to metadata on the object. This is greatfor data integrity checking but can cause long delays for large filesto start uploading.
Properties:
Path to the shared credentials file.
If env_auth = true then rclone can use a shared credentials file.
If this variable is empty rclone will look for the"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is emptyit will default to the current user's home directory.
Linux/OSX: "$HOME/.aws/credentials"Windows: "%USERPROFILE%\.aws\credentials"Properties:
Profile to use in the shared credentials file.
If env_auth = true then rclone can use a shared credentials file. Thisvariable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or"default" if that environment variable is also not set.
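As an illustration (the file path and profile name are placeholders), the credentials file and profile can also be selected per invocation with the backend flags, assuming the remote has `env_auth = true`:

```
rclone lsd remote: --s3-shared-credentials-file ~/.aws/credentials --s3-profile myprofile
```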
Properties:
An AWS session token.
Properties:
Concurrency for multipart uploads and copies.
This is the number of chunks of the same file that are uploadedconcurrently for multipart uploads and copies.
If you are uploading small numbers of large files over high-speed linksand these uploads do not fully utilize your bandwidth, then increasingthis may help to speed up the transfers.
Properties:
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access,if false then rclone will use virtual path style. Seethe AWS S3docsfor more info.
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set tofalse - rclone will do this automatically based on the providersetting.
Note that if your bucket isn't a valid DNS name, i.e. has '.' or '_' in,you'll need to set this to true.
Properties:
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Properties:
If true use AWS S3 dual-stack endpoint (IPv6 support).
SeeAWS Docs on Dualstack Endpoints
Properties:
If true use the AWS S3 accelerated endpoint.
See:AWS S3 Transfer acceleration
Properties:
If true, enables arn region support for the service.
Properties:
If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery.
It should be set to true for resuming uploads across different sessions.
WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up.
Properties:
Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.Most services truncate the response list to 1000 objects even if requested more than that.In AWS S3 this is a global maximum and cannot be changed, seeAWS S3.In Ceph, this can be increased with the "rgw list buckets max chunk" option.
Properties:
Version of ListObjects to use: 1,2 or 0 for auto.
When S3 originally launched it only provided the ListObjects call toenumerate objects in a bucket.
However in May 2016 the ListObjectsV2 call was introduced. This ismuch higher performance and should be used if at all possible.
If set to the default, 0, rclone will guess according to the providerset which list objects method to call. If it guesses wrong, then itmay be set manually here.
Properties:
Whether to url encode listings: true/false/unset
Some providers support URL encoding listings and where this isavailable this is more reliable when using control characters in filenames. If this is set to unset (the default) then rclone will chooseaccording to the provider setting what to apply, but you can overriderclone's choice here.
Properties:
If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactionsrclone does if you know the bucket exists already.
It can also be needed if the user you are using does not have bucketcreation permissions. Before v1.52.0 this would have passed silentlydue to a bug.
Properties:
If set, don't HEAD uploaded objects to check integrity.
This can be useful when trying to minimise the number of transactionsrclone does.
Setting it means that if rclone receives a 200 OK message afteruploading an object with PUT then it will assume that it got uploadedproperly.
In particular it will assume:
It reads the following items from the response for a single part PUT:
For multipart uploads these items aren't read.
If a source object of unknown length is uploaded then rclone will do a `HEAD` request.
Setting this flag increases the chance for undetected upload failures,in particular an incorrect size, so it isn't recommended for normaloperation. In practice the chance of an undetected upload failure isvery small even with this flag.
Properties:
If set, do not do HEAD before GET when getting objects.
Properties:
The encoding for the backend.
See theencoding section in the overview for more info.
Properties:
How often internal memory buffer pools will be flushed. (no longer used)
Properties:
Whether to use mmap buffers in internal memory pool. (no longer used)
Properties:
Disable usage of http2 for S3 backends.
There is currently an unsolved issue with the s3 (specifically minio) backendand HTTP/2. HTTP/2 is enabled by default for the s3 backend but can bedisabled here. When the issue is solved this flag will be removed.
See:https://github.com/rclone/rclone/issues/4673,https://github.com/rclone/rclone/issues/3631
Properties:
Custom endpoint for downloads.
This is usually set to a CloudFront CDN URL as AWS S3 offers cheaper egress for data downloaded through the CloudFront network.
Properties:
Upload an empty object with a trailing slash when a new directory is created
Empty folders are unsupported for bucket based remotes, this option creates an emptyobject ending with "/", to persist the folder.
Properties:
Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
Properties:
Whether to use an unsigned payload in PutObject
Rclone has to avoid the AWS SDK seeking the body when callingPutObject. The AWS provider can add checksums in the trailer to avoidseeking but other providers can't.
This should be true, false or left unset to use the default for the provider.
Properties:
Whether to use a presigned request or PutObject for single part uploads
If this is false rclone will use PutObject from the AWS SDK to uploadan object.
Versions of rclone < 1.59 use presigned requests to upload a singlepart object and setting this flag to true will re-enable thatfunctionality. This shouldn't be necessary except in exceptionalcircumstances or for testing.
Properties:
If true use AWS S3 data integrity protections.
SeeAWS Docs on Data Integrity Protections
Properties:
Include old versions in directory listings.
Properties:
Show file versions as they were at the specified time.
The parameter should be a date, "2006-01-02", datetime "2006-01-02 15:04:05" or a duration for that long ago, eg "100d" or "1h".
Note that when using this no file write operations are permitted,so you can't upload files or delete them.
Seethe time option docs for valid formats.
Properties:
Show deleted file markers when using versions.
This shows deleted file markers in the listing when using versions. These will appearas 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version.
Deleted files will always show with a timestamp.
Properties:
If set this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip"set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with"Content-Encoding: gzip" as they are received. This means that rclonecan't check the size and hash but the file contents will be decompressed.
Properties:
Set this if the backend might gzip objects.
Normally providers will not alter objects when they are downloaded. Ifan object was not uploaded withContent-Encoding: gzip then it won'tbe set on download.
However some providers may gzip objects even if they weren't uploadedwithContent-Encoding: gzip (eg Cloudflare).
A symptom of this would be receiving errors like
```
ERROR corrupted on transfer: sizes differ NNN vs MMM
```

If you set this flag and rclone downloads an object with Content-Encoding: gzip set and chunked transfer encoding, then rclone will decompress the object on the fly.
If this is set to unset (the default) then rclone will chooseaccording to the provider setting what to apply, but you can overriderclone's choice here.
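As a sketch (the bucket and paths are placeholders), the setting can be forced per command:

```
rclone copy --s3-might-gzip=true remote:bucket/path /local/path
```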
Properties:
Whether to send `Accept-Encoding: gzip` header.
By default, rclone will append `Accept-Encoding: gzip` to the request to download compressed objects whenever possible.
However some providers such as Google Cloud Storage may alter the HTTP headers, breakingthe signature of the request.
A symptom of this would be receiving errors like
```
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided.
```

In this case, you might want to try disabling this option.
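For example (the bucket and paths are placeholders), to try downloads without the header:

```
rclone copy --s3-use-accept-encoding-gzip=false remote:bucket/path /local/path
```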
Properties:
Suppress setting and reading of system metadata
Properties:
Endpoint for STS (deprecated).
Leave blank if using AWS to use the default endpoint for the region.
Properties:
Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS startedreturning anAlreadyOwnedByYou error when attempting to create abucket that the user already owned, rather than aBucketAlreadyExists error.
Unfortunately exactly what has been implemented by s3 clones is alittle inconsistent, some returnAlreadyOwnedByYou, some returnBucketAlreadyExists and some return no error at all.
This is important to rclone because it ensures the bucket exists bycreating it on quite a lot of operations (unless--s3-no-check-bucket is used).
If rclone knows the provider can returnAlreadyOwnedByYou or returnsno error then it can reportBucketAlreadyExists errors when the userattempts to create a bucket not owned by them. Otherwise rcloneignores theBucketAlreadyExists error which can lead to confusion.
This should be automatically set correctly for all providers rcloneknows about - please make a bug report if not.
Properties:
Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads.This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rcloneknows about - please make a bug report if not.
Properties:
Set if rclone should add x-id URL parameters.
You can change this if you want to disable the AWS SDK fromadding x-id URL parameters.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rcloneknows about - please make a bug report if not.
Properties:
Set if rclone should include Accept-Encoding as part of the signature.
You can change this if you want to stop rclone includingAccept-Encoding as part of the signature.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rcloneknows about - please make a bug report if not.
Properties:
Set to use AWS Directory Buckets
If you are using an AWS Directory Bucket then set this flag.
This will ensure no `Content-Md5` headers are sent and ensure `ETag` headers are not interpreted as MD5 sums. `X-Amz-Meta-Md5chksum` will be set on all objects whether single or multipart uploaded.
This also setsno_check_bucket = true.
Note that Directory Buckets do not support:
- `Content-Encoding: gzip`

Rclone limitations with Directory Buckets:

- rclone does not support creating directory buckets with `rclone mkdir`
- ...or removing them with `rclone rmdir` yet
- directory buckets do not appear when doing `rclone lsf` at the top level.
- rclone can't remove auto created directories yet. In theory this should work with `directory_markers = true` but it doesn't.

Properties:
Set to debug the SDK
This can be set to a comma separated list of the following functions:
- Signing
- Retries
- Request
- RequestWithBody
- Response
- ResponseWithBody
- DeprecatedUsage
- RequestEventMessage
- ResponseEventMessage

Use `Off` to disable and `All` to set all log levels. You will need to use `-vv` to see the debug level logs.
Properties:
Description of the remote.
Properties:
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
| Name | Help | Type | Example | Read Only |
|---|---|---|---|---|
| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | Y |
| cache-control | Cache-Control header | string | no-cache | N |
| content-disposition | Content-Disposition header | string | inline | N |
| content-encoding | Content-Encoding header | string | gzip | N |
| content-language | Content-Language header | string | en-US | N |
| content-type | Content-Type header | string | text/plain | N |
| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
| tier | Tier of the object | string | GLACIER | Y |
See themetadata docs for more info.
Here are the commands specific to the s3 backend.
Run them with:
```
rclone backend COMMAND remote:
```

The help below will explain what arguments each command takes.
See thebackend command for moreinfo on how to pass options and arguments.
These can be run on a running backend using the rc command `backend/command`.
Restore objects from GLACIER or INTELLIGENT-TIERING archive tier.
```
rclone backend restore remote: [options] [<arguments>+]
```

This command can be used to restore one or more objects from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
Usage examples:
```
rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
```

This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags.

```
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
```

All the objects shown will be marked for restore, then:

```
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
```

It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not.

```
[
    {
        "Status": "OK",
        "Remote": "test.txt"
    },
    {
        "Status": "OK",
        "Remote": "test/file4.txt"
    }
]
```

Options:
Show the status for objects being restored from GLACIER or INTELLIGENT-TIERING.
```
rclone backend restore-status remote: [options] [<arguments>+]
```

This command can be used to show the status for objects being restored from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
Usage examples:
```
rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
```

This command does not obey the filters.
It returns a list of status dictionaries:

```
[
    {
        "Remote": "file.txt",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "GLACIER"
    },
    {
        "Remote": "test.pdf",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": false,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "DEEP_ARCHIVE"
    },
    {
        "Remote": "test.gz",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "null"
        },
        "StorageClass": "INTELLIGENT_TIERING"
    }
]
```

Options:
List the unfinished multipart uploads.
```
rclone backend list-multipart-uploads remote: [options] [<arguments>+]
```

This command lists the unfinished multipart uploads in JSON format.
Usage examples:
```
rclone backend list-multipart s3:bucket/path/to/object
```

It returns a dictionary of buckets with values as lists of unfinished multipart uploads.
You can call it with no bucket in which case it lists all buckets, with a bucket or with a bucket and path.

```
{
    "rclone": [
        {
            "Initiated": "2020-06-26T14:20:36Z",
            "Initiator": {
                "DisplayName": "XXX",
                "ID": "arn:aws:iam::XXX:user/XXX"
            },
            "Key": "KEY",
            "Owner": {
                "DisplayName": null,
                "ID": "XXX"
            },
            "StorageClass": "STANDARD",
            "UploadId": "XXX"
        }
    ],
    "rclone-1000files": [],
    "rclone-dst": []
}
```

Remove unfinished multipart uploads.
```
rclone backend cleanup remote: [options] [<arguments>+]
```

This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to seewhat it would do.
Usage examples:
```
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
```

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Options:
Remove old versions of files.
```
rclone backend cleanup-hidden remote: [options] [<arguments>+]
```

This command removes any old hidden versions of files on a versions enabled bucket.
Note that you can use --interactive/-i or --dry-run with this command to seewhat it would do.
Usage example:
```
rclone backend cleanup-hidden s3:bucket/path/to/dir
```

Set/get versioning support for a bucket.

```
rclone backend versioning remote: [options] [<arguments>+]
```

This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied.
Usage examples:
```
rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended
```

It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning has been enabled the status can't be set back to "Unversioned".
Set command for updating the config parameters.
```
rclone backend set remote: [options] [<arguments>+]
```

This set command can be used to update the config parameters for a running s3 backend.
Usage examples:
```
rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
```

The option keys are named as they are in the config file.
This rebuilds the connection to the s3 backend when it is called withthe new parameters. Only new parameters need be passed as the valueswill default to those currently in use.
It doesn't return anything.
If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Your config should end up looking like this:

```
[anons3]
type = s3
provider = AWS
```

Then use it as normal with the name of the public bucket, e.g.

```
rclone lsd anons3:1000genomes
```

You will be able to list and copy data but not upload it.
You can also do this entirely on the command line
```
rclone lsd :s3,provider=AWS:1000genomes
```

This is the provider used as the main example and described in the configuration section above.
From rclone v1.69 Directory Buckets are supported.
You will need to set the `directory_buckets = true` config parameter or use `--s3-directory-buckets`.
Note that rclone cannot yet:

- Create directory buckets
- List directory buckets

See the `--s3-directory-buckets` flag for more info
AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage.
To use rclone with AWS Snowball Edge devices, configure as standardfor an 'S3 Compatible Service'.
If using rclone pre v1.59 be sure to set `upload_cutoff = 0` otherwise you will run into authentication header issues as the snowball device does not support query parameter based authentication.
With rclone v1.59 or later setting `upload_cutoff` should not be necessary.
eg.
```
[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
```

Here is an example of making an Alibaba Cloud (Aliyun) OSS configuration. First run:

```
rclone config
```

This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> oss
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
[snip]
Storage> s3
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
[snip]
provider> Alibaba
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> accesskeyid
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secretaccesskey
Endpoint for OSS API.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / East China 1 (Hangzhou)
   \ "oss-cn-hangzhou.aliyuncs.com"
 2 / East China 2 (Shanghai)
   \ "oss-cn-shanghai.aliyuncs.com"
 3 / North China 1 (Qingdao)
   \ "oss-cn-qingdao.aliyuncs.com"
[snip]
endpoint> 1
Canned ACL used when creating buckets and storing or copying objects.
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
[snip]
acl> 1
The storage class to use when storing new objects in OSS.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Archive storage mode.
   \ "GLACIER"
 4 / Infrequent access storage mode.
   \ "STANDARD_IA"
storage_class> 1
Edit advanced config? (y/n)
y) Yes
n) No
y/n> n
Remote config
--------------------
[oss]
type = s3
provider = Alibaba
env_auth = false
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = oss-cn-hangzhou.aliyuncs.com
acl = private
storage_class = Standard
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. Files like profile image in the app, images sent by users or scanned documents can be stored securely and easily in our Object Storage service.
ArvanCloud provides an S3 interface which can be configured for use withrclone like this.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
name> ArvanCloud
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia, or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region>
Endpoint for S3 API.
Leave blank if using ArvanCloud to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.arvanstorage.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for Iran-Tehran Region.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
storage_class>
Remote config
--------------------
[ArvanCloud]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = ir-thr-at1
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This will leave the config file looking like this.

```
[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

Ceph is an open-source, unified, distributed storage system designed for excellent performance, reliability and scalability. It has an S3 compatible object storage interface.
To use rclone with Ceph, configure as above but leave the region blankand set the endpoint. You should end up with something like this inyour config:
```
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before v1.59 then you may need to supply the parameter `--s3-upload-cutoff 0` or put this in the config file as `upload_cutoff 0` to work around a bug which causes uploading of small files to fail.
Note also that Ceph sometimes puts `/` in the passwords it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the `/` escaped as `\/`. Make sure you only write `/` in the secret access key.
Eg the dump from Ceph looks something like this (irrelevant keys removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a json dump, it is encoding the `/` as `\/`, so if you use the secret key as `xxxxxx/xxxx` it will work fine.
Here is an example of making a China Mobile Ecloud Elastic Object Storage (EOS) configuration. First run:

```
rclone config
```

This will guide you through an interactive setup process.
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> ChinaMobile
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
 ...
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
 ...
Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 ...
 4 / China Mobile Ecloud Elastic Object Storage (EOS)
   \ (ChinaMobile)
 ...
provider> ChinaMobile
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> accesskeyid
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> secretaccesskey
Option endpoint.
Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / The default endpoint - a good choice if you are unsure.
 1 | East China (Suzhou)
   \ (eos-wuxi-1.cmecloud.cn)
 2 / East China (Jinan)
   \ (eos-jinan-1.cmecloud.cn)
 3 / East China (Hangzhou)
   \ (eos-ningbo-1.cmecloud.cn)
 4 / East China (Shanghai-1)
   \ (eos-shanghai-1.cmecloud.cn)
 5 / Central China (Zhengzhou)
   \ (eos-zhengzhou-1.cmecloud.cn)
 6 / Central China (Changsha-1)
   \ (eos-hunan-1.cmecloud.cn)
 7 / Central China (Changsha-2)
   \ (eos-zhuzhou-1.cmecloud.cn)
 8 / South China (Guangzhou-2)
   \ (eos-guangzhou-1.cmecloud.cn)
 9 / South China (Guangzhou-3)
   \ (eos-dongguan-1.cmecloud.cn)
10 / North China (Beijing-1)
   \ (eos-beijing-1.cmecloud.cn)
11 / North China (Beijing-2)
   \ (eos-beijing-2.cmecloud.cn)
12 / North China (Beijing-3)
   \ (eos-beijing-4.cmecloud.cn)
13 / North China (Huhehaote)
   \ (eos-huhehaote-1.cmecloud.cn)
14 / Southwest China (Chengdu)
   \ (eos-chengdu-1.cmecloud.cn)
15 / Southwest China (Chongqing)
   \ (eos-chongqing-1.cmecloud.cn)
16 / Southwest China (Guiyang)
   \ (eos-guiyang-1.cmecloud.cn)
17 / Nouthwest China (Xian)
   \ (eos-xian-1.cmecloud.cn)
18 / Yunnan China (Kunming)
   \ (eos-yunnan.cmecloud.cn)
19 / Yunnan China (Kunming-2)
   \ (eos-yunnan-2.cmecloud.cn)
20 / Tianjin China (Tianjin)
   \ (eos-tianjin-1.cmecloud.cn)
21 / Jilin China (Changchun)
   \ (eos-jilin-1.cmecloud.cn)
22 / Hubei China (Xiangyan)
   \ (eos-hubei-1.cmecloud.cn)
23 / Jiangxi China (Nanchang)
   \ (eos-jiangxi-1.cmecloud.cn)
24 / Gansu China (Lanzhou)
   \ (eos-gansu-1.cmecloud.cn)
25 / Shanxi China (Taiyuan)
   \ (eos-shanxi-1.cmecloud.cn)
26 / Liaoning China (Shenyang)
   \ (eos-liaoning-1.cmecloud.cn)
27 / Hebei China (Shijiazhuang)
   \ (eos-hebei-1.cmecloud.cn)
28 / Fujian China (Xiamen)
   \ (eos-fujian-1.cmecloud.cn)
29 / Guangxi China (Nanning)
   \ (eos-guangxi-1.cmecloud.cn)
30 / Anhui China (Huainan)
   \ (eos-anhui-1.cmecloud.cn)
endpoint> 1
Option location_constraint.
Location constraint - must match endpoint.
Used when creating buckets only.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / East China (Suzhou)
   \ (wuxi1)
 2 / East China (Jinan)
   \ (jinan1)
 3 / East China (Hangzhou)
   \ (ningbo1)
 4 / East China (Shanghai-1)
   \ (shanghai1)
 5 / Central China (Zhengzhou)
   \ (zhengzhou1)
 6 / Central China (Changsha-1)
   \ (hunan1)
 7 / Central China (Changsha-2)
   \ (zhuzhou1)
 8 / South China (Guangzhou-2)
   \ (guangzhou1)
 9 / South China (Guangzhou-3)
   \ (dongguan1)
10 / North China (Beijing-1)
   \ (beijing1)
11 / North China (Beijing-2)
   \ (beijing2)
12 / North China (Beijing-3)
   \ (beijing4)
13 / North China (Huhehaote)
   \ (huhehaote1)
14 / Southwest China (Chengdu)
   \ (chengdu1)
15 / Southwest China (Chongqing)
   \ (chongqing1)
16 / Southwest China (Guiyang)
   \ (guiyang1)
17 / Nouthwest China (Xian)
   \ (xian1)
18 / Yunnan China (Kunming)
   \ (yunnan)
19 / Yunnan China (Kunming-2)
   \ (yunnan2)
20 / Tianjin China (Tianjin)
   \ (tianjin1)
21 / Jilin China (Changchun)
   \ (jilin1)
22 / Hubei China (Xiangyan)
   \ (hubei1)
23 / Jiangxi China (Nanchang)
   \ (jiangxi1)
24 / Gansu China (Lanzhou)
   \ (gansu1)
25 / Shanxi China (Taiyuan)
   \ (shanxi1)
26 / Liaoning China (Shenyang)
   \ (liaoning1)
27 / Hebei China (Shijiazhuang)
   \ (hebei1)
28 / Fujian China (Xiamen)
   \ (fujian1)
29 / Guangxi China (Nanning)
   \ (guangxi1)
30 / Anhui China (Huainan)
   \ (anhui1)
location_constraint> 1
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access.
   \ (public-read)
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended.
   \ (public-read-write)
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access.
   \ (authenticated-read)
   / Object owner gets FULL_CONTROL.
acl> private
Option server_side_encryption.
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / None
   \ ()
 2 / AES256
   \ (AES256)
server_side_encryption>
Option storage_class.
The storage class to use when storing new objects in ChinaMobile.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
 1 / Default
   \ ()
 2 / Standard storage class
   \ (STANDARD)
 3 / Archive storage mode
   \ (GLACIER)
 4 / Infrequent access storage mode
   \ (STANDARD_IA)
storage_class>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[ChinaMobile]
type = s3
provider = ChinaMobile
access_key_id = accesskeyid
secret_access_key = secretaccesskey
endpoint = eos-wuxi-1.cmecloud.cn
location_constraint = wuxi1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```

Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Here is an example of making a Cloudflare R2 configuration. First run:
rclone config

This will guide you through an interactive setup process.
Note that all buckets are private, and all are stored in the same"auto" region. It is necessary to use Cloudflare workers to share thecontent of a bucket publicly.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> r2Option Storage.Type of storage to configure.Choose a number from below, or type in your own value....XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Magalu, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi \ (s3)...Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty....XX / Cloudflare R2 Storage \ (Cloudflare)...provider> CloudflareOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / R2 buckets are automatically distributed across Cloudflare's data centers for low latency. \ (auto)region> 1Option endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://ACCOUNT_ID.r2.cloudflarestorage.comEdit advanced config?y) Yesn) No (default)y/n> n--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave your config looking something like:
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private

Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.
For R2 tokens with the "Object Read & Write" permission, you may also need to add no_check_bucket = true for object uploads to work correctly.
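For example, building on the config shown above, the option goes into the remote's section of the config file (a sketch; the keys and account ID are the placeholders from the example, not real values):

[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
no_check_bucket = true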
Note that Cloudflare decompresses files uploaded with Content-Encoding: gzip by default, which is a deviation from what AWS does. If this is causing a problem then upload the files with --header-upload "Cache-Control: no-transform".
A consequence of this is that Content-Encoding: gzip will never appear in the metadata on Cloudflare.
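For example, assuming a remote named r2 as configured above and an illustrative source path and bucket name, such an upload could look like:

rclone copy /path/to/files r2:my-bucket --header-upload "Cache-Control: no-transform"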
Cubbit Object Storage is a geo-distributed cloud object storage platform.
To connect to Cubbit DS3 you will need an access key and secret key pair. You can follow this guide to retrieve these keys. They will be needed when prompted by rclone config.
The default region corresponds to eu-west-1, and the endpoint has to be specified as s3.cubbit.eu.
When going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:
name> cubbit-ds3 (or any name you like)
Storage> s3
provider> Cubbit
env_auth> false
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region> eu-west-1 (or leave empty)
endpoint> s3.cubbit.eu
acl>

The resulting configuration file should look like:
[cubbit-ds3]
type = s3
provider = Cubbit
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = eu-west-1
endpoint = s3.cubbit.eu

You can then start using Cubbit DS3 with rclone. For example, to create a new bucket and copy files into it, you can run:
rclone mkdir cubbit-ds3:my-bucket
rclone copy /path/to/files cubbit-ds3:my-bucket

Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the Applications & API page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key.
When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings.
When going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:
Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>

The resulting configuration file should look like:
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =

Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space

Dreamhost DreamObjects is an object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =

Exaba is an on-premises, S3-compatible storage for service providers and large enterprises. It is quick to deploy, with dynamic node management, tenant accounting, and a built-in key management system. It delivers secure, high-performance data storage with flexible, usage-based pricing.
A container version exaba/exaba is free for end-users and on that page you can find instructions on how to set it up. You will need to log into the admin first (on port 9006 by default) to set up the container, then you can use the service on port 9000.
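As a rough sketch only (the exact invocation, any required volumes and environment variables are described on the image's page and may differ), starting the container and exposing the two ports mentioned above could look something like:

docker run -d --name exaba -p 9000:9000 -p 9006:9006 exaba/exaba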
You can also join the exaba support slack if you need more help.
An rclone config walkthrough might look like this, but details may vary depending on exactly how you have set up the container.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> exabaOption Storage.Type of storage to configure.Storage> s3Option provider.Choose your S3 provider.provider> ExabaOption env_auth.env_auth> 1Option access_key_id.access_key_id> XXXOption secret_access_key.secret_access_key> YYYOption region.region>Option endpoint.endpoint> http://127.0.0.1:9000/Option location_constraint.location_constraint>Option acl.acl>Edit advanced config?y) Yesn) No (default)y/n> nAnd the config generated will end up looking like this:
[exaba]
type = s3
provider = Exaba
access_key_id = XXX
secret_access_key = XXX
endpoint = http://127.0.0.1:9000/

Google Cloud Storage is an S3-interoperable object storage service from Google Cloud Platform.
To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an HMAC key.
[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com

Note that --s3-versions does not work with GCS when it needs to do directory paging. Rclone will return the error:
s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker

This is Google bug #312292516.
Here is an example of making a Hetzner Object Storage configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> my-hetznerOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Hetzner Object Storage \ (Hetzner)[snip]provider> HetznerOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_KEYOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Helsinki \ (hel1) 2 / Falkenstein \ (fsn1) 3 / Nuremberg \ (nbg1)region>Option endpoint.Endpoint for Hetzner Object StorageChoose a number from below, or type in your own value.Press Enter to leave empty. 1 / Helsinki \ (hel1.your-objectstorage.com) 2 / Falkenstein \ (fsn1.your-objectstorage.com) 3 / Nuremberg \ (nbg1.your-objectstorage.com)endpoint>Option location_constraint.Location constraint - must be set to match the Region.Leave blank if not sure. Used when creating buckets only.Enter a value. Press Enter to leave empty.location_constraint>Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read)acl>Edit advanced config?y) Yesn) No (default)y/n>Configuration complete.Options:- type: s3- provider: Hetzner- access_key_id: ACCESS_KEY- secret_access_key: SECRET_KEYKeep this "my-hetzner" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d>Current remotes:Name Type==== ====my-hetzner s3e) Edit existing remoten) New remoted) Delete remoter) Rename remotec) Copy remotes) Set configuration passwordq) Quit confige/n/d/r/c/s/q>This will leave the config file looking like this.
[my-hetzner]
type = s3
provider = Hetzner
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = hel1
endpoint = hel1.your-objectstorage.com
acl = private

Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
OBS provides an S3 interface; you can copy and modify the following configuration and add it to your rclone configuration file.
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private

Alternatively, you can configure it via the interactive command line:
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nname> obsOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip] 9 / Huawei Object Storage Service \ (HuaweiOBS)[snip]provider> 9Option env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> your-access-key-idOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> your-secret-access-keyOption region.Region to connect to.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / AF-Johannesburg \ (af-south-1) 2 / AP-Bangkok \ (ap-southeast-2)[snip]region> 1Option endpoint.Endpoint for OBS API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / AF-Johannesburg \ (obs.af-south-1.myhuaweicloud.com) 2 / AP-Bangkok \ (obs.ap-southeast-2.myhuaweicloud.com)[snip]endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private)[snip]acl> 1Edit advanced config?y) Yesn) No (default)y/n>--------------------[obs]type = s3provider = HuaweiOBSaccess_key_id = your-access-key-idsecret_access_key = your-secret-access-keyregion = af-south-1endpoint = obs.af-south-1.myhuaweicloud.comacl = private--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name Type==== ====obs s3e) Edit existing remoten) New remoted) Delete remoter) Rename remotec) Copy remotes) Set configuration passwordq) Quit confige/n/d/r/c/s/q> qInformation stored with IBM Cloud Object Storage is encrypted and dispersed acrossmultiple geographic locations, and accessed through an implementation of the S3 API.This service makes use of the distributed storage technologies provided by IBM’sCloud Object Storage System (formerly Cleversafe). For more information visit:http://www.ibm.com/cloud/object-storage
To configure access to IBM COS S3, follow the steps below:
Run rclone config and select n for a new remote.
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaultsNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter the name for the configuration
name> <YOUR NAME>

Select "s3" storage.
Choose a number from below, or type in your own value[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3"[snip]Storage> s3Select IBM COS as the S3 Storage Provider.
Choose the S3 provider.Choose a number from below, or type in your own value 1 / Choose this option to configure Storage to AWS S3 \ "AWS" 2 / Choose this option to configure Storage to Ceph Systems \ "Ceph" 3 / Choose this option to configure Storage to Dreamhost \ "Dreamhost" 4 / Choose this option to the configure Storage to IBM COS S3 \ "IBMCOS" 5 / Choose this option to the configure Storage to Minio \ "Minio" Provider>4Enter the Access Key and Secret.
AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> <>AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> <>Specify the endpoint for IBM COS. For Public IBM COS, choose from the optionbelow. For On Premise IBM COS, enter an endpoint address.
Endpoint for IBM COS S3 API.Specify if using an IBM COS On Premise.Choose a number from below, or type in your own value 1 / US Cross Region Endpoint \ "s3-api.us-geo.objectstorage.softlayer.net" 2 / US Cross Region Dallas Endpoint \ "s3-api.dal.us-geo.objectstorage.softlayer.net" 3 / US Cross Region Washington DC Endpoint \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" 4 / US Cross Region San Jose Endpoint \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" 5 / US Cross Region Private Endpoint \ "s3-api.us-geo.objectstorage.service.networklayer.com" 6 / US Cross Region Dallas Private Endpoint \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" 7 / US Cross Region Washington DC Private Endpoint \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" 8 / US Cross Region San Jose Private Endpoint \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" 9 / US Region East Endpoint \ "s3.us-east.objectstorage.softlayer.net"10 / US Region East Private Endpoint \ "s3.us-east.objectstorage.service.networklayer.com"11 / US Region South Endpoint[snip]34 / Toronto Single Site Private Endpoint \ "s3.tor01.objectstorage.service.networklayer.com"endpoint>1Specify a IBM COS Location Constraint. The location constraint must matchendpoint when using IBM Cloud Public. For on-prem COS, do not make a selectionfrom this list, hit enter
1 / US Cross Region Standard \ "us-standard" 2 / US Cross Region Vault \ "us-vault" 3 / US Cross Region Cold \ "us-cold" 4 / US Cross Region Flex \ "us-flex" 5 / US East Region Standard \ "us-east-standard" 6 / US East Region Vault \ "us-east-vault" 7 / US East Region Cold \ "us-east-cold" 8 / US East Region Flex \ "us-east-flex" 9 / US South Region Standard \ "us-south-standard"10 / US South Region Vault \ "us-south-vault"[snip]32 / Toronto Flex \ "tor01-flex"location_constraint>1Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private".IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all thecanned ACLs.
Canned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS \ "private" 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS \ "public-read" 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS \ "public-read-write" 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS \ "authenticated-read"acl> 1Review the displayed configuration and accept to save the "remote" then quit.The config file should look like this
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private

Execute rclone commands
1) Create a bucket.
    rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
    rclone lsd IBM-COS-XREGION:
    -1 2017-11-08 21:16:22        -1 test
    -1 2018-02-14 20:16:39        -1 newbucket
3) List contents of a bucket.
    rclone ls IBM-COS-XREGION:newbucket
    18685952 test.exe
4) Copy a file from local to remote.
    rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
    rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
    rclone delete IBM-COS-XREGION:newbucket/file.txt

If using IBM IAM authentication with IBM API KEY you need to fill in these additional parameters
Select false for env_auth
Leave access_key_id and secret_access_key blank
Paste your ibm_api_key
Option ibm_api_key.
IBM API Key to be used to obtain IAM token
Enter a value of type string. Press Enter for the default (1).
ibm_api_key>

Paste your ibm_resource_instance_id
Option ibm_resource_instance_id.
IBM service instance id
Enter a value of type string. Press Enter for the default (2).
ibm_resource_instance_id>

In advanced settings type true for v2_auth
Option v2_auth.
If true use v2 authentication.
If this is false (the default) then rclone will use v4 authentication.
If it is set then rclone will use v2 authentication.
Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
Enter a boolean value (true or false). Press Enter for the default (true).
v2_auth>

Here is an example of making an IDrive e2 configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> e2Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / IDrive e2 \ (IDrive)[snip]provider> IDriveOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_KEYOption acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read) / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access. | Granting this on a bucket is generally not recommended. \ (public-read-write) / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access. \ (authenticated-read) / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access. | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-read) / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-full-control)acl>Edit advanced config?y) Yesn) No (default)y/n>Configuration complete.Options:- type: s3- provider: IDrive- access_key_id: YOUR_ACCESS_KEY- secret_access_key: YOUR_SECRET_KEY- endpoint: q9d9.la12.idrivee2-5.comKeep this "e2" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yIntercolo Object Storage offersGDPR-compliant, transparently priced, S3-compatiblecloud storage hosted in Frankfurt, Germany.
Here's an example of making a configuration for Intercolo.
First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> intercoloOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] xx / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]xx / Intercolo Object Storage \ (Intercolo)[snip]provider> IntercoloOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> falseOption access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany \ (de-fra)region> 1Option endpoint.Endpoint for Intercolo Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany \ (de-fra.i3storage.com)endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) [snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Intercolo- access_key_id: ACCESS_KEY- secret_access_key: SECRET_KEY- region: de-fra- endpoint: de-fra.i3storage.comKeep this "intercolo" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[intercolo]
type = s3
provider = Intercolo
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = de-fra
endpoint = de-fra.i3storage.com

IONOS S3 Object Storage is a service offered by IONOS for storing and accessing unstructured data. To connect to the service, you will need an access key and a secret key. These can be found in the Data Center Designer, by selecting Manager resources > Object Storage Key Manager.
Here is an example of a configuration. First, run rclone config. This will walk you through an interactive setup process. Type n to add the new remote, and then enter a name:
Enter name for new remote.
name> ionos-fra

Type s3 to choose the connection type:
Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)[snip]Storage> s3TypeIONOS:
Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / IONOS Cloud \ (IONOS)[snip]provider> IONOSPress Enter to choose the default optionEnter AWS credentials in the next step:
Option env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Enter your Access Key and Secret key. These can be retrieved in theData Center Designer, click on the menu"Manager resources" / "Object Storage Key Manager".
Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_KEYChoose the region where your bucket is located:
Option region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany \ (de) 2 / Berlin, Germany \ (eu-central-2) 3 / Logrono, Spain \ (eu-south-2)region> 2Choose the endpoint from the same region:
Option endpoint.Endpoint for IONOS S3 Object Storage.Specify the endpoint from the same region.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Frankfurt, Germany \ (s3-eu-central-1.ionoscloud.com) 2 / Berlin, Germany \ (s3-eu-central-2.ionoscloud.com) 3 / Logrono, Spain \ (s3-eu-south-2.ionoscloud.com)endpoint> 1Press Enter to choose the default option or choose the desired ACL setting:
Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL.[snip]acl>Press Enter to skip the advanced config:
Edit advanced config?y) Yesn) No (default)y/n>Press Enter to save the configuration, and thenq to quit the configuration process:
Configuration complete.
Options:
- type: s3
- provider: IONOS
- access_key_id: YOUR_ACCESS_KEY
- secret_access_key: YOUR_SECRET_KEY
- endpoint: s3-eu-central-1.ionoscloud.com
Keep this "ionos-fra" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y

Done! Now you can try some commands (for macOS, use ./rclone instead of rclone).
Create a bucket (the name must be unique within the whole IONOS S3)
rclone mkdir ionos-fra:my-bucket

List available buckets
rclone lsd ionos-fra:

Copy a file from local to remote
rclone copy /Users/file.txt ionos-fra:my-bucket

List contents of a bucket
rclone ls ionos-fra:my-bucket

Copy a file from remote to local
rclone copy ionos-fra:my-bucket/file.txt .

Leviia Object Storage: back up and secure your data in a 100% French cloud, independent of GAFAM.
To configure access to Leviia, follow the steps below:
Run rclone config and select n for a new remote.
rclone config

No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Give the name of the configuration. For example, name it 'leviia'.
name> leviia

Select s3 storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)
[snip]
Storage> s3

Select Leviia provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3 \ "AWS"
[snip]
15 / Leviia Object Storage \ (Leviia)
[snip]
provider> Leviia

Enter your SecretId and SecretKey of Leviia.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Enter a boolean value (true or false). Press Enter for the default ("false").Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true"env_auth> 1AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").access_key_id> ZnIx.xxxxxxxxxxxxxxxAWS Secret Access Key (password)Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").secret_access_key> xxxxxxxxxxxSelect endpoint for Leviia.
   / The default endpoint
 1 | Leviia.
   \ (s3.leviia.com)
[snip]
endpoint> 1

Choose acl.
Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read)[snip]acl> 1Edit advanced config? (y/n)y) Yesn) No (default)y/n> nRemote config--------------------[leviia]- type: s3- provider: Leviia- access_key_id: ZnIx.xxxxxxx- secret_access_key: xxxxxxxx- endpoint: s3.leviia.com- acl: private--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name Type==== ====leviia s3Here is an example of making aLiara Object Storageconfiguration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nname> LiaraType of storage to configure.Choose a number from below, or type in your own value[snip]XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Liara, Minio) \ "s3"[snip]Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true"env_auth> 1AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> YOURACCESSKEYAWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> YOURSECRETACCESSKEYRegion to connect to.Choose a number from below, or type in your own value / The default endpoint 1 | US Region, Northern Virginia, or Pacific Northwest. | Leave location constraint empty. \ "us-east-1"[snip]region>Endpoint for S3 API.Leave blank if using Liara to use the default endpoint for the region.Specify if using an S3 clone such as Ceph.endpoint> storage.iran.liara.spaceCanned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private"[snip]acl>The server-side encryption algorithm used when storing this object in S3.Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256"server_side_encryption>The storage class to use when storing objects in S3.Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD"storage_class>Remote config--------------------[Liara]env_auth = falseaccess_key_id = YOURACCESSKEYsecret_access_key = YOURSECRETACCESSKEYendpoint = storage.iran.liara.spacelocation_constraint =acl =server_side_encryption =storage_class =--------------------y) Yes this is OKe) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =

Here is an example of making a Linode Object Storage configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> linodeOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Linode Object Storage \ (Linode)[snip]provider> LinodeOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for Linode Object Storage API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Amsterdam (Netherlands), nl-ams-1 \ (nl-ams-1.linodeobjects.com) 2 / Atlanta, GA (USA), us-southeast-1 \ (us-southeast-1.linodeobjects.com) 3 / Chennai (India), in-maa-1 \ (in-maa-1.linodeobjects.com) 4 / Chicago, IL (USA), us-ord-1 \ (us-ord-1.linodeobjects.com) 5 / Frankfurt (Germany), eu-central-1 \ (eu-central-1.linodeobjects.com) 6 / Jakarta (Indonesia), id-cgk-1 \ (id-cgk-1.linodeobjects.com) 7 / London 2 (Great Britain), gb-lon-1 \ (gb-lon-1.linodeobjects.com) 8 / Los Angeles, CA (USA), us-lax-1 \ (us-lax-1.linodeobjects.com) 9 / Madrid (Spain), es-mad-1 \ (es-mad-1.linodeobjects.com)10 / Melbourne (Australia), au-mel-1 \ (au-mel-1.linodeobjects.com)11 / Miami, FL (USA), us-mia-1 \ (us-mia-1.linodeobjects.com)12 / Milan (Italy), it-mil-1 \ (it-mil-1.linodeobjects.com)13 / Newark, NJ (USA), us-east-1 \ (us-east-1.linodeobjects.com)14 / Osaka (Japan), jp-osa-1 \ (jp-osa-1.linodeobjects.com)15 / Paris (France), fr-par-1 \ (fr-par-1.linodeobjects.com)16 / São Paulo (Brazil), br-gru-1 \ (br-gru-1.linodeobjects.com)17 / Seattle, WA (USA), us-sea-1 \ (us-sea-1.linodeobjects.com)18 / Singapore, ap-south-1 \ (ap-south-1.linodeobjects.com)19 / Singapore 2, sg-sin-1 \ (sg-sin-1.linodeobjects.com)20 / Stockholm (Sweden), se-sto-1 \ (se-sto-1.linodeobjects.com)21 / Washington, DC, (USA), us-iad-1 \ (us-iad-1.linodeobjects.com)endpoint> 5Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). 
\ (private)[snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Linode- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: eu-central-1.linodeobjects.comKeep this "linode" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com

Here is an example of making a Magalu Object Storage configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> magaluOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ...Magalu, ...and others \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Magalu Object Storage \ (Magalu)[snip]provider> MagaluOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for Magalu Object Storage API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / São Paulo, SP (BR), br-se1 \ (br-se1.magaluobjects.com) 2 / Fortaleza, CE (BR), br-ne1 \ (br-ne1.magaluobjects.com)endpoint> 2Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private)[snip]acl>Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: magalu- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: br-ne1.magaluobjects.comKeep this "magalu" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com

MEGA S4 Object Storage is an S3 compatible object storage system. It has a single pricing tier with no additional charges for data transfers or API requests and it is included in existing Pro plans.
Here is an example of making a configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> megas4Option Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS,... Mega, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / MEGA S4 Object Storage \ (Mega)[snip]provider> MegaOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Mega S4 eu-central-1 (Amsterdam) \ (s3.eu-central-1.s4.mega.io) 2 / Mega S4 eu-central-2 (Bettembourg) \ (s3.eu-central-2.s4.mega.io) 3 / Mega S4 ca-central-1 (Montreal) \ (s3.ca-central-1.s4.mega.io) 4 / Mega S4 ca-west-1 (Vancouver) \ (s3.ca-west-1.s4.mega.io)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Mega- access_key_id: XXX- secret_access_key: XXX- endpoint: s3.eu-central-1.s4.mega.ioKeep this "megas4" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[megas4]
type = s3
provider = Mega
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.eu-central-1.s4.mega.io

Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
To use it, install Minio following the instructions here.
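For example, once installed you might start the server against a local directory (the data path here is just an example):

minio server /data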
When it configures itself Minio will print something like this
Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:     https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:   https://docs.minio.io/docs/java-client-quickstart-guide
   Python: https://docs.minio.io/docs/python-client-quickstart-guide

These details then need to go into rclone config like this:

env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>

Which makes the config file look like this
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =

So once set up, for example, to copy files into a bucket
rclone copy /path/to/files minio:bucket

For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease. This will automatically set force_path_style = false, which is necessary for it to run properly.
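A minimal config file entry might then look like this (a sketch; the endpoint shown is illustrative and should be replaced with the NOS endpoint for your region):

[netease]
type = s3
provider = Netease
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = nos-eastchina1.126.net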
OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the official documentation.
Here is an example of an OOS configuration that you can paste into your rclone configuration file:
[outscale]
type = s3
provider = Outscale
env_auth = false
access_key_id = ABCDEFGHIJ0123456789
secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = eu-west-2
endpoint = oos.eu-west-2.outscale.com
acl = private

You can also run rclone config to go through the interactive setup process:
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> outscaleOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others \ (s3)[snip]Storage> outscaleOption provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / OUTSCALE Object Storage (OOS) \ (Outscale)[snip]provider> OutscaleOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ABCDEFGHIJ0123456789Option secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Paris, France \ (eu-west-2) 2 / New Jersey, USA \ (us-east-2) 3 / California, USA \ (us-west-1) 4 / SecNumCloud, Paris, France \ (cloudgouv-eu-west-1) 5 / Tokyo, Japan \ (ap-northeast-1)region> 1Option endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Outscale EU West 2 (Paris) \ (oos.eu-west-2.outscale.com) 2 / Outscale US east 2 (New Jersey) \ (oos.us-east-2.outscale.com) 3 / Outscale EU West 1 (California) \ (oos.us-west-1.outscale.com) 4 / Outscale SecNumCloud (Paris) \ (oos.cloudgouv-eu-west-1.outscale.com) 5 / Outscale AP Northeast 1 (Japan) \ (oos.ap-northeast-1.outscale.com)endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private)[snip]acl> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Outscale- access_key_id: ABCDEFGHIJ0123456789- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX- endpoint: oos.eu-west-2.outscale.comKeep this "outscale" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yOVHcloud Object Storageis an S3-compatible general-purpose object storage platform available in allOVHcloud regions. To use the platform, you will need an access key and secret key.To know more about it and how to interact with the platform, take a look at thedocumentation.
Here is an example of making an OVHcloud Object Storage configuration with rclone config:
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> ovhcloud-rbxOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[...] XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others \ (s3)[...]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[...]XX / OVHcloud Object Storage \ (OVHcloud)[...]provider> OVHcloudOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> my_accessOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> my_secretOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Gravelines, France \ (gra) 2 / Roubaix, France \ (rbx) 3 / Strasbourg, France \ (sbg) 4 / Paris, France (3AZ) \ (eu-west-par) 5 / Frankfurt, Germany \ (de) 6 / London, United Kingdom \ (uk) 7 / Warsaw, Poland \ (waw) 8 / Beauharnois, Canada \ (bhs) 9 / Toronto, Canada \ (ca-east-tor)10 / Singapore \ (sgp)11 / Sydney, Australia \ (ap-southeast-syd)12 / Mumbai, India \ (ap-south-mum)13 / Vint Hill, Virginia, USA \ (us-east-va)14 / Hillsboro, Oregon, USA \ (us-west-or)15 / Roubaix, France (Cold Archive) \ (rbx-archive)region> 2Option endpoint.Endpoint for OVHcloud Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 
1 / OVHcloud Gravelines, France \ (s3.gra.io.cloud.ovh.net) 2 / OVHcloud Roubaix, France \ (s3.rbx.io.cloud.ovh.net) 3 / OVHcloud Strasbourg, France \ (s3.sbg.io.cloud.ovh.net) 4 / OVHcloud Paris, France (3AZ) \ (s3.eu-west-par.io.cloud.ovh.net) 5 / OVHcloud Frankfurt, Germany \ (s3.de.io.cloud.ovh.net) 6 / OVHcloud London, United Kingdom \ (s3.uk.io.cloud.ovh.net) 7 / OVHcloud Warsaw, Poland \ (s3.waw.io.cloud.ovh.net) 8 / OVHcloud Beauharnois, Canada \ (s3.bhs.io.cloud.ovh.net) 9 / OVHcloud Toronto, Canada \ (s3.ca-east-tor.io.cloud.ovh.net)10 / OVHcloud Singapore \ (s3.sgp.io.cloud.ovh.net)11 / OVHcloud Sydney, Australia \ (s3.ap-southeast-syd.io.cloud.ovh.net)12 / OVHcloud Mumbai, India \ (s3.ap-south-mum.io.cloud.ovh.net)13 / OVHcloud Vint Hill, Virginia, USA \ (s3.us-east-va.io.cloud.ovh.us)14 / OVHcloud Hillsboro, Oregon, USA \ (s3.us-west-or.io.cloud.ovh.us)15 / OVHcloud Roubaix, France (Cold Archive) \ (s3.rbx-archive.io.cloud.ovh.net)endpoint> 2Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read) / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access. | Granting this on a bucket is generally not recommended. \ (public-read-write) / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access. \ (authenticated-read) / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access. | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-read) / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-full-control)acl> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: OVHcloud- access_key_id: my_access- secret_access_key: my_secret- region: rbx- endpoint: s3.rbx.io.cloud.ovh.net- acl: privateKeep this "ovhcloud-rbx" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yYour configuration file should now look like this:
[ovhcloud-rbx]
type=s3
provider=OVHcloud
access_key_id=my_access
secret_access_key=my_secret
region=rbx
endpoint=s3.rbx.io.cloud.ovh.net
acl=private

Here is an example of making a Petabox configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nEnter name for new remote.name> My Petabox StorageOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3"[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Petabox Object Storage \ (Petabox)[snip]provider> PetaboxOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> YOUR_ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> YOUR_SECRET_ACCESS_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia) \ (us-east-1) 2 / Europe (Frankfurt) \ (eu-central-1) 3 / Asia Pacific (Singapore) \ (ap-southeast-1) 4 / Middle East (Bahrain) \ (me-south-1) 5 / South America (São Paulo) \ (sa-east-1)region> 1Option endpoint.Endpoint for Petabox S3 Object Storage.Specify the endpoint from the same region.Choose a number from below, or type in your own value. 1 / US East (N. Virginia) \ (s3.petabox.io) 2 / US East (N. Virginia) \ (s3.us-east-1.petabox.io) 3 / Europe (Frankfurt) \ (s3.eu-central-1.petabox.io) 4 / Asia Pacific (Singapore) \ (s3.ap-southeast-1.petabox.io) 5 / Middle East (Bahrain) \ (s3.me-south-1.petabox.io) 6 / South America (São Paulo) \ (s3.sa-east-1.petabox.io)endpoint> 1Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read) / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access. | Granting this on a bucket is generally not recommended. \ (public-read-write) / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access. \ (authenticated-read) / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access. | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-read) / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. 
\ (bucket-owner-full-control)acl> 1Edit advanced config?y) Yesn) No (default)y/n> NoConfiguration complete.Options:- type: s3- provider: Petabox- access_key_id: YOUR_ACCESS_KEY_ID- secret_access_key: YOUR_SECRET_ACCESS_KEY- region: us-east-1- endpoint: s3.petabox.ioKeep this "My Petabox Storage" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[My Petabox Storage]
type=s3
provider=Petabox
access_key_id=YOUR_ACCESS_KEY_ID
secret_access_key=YOUR_SECRET_ACCESS_KEY
region=us-east-1
endpoint=s3.petabox.io

Pure Storage FlashBlade is a high performance S3-compatible object store.
FlashBlade supports most modern S3 features including:
To configure rclone for Pure Storage FlashBlade:
First run:
rclone config

This will guide you through an interactive setup process:
No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> flashbladeOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip] 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip] 9 / Pure Storage FlashBlade Object Storage \ (FlashBlade)[snip]provider> FlashBladeOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://s3.flashblade.example.comEdit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: FlashBlade- access_key_id: ACCESS_KEY_ID- secret_access_key: SECRET_ACCESS_KEY- endpoint: https://s3.flashblade.example.comKeep this "flashblade" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis results in the following configuration being stored in~/.config/rclone/rclone.conf:
[flashblade]
type=s3
provider=FlashBlade
access_key_id=ACCESS_KEY_ID
secret_access_key=SECRET_ACCESS_KEY
endpoint=https://s3.flashblade.example.com

Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a FlashBlade data VIP. For example, if your endpoint is https://s3.flashblade.example.com, then bucket-name.s3.flashblade.example.com should also resolve to the data VIP.
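To sanity check the virtual-hosted DNS setup you can resolve a bucket-style hostname and confirm it returns the data VIP; a minimal example, assuming the endpoint above and a hypothetical bucket name:

nslookup bucket-name.s3.flashblade.example.com

If this hostname does not resolve to the data VIP, requests addressed in virtual-hosted style will fail to resolve.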
Qiniu Cloud Object Storage (Kodo) is an independently developed core storage technology from Qiniu, proven by extensive customer use and holding a leading market position. Kodo can be widely applied to mass data management.
To configure access to Qiniu Kodo, follow the steps below:
Run rclone config and select n for a new remote.
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Give the name of the configuration. For example, name it 'qiniu'.
name> qiniu

Select s3 storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage> s3

Select Qiniu provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
22 / Qiniu Object Storage (Kodo)
   \ (Qiniu)
[snip]
provider> Qiniu

Enter your SecretId and SecretKey of Qiniu Kodo.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx

Select endpoint for Qiniu Kodo. This is the standard endpoint for different regions.
/ The default endpoint - a good choice if you are unsure. 1 | East China Region 1. | Needs location constraint cn-east-1. \ (cn-east-1) / East China Region 2. 2 | Needs location constraint cn-east-2. \ (cn-east-2) / North China Region 1. 3 | Needs location constraint cn-north-1. \ (cn-north-1) / South China Region 1. 4 | Needs location constraint cn-south-1. \ (cn-south-1) / North America Region. 5 | Needs location constraint us-north-1. \ (us-north-1) / Southeast Asia Region 1. 6 | Needs location constraint ap-southeast-1. \ (ap-southeast-1) / Northeast Asia Region 1. 7 | Needs location constraint ap-northeast-1. \ (ap-northeast-1)[snip]endpoint> 1Option endpoint.Endpoint for Qiniu Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / East China Endpoint 1 \ (s3-cn-east-1.qiniucs.com) 2 / East China Endpoint 2 \ (s3-cn-east-2.qiniucs.com) 3 / North China Endpoint 1 \ (s3-cn-north-1.qiniucs.com) 4 / South China Endpoint 1 \ (s3-cn-south-1.qiniucs.com) 5 / North America Endpoint 1 \ (s3-us-north-1.qiniucs.com) 6 / Southeast Asia Endpoint 1 \ (s3-ap-southeast-1.qiniucs.com) 7 / Northeast Asia Endpoint 1 \ (s3-ap-northeast-1.qiniucs.com)endpoint> 1Option location_constraint.Location constraint - must be set to match the Region.Used when creating buckets only.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / East China Region 1 \ (cn-east-1) 2 / East China Region 2 \ (cn-east-2) 3 / North China Region 1 \ (cn-north-1) 4 / South China Region 1 \ (cn-south-1) 5 / North America Region 1 \ (us-north-1) 6 / Southeast Asia Region 1 \ (ap-southeast-1) 7 / Northeast Asia Region 1 \ (ap-northeast-1)location_constraint> 1Choose acl and storage class.
Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read)[snip]acl> 2The storage class to use when storing new objects in Tencent COS.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Standard storage class \ (STANDARD) 2 / Infrequent access storage mode \ (LINE) 3 / Archive storage mode \ (GLACIER) 4 / Deep archive storage mode \ (DEEP_ARCHIVE)[snip]storage_class> 1Edit advanced config? (y/n)y) Yesn) No (default)y/n> nRemote config--------------------[qiniu]- type: s3- provider: Qiniu- access_key_id: xxx- secret_access_key: xxx- region: cn-east-1- endpoint: s3-cn-east-1.qiniucs.com- location_constraint: cn-east-1- acl: public-read- storage_class: STANDARD--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name Type==== ====qiniu s3FileLu S5 Object Storage is an S3-compatible object storagesystem. It provides multiple region options (Global, US-East, EU-Central,AP-Southeast, and ME-Central) while using a single endpoint (s5lu.com).FileLu S5 is designed for scalability, security, and simplicity, with predictablepricing and no hidden charges for data transfers or API requests.
Here is an example of making a configuration. First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> s5luOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / FileLu S5 Object Storage \ (FileLu)[snip]provider> FileLuOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Global \ (global) 2 / North America (US-East) \ (us-east) 3 / Europe (EU-Central) \ (eu-central) 4 / Asia Pacific (AP-Southeast) \ (ap-southeast) 5 / Middle East (ME-Central) \ (me-central)region> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: FileLu- access_key_id: XXX- secret_access_key: XXX- endpoint: s5lu.comKeep this "s5lu" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[s5lu]
type=s3
provider=FileLu
access_key_id=XXX
secret_access_key=XXX
endpoint=s5lu.com

Rabata is an S3-compatible secure cloud storage service that offers flat, transparent pricing (no API request fees) while supporting standard S3 APIs. It is suitable for backup, application storage, media workflows, and archive use cases.
Server-side copy is not implemented by Rabata, which also means that the modification time of objects cannot be updated.
Rclone config:
rclone configNo remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> RabataOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Rabata Cloud Storage \ (Rabata)[snip]provider> RabataOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEY_IDOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia) \ (us-east-1) 2 / EU (Ireland) \ (eu-west-1) 3 / EU (London) \ (eu-west-2)region> 3Option endpoint.Endpoint for Rabata Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia) \ (s3.us-east-1.rabata.io) 2 / EU West (Ireland) \ (s3.eu-west-1.rabata.io) 3 / EU West (London) \ (s3.eu-west-2.rabata.io)endpoint> 3Option location_constraint.location where your bucket will be created and your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / US East (N. Virginia) \ (us-east-1) 2 / EU (Ireland) \ (eu-west-1) 3 / EU (London) \ (eu-west-2)location_constraint> 3Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Rabata- access_key_id: ACCESS_KEY_ID- secret_access_key: SECRET_ACCESS_KEY- region: eu-west-2- endpoint: s3.eu-west-2.rabata.io- location_constraint: eu-west-2Keep this "rabata" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name Type==== ====rabata s3RackCorp Object Storage is an S3compatible object storage platform from your friendly cloud provider RackCorp.The service is fast, reliable, well priced and located in many strategiclocations unserviced by others, to ensure you can maintain data sovereignty.
Before you can use RackCorp Object Storage, you'll need to sign up for an account on our portal. Next you can create an access key, a secret key and buckets, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config asks for your access_key_id and secret_access_key.
Your config should end up looking a bit like this:
[RCS3-demo-config]
type=s3
provider=RackCorp
env_auth=true
access_key_id=YOURACCESSKEY
secret_access_key=YOURSECRETACCESSKEY
region=au-nsw
endpoint=s3.rackcorp.com
location_constraint=au-nsw

Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
For example, to serve remote:path over s3, run the server like this:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path

This will be compatible with an rclone remote which is defined like this:
[serves3]
type=s3
provider=Rclone
endpoint=http://127.0.0.1:8080/
access_key_id=ACCESS_KEY_ID
secret_access_key=SECRET_ACCESS_KEY
use_multipart_uploads=false

Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
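As a quick sketch of how the two halves fit together (the bucket name here is just an example), you could then copy data through the served endpoint from another shell while rclone serve s3 is running:

rclone copy /path/to/files serves3:mybucket

Buckets seen through the serves3 remote should correspond to the top level directories of the remote:path being served.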
Scaleway's Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through their API and CLI, or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
[scaleway]
type=s3
provider=Scaleway
env_auth=false
endpoint=s3.nl-ams.scw.cloud
access_key_id=SCWXXXXXXXXXXXXXX
secret_access_key=1111111-2222-3333-44444-55555555555555
region=nl-ams
location_constraint=nl-ams
acl=private
upload_cutoff=5M
chunk_size=5M
copy_cutoff=5M

Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" storage_class. You can therefore configure your remote with the storage_class = GLACIER option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back afterwards; you will need to restore them to the "STANDARD" storage_class first before being able to read them (see the "restore" section above).
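As an illustration (the remote and bucket names are just examples), the storage class can also be set per command with the --s3-storage-class flag instead of in the config file:

rclone copy /path/to/backup scaleway:my-bucket/backup --s3-storage-class GLACIER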
Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use.
Here is a config run through for a remote called remote - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
$ rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote

Choose s3 backend
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ (s3)
[snip]
Storage> s3

Choose LyveCloud as S3 provider
Choose your S3 provider.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
[snip]
XX / Seagate Lyve Cloud
   \ (LyveCloud)
[snip]
provider> LyveCloud

Take the default (just press enter) to enter access key and secret in the config file.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own boolean value (true or false).
Press Enter for the default (false).
 1 / Enter AWS credentials in the next step.
   \ (false)
 2 / Get AWS credentials from the environment (env vars or IAM).
   \ (true)
env_auth>
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
access_key_id> XXX
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a value. Press Enter to leave empty.
secret_access_key> YYY

Leave region blank
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region.
   \ ()
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH.
   \ (other-v2-signature)
region>

Enter your Lyve Cloud endpoint. This field cannot be kept empty.
Endpoint for Lyve Cloud S3 API.
Required when using an S3 clone.
Please type in your LyveCloud endpoint.
Examples:
- s3.us-west-1.{account_name}.lyve.seagate.com (US West 1 - California)
- s3.eu-west-1.{account_name}.lyve.seagate.com (US West 1 - Ireland)
Enter a value.
endpoint> s3.us-west-1.global.lyve.seagate.com

Leave location constraint blank
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a value. Press Enter to leave empty.
location_constraint>

Choose default ACL (private).
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default).
   \ (private)
[snip]
acl>

And the config file should end up looking like this:
[remote]
type=s3
provider=LyveCloud
access_key_id=XXX
secret_access_key=YYY
endpoint=s3.us-east-1.lyvecloud.seagate.com

SeaweedFS is a distributed storage system for blobs, objects, files, and data lakes, with O(1) disk seek and a scalable file metadata store. It has an S3 compatible object storage interface. SeaweedFS can also act as a gateway to a remote S3 compatible object store, caching data and metadata with asynchronous write back for fast local access and minimal access cost.
Assuming SeaweedFS is configured with weed shell as follows:
> s3.bucket.create -name foo
> s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
{
  "identities": [
    {
      "name": "me",
      "credentials": [
        {
          "accessKey": "any",
          "secretKey": "any"
        }
      ],
      "actions": [
        "Read:foo",
        "Write:foo",
        "List:foo",
        "Tagging:foo",
        "Admin:foo"
      ]
    }
  ]
}

To use rclone with SeaweedFS, the above configuration should end up with something like this in your config:
[seaweedfs_s3]
type=s3
provider=SeaweedFS
access_key_id=any
secret_access_key=any
endpoint=localhost:8333

So once set up, for example, to copy files into a bucket:
rclone copy /path/to/files seaweedfs_s3:foo

Selectel Cloud Storage is an S3 compatible storage system which features triple redundancy storage, automatic scaling, high availability and a comprehensive IAM system.
Selectel have a section on their website for configuring rclone which shows how to make the right API keys.
From rclone v1.69 Selectel is a supported operator - please choose the Selectel provider type.
Note that you should use "vHosted" access for the buckets (which is the recommended default), not "path style".
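Virtual-hosted access corresponds to the s3 backend's force_path_style option being false; if you have previously forced path-style access, a minimal sketch of the override (only the relevant lines are shown, and the remote name is an assumption) would be:

[selectel]
type = s3
provider = Selectel
force_path_style = false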
You can use rclone config to make a new remote like this
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> selectelOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Selectel Object Storage \ (Selectel)[snip]provider> SelectelOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / St. Petersburg \ (ru-1)region> 1Option endpoint.Endpoint for Selectel Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Saint Petersburg \ (s3.ru-1.storage.selcloud.ru)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Selectel- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- region: ru-1- endpoint: s3.ru-1.storage.selcloud.ruKeep this "selectel" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yAnd your config should end up looking like this:
[selectel]
type=s3
provider=Selectel
access_key_id=ACCESS_KEY
secret_access_key=SECRET_ACCESS_KEY
region=ru-1
endpoint=s3.ru-1.storage.selcloud.ru

Servercore Object Storage is an S3 compatible object storage system that provides scalable and secure storage solutions for businesses of all sizes.
rclone config example:
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> servercoreOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / Servercore Object Storage \ (Servercore)[snip]provider> ServercoreOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption region.Region where your is data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / St. Petersburg \ (ru-1) 2 / Moscow \ (gis-1) 3 / Moscow \ (ru-7) 4 / Tashkent, Uzbekistan \ (uz-2) 5 / Almaty, Kazakhstan \ (kz-1)region> 1Option endpoint.Endpoint for Servercore Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Saint Petersburg \ (s3.ru-1.storage.selcloud.ru) 2 / Moscow \ (s3.gis-1.storage.selcloud.ru) 3 / Moscow \ (s3.ru-7.storage.selcloud.ru) 4 / Tashkent, Uzbekistan \ (s3.uz-2.srvstorage.uz) 5 / Almaty, Kazakhstan \ (s3.kz-1.srvstorage.kz)endpoint> 1Edit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: Servercore- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- region: ru-1- endpoint: s3.ru-1.storage.selcloud.ruKeep this "servercore" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> ySpectra Logicis an on-prem S3-compatible object storage gateway that exposes local objectstorage and policy-tiers data to Spectra tape and public clouds under a singlenamespace for backup and archiving.
The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of SpectraLogic. Here is an example run of the configurator.
No remotes found, make a new one?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.name> spectralogicOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.[snip]XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ... \ (s3)[snip]Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.[snip]XX / SpectraLogic BlackPearl \ (SpectraLogic)[snip]provider> SpectraLogicOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> ACCESS_KEYOption secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> SECRET_ACCESS_KEYOption endpoint.Endpoint for S3 API.Required when using an S3 clone.Enter a value. Press Enter to leave empty.endpoint> https://bp.example.comEdit advanced config?y) Yesn) No (default)y/n> nConfiguration complete.Options:- type: s3- provider: SpectraLogic- access_key_id: ACCESS_KEY- secret_access_key: SECRET_ACCESS_KEY- endpoint: https://bp.example.comKeep this "spectratest" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yAnd your config should end up looking like this:
[spectratest]
type=s3
provider=SpectraLogic
access_key_id=ACCESS_KEY
secret_access_key=SECRET_ACCESS_KEY
endpoint=https://bp.example.com

Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.
The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of Storj. Here is an example run of the configurator.
Type of storage to configure.Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth> 1Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> XXXX (as shown when creating the access grant)Option secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> XXXX (as shown when creating the access grant)Option endpoint.Endpoint of the Shared Gateway.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / EU1 Shared Gateway \ (gateway.eu1.storjshare.io) 2 / US1 Shared Gateway \ (gateway.us1.storjshare.io) 3 / Asia-Pacific Shared Gateway \ (gateway.ap1.storjshare.io)endpoint> 1 (as shown when creating the access grant)Edit advanced config?y) Yesn) No (default)y/n> nNote that s3 credentials are generated when youcreate an accessgrant.
--chunk-size is forced to be 64 MiB or greater. This will use more memory than the default of 5 MiB.

Due to issue #39 uploading multipart files via the S3 gateway causes them to lose their metadata. For rclone's purpose this means that the modification time is not stored, nor is any MD5SUM (if one is available from the source).
This has the following consequences:
- rclone rcat will fail as the metadata doesn't match after upload
- rclone mount will fail for the same reason
  - This can be worked around with --vfs-cache-mode writes or --vfs-cache-mode full, or by setting --s3-upload-cutoff large
- rclone sync will likely keep trying to upload files bigger than --s3-upload-cutoff
  - This can be worked around with --checksum or --size-only, or by setting --s3-upload-cutoff large
  - The maximum value for --s3-upload-cutoff is 5GiB though

One general purpose workaround is to set --s3-upload-cutoff 5G. This means that rclone will upload files smaller than 5GiB as single parts. Note that this can be set in the config file with upload_cutoff = 5G or configured in the advanced settings. If you regularly transfer files larger than 5G then using --checksum or --size-only in rclone sync is the recommended workaround.
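For example (the remote and bucket names here are illustrative), the workaround can be applied on the command line like this:

rclone sync --s3-upload-cutoff 5G /path/to/files storj-gateway:my-bucket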
Use the native protocol to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1 GB upload will result in 2.68 GB of data being uploaded to storage nodes across the network.
Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1 GB upload will result in only 1 GB of data being uploaded to storage nodes across the network.
For a more detailed comparison please check the documentation of the storj backend.
Synology C2 Object Storage provides a secure, S3-compatible, and cost-effective cloud storage solution without API request fees, download fees, or deletion penalties.
The S3 compatible gateway is configured using rclone config with a type of s3 and with a provider name of Synology. Here is an example run of the configurator.
First run:
rclone config

This will guide you through an interactive setup process.
No remotes found, make a new one\?n) New remotes) Set configuration passwordq) Quit confign/s/q> nEnter name for new remote.1name> synoType of storage to configure.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own valueXX / Amazon S3 Compliant Storage Providers including AWS, ... \ "s3"Storage> s3Choose your S3 provider.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 24 / Synology C2 Object Storage \ (Synology)provider> SynologyGet AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Enter a boolean value (true or false). Press Enter for the default ("false").Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true"env_auth> 1AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").access_key_id> accesskeyidAWS Secret Access Key (password)Leave blank for anonymous access or runtime credentials.Enter a string value. Press Enter for the default ("").secret_access_key> secretaccesskeyRegion where your data stored.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Europe Region 1 \ (eu-001) 2 / Europe Region 2 \ (eu-002) 3 / US Region 1 \ (us-001) 4 / US Region 2 \ (us-002) 5 / Asia (Taiwan) \ (tw-001)region > 1Option endpoint.Endpoint for Synology C2 Object Storage API.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / EU Endpoint 1 \ (eu-001.s3.synologyc2.net) 2 / US Endpoint 1 \ (us-001.s3.synologyc2.net) 3 / TW Endpoint 1 \ (tw-001.s3.synologyc2.net)endpoint> 1Option location_constraint.Location constraint - must be set to match the Region.Leave blank if not sure. Used when creating buckets only.Enter a value. Press Enter to leave empty.location_constraint>Edit advanced config? (y/n)y) Yesn) Noy/n> yOption no_check_bucket.If set, don't attempt to check the bucket exists or create it.This can be useful when trying to minimise the number of transactionsrclone does if you know the bucket exists already.It can also be needed if the user you are using does not have bucketcreation permissions. Before v1.52.0 this would have passed silentlydue to a bug.Enter a boolean value (true or false). Press Enter for the default (true).no_check_bucket> trueConfiguration complete.Options:- type: s3- provider: Synology- region: eu-001- endpoint: eu-001.s3.synologyc2.net- no_check_bucket: trueKeep this "syno" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yTencent Cloud Object Storage (COS)is a distributed storage service offered by Tencent Cloud for unstructured data.It is secure, stable, massive, convenient, low-delay and low-cost.
To configure access to Tencent COS, follow the steps below:
Run rclone config and select n for a new remote.
rclone config
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Give the name of the configuration. For example, name it 'cos'.
name> cos

Select s3 storage.
Choose a number from below, or type in your own value
[snip]
XX / Amazon S3 Compliant Storage Providers including AWS, ...
   \ "s3"
[snip]
Storage> s3

Select TencentCOS provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
[snip]
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
[snip]
provider> TencentCOS

Enter your SecretId and SecretKey of Tencent Cloud.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> AKIDxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> xxxxxxxxxxx

Select endpoint for Tencent COS. This is the standard endpoint for different regions.
 1 / Beijing Region.
   \ "cos.ap-beijing.myqcloud.com"
 2 / Nanjing Region.
   \ "cos.ap-nanjing.myqcloud.com"
 3 / Shanghai Region.
   \ "cos.ap-shanghai.myqcloud.com"
 4 / Guangzhou Region.
   \ "cos.ap-guangzhou.myqcloud.com"
[snip]
endpoint> 4

Choose acl and storage class.
Note that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Owner gets Full_CONTROL. No one else has access rights (default). \ "default"[snip]acl> 1The storage class to use when storing new objects in Tencent COS.Enter a string value. Press Enter for the default ("").Choose a number from below, or type in your own value 1 / Default \ ""[snip]storage_class> 1Edit advanced config? (y/n)y) Yesn) No (default)y/n> nRemote config--------------------[cos]type = s3provider = TencentCOSenv_auth = falseaccess_key_id = xxxsecret_access_key = xxxendpoint = cos.ap-guangzhou.myqcloud.comacl = default--------------------y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d> yCurrent remotes:Name Type==== ====cos s3Wasabi is a cloud-based object storage service for abroad range of applications and use cases. Wasabi is designed forindividuals and organizations that require a high-performance,reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with rclone like this.
No remotes found, make a new one\?n) New remotes) Set configuration passwordn/s> nname> wasabiType of storage to configure.Choose a number from below, or type in your own value[snip]XX / Amazon S3 (also Dreamhost, Ceph, ChinaMobile, ArvanCloud, Minio, Liara) \ "s3"[snip]Storage> s3Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own value 1 / Enter AWS credentials in the next step \ "false" 2 / Get AWS credentials from the environment (env vars or IAM) \ "true"env_auth> 1AWS Access Key ID - leave blank for anonymous access or runtime credentials.access_key_id> YOURACCESSKEYAWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.secret_access_key> YOURSECRETACCESSKEYRegion to connect to.Choose a number from below, or type in your own value / The default endpoint - a good choice if you are unsure. 1 | US Region, Northern Virginia, or Pacific Northwest. | Leave location constraint empty. \ "us-east-1"[snip]region> us-east-1Endpoint for S3 API.Leave blank if using AWS to use the default endpoint for the region.Specify if using an S3 clone such as Ceph.endpoint> s3.wasabisys.comLocation constraint - must be set to match the Region. Used when creating buckets only.Choose a number from below, or type in your own value 1 / Empty for US Region, Northern Virginia, or Pacific Northwest. \ ""[snip]location_constraint>Canned ACL used when creating buckets and/or storing objects in S3.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclChoose a number from below, or type in your own value 1 / Owner gets FULL_CONTROL. No one else has access rights (default). \ "private"[snip]acl>The server-side encryption algorithm used when storing this object in S3.Choose a number from below, or type in your own value 1 / None \ "" 2 / AES256 \ "AES256"server_side_encryption>The storage class to use when storing objects in S3.Choose a number from below, or type in your own value 1 / Default \ "" 2 / Standard storage class \ "STANDARD" 3 / Reduced redundancy storage class \ "REDUCED_REDUNDANCY" 4 / Standard Infrequent Access storage class \ "STANDARD_IA"storage_class>Remote config--------------------[wasabi]env_auth = falseaccess_key_id = YOURACCESSKEYsecret_access_key = YOURSECRETACCESSKEYregion = us-east-1endpoint = s3.wasabisys.comlocation_constraint =acl =server_side_encryption =storage_class =--------------------y) Yes this is OKe) Edit this remoted) Delete this remotey/e/d> yThis will leave the config file looking like this.
[wasabi]
type=s3
provider=Wasabi
env_auth=false
access_key_id=YOURACCESSKEY
secret_access_key=YOURSECRETACCESSKEY
region=
endpoint=s3.wasabisys.com
location_constraint=
acl=
server_side_encryption=
storage_class=

Zata Object Storage provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs.
First run:
rclone configThis will guide you through an interactive setup process:e) Edit existing remoten) New remoted) Delete remoter) Rename remotec) Copy remotes) Set configuration passwordq) Quit confige/n/d/r/c/s/q> nEnter name for new remote.name> my zata storageOption Storage.Type of storage to configure.Choose a number from below, or type in your own value.XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3)Storage> s3Option provider.Choose your S3 provider.Choose a number from below, or type in your own value.Press Enter to leave empty.XX / Zata (S3 compatible Gateway) \ (Zata)provider> ZataOption env_auth.Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).Only applies if access_key_id and secret_access_key is blank.Choose a number from below, or type in your own boolean value (true or false).Press Enter for the default (false). 1 / Enter AWS credentials in the next step. \ (false) 2 / Get AWS credentials from the environment (env vars or IAM). \ (true)env_auth>Option access_key_id.AWS Access Key ID.Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.access_key_id> "your key"Option secret_access_key.AWS Secret Access Key (password).Leave blank for anonymous access or runtime credentials.Enter a value. Press Enter to leave empty.secret_access_key> "your secret key"Option region.Region where you can connect with.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / Indore, Madhya Pradesh, India \ (us-east-1)region> 1Option endpoint.Endpoint for Zata Object Storage.Choose a number from below, or type in your own value.Press Enter to leave empty. 1 / South Asia Endpoint \ (idr01.zata.ai)endpoint> 1Option location_constraint.Location constraint - must be set to match the Region.Leave blank if not sure. Used when creating buckets only.Enter a value. Press Enter to leave empty.location_constraint>Option acl.Canned ACL used when creating buckets and storing or copying objects.This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-aclNote that this ACL is applied when server-side copying objects as S3doesn't copy the ACL from the source but rather writes a fresh one.If the acl is an empty string then no X-Amz-Acl: header is added andthe default (private) will be used.Choose a number from below, or type in your own value.Press Enter to leave empty. / Owner gets FULL_CONTROL. 1 | No one else has access rights (default). \ (private) / Owner gets FULL_CONTROL. 2 | The AllUsers group gets READ access. \ (public-read) / Owner gets FULL_CONTROL. 3 | The AllUsers group gets READ and WRITE access. | Granting this on a bucket is generally not recommended. \ (public-read-write) / Owner gets FULL_CONTROL. 4 | The AuthenticatedUsers group gets READ access. \ (authenticated-read) / Object owner gets FULL_CONTROL. 5 | Bucket owner gets READ access. | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ (bucket-owner-read) / Both the object owner and the bucket owner get FULL_CONTROL over the object. 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. 
\ (bucket-owner-full-control)acl>Edit advanced config?y) Yesn) No (default)y/n>Configuration complete.Options:- type: s3- provider: Zata- access_key_id: xxx- secret_access_key: xxx- region: us-east-1- endpoint: idr01.zata.aiKeep this "my zata storage" remote?y) Yes this is OK (default)e) Edit this remoted) Delete this remotey/e/d>This will leave the config file looking like this.
[my zata storage]
type=s3
provider=Zata
access_key_id=xxx
secret_access_key=xxx
region=us-east-1
endpoint=idr01.zata.ai

The most common cause of rclone using lots of memory is a single directory with millions of files in it. Despite s3 not really having the concept of directories, rclone does the sync on a directory by directory basis to be compatible with normal filing systems.
Rclone loads each directory into memory as rclone objects. Each rcloneobject takes 0.5k-1k of memory, so approximately 1GB per 1,000,000files, and the sync for that directory does not begin until it isentirely loaded in memory. So the sync can take a long time to startfor large directories.
To sync a directory with 100,000,000 files in it you would need approximately 100 GB of memory. At some point the amount of memory becomes difficult to provide so there is a workaround for this which involves a bit of scripting.
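A minimal sketch of that kind of script, assuming a local source directory and a remote named remote: with a bucket called hugebucket (the names and batch size are assumptions, not an official recipe):

# List every file in the source once (one path per line).
rclone lsf --files-only -R /path/to/local > files.txt
# Split the listing into batches of one million paths each.
split -l 1000000 files.txt batch-
# Transfer each batch separately so only one batch is held in memory at a time.
for f in batch-*; do
  rclone copy --files-from "$f" --no-traverse /path/to/local remote:hugebucket
done

Note this is copy-based, so it will not delete extraneous files on the destination the way a full sync would.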
At some point rclone will gain a sync mode which is effectively this workaround but built in to rclone.
rclone about is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about.
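If you do want to include an S3 remote in a union anyway, a hedged sketch is to choose union policies that do not depend on free-space information (the remote names below are examples only):

[myunion]
type = union
upstreams = s3remote:bucket /local/data
create_policy = ff
action_policy = ff
search_policy = ff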