elastio/ssstar

tar-like utility for constructing archives of objects stored on S3 or S3-compatible services

ssstar is a Rust library crate as well as a command-line tool to create and extract tar-compatible archives containing objects stored in S3 or S3-compatible storage. It works similarly to GNU tar, and produces archives that are 100% compatible with tar, though it uses different command line arguments.


ssstar provides a cross-platform Rust-powered CLI as well as a Rust library crate that lets you create tar archives containing objects from S3 and S3-compatible object storage, regardless of size. ssstar applies concurrency aggressively, and uses a streaming design which means even multi-TB objects can be processed with minimal memory utilization. The resulting tar archive can itself be uploaded to object storage, written to a local file, or written to stdout and piped to another command line tool.

We built ssstar so our customers using the elastio cloud native backup and recovery CLI could back up and restore S3 buckets directly into and from Elastio vaults; however, we made the tool generic enough that it can be used by itself whenever you need to package one or more S3 objects into a tarball.

Installation

Cargo

On any supported platform (meaning Windows, macOS (both Intel and Apple Silicon), and Linux), if you have a recent Rust compiler installed you can use cargo install to get the ssstar CLI:

  • Ensure you have at least Rust 1.63.0 installed by following this guide.
  • Run cargo install ssstar-cli --locked to compile ssstar from source and install locally.

Precompiled binaries

See the GitHub Releases for pre-compiled binaries for Windows, macOS, and Linux.

Usage (without Elastio)

To create a tar archive, you specify S3 buckets, objects, entire prefixes, or globs, as well as where you want the tar archive to be written:

# Archive an entire bucket and write the tar archive to another bucket
ssstar create \
  s3://my-source-bucket \
  --s3 s3://my-destination-bucket/backup.tar

# Archive all objects in the `foo/` prefix (non-recursive) and write the tar archive to a local file
ssstar create \
  s3://my-source-bucket/foo/ \
  --file ./backup.tar

# Archive some specific objects identified by name, and write the tar archive to stdout and pipe that to
# gzip
ssstar create \
  s3://my-source-bucket/object1 s3://my-source-bucket/object2 \
  --stdout | gzip > backup.tar.gz

# Archive all objects matching a glob, and write the tar archive to another bucket
ssstar create \
  "s3://my-source-bucket/foo/**" \
  --s3 s3://my-destination-bucket/backup.tar

You can pass multiple inputs to ssstar create, using a mix of entire buckets, prefixes, specific objects, and globs. Just make sure that when you use globs you wrap them in quotes, otherwise your shell may try to evaluate them. For example:

# Archive a bunch of different inputs, writing the result to a file
ssstar create \
  s3://my-source-bucket/ \                   # <-- include all objects in `my-source-bucket`
  s3://my-other-bucket/foo/ \                # <-- include all objects in `foo/` (non-recursive)
  s3://my-other-bucket/bar/boo \             # <-- include the object with key `bar/boo`
  "s3://yet-another-bucket/logs/2022*/**" \  # <-- recursively include all objects in any prefix `logs/2022*`
  --file ./backup.tar                        # <-- this is the path where the tar archive will be written

To extract a tar archive and write the contents directly to S3 objects, you specify where to find the tar archive, optional filters to limit what is extracted, and the S3 bucket and prefix to which to extract the contents.

A simple example:

# Extract a local tar archive to the root of an S3 bucket `my-bucket`
ssstar extract --file ./backup.tar s3://my-bucket

Each file in the tar archive will be written to the bucket my-bucket, with the object key equal to the file path within the archive. For example, if the archive contains a file foo/bar/baz.txt, that file will be written to s3://my-bucket/foo/bar/baz.txt.

You can provide not just a target bucket but also a prefix, e.g.:

# Extract a local tar archive to the prefix `restored/` of an S3 bucket `my-bucket`
ssstar extract --file ./backup.tar s3://my-bucket/restored/

In that case, if the tar archive contains a file foo/bar/baz.txt, it will be written to s3://my-bucket/restored/foo/bar/baz.txt. NOTE: In S3, prefixes don't necessarily end in /; if you don't provide the trailing / character in the S3 URL passed to ssstar extract, it will not be added for you! Instead you'll get something like s3://my-bucket/restoredfoo/bar/baz, which may or may not be what you actually want!
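The prefix behavior above amounts to plain string concatenation. A minimal Rust sketch shows both outcomes; the `object_key` helper here is hypothetical, for illustration only, and is not part of the ssstar API:

```rust
/// Hypothetical helper illustrating how a destination prefix combines
/// with an archive entry path during extraction. The prefix is prepended
/// verbatim; no '/' separator is inserted for you.
fn object_key(prefix: &str, entry_path: &str) -> String {
    format!("{prefix}{entry_path}")
}

fn main() {
    // With the trailing '/': the intended layout.
    assert_eq!(
        object_key("restored/", "foo/bar/baz.txt"),
        "restored/foo/bar/baz.txt"
    );
    // Without it: the prefix fuses into the first path component.
    assert_eq!(
        object_key("restored", "foo/bar/baz.txt"),
        "restoredfoo/bar/baz.txt"
    );
}
```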

If you don't want to extract the full contents of the archive, you can specify one or more filters. These can be exact file paths, directory paths ending in /, or globs. For example:

ssstar extract --file ./backup.tar \
  foo/bar/baz.txt \          # <-- extract the file `foo/bar/baz.txt` if it's present in the archive
  boo/ \                     # <-- extract all files in the `boo` directory (recursive)
  "baz/**/*.txt" \           # <-- extract any `.txt` file anywhere in `baz/`, recursively
  s3://my-bucket/restored/   # <-- write all matching files to the `restored/` prefix in `my-bucket`

Usage (with Elastio)

To use with Elastio, create archives with the --stdout option and pipe to elastio stream backup, and restore them by piping elastio stream restore to ssstar extract with the --stdin option. For example:

# Backup an entire S3 bucket `my-source-bucket` to the default Elastio vault:
ssstar create s3://my-source-bucket/ --stdout \
  | elastio stream backup --hostname-override my-source-bucket --stream-name my-backup

# Restore a recovery point with ID `$RP_ID` from Elastio to the `my-destination-bucket` bucket:
elastio stream restore --rp $RP_ID \
  | ssstar extract --stdin s3://my-destination-bucket

For more about using the Elastio CLI, see the Elastio CLI docs.

Advanced CLI Options

Run ssstar create --help and ssstar extract --help to get the complete CLI usage documentation for archive creation and extraction, respectively. There are a few command line options that are particularly likely to be of interest:

Using a custom S3 endpoint

ssstar is developed and tested against AWS S3; however, it should work with any object storage system that provides an S3-compatible API. In particular, most of the automated tests our CI system runs actually use Minio and not the real S3 API. To use ssstar with an S3-compatible API, use the --s3-endpoint option. For example, if you have a Minio server running at 127.0.0.1:30000, using default minioadmin credentials, you can use it with ssstar like this:

ssstar --s3-endpoint http://127.0.0.1:30000 \
  --aws-access-key-id minioadmin --aws-secret-access-key minioadmin \
  ...

Controlling Concurrency

The --max-concurrent-requests argument controls how many concurrent S3 API operations will be performed in each stage of the archive creation or extraction process. The default is 10, because that is what the AWS CLI uses. However, if you are running ssstar on an EC2 instance with multi-gigabit Ethernet connectivity to S3, 10 concurrent requests may not be enough to saturate the network connection. Experiment with larger values to see if you experience faster transfer times with more concurrency.
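ssstar's real implementation is async Rust; purely as a std-only illustration of the bounded-concurrency idea behind --max-concurrent-requests, the sketch below caps in-flight work with a counting semaphore. Every name here is made up for the example and is not ssstar's API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

/// Minimal counting semaphore: at most `n` holders at a time.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        // Loop guards against spurious wakeups.
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    let cap = 10; // default value, chosen to match the AWS CLI
    let sem = Arc::new(Semaphore::new(cap));
    let in_flight = Arc::new(AtomicUsize::new(0));
    let peak = Arc::new(AtomicUsize::new(0));

    // 40 simulated "S3 requests", but never more than `cap` at once.
    let handles: Vec<_> = (0..40)
        .map(|_| {
            let (sem, in_flight, peak) =
                (Arc::clone(&sem), Arc::clone(&in_flight), Arc::clone(&peak));
            thread::spawn(move || {
                sem.acquire();
                let now = in_flight.fetch_add(1, Ordering::SeqCst) + 1;
                peak.fetch_max(now, Ordering::SeqCst);
                thread::sleep(Duration::from_millis(5)); // stand-in for one request
                in_flight.fetch_sub(1, Ordering::SeqCst);
                sem.release();
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // The semaphore guarantees the cap was never exceeded.
    assert!(peak.load(Ordering::SeqCst) <= cap);
}
```

Raising `cap` trades memory and connection pressure for throughput, which is exactly the experiment the flag invites.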

Usage (in a Rust project)

The library crate ssstar is the engine that powers the ssstar CLI. When we wrote ssstar we deliberately kept all of the functionality in a library crate with a thin CLI wrapper on top, because ssstar is being used internally in Elastio to power our upcoming S3 backup feature. You too can integrate ssstar functionality into your Rust application. Just add ssstar as a dependency in your Cargo.toml:

[dependencies]
ssstar = "0.7.3"

See the docs.rs documentation for ssstar for more details and some examples. You can also look at the ssstar CLI code (ssstar-cli/main.rs) to see how we implemented our CLI in terms of the ssstar library crate.

License

Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

See CONTRIBUTING.md.
