#tar-archive #archive #s3 #amazon-s3

ssstar

Library crate that creates tar archives from objects in S3 or S3-compatible storage and restores them back to object storage. ssstar is specifically designed to stream both input and output data, so memory usage is minimal, while aggressive concurrency maximizes network throughput. If you're looking for the command-line interface (CLI), see ssstar-cli.

15 unstable releases (6 breaking)

0.7.3 Mar 12, 2024
0.7.1 Nov 20, 2023
0.6.0 Jul 19, 2023
0.4.3 Mar 3, 2023
0.2.0 Sep 2, 2022

#172 in Filesystem

6,941 downloads per month
Used in ssstar-cli

Apache-2.0 OR MIT

250KB
3.5K SLoC

ssstar

Highly concurrent archiving of S3 objects to and from tar archives.


This is the Rust library crate which powers the ssstar CLI. If you're looking for the ssstar command line utility, see the ssstar-cli crate.

To create a tar archive containing S3 objects, instantiate a CreateArchiveJob:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Write the archive to a local file
let target = TargetArchive::File("test.tar".into());

let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

// Archive all of the objects in this bucket
builder.add_input(&"s3://my-bucket".parse()?).await?;

let job = builder.build().await?;

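// The future passed here acts as an abort signal; `futures::future::pending()` never
// resolves, so the job runs to completion without being interrupted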
job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }

Target archives can be written to a local file, an S3 bucket, or an arbitrary Tokio AsyncWrite implementation. See TargetArchive for more details.
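
For example, the archive can be written straight to object storage rather than to a local file. The sketch below assumes the S3-backed variant is named ObjectStorage and takes a url::Url; only the capability itself is documented above, so check the TargetArchive docs for the exact variant names.

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Stream the archive directly to an S3 object (variant name assumed; see `TargetArchive`)
let target = TargetArchive::ObjectStorage("s3://my-backups/my-bucket.tar".parse()?);

let mut builder = CreateArchiveJobBuilder::new(Config::default(), target);

// Archive all of the objects in this bucket, exactly as in the example above
builder.add_input(&"s3://my-bucket".parse()?).await?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }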

Restoring a tar archive to object storage is similarly straightforward:

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Read the archive from a local file
let source = SourceArchive::File("test.tar".into());

// Extract the archive to an S3 bucket, prepending a `foo/` prefix to every file path in
// the archive
let target = "s3://my-bucket/foo/".parse::<url::Url>()?;

let mut builder = ExtractArchiveJobBuilder::new(Config::default(), source, target).await?;

// Extract only text files, in any directory, from the archive
builder.add_filter("**/*.txt")?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }
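
The source archive does not have to be a local file either: since ssstar restores archives to and from S3, the tar can be read directly out of a bucket. The sketch below assumes the S3-backed variant is named ObjectStorage, mirroring the target side; consult the SourceArchive docs for the exact API.

# use ssstar::*;
# #[tokio::main]
# async fn main() -> Result<(), Box<dyn std::error::Error + Sync + Send + 'static>> {
// Read the archive directly from an S3 object (variant name assumed; see `SourceArchive`)
let source = SourceArchive::ObjectStorage("s3://my-backups/my-bucket.tar".parse()?);

// Restore the archived objects into this bucket
let target = "s3://restored-bucket/".parse::<url::Url>()?;

let builder = ExtractArchiveJobBuilder::new(Config::default(), source, target).await?;

let job = builder.build().await?;

job.run_without_progress(futures::future::pending()).await?;

# Ok(())
# }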

Dependencies

~29–40MB
~589K SLoC