ss3

Yet another S3 command-line utility... but env driven, per bucket.

Yet Another S3 command line, driven by environment variables for per-bucket or per-profile credentials.

Note: Use v0.2.0-rc.1, as it has been updated to use AWS SDK 1.x and includes new features such as clean and cp ... --over etag.

Key points:

  • Utilizes the official AWS-SDK-S3 and related AWS SDK libraries.
  • Credentials are driven by environment variables (per bucket, per profile, with fallback to AWS CLI defaults).
  • Aims to mimic most of the official aws s3 ... command line (however, it is not overly dogmatic).
  • Plans to eventually provide a library as well.

Note: Tested on Mac and Linux (may not work on Windows, contributions welcome).

Install

# With Cargo install
cargo install ss3
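
Since the note above recommends v0.2.0-rc.1, and cargo install selects the latest stable release by default, the pre-release has to be requested explicitly:

# Install the recommended release candidate explicitly
cargo install ss3 --version 0.2.0-rc.1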

Command Examples

# list all buckets (assuming appropriate access)
ss3 ls s3://

# list all objects and prefixes (-r for recursive)
ss3 ls s3://my-bucket -r

# list all objects and prefixes (--info displays total count & size, also per extension)
ss3 ls s3://my-bucket -r --info

# Upload a single file
ss3 cp ./image-01.jpg s3://my-bucket/my-folder

# Upload full folder (recursive)
ss3 cp ./ s3://my-bucket/my-folder/ -r

# Upload full folder, using the "text/html" content type for files without an extension
# (rather than falling back to "application/octet-stream")
ss3 cp ./ s3://my-bucket/my-folder/ -r --noext-ct "text/html"

# Upload full folder except *.mp4 files
ss3 cp ./ s3://my-bucket/my-folder/ -e "*.mp4" -r

# Upload full folder but only *.mp4 and *.jpg files
ss3 cp ./ s3://my-bucket/my-folder/ -i "*.mp4" -i "*.jpg" -r

# Download a single file to a local directory (parent dirs will be created)
ss3 cp s3://my-bucket/image-01.jpg ./.downloads/

# Download a full folder (for now, make sure to add a trailing '/' to the s3 URL to distinguish a prefix from an object)
ss3 cp s3://my-bucket/my-folder/ ./.downloads/ -r
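
All of the commands above also accept --profile to select credentials (see Configurations below). For example, assuming a hypothetical profile named dev has credentials configured:

# List using the "dev" profile credentials (hypothetical profile name)
ss3 ls s3://my-bucket --profile dev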

Configurations

Here is the order in which the credentials will be resolved:

  • First, check the following SS3_BUCKET_... environment variables for the given bucket
    • SS3_BUCKET_bucket_name_KEY_ID
    • SS3_BUCKET_bucket_name_KEY_SECRET
    • SS3_BUCKET_bucket_name_REGION
    • SS3_BUCKET_bucket_name_ENDPOINT (optional, for minio)
  • Second, when --profile profile_name is given, check the following SS3_PROFILE_... environment variables
    • SS3_PROFILE_profile_name_KEY_ID
    • SS3_PROFILE_profile_name_KEY_SECRET
    • SS3_PROFILE_profile_name_REGION
    • SS3_PROFILE_profile_name_ENDPOINT (optional, for minio)
  • Third, when --profile profile_name is given but no matching profile environment variables are set, check the default AWS config files
  • As a last fallback, use the default AWS environment variables:
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • AWS_DEFAULT_REGION
    • AWS_ENDPOINT (optional, for minio)

NOTE: '-' characters in profile and bucket names will be replaced by '_' in the environment variable names above. So a bucket name my-bucket-001 will map to the environment variable SS3_BUCKET_my_bucket_001_KEY_ID ...
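
For example, per-bucket credentials for a MinIO bucket could be exported like this (a sketch; the bucket name and key values are placeholders, and the endpoint matches the MinIO setup from the Dev & Test section below):

# Per-bucket credentials for a hypothetical bucket named "my-bucket-001"
export SS3_BUCKET_my_bucket_001_KEY_ID="minio"
export SS3_BUCKET_my_bucket_001_KEY_SECRET="miniominio"
export SS3_BUCKET_my_bucket_001_REGION="us-east-1"
export SS3_BUCKET_my_bucket_001_ENDPOINT="http://127.0.0.1:9000"

# ss3 resolves these automatically when the bucket name matches
ss3 ls s3://my-bucket-001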

Other Examples


# ls
ss3 ls s3://my-bucket

# UPLOAD - cp file to s3 dir
ss3 cp ./.test-data/to-upload/image-01.jpg s3://my-bucket

# UPLOAD - cp dir to s3 dir
# NOTE: By default, files that already exist on s3 are skipped (use `--over write` to overwrite)
ss3 cp ./.test-data/to-upload/ s3://my-bucket -r 

# UPLOAD - Check etag (simple etag/md5 only, not multi-part s3 etag)
# NOTE: For now, the etag check (`--over etag`) is only implemented on upload, not download
ss3 cp ./.test-data/to-upload/ s3://my-bucket -r --over etag --show-skip

# LIST - recursive
ss3 ls s3://my-bucket -r --info

# UPLOAD - rename
ss3 cp ./.test-data/to-upload/image-01.jpg s3://my-bucket/image-01-renamed.jpg

# UPLOAD - excludes
ss3 cp .test-data/to-upload s3://my-bucket -r -e "*.txt" --exclude "*.jpg"

# UPLOAD - includes
ss3 cp .test-data/to-upload s3://my-bucket -r -i "*.txt"

# UPLOAD - cp dir to s3 (recursive)
ss3 cp ./.test-data/to-upload/ s3://my-bucket/ggg -r

# DOWNLOAD - cp s3 file to local dir 
ss3 cp s3://my-bucket/image-01.jpg ./.test-data/downloads/

# DOWNLOAD - cp s3 file to local file (rename)
ss3 cp s3://my-bucket/image-01.jpg ./.test-data/downloads/image-01-rename.jpg

# DOWNLOAD - cp s3 folder to local dir
ss3 cp s3://my-bucket/ ./.test-data/downloads/

Dev & Test

The ss3 integration tests can be run with either cargo test or cargo nextest run.

Terminal 1

As a prerequisite for the tests, run MinIO as follows:

docker run --name minio_1 --rm \
  -p 9000:9000 \
  -p 9900:9900 \
  -e "MINIO_ROOT_USER=minio" \
  -e "MINIO_ROOT_PASSWORD=miniominio" \
  minio/minio server /data --console-address :9900

Then, if desired, open the MinIO web console at http://127.0.0.1:9900/
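
To point ss3 at this local MinIO manually, the fallback AWS variables from the Configurations section can be used. A sketch, where the credentials come from the docker run command above and the region value is an arbitrary placeholder:

# Fallback AWS variables pointing at the local MinIO instance
export AWS_ACCESS_KEY_ID="minio"
export AWS_SECRET_ACCESS_KEY="miniominio"
export AWS_DEFAULT_REGION="us-east-1"
export AWS_ENDPOINT="http://127.0.0.1:9000"

# Should list the buckets on the local MinIO
ss3 ls s3://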

Terminal 2

Then run the tests with cargo test or cargo nextest run:

cargo test

# Or, with nextest
cargo nextest run
# This requires cargo-nextest to be installed: https://nexte.st/book/installation.html
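
If cargo-nextest is not installed yet, it can typically be installed with cargo itself (per the installation page linked above):

cargo install cargo-nextest --locked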
