#io-stream #api-bindings #encoding #async-io #bzip

bzip2

Bindings to libbzip2 for bzip2 compression and decompression exposed as Reader/Writer streams

20 releases

Uses old Rust 2015

0.4.4 Jan 5, 2023
0.4.3 Jun 9, 2021
0.4.2 Feb 19, 2021
0.4.1 Jul 6, 2020
0.0.3 Nov 19, 2014

#41 in Compression


2,057,030 downloads per month
Used in 2,052 crates (217 directly)

MIT/Apache

675KB
6.5K SLoC

C 5K SLoC // 0.2% comments
Rust 1K SLoC // 0.0% comments
XSL 278 SLoC // 0.1% comments
Shell 72 SLoC // 0.3% comments
Perl 38 SLoC // 0.4% comments

bzip2

Documentation

A streaming compression/decompression library for Rust with bindings to libbz2.

# Cargo.toml
[dependencies]
bzip2 = "0.4"

License

This project is licensed under either of

* Apache License, Version 2.0
* MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this repository by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.


lib.rs:

Bzip compression for Rust

This library contains bindings to libbz2 to support bzip compression and decompression for Rust. The streams offered in this library are primarily found in the reader and writer modules. Both compressors and decompressors are available in each module depending on what operation you need.

Access to the raw decompression/compression stream is also provided through the raw module which has a much closer interface to libbz2.

Example

use std::io::prelude::*;
use bzip2::Compression;
use bzip2::read::{BzEncoder, BzDecoder};

// Round trip some bytes from a byte source, into a compressor, into a
// decompressor, and finally into a vector.
let data = "Hello, World!".as_bytes();
let compressor = BzEncoder::new(data, Compression::best());
let mut decompressor = BzDecoder::new(compressor);

let mut contents = String::new();
decompressor.read_to_string(&mut contents).unwrap();
assert_eq!(contents, "Hello, World!");

Multistreams (e.g. Wikipedia or pbzip2)

Some tools such as pbzip2, and data from sources such as Wikipedia, are encoded as so-called bzip2 "multistreams," meaning they contain back-to-back chunks of bzip'd data. BzDecoder does not attempt to convert anything after the first bzip chunk in the source stream. Thus, if you wish to decode all bzip chunks from the input until end of file, use MultiBzDecoder.

Pro tip: if you use BzDecoder to decode data and the output is incomplete and exactly 900K bytes, you probably need a MultiBzDecoder.

Async I/O

This crate can optionally support async I/O streams with the Tokio stack via the tokio feature of this crate:

bzip2 = { version = "0.4", features = ["tokio"] }

All methods are internally capable of working with streams that may return ErrorKind::WouldBlock when they're not ready to perform the particular operation.

Note that care needs to be taken when using these objects, however. The Tokio runtime, in particular, requires that data is fully flushed before dropping streams. For compatibility with blocking streams, all streams are flushed/written when they are dropped, and this is not always a suitable time to perform I/O. If a stream is flushed before it is dropped, however, these drop-time operations become a no-op.
