20 releases
Uses old Rust 2015
Version | Date
---|---
0.4.4 | Jan 5, 2023
0.4.3 | Jun 9, 2021
0.4.2 | Feb 19, 2021
0.4.1 | Jul 6, 2020
0.0.3 | Nov 19, 2014
#41 in Compression
2,057,030 downloads per month
Used in 2,052 crates (217 directly)
675KB
6.5K SLoC
bzip2
A streaming compression/decompression library for Rust with bindings to libbz2.
# Cargo.toml
[dependencies]
bzip2 = "0.4"
License
This project is licensed under either of
- Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this repository by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
lib.rs:
Bzip compression for Rust
This library contains bindings to libbz2 to support bzip compression and decompression for Rust. The streams offered in this library are primarily found in the reader and writer modules. Both compressors and decompressors are available in each module depending on what operation you need.
Access to the raw decompression/compression stream is also provided through the raw module, which has a much closer interface to libbz2.
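For illustration only, a minimal sketch of driving the raw in-memory types directly; the buffer sizes and work factor here are arbitrary assumptions, not values from the upstream docs:
use bzip2::{Action, Compress, Compression, Decompress};
let data: &[u8] = b"Hello, World!";
// Compress in one shot. compress_vec writes into the Vec's spare capacity,
// so reserve enough room up front; 30 is libbz2's default work factor.
let mut compress = Compress::new(Compression::best(), 30);
let mut compressed = Vec::with_capacity(1024);
compress.compress_vec(data, &mut compressed, Action::Finish).unwrap();
// Decompress back into another pre-allocated buffer.
let mut decompress = Decompress::new(false);
let mut decompressed = Vec::with_capacity(1024);
decompress.decompress_vec(&compressed, &mut decompressed).unwrap();
assert_eq!(&decompressed[..], data);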
Example
use std::io::prelude::*;
use bzip2::Compression;
use bzip2::read::{BzEncoder, BzDecoder};
// Round trip some bytes from a byte source, into a compressor, into a
// decompressor, and finally into a vector.
let data = "Hello, World!".as_bytes();
let compressor = BzEncoder::new(data, Compression::best());
let mut decompressor = BzDecoder::new(compressor);
let mut contents = String::new();
decompressor.read_to_string(&mut contents).unwrap();
assert_eq!(contents, "Hello, World!");
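The writer-side types mirror this. A minimal sketch of the same round trip using write::BzEncoder and write::BzDecoder:
use std::io::prelude::*;
use bzip2::Compression;
use bzip2::write::{BzEncoder, BzDecoder};
// Compress by writing into the encoder, then recover the inner Vec with finish().
let mut encoder = BzEncoder::new(Vec::new(), Compression::best());
encoder.write_all(b"Hello, World!").unwrap();
let compressed = encoder.finish().unwrap();
// Decompress by writing the compressed bytes into a write-side decoder.
let mut decoder = BzDecoder::new(Vec::new());
decoder.write_all(&compressed).unwrap();
let decompressed = decoder.finish().unwrap();
assert_eq!(decompressed, b"Hello, World!");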
Multistreams (e.g. Wikipedia or pbzip2)
Some tools such as pbzip2, or data from sources such as Wikipedia, are encoded as so-called bzip2 "multistreams," meaning they contain back-to-back chunks of bzip'd data. BzDecoder does not attempt to decode anything after the first bzip chunk in the source stream. Thus, if you wish to decode all bzip chunks from the input until end of file, use MultiBzDecoder.
Protip: If you use BzDecoder to decode data and the output is incomplete and exactly 900K bytes, you probably need a MultiBzDecoder.
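A minimal sketch of the difference, assuming two independently compressed chunks concatenated into one buffer:
use std::io::prelude::*;
use bzip2::Compression;
use bzip2::read::{BzEncoder, MultiBzDecoder};
// Build a multistream: two complete bzip2 streams back to back in one buffer.
let mut multistream = Vec::new();
BzEncoder::new("Hello, ".as_bytes(), Compression::best())
    .read_to_end(&mut multistream).unwrap();
BzEncoder::new("World!".as_bytes(), Compression::best())
    .read_to_end(&mut multistream).unwrap();
// A plain BzDecoder would stop after "Hello, "; MultiBzDecoder keeps decoding
// across stream boundaries until end of input.
let mut contents = String::new();
MultiBzDecoder::new(&multistream[..])
    .read_to_string(&mut contents).unwrap();
assert_eq!(contents, "Hello, World!");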
Async I/O
This crate can optionally support async I/O streams with the Tokio stack via the tokio feature of this crate:
bzip2 = { version = "0.4", features = ["tokio"] }
All methods are internally capable of working with streams that may return ErrorKind::WouldBlock when they're not ready to perform the particular operation.
Note that care needs to be taken when using these objects, however. The Tokio runtime, in particular, requires that data is fully flushed before dropping streams. For compatibility with blocking streams, all streams are flushed/written when they are dropped, which is not always a suitable time to perform I/O. If I/O streams are flushed before they are dropped, however, these operations will be no-ops.
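The same advice applies to the blocking writer types. A minimal sketch, where the output path out.bz2 is a hypothetical example and any io::Write sink works:
use std::io::prelude::*;
use bzip2::Compression;
use bzip2::write::BzEncoder;
// "out.bz2" is a hypothetical path used only for illustration.
let file = std::fs::File::create("out.bz2").unwrap();
let mut encoder = BzEncoder::new(file, Compression::default());
encoder.write_all(b"payload").unwrap();
// Finish explicitly so the implicit flush performed when `encoder` is
// dropped has no remaining work to do.
encoder.try_finish().unwrap();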