7 stable releases

| version | release date |
|---|---|
| 3.0.0 | Jan 26, 2023 |
| 2.0.2 | Jan 29, 2023 |
| 2.0.1 | Sep 8, 2022 |
| 1.1.0 | Sep 5, 2022 |
| 1.0.1 | Apr 12, 2022 |
classic-mceliece-rust
This is a pure-Rust, safe-Rust implementation of the Classic McEliece post-quantum key encapsulation scheme.

- Classic McEliece is a code-based key encapsulation mechanism (KEM)
- The implementation is based on the Classic McEliece reference implementation of NIST round 4
- The implementation does not utilize any concurrency techniques (SIMD/threading/…, except maybe auto-vectorization on your CPU)
- It depends on `sha3` as SHA-3 implementation and `aes` as AES block cipher (used as RNG) implementation
- It passes the 100 test cases of the C reference implementation
- It implements all 10 variants of the Classic McEliece KEM
- The implementation takes between 100 milliseconds (`mceliece348864`) and 500 milliseconds (`mceliece8192128f`) to run on a modern computer
- The implementation is constant-time on the software instruction level
- The random number generator is based on AES256 in counter mode
- First described in 1978, the cryptographic scheme has a rich history of security analysis. Its large public key size, however, often limits adoption.
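To give an intuition for the constant-time property mentioned above, here is a minimal std-only sketch (explicitly not the library's code) of a constant-time byte comparison: it accumulates all differences instead of returning early, so its running time depends only on the input length, not on where the inputs first differ.

```rust
// Minimal sketch of a constant-time comparison (not the library's code).
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // OR all XOR differences together instead of returning early.
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"shared secret", b"shared secret"));
    assert!(!ct_eq(b"shared secret", b"shared sekret"));
}
```

A real implementation additionally has to keep the compiler from re-introducing data-dependent branches; this sketch only shows the branch-free comparison idea itself.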
The 10 variants have the following designated identifiers:

- `mceliece348864`
- `mceliece348864f`
- `mceliece460896`
- `mceliece460896f`
- `mceliece6688128`
- `mceliece6688128f`
- `mceliece6960119`
- `mceliece6960119f`
- `mceliece8192128`
- `mceliece8192128f`
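For orientation when choosing a variant, public-key sizes grow quickly with the parameter set. The following sketch records the public-key sizes from the Classic McEliece NIST submission as reference values (hard-coded here for illustration, not read from the crate; the f-variants share the sizes of their non-f counterparts):

```rust
// Reference public-key sizes in bytes, per the Classic McEliece NIST
// submission (hard-coded reference values, not taken from this crate).
fn public_key_bytes(variant: &str) -> Option<usize> {
    match variant.trim_end_matches('f') {
        "mceliece348864" => Some(261_120),    // ~255 KiB
        "mceliece460896" => Some(524_160),    // ~512 KiB
        "mceliece6688128" => Some(1_044_992),
        "mceliece6960119" => Some(1_047_319),
        "mceliece8192128" => Some(1_357_824), // ~1.3 MiB
        _ => None,
    }
}

fn main() {
    // The smallest and largest variants match the "255 KB to 1.3 MB"
    // range mentioned below.
    assert_eq!(public_key_bytes("mceliece348864"), Some(261_120));
    assert_eq!(public_key_bytes("mceliece8192128f"), Some(1_357_824));
}
```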
Who should use it?

Anyone who wants to use Classic McEliece to negotiate a key between two parties.
How does one use it storing keys on the heap (default feature `alloc`)?

Add this to your `Cargo.toml`:

```toml
[dependencies]
classic-mceliece-rust = "3.0"
```

To use a specific Classic McEliece variant, you need to import it with the corresponding feature flag:

```toml
[dependencies]
classic-mceliece-rust = { version = "3.0", features = ["mceliece6960119"] }
```
Assuming this dependency, the simplest and most ergonomic way of using the library is with heap-allocated keys and the `*_boxed` KEM step functions. First, we import them:

```rust
#[cfg(feature = "alloc")]
use classic_mceliece_rust::{keypair_boxed, encapsulate_boxed, decapsulate_boxed};
```

Then we run the KEM and pass the generated keys along accordingly. Here, we consider an example where we run it in a separate thread (be aware that the example also depends on the rand crate):
```rust
#[cfg(feature = "alloc")] {
    use classic_mceliece_rust::{keypair_boxed, encapsulate_boxed, decapsulate_boxed};

    fn run_kem() {
        let mut rng = rand::thread_rng();

        // Alice computes the keypair
        let (public_key, secret_key) = keypair_boxed(&mut rng);

        // Send `public_key` over to Bob.

        // Bob computes the shared secret and a ciphertext
        let (ciphertext, shared_secret_bob) = encapsulate_boxed(&public_key, &mut rng);

        // Send `ciphertext` back to Alice.

        // Alice decapsulates the ciphertext...
        let shared_secret_alice = decapsulate_boxed(&ciphertext, &secret_key);

        // ... and ends up with the same key material as Bob.
        assert_eq!(shared_secret_bob.as_array(), shared_secret_alice.as_array());
    }

    fn main() {
        std::thread::Builder::new()
            // This library needs quite a lot of stack space to work
            .stack_size(2 * 1024 * 1024)
            .spawn(run_kem)
            .unwrap()
            .join()
            .unwrap();
    }
}
```
Pay attention that public keys in Classic McEliece are huge (between 255 KB and 1.3 MB). As a result, running the algorithm requires a lot of memory, and you need to consider where to store the keys. With this API, the advantages are:

- you don't need to handle the memory manually
- on Windows, the call to `keypair` uses more stack than is available by default. Such stack size limitations can be avoided with the heap-allocation API (see the Windows remark below).
How does one use it storing keys on the stack (disabled feature `alloc`)?

The other option is to exclude the heap-allocation API and use the provided stack-allocation API instead. Its advantages are:

- stack allocation also works in a `no_std` environment
- on some microcontroller platforms, a heap is not available
- stack (de-)allocation is in general faster than heap (de-)allocation

Thus, in this section we want to show you how to use this API without the heap. For this, you need to disable the feature `alloc`, which is enabled by default (this line retains the default feature `zeroize` but removes the default feature `alloc`):

```toml
classic-mceliece-rust = { version = "3.0", default-features = false, features = ["zeroize"] }
```
How does one use the API then (be aware that the example also depends on the rand crate)?

```rust
use classic_mceliece_rust::{keypair, encapsulate, decapsulate};
use classic_mceliece_rust::{CRYPTO_BYTES, CRYPTO_PUBLICKEYBYTES, CRYPTO_SECRETKEYBYTES};

fn main() {
    let mut rng = rand::thread_rng();

    // Please mind that `public_key_buf` is very large.
    let mut public_key_buf = [0u8; CRYPTO_PUBLICKEYBYTES];
    let mut secret_key_buf = [0u8; CRYPTO_SECRETKEYBYTES];
    let (public_key, secret_key) = keypair(&mut public_key_buf, &mut secret_key_buf, &mut rng);

    let mut shared_secret_bob_buf = [0u8; CRYPTO_BYTES];
    let (ciphertext, shared_secret_bob) = encapsulate(&public_key, &mut shared_secret_bob_buf, &mut rng);

    let mut shared_secret_alice_buf = [0u8; CRYPTO_BYTES];
    let shared_secret_alice = decapsulate(&ciphertext, &secret_key, &mut shared_secret_alice_buf);

    assert_eq!(shared_secret_bob.as_array(), shared_secret_alice.as_array());
}
```

Here, you can see how the keys are allocated explicitly.
A remark on Windows

If you want your program to be portable with stack allocation and not unexpectedly crash, you should probably run the entire key exchange in a dedicated thread with a large enough stack size and wait for that thread to finish. This code snippet shows the idea:

```rust
std::thread::Builder::new()
    .stack_size(4 * 1024 * 1024)
    .spawn(|| { /* Run the KEM here */ })
    .unwrap()
    .join()
    .unwrap();
```
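Since `spawn` returns a `JoinHandle`, the dedicated thread can also hand the resulting shared secret back to the caller via `join`. The pattern can be sketched like this (a std-only illustration; the placeholder closure stands in for the real KEM calls):

```rust
fn main() {
    // The closure's return value travels through the JoinHandle.
    // In real code it would run keypair/encapsulate/decapsulate and
    // return the shared secret bytes; here a placeholder array is used.
    let shared_secret: [u8; 32] = std::thread::Builder::new()
        .stack_size(4 * 1024 * 1024)
        .spawn(|| {
            // ... run the KEM here ...
            [0u8; 32] // placeholder for the real shared secret
        })
        .unwrap()
        .join()
        .unwrap();

    // The caller can now use the key material on its own (small) stack.
    assert_eq!(shared_secret.len(), 32);
}
```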
Feature kem: RustCrypto APIs

If the `kem` feature is enabled, key encapsulation and decapsulation can also be done via the standard traits in the `kem` crate.
Feature zeroize: Clear out secrets from memory

If the `zeroize` feature is enabled (it is by default), all key types that contain anything secret implement `Zeroize` and `ZeroizeOnDrop`. This makes them clear their memory when they go out of scope and lowers the risk of secret key material leaking in one way or another.

Please mind that this of course makes any buffers you pass into the library useless for reading out the key. Instead of trying to fetch the key material from the buffers you pass in, get it from the `as_array` method.
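As a minimal illustration of why the buffers become useless, here is a std-only sketch of the zeroize-on-drop idea (explicitly not the library's code): a wrapper that clears its backing buffer when it is dropped.

```rust
// Minimal illustration of zeroize-on-drop (not the library's code).
struct SecretBuf<'a>(&'a mut [u8]);

impl Drop for SecretBuf<'_> {
    fn drop(&mut self) {
        // A real implementation (e.g. the zeroize crate) additionally
        // prevents the compiler from optimizing this write away.
        for byte in self.0.iter_mut() {
            *byte = 0;
        }
    }
}

fn main() {
    let mut buf = [255u8; 4];
    {
        let _secret = SecretBuf(&mut buf);
        // ... use the secret here ...
    } // `_secret` is dropped here and the backing buffer is cleared
    assert_eq!(buf, [0u8; 4]);
}
```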
```rust
#[cfg(not(windows))] {
    use classic_mceliece_rust::keypair;
    use classic_mceliece_rust::{CRYPTO_PUBLICKEYBYTES, CRYPTO_SECRETKEYBYTES};

    let mut rng = rand::thread_rng();
    let mut pk_buf = [0u8; CRYPTO_PUBLICKEYBYTES];
    // Initialize to non-zero to show that it has been set to zero by the drop later
    let mut sk_buf = [255u8; CRYPTO_SECRETKEYBYTES];

    // This is the WRONG way of accessing your keys. The buffer will
    // be cleared once the `PrivateKey` returned from `keypair` goes out of scope.
    // You should not rely on that array for anything except providing a temporary storage
    // location to this library.
    #[cfg(feature = "zeroize")]
    {
        let (_, secret_key) = keypair(&mut pk_buf, &mut sk_buf, &mut rng);
        drop(secret_key);
        // Ouch! The array only has zeroes now.
        assert_eq!(sk_buf, [0; CRYPTO_SECRETKEYBYTES]);
    }

    // Correct way of getting the secret key bytes if you do need them. However,
    // if you want the secrets to stay secret, you should try to not read them out of their
    // storage at all.
    {
        let (_, secret_key) = keypair(&mut pk_buf, &mut sk_buf, &mut rng);
        assert_ne!(secret_key.as_array(), &[0; CRYPTO_SECRETKEYBYTES]);
    }
}
```
How does one run it?

This library comes with two examples:

```shell
$ cargo run --example basic
```

The output annotates messages with Alice/Bob to illustrate which data is processed by which party.

The `katkem` example implements the classic request/response file structure which is part of the NIST PQC framework:

```shell
$ cargo run --example katkem PQCkemKAT_935.req PQCkemKAT_935.rsp
$ cargo run --example katkem PQCkemKAT_935.rsp
```

The different variants can be enabled through feature flags:

```shell
$ cargo run --example katkem --features mceliece6960119 -- PQCkemKAT_1450.req PQCkemKAT_1450.rsp
```

`mceliece348864` is the default variant. You cannot enable two variants simultaneously.
How fast is it?

All data uses clock cycles as unit (the smaller, the better). The Rust implementation v2.0.0 yielded the following runtime results:

| variant | complete KEM | keypair | enc | dec |
|---|---|---|---|---|
| mceliece348864 | 460,062,191 | 439,682,143 | 222,424 | 42,046,357 |
| mceliece348864f | 244,943,900 | 203,564,820 | 215,971 | 41,648,773 |
| mceliece460896 | 1,326,425,784 | 1,434,864,061 | 487,522 | 111,547,716 |
| mceliece460896f | 789,636,856 | 652,117,200 | 553,301 | 106,521,703 |
| mceliece6688128 | 3,188,205,266 | 2,596,052,574 | 785,763 | 202,774,928 |
| mceliece6688128f | 1,236,809,020 | 1,059,087,715 | 826,899 | 203,907,226 |
| mceliece6960119 | 2,639,852,573 | 2,532,146,126 | 3,864,285 | 203,959,009 |
| mceliece6960119f | 1,165,079,187 | 965,134,546 | 3,416,795 | 197,089,546 |
| mceliece8192128 | 3,129,183,262 | 2,754,933,130 | 965,822 | 247,083,745 |
| mceliece8192128f | 1,342,438,451 | 1,150,297,595 | 1,068,317 | 242,545,160 |
The C reference implementation yielded the following runtime results:

| variant | complete KEM | keypair | enc | dec |
|---|---|---|---|---|
| mceliece348864 | 434,103,000 | 437,187,000 | 187,557 | 73,801,300 |
| mceliece348864f | 252,423,000 | 180,235,000 | 189,522 | 73,668,000 |
| mceliece460896 | 760,993,000 | 894,497,000 | 298,041 | 154,507,000 |
| mceliece460896f | 606,225,000 | 44,906,000 | 297,743 | 154,013,000 |
| mceliece6688128 | 1,568,900,000 | 1,780,660,000 | 425,504 | 29,575,000 |
| mceliece6688128f | 109,471,000 | 760,298,000 | 414,358 | 298,173,000 |
| mceliece6960119 | 3,405,730,000 | 1,694,410,000 | 840,598 | 287,154,000 |
| mceliece6960119f | 1,311,130,000 | 942,987,000 | 984,660 | 303,543,000 |
| mceliece8192128 | 1,635,550,000 | 760,619,000 | 428,112 | 361,999,000 |
| mceliece8192128f | 1,772,530,000 | 1,222,720,000 | 534,503 | 392,729,000 |
The tests were done on a Lenovo Thinkpad X260 (Intel Core i5-6200U CPU @ 2.30GHz). In the case of Rust, criterion 0.3.5 has been used as given in `benches/`; in the case of C, Google's benchmark library with PFM support and disabled CPU frequency scaling. You can run the benchmark suite yourself with the `bench` subcommand and optionally some variant feature flag:

```shell
$ cargo bench --features mceliece348864
```

For optimal benchmark results, you can run the Rust benchmarks with CPU optimization using `RUSTFLAGS="-C target-cpu=native"`, set the cargo bench optimization profile to `opt-level = 3`, and set link-time optimization to `lto = "fat"`. For further insight into the cargo bench settings, refer to the official cargo bench profile documentation.
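Assuming a standard Cargo setup, those settings could be collected in a profile section like the following sketch (the exact profile to adjust depends on how you run the benchmarks):

```toml
# Hypothetical Cargo.toml profile settings for benchmarking; adjust as needed.
[profile.bench]
opt-level = 3
lto = "fat"
```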
Is it correct?

Yes. Besides passing unit tests (derived from the C implementation), the generated KAT KEM test files have equivalent MD5 hashes. Namely:

| variant | expected MD5 hash |
|---|---|
| mceliece348864 | 11fd67ba1e2b93cceaec6f5e6fe4ddd1 |
| mceliece348864f | 7a6f5262fa013fe7eedda0765a625789 |
| mceliece460896 | c9acefa82aa705cd324f12df532744c2 |
| mceliece460896f | cb08e0e3f2122c62692111d684f1cbe7 |
| mceliece6688128 | 7e300cc0990b05f5edca3219ac769023 |
| mceliece6688128f | 6d959c2bf54f7d3576a8e49475a74df5 |
| mceliece6960119 | b4960a35e249d55fd48371f793608aa5 |
| mceliece6960119f | 2f5d759cb579c6f85c1ee1306082ffdf |
| mceliece8192128 | 26a47e6d01eec28e91abfdbdf19c3067 |
| mceliece8192128f | a4cd676dc2c774d644f18de05762c51c |
Where is the source code?
On github.
What is the content's license?
Changelog
Please see the changelog.
Where can I ask you to fix a bug?
On github.