Reductive

Training of optimized product quantizers

Training of optimized product quantizers requires a LAPACK implementation. For this reason, training of the Opq and GaussianOpq quantizers is gated behind the opq-train feature, which must be enabled to use either quantizer:

[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }

This also requires adding a crate that links a LAPACK library as a dependency, e.g. accelerate-src, intel-mkl-src, openblas-src, or netlib-src; see the sketch below.
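For illustration, a minimal sketch assuming OpenBLAS as the LAPACK backend. All version numbers are illustrative, and the train_pq signature used below (subquantizers, bits per subquantizer, iterations, attempts, data) is an assumption based on the TrainPq trait; check it against the crate's API documentation:

[dependencies]
reductive = { version = "0.7", features = ["opq-train"] }
ndarray = "0.15"
ndarray-rand = "0.14"
openblas-src = "0.10"

use ndarray::Array2;
use ndarray_rand::rand_distr::Uniform;
use ndarray_rand::RandomExt;
use reductive::pq::{Opq, TrainPq};

// Referencing the *-src crate ensures the LAPACK symbols are actually linked.
extern crate openblas_src;

fn main() {
    // 1,000 training instances of dimensionality 64 with uniform random components.
    let instances: Array2<f32> = Array2::random((1000, 64), Uniform::new(0., 1.));

    // Assumed arguments: number of subquantizers, bits per subquantizer,
    // k-means iterations, training attempts, training data.
    let _quantizer = Opq::train_pq(8, 8, 10, 1, instances.view())
        .expect("quantizer training failed");
}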

Running tests

Linux

You can run all tests on Linux, including tests for optimized product quantizers, using the intel-mkl-test feature:

$ cargo test --features intel-mkl-test

macOS

All tests can be run on macOS with the accelerate-test feature:

$ cargo test --features accelerate-test

Multi-threaded OpenBLAS

reductive uses Rayon to parallelize quantizer training. However, multi-threaded OpenBLAS is known to conflict with application threading. If you use OpenBLAS, ensure that its threading is disabled, for instance by setting the number of OpenBLAS threads to 1:

$ export OPENBLAS_NUM_THREADS=1
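The variable can also be scoped to a single invocation instead of being exported for the whole shell session; the command shown is illustrative:

$ OPENBLAS_NUM_THREADS=1 cargo run --release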
