
rten-vecmath

SIMD vectorized implementations of various math functions used in ML models

12 releases (breaking)

0.14.1 Nov 16, 2024

0.11.0 Jul 5, 2024
0.6.0 Mar 31, 2024
0.1.0 Dec 31, 2023

#613 in Machine learning


1,991 downloads per month
Used in 8 crates (via rten)

MIT/Apache

95KB
2K SLoC

rten-vecmath

This crate provides portable SIMD types that abstract over SIMD intrinsics on different architectures. Unlike std::simd, this works on stable Rust. There is also functionality to detect the available instruction sets at runtime and dispatch to the optimal implementation.
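As a rough illustration of the runtime-dispatch pattern described above, the sketch below uses the standard library's `is_x86_feature_detected!` macro (stable Rust) to pick an implementation at runtime. The function names and the dispatch structure are illustrative assumptions, not the crate's actual API.

```rust
// Sketch of runtime CPU feature detection and dispatch on stable Rust.
// The kernel functions here are placeholders; a real implementation would
// call architecture-specific intrinsics inside the detected branch.
fn sum(xs: &[f32]) -> f32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            // Placeholder for an AVX2-accelerated kernel.
            return xs.iter().sum();
        }
    }
    // Portable scalar fallback for all other architectures.
    xs.iter().sum()
}

fn main() {
    let v = [1.0_f32, 2.0, 3.0];
    assert!((sum(&v) - 6.0).abs() < 1e-6);
    println!("sum = {}", sum(&v));
}
```

Detecting features at runtime (rather than at compile time via `target-feature`) lets a single binary use the fastest instructions available on whatever machine it runs on.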

This crate also contains SIMD-vectorized versions of math functions such as exp, erf, tanh, and softmax, which are performance-critical in machine-learning models.
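Vectorized transcendental functions are typically built from range reduction plus a short polynomial, since that maps directly onto SIMD lanes. The scalar sketch below shows the classic technique for exp; it is a general illustration, not the crate's actual implementation, and its accuracy is far below what a production kernel would target.

```rust
// Sketch of exp via range reduction: write x = n*ln(2) + r with |r| small,
// so exp(x) = 2^n * exp(r), then approximate exp(r) with a short polynomial.
fn exp_approx(x: f32) -> f32 {
    let ln2 = std::f32::consts::LN_2;
    let n = (x / ln2).round();
    let r = x - n * ln2; // r is roughly in [-ln2/2, ln2/2]
    // Degree-5 Taylor polynomial for exp(r), evaluated in Horner form.
    let p = 1.0
        + r * (1.0 + r * (0.5 + r * (1.0 / 6.0 + r * (1.0 / 24.0 + r / 120.0))));
    p * n.exp2() // scale by 2^n
}

fn main() {
    let e = 1.0_f32.exp();
    assert!((exp_approx(0.0) - 1.0).abs() < 1e-4);
    assert!((exp_approx(1.0) - e).abs() / e < 1e-3);
    println!("exp_approx(1.0) = {}", exp_approx(1.0));
}
```

Because every step (round, multiply-add, polynomial, scale by a power of two) is a lane-wise operation, the same code translates directly to SIMD vectors of 4, 8, or 16 floats.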


lib.rs:

SIMD-vectorized implementations of various math functions that are commonly used in neural networks.

For each function in this library there are multiple variants, which typically include:

  • A version that operates on scalars
  • A version that reads values from an input slice and writes to the corresponding positions in an equal-length output slice. These have a vec_ prefix.
  • A version that reads values from a mutable slice and writes the computed values back in place. These have a vec_ prefix and an _in_place suffix.

All variants use the same underlying implementation and should have the same accuracy.
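The naming convention above can be illustrated with a hypothetical function; the names below mirror the documented `vec_` prefix and `_in_place` suffix but are not the crate's exact exports, and the scalar bodies stand in for the shared SIMD implementation.

```rust
// Hypothetical variant trio following the documented naming scheme.

// Scalar variant.
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Slice-to-slice variant: reads `input`, writes an equal-length `output`.
fn vec_sigmoid(input: &[f32], output: &mut [f32]) {
    assert_eq!(input.len(), output.len());
    for (o, &i) in output.iter_mut().zip(input) {
        *o = sigmoid(i);
    }
}

// In-place variant: overwrites the input slice with the results.
fn vec_sigmoid_in_place(data: &mut [f32]) {
    for x in data.iter_mut() {
        *x = sigmoid(*x);
    }
}

fn main() {
    let input = [0.0_f32, 1.0, -1.0];
    let mut output = [0.0_f32; 3];
    vec_sigmoid(&input, &mut output);

    let mut data = input;
    vec_sigmoid_in_place(&mut data);

    // All variants share one underlying implementation, so results agree.
    assert_eq!(output, data);
    assert!((output[0] - 0.5).abs() < 1e-6);
    println!("{:?}", output);
}
```

Sharing one implementation across all three shapes is what guarantees the identical accuracy noted above.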

See the source code for comments on accuracy.

Dependencies