1 unstable release

| Version | Released |
|---|---|
| 0.1.1 | Jul 25, 2023 |
| 0.1.0 | |
#673 in Machine learning · 20KB · 415 lines
llama2 in Rust!
This port was derived from https://github.com/karpathy/llama2.c to run multi-threaded inference. Running inference with this Rust port is more than 3x faster than with the original llama2.c.
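To illustrate how multi-threaded inference is typically structured in a Rust port like this, here is a minimal sketch of a parallel matrix-vector product, the operation that dominates transformer inference. The use of rayon and the `matmul` signature are assumptions for illustration, not necessarily this crate's actual implementation.

```rust
// Hypothetical sketch: parallelizing the matrix-vector product with rayon.
use rayon::prelude::*;

/// Compute xout = W * x, where W is a (d x n) row-major matrix and x has length n.
/// Each output element depends only on one row of W and the shared input vector,
/// so rows can be computed on separate threads without any locking.
fn matmul(xout: &mut [f32], x: &[f32], w: &[f32], n: usize, d: usize) {
    assert_eq!(xout.len(), d);
    assert_eq!(x.len(), n);
    xout.par_iter_mut().enumerate().for_each(|(i, out)| {
        let row = &w[i * n..(i + 1) * n];
        *out = row.iter().zip(x).map(|(&wi, &xi)| wi * xi).sum();
    });
}
```

Because the weight matrix and input vector are read-only during the forward pass, splitting the output rows across a thread pool is the main source of the speedup over single-threaded C code.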
Dependencies: ~5MB, ~100K SLoC