#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework

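A minimal usage sketch, assuming a recent release that exposes a `flash_attn(q, k, v, softmax_scale, causal)` entry point taking half-precision CUDA tensors laid out as `(batch, seq_len, num_heads, head_dim)`, together with the companion `candle-core` crate; the crate has had several breaking releases, so check the version you depend on:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    // The flash attention kernels run on CUDA and expect half-precision tensors.
    let device = Device::new_cuda(0)?;
    let (batch, seq_len, num_heads, head_dim) = (1usize, 128usize, 8usize, 64usize);

    // Query, key and value tensors shaped (batch, seq_len, num_heads, head_dim).
    let q = Tensor::randn(0f32, 1.0, (batch, seq_len, num_heads, head_dim), &device)?
        .to_dtype(DType::F16)?;
    let k = Tensor::randn(0f32, 1.0, (batch, seq_len, num_heads, head_dim), &device)?
        .to_dtype(DType::F16)?;
    let v = Tensor::randn(0f32, 1.0, (batch, seq_len, num_heads, head_dim), &device)?
        .to_dtype(DType::F16)?;

    // Softmax scale is conventionally 1/sqrt(head_dim); `true` applies a causal mask.
    let softmax_scale = 1.0 / (head_dim as f32).sqrt();
    let out = candle_flash_attn::flash_attn(&q, &k, &v, softmax_scale, true)?;

    println!("attention output shape: {:?}", out.dims());
    Ok(())
}
```

The output keeps the same `(batch, seq_len, num_heads, head_dim)` layout as the inputs, so it can be fed straight into the following projection layer of a transformer block.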
19 releases (7 breaking)

0.8.0 Nov 12, 2024
0.6.0 Jun 29, 2024
0.4.1 Feb 28, 2024
0.3.2 Dec 20, 2023
0.3.1 Nov 12, 2023

#618 in Machine learning

Download history: roughly 11 to 446 downloads per week between 2024-08-02 and 2024-11-15, peaking in late September 2024.

234 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
26K SLoC

Dependencies

~33MB
~749K SLoC