#machine-learning #tensor #blas

candle-flash-attn

Flash attention layer for the candle ML framework
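A minimal usage sketch, assuming the crate exposes a `flash_attn(q, k, v, softmax_scale, causal)` entry point and that a CUDA device is available (flash attention kernels are CUDA-only); the tensor shapes and f16 dtype here are illustrative assumptions, not taken from this page:

```rust
use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    // Flash attention requires a CUDA device.
    let device = Device::new_cuda(0)?;

    // Assumed layout: (batch, seq_len, num_heads, head_dim), in f16.
    let q = Tensor::randn(0f32, 1.0, (1, 128, 8, 64), &device)?.to_dtype(DType::F16)?;
    let k = q.clone();
    let v = q.clone();

    // Standard scaled-dot-product scaling: 1 / sqrt(head_dim).
    let softmax_scale = 1.0 / (64f32).sqrt();

    // `causal = true` masks future positions (decoder-style attention).
    let out = candle_flash_attn::flash_attn(&q, &k, &v, softmax_scale, true)?;
    println!("{:?}", out.shape());
    Ok(())
}
```

Output shape matches the query tensor; swap in real projected Q/K/V tensors from a model's attention layer in practice.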

22 unstable releases (7 breaking)

0.8.3 Feb 15, 2025
0.8.1 Dec 7, 2024
0.8.0 Nov 12, 2024
0.6.0 Jun 29, 2024
0.3.1 Nov 12, 2023

#629 in Machine learning

Download history (weekly, 2024-10-29 to 2025-02-11): between 3 and 207 downloads/week

287 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
27K SLoC


Dependencies

~37MB
~837K SLoC