#tensor #machine-learning #blas

candle-flash-attn

Flash attention layer for the candle ML framework
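The crate exposes a flash-attention kernel for candle tensors. A minimal sketch of how it might be called is below, assuming the crate's public `flash_attn` function and the `(batch, seq_len, num_heads, head_dim)` tensor layout; it requires an NVIDIA GPU, since the kernel only runs on CUDA devices with half-precision inputs.

```rust
// Sketch only: requires candle-core and candle-flash-attn as dependencies
// and a CUDA-capable GPU; it will not run on CPU-only machines.
use candle_core::{DType, Device, Result, Tensor};

fn attend(q: &Tensor, k: &Tensor, v: &Tensor) -> Result<Tensor> {
    // flash_attn expects (batch, seq_len, num_heads, head_dim) tensors in
    // f16/bf16 on a CUDA device; the softmax scale is typically 1/sqrt(head_dim).
    let head_dim = q.dim(3)? as f32;
    let scale = 1.0 / head_dim.sqrt();
    candle_flash_attn::flash_attn(q, k, v, scale, /* causal = */ true)
}

fn main() -> Result<()> {
    let dev = Device::new_cuda(0)?;
    // Illustrative shapes: batch=1, seq=128, heads=8, head_dim=64.
    let q = Tensor::randn(0f32, 1.0, (1, 128, 8, 64), &dev)?.to_dtype(DType::F16)?;
    let k = q.clone();
    let v = q.clone();
    let out = attend(&q, &k, &v)?;
    println!("{:?}", out.shape());
    Ok(())
}
```

The helper name `attend` and the concrete shapes are illustrative, not part of the crate's API.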

28 releases

new 0.9.0-alpha.5 Apr 19, 2025
0.8.4 Mar 15, 2025
0.8.1 Dec 7, 2024
0.8.0 Nov 12, 2024
0.3.1 Nov 12, 2023

#1211 in Machine learning

Download history: 19–962 downloads/week between Dec 28, 2024 and Apr 12, 2025.

1,594 downloads per month
Used in 10 crates (6 directly)

MIT/Apache

2.5MB
27K SLoC

Dependencies

~14–21MB
~370K SLoC