#llm #build #llama #cpp #binary #server #compile

llama_cpp_low

Builds a small server binary from llama.cpp

23 releases

0.5.9 Feb 23, 2025
0.5.8 Jan 29, 2025
0.4.0 Dec 15, 2024
0.3.14 Sep 6, 2024
0.3.5 May 9, 2024

#50 in #llama


209 downloads per month
Used in llm-daemon

MIT license

10MB
205K SLoC

C++ 115K SLoC // 0.1% comments
C 31K SLoC // 0.1% comments
Python 19K SLoC // 0.1% comments
CUDA 9K SLoC // 0.0% comments
GLSL 6.5K SLoC // 0.0% comments
Metal Shading Language 5K SLoC // 0.0% comments
OpenCL 4.5K SLoC
Objective-C 4K SLoC // 0.0% comments
JavaScript 2.5K SLoC // 0.2% comments
TSX 2K SLoC // 0.0% comments
Shell 2K SLoC // 0.1% comments
Swift 1K SLoC // 0.0% comments
Kotlin 703 SLoC // 0.1% comments
Vim Script 671 SLoC // 0.1% comments
TypeScript 587 SLoC // 0.1% comments
RPM Specfile 109 SLoC // 0.2% comments
Batch 78 SLoC // 0.2% comments
Prolog 36 SLoC
Rust 28 SLoC
INI 11 SLoC

Contains gradle-wrapper.jar (JAR file, 60KB)

llama-cpp-low

Script to build the llama.cpp server binary using cargo
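To give a feel for what "build the server binary using cargo" can mean, here is a minimal sketch of a cargo build script driving CMake. Everything here is an assumption for illustration — the source path, build directory, and flag names (`LLAMA_BUILD_SERVER`, `LLAMA_BUILD_TESTS`, `LLAMA_BUILD_EXAMPLES` are real llama.cpp CMake options, but this crate's actual build.rs may configure things differently):

```rust
// Hypothetical sketch of a build.rs that compiles llama.cpp's server
// target with CMake; paths and flags are illustrative, not the crate's
// actual configuration.

/// Assemble CMake configure arguments for a release build of the
/// `llama-server` binary, with tests and examples switched off to keep
/// the build small.
fn cmake_configure_args(src_dir: &str, build_dir: &str) -> Vec<String> {
    [
        "-S", src_dir,                   // where the llama.cpp checkout lives
        "-B", build_dir,                 // out-of-tree build directory
        "-DCMAKE_BUILD_TYPE=Release",
        "-DLLAMA_BUILD_SERVER=ON",       // the binary we actually want
        "-DLLAMA_BUILD_TESTS=OFF",       // trim everything else
        "-DLLAMA_BUILD_EXAMPLES=OFF",
    ]
    .iter()
    .map(|s| s.to_string())
    .collect()
}

fn main() {
    let args = cmake_configure_args("vendor/llama.cpp", "target/llama-build");
    // A real build.rs would now spawn the tool, e.g.:
    // std::process::Command::new("cmake").args(&args).status().unwrap();
    println!("cmake {}", args.join(" "));
}
```

The point of this arrangement is that `cargo build` on a downstream crate transparently produces the native server binary, which is what lets something like llm-daemon ship with no separately installed llama.cpp.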

Wait, are you sober?

I just wanted a daemon to run the LLM with minimal external dependencies...

No runtime deps