#llama #model #prompt #interaction #cpp #interface #llama-cpp-2

simple_llama

A crate for running llama.cpp in Rust, based on llama-cpp-2.

5 releases

0.1.3-b Jul 28, 2024
0.1.2 Jul 28, 2024
0.1.1 Jul 27, 2024
0.1.0 Jul 27, 2024

#274 in Machine learning

119 downloads per month

MIT license

13KB
288 lines

Simple Llama

Simple Llama is a library built on the llama-cpp-2 framework that encapsulates commonly used large-model prompts. It simplifies interaction with large language models by providing a streamlined interface for managing and invoking prompt templates, so developers can integrate local models into their applications with less boilerplate.
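To give a feel for what the crate abstracts away, here is a minimal, self-contained sketch of the underlying pattern. It is illustrative only: the struct and field names are hypothetical and are not simple_llama's actual API (see examples/simple.rs in the repository for real usage).

```rust
// Illustrative sketch only -- NOT simple_llama's actual API. It shows the
// pattern the crate encapsulates: prompt templates are defined once (in
// simple_llama they come from a TOML file) and callers supply only the
// user message instead of hand-assembling llama3 chat markup.
struct PromptTemplate {
    system: String,
    user_prefix: String,
    assistant_cue: String,
}

impl PromptTemplate {
    /// Render a full llama3-style prompt from a raw user message.
    fn render(&self, user_msg: &str) -> String {
        format!("{}{}{}{}", self.system, self.user_prefix, user_msg, self.assistant_cue)
    }
}

fn main() {
    // Llama 3 instruct chat format (the header tokens are real Llama 3 special tokens).
    let tpl = PromptTemplate {
        system: "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|>".to_string(),
        user_prefix: "<|start_header_id|>user<|end_header_id|>\n\n".to_string(),
        assistant_cue: "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n".to_string(),
    };
    // The rendered string is what ultimately gets fed to the llama.cpp context.
    println!("{}", tpl.render("Why is the sky blue?"));
}
```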

Clone the repository

git clone https://github.com/L-jasmine/simple_llama

Download a Llama model

wget https://huggingface.co/second-state/Llama-3-8B-Instruct-GGUF/resolve/main/Meta-Llama-3-8B-Instruct-Q5_K_M.gguf

Configure Environment Variables

This project links dynamically against llama.cpp, so you need to download or compile the llama.cpp shared library in advance.
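For example, one common way to produce the shared library is to build llama.cpp from source with CMake. This is a sketch, assuming a Unix-like system; the exact output directory of libllama within build/ varies with the llama.cpp version:

git clone https://github.com/ggerganov/llama.cpp
cmake -S llama.cpp -B llama.cpp/build -DBUILD_SHARED_LIBS=ON
cmake --build llama.cpp/build --config Release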

Before running the project, you need to configure environment variables to specify the location of the Llama library and the search path for dynamic link libraries. Please follow the steps below:

export LLAMA_LIB={LLama_Dynamic_Library_Dir}
# export LD_LIBRARY_PATH={LLama_Dynamic_Library_Dir}

Run the Example

Use the following command to run the example program:

cargo run --example simple -- --model-path Meta-Llama-3-8B-Instruct-Q5_K_M.gguf --model-type llama3 --prompt-path static/prompt.example.toml
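The file passed via --prompt-path holds the prompt templates. Its actual schema is defined by the crate; consult static/prompt.example.toml in the repository for the real format. Purely as a hypothetical illustration of the idea, such a file might look like:

```toml
# Hypothetical sketch -- not the crate's actual schema; see
# static/prompt.example.toml in the repository for the real format.
system = "You are a helpful assistant."

[[messages]]
role = "user"
content = "Why is the sky blue?"
```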

Contributions

We welcome contributions of any kind, including bug reports, feature suggestions, and code submissions.

License

This project is licensed under the MIT License.

Dependencies

~9–12MB
~302K SLoC