llmvm

A protocol and modular application suite for language models.

Includes a code assistant that automatically retrieves context, powered by LSP.

Overview

llmvm consists of three types of executable applications:

  • Frontends: specialized applications that use language models
  • The core: acts as a middleman between frontends and backends; manages state related to text generation, such as:
    • Model presets
    • Prompt templates
    • Message threads
    • Projects/workspaces
  • Backends: wrappers for language models that handle raw text generation requests

The protocol acts as the glue between these applications; it uses the multilink and tower crates to achieve this.
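
Since the protocol is built on tower, a component service can be modeled as a tower Service. The following is a minimal sketch with hypothetical request/response types (llmvm's real types are defined by its protocol crate), shown only to illustrate the shape of the abstraction:

use std::task::{Context, Poll};
use tower::Service;

// Hypothetical request/response types, for illustration only.
struct GenerationRequest { prompt: String }
struct GenerationResponse { text: String }

// A toy backend that echoes the prompt back as generated text.
struct EchoBackend;

impl Service<GenerationRequest> for EchoBackend {
    type Response = GenerationResponse;
    type Error = std::convert::Infallible;
    type Future = std::future::Ready<Result<Self::Response, Self::Error>>;

    fn poll_ready(&mut self, _cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        Poll::Ready(Ok(()))
    }

    fn call(&mut self, req: GenerationRequest) -> Self::Future {
        std::future::ready(Ok(GenerationResponse { text: req.prompt }))
    }
}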

Available crates

  • Frontends
    • codeassist: An LLM-powered code assistant that automatically retrieves context (e.g. type definitions) from a Language Server Protocol server
    • chat: A CLI chat interface
  • Core
  • Backends
    • outsource: Forwards generation requests to hosted language model providers such as OpenAI, Anthropic, Hugging Face, and Ollama.
    • llmrs: Uses the llm crate to process generation requests. Supported models include LLaMA, GPT-2, GPT-J and more.

IPC details

Each component can interact with a dependency component via one of three methods:

  • Local child process: the component invokes the dependency component as a child process and communicates with it via stdio using JSON-RPC
  • Remote HTTP service: the dependency component acts as an HTTP API, and the dependent component is configured to make web requests to that API
  • Direct linking: the core and backends provide library crates which can be used directly; this only works if the dependent component is a Rust application
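
As an illustration of the first method, here is a minimal sketch of a dependent component spawning a backend as a child process and exchanging a newline-delimited JSON-RPC message over stdio. The method name and parameters are hypothetical placeholders, not llmvm's actual protocol schema:

use std::io::{BufRead, BufReader, Write};
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Spawn the backend as a local child process; the core resolves
    // backends to a process named `llmvm-<backend name>`.
    let mut child = Command::new("llmvm-outsource")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;

    // Hypothetical JSON-RPC request; the real method and params are
    // defined by the llmvm protocol.
    let request = r#"{"jsonrpc":"2.0","id":1,"method":"generate","params":{"prompt":"Hello"}}"#;
    writeln!(child.stdin.as_mut().unwrap(), "{}", request)?;

    // Read the newline-delimited JSON-RPC response from stdout.
    let mut response = String::new();
    BufReader::new(child.stdout.as_mut().unwrap()).read_line(&mut response)?;
    println!("backend replied: {}", response.trim());
    Ok(())
}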

This allows for flexible hosting configurations. Here are some examples:

Hosting scenarios

[Diagrams: example hosting scenarios]

Benefits

  • Single protocol for state-managed text generation requests
  • A frontend or backend can be implemented in any language; it only requires a stdio and/or HTTP server/client to be available
  • Uses Handlebars for prompt templates, allowing powerful prompt generation (see the sketch after this list)
  • Saves message threads, presets and prompt templates on the filesystem for easy editing/tweaking
  • Workspace / project management for isolating project state from global state
  • Modular design; any component can be invoked by the user via the CLI for a one-off low-level or high-level request
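
As a hedged illustration of the Handlebars point above, a prompt template might look like the following; the variable names are hypothetical placeholders, not parameters that llmvm defines:

You are an expert {{language}} programmer.
Use the retrieved project context below to answer the request.

Context:
{{context}}

Request: {{user_request}}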

Installation

cargo is needed to install the binaries. Use rustup to install cargo.

Install the core by running:

cargo install llmvm-core

Install the desired frontends & backends listed under "Available crates". See their READMEs for more details.
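
For example, assuming the binaries follow the llmvm-<name> convention described under "Model IDs" below, installing the chat frontend and the outsource backend would look like:

cargo install llmvm-chat
cargo install llmvm-outsource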

Usage / configuration

See the README of each relevant component for more information on usage and configuration.

Model IDs

Model IDs in llmvm are strings consisting of three parts:

<backend name>/<provider name>/<model name>

The provider name must have the suffix -chat or -text.

Examples:

  • outsource/openai-chat/gpt-3.5-turbo
  • outsource/anthropic-chat/claude-3-5-sonnet-20240620
  • llmrs/llmrs-text/mpt-7b-chat-q4_0-ggjt

By default, the core will invoke the process llmvm-<backend name> for local process communication.
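
To make the format concrete, the following is a minimal sketch (not llmvm's actual parsing code) that splits a model ID into its three parts and checks the provider suffix:

/// Splits a model ID of the form `<backend>/<provider>/<model>` and
/// verifies that the provider ends with `-chat` or `-text`.
fn parse_model_id(id: &str) -> Option<(&str, &str, &str)> {
    let mut parts = id.splitn(3, '/');
    let backend = parts.next()?;
    let provider = parts.next()?;
    let model = parts.next()?;
    if provider.ends_with("-chat") || provider.ends_with("-text") {
        Some((backend, provider, model))
    } else {
        None
    }
}

fn main() {
    // One of the example IDs above, parsed into its three components.
    let (backend, provider, model) =
        parse_model_id("outsource/openai-chat/gpt-3.5-turbo").unwrap();
    assert_eq!((backend, provider, model),
               ("outsource", "openai-chat", "gpt-3.5-turbo"));
    // The core would then spawn `llmvm-outsource` for local IPC.
    println!("process: llmvm-{}", backend);
}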

Presets / Projects / Threads / Prompt Templates

See the core README for more information.

Model weights

See the relevant backend README (e.g. llmrs).

License

Mozilla Public License, version 2.0
