#flows #llm #integration #service #api-integration #model #chat

bin+lib llmservice-flows

LLM Service integration for flows.network

13 unstable releases (4 breaking)

0.5.0 Nov 26, 2024
0.4.2 Nov 19, 2024
0.3.3 Aug 21, 2024
0.3.0 Jul 15, 2024
0.1.3 Aug 10, 2023

#936 in Network programming

489 downloads per month

MIT/Apache

27KB
488 lines

lib.rs:

LLM Service integration for Flows.network

Quick Start

To get started, let's write a tiny flow function.

use llmservice_flows::{
    chat::ChatOptions,
    LLMServiceFlows,
};
use lambda_flows::{request_received, send_response};
use serde_json::Value;
use std::collections::HashMap;

#[no_mangle]
#[tokio::main(flavor = "current_thread")]
pub async fn run() {
    // Register the handler; it runs whenever the flow's
    // Lambda endpoint receives a request.
    request_received(handler).await;
}

async fn handler(_qry: HashMap<String, Value>, body: Vec<u8>) {
    // Configure the chat: the model name and a token budget.
    let co = ChatOptions {
        model: Some("gpt-4"),
        token_limit: 8192,
        ..Default::default()
    };
    // Point the client at an OpenAI-compatible API endpoint.
    let mut lf = LLMServiceFlows::new("https://api.openai.com/v1");
    lf.set_api_key("your api key");

    // Send the raw request body as the user prompt. The conversation id
    // is a caller-chosen key that groups messages belonging to the same
    // chat session.
    let r = match lf
        .chat_completion(
            "any_conversation_id",
            String::from_utf8_lossy(&body).into_owned().as_str(),
            &co,
        )
        .await
    {
        Ok(c) => c.choice,
        Err(e) => e,
    };

    // Return the model's reply (or the error text) as plain text.
    send_response(
        200,
        vec![(
            String::from("content-type"),
            String::from("text/plain; charset=UTF-8"),
        )],
        r.as_bytes().to_vec(),
    );
}

When a Lambda request is received, the handler forwards the request body to the LLM via LLMServiceFlows::chat_completion and sends the model's reply back as the HTTP response.
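
The conversation id is chosen by the caller; reusing the same id on a later call is how related messages are grouped together. A minimal sketch of such a follow-up inside the same handler (assuming the service keeps per-conversation context keyed by that id, as the parameter name suggests; the prompt string is illustrative):

    // Hypothetical follow-up call reusing the same conversation id, so the
    // model can see the earlier exchange as context. The prompt here is
    // only an example.
    let follow_up = match lf
        .chat_completion(
            "any_conversation_id",
            "Summarize your previous answer in one sentence.",
            &co,
        )
        .await
    {
        Ok(c) => c.choice,
        Err(e) => e,
    };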

Dependencies

~7–20MB
~255K SLoC