#gemini #ai #rag #ai-api #openai #bard #language-model

gemini-ai

A Rust-based wrapper for the Gemini AI API, enabling access to advanced natural language processing and multimodal models

21 releases

new 0.1.1676 Feb 20, 2025
0.1.1675 Feb 20, 2025
0.1.1672 Dec 9, 2024
0.1.166 Nov 30, 2024

#117 in Machine learning


2,232 downloads per month
Used in gemini-cli

MIT license

85KB
2.5K SLoC

Gemini AI Rust Wrapper

Welcome to the Gemini AI Rust wrapper! This crate provides a Rust interface for interacting with the Gemini AI API, which powers advanced natural language processing (NLP) and multimodal capabilities.


New Features Added

  • RAG concept model

  • PDF, audio, and video uploading

  • Function calling

  • Max-token-limit-based responses

  • Instruction-based responses

Previously Added Features

  • Max-token-limit-based responses

  • Instruction-based responses

Features

  • Instruction Processing: Customize responses with system instructions so the output matches the style you want.
  • Natural Language Processing: Access powerful language models such as Gemini 1.5 Pro for advanced text analysis, summarization, and generation.
  • Multimodal Capabilities: Interact with Gemini models that handle not only text but also image, audio, PDF, and video inputs.
  • Easy Integration: A straightforward API wrapper that drops into your Rust projects.

Installation

To add this crate to your project, include it in your Cargo.toml:


   [dependencies]
   gemini-ai = "0.1.167"
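
Usage

Build a request with GeminiContentGenBuilder: load your API key from an environment variable, pick a model, attach an input (text plus an optional image, audio, PDF, or video file), and read back the generated output. The example below sends an image along with a text prompt.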

    // Build a request: load the API key from the environment, pick a model,
    // attach an image plus a text prompt, and fire the request.
    let builder = GeminiContentGenBuilder::new()
        .env("GEMINI_API_KEY")                             // name of the env var holding the API key
        .model(gemini_ai::Models::GEMINI_1_5_PRO_002)
        // .memory(gemini_ai::Memorys::Json)               // enable RAG-style memory instead of no_memory()
        .no_memory()
        .kind(gemini_ai::Kind::Image("statics/OIP.jpeg"))  // multimodal input: an image file
        .instruction("")                                   // optional system instruction
        .text("hi tell character name")                    // the user prompt
        .max_token(gemini_ai::TokenLen::Default)
        .build()
        .output();

    // Optional: decode the raw output into the response text; returns an error
    // if the API did not send a response.
    let string = decode_gemini(&builder);
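
The commented-out memory line above is how the RAG-style memory from the feature list is switched on. Below is a minimal sketch, assuming the builder's memory(Memorys::Json) setter replaces no_memory() and keeps conversation context in a local JSON store between calls; the import paths and persistence details are assumptions, not verified against the crate.

    use gemini_ai::{decode_gemini, GeminiContentGenBuilder}; // assumed crate-root re-exports

    // Same chain as above, but with memory enabled so earlier exchanges can be
    // fed back into later prompts (RAG-style context).
    let builder = GeminiContentGenBuilder::new()
        .env("GEMINI_API_KEY")
        .model(gemini_ai::Models::GEMINI_1_5_PRO_002)
        .memory(gemini_ai::Memorys::Json)                  // assumed: persists context as JSON
        .kind(gemini_ai::Kind::Image("statics/OIP.jpeg"))
        .instruction("remember details from earlier turns")
        .text("what did I ask about before?")
        .max_token(gemini_ai::TokenLen::Default)
        .build()
        .output();

    let reply = decode_gemini(&builder);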
Function calling: declare the functions the model is allowed to call, then pass them to a GeminiPulse request.

    // Describe one callable function: name, description, parameter schema,
    // and which parameters are required.
    let feature1 = Properties::new(
        "get_current_place_detail",
        "current place details",
        Some(gemini_ai::pulse::format::Paramters {
            r#type: String::from("object"),
            properties: gemini_ai::pulse::format::SubProperties {
                name: String::from("events"),
                r#type: String::from("string"),
                description: String::from("Render all the events located in the current location"),
            },
        }),
        Some(&["events"]),
    );

    // Bundle the declarations into a tool set the model can be trained on.
    let feature = feature(&[&feature1]);

    // Send a prompt together with the declared functions.
    let pulse = GeminiPulse::new()
        .env("GEMINI_API_KEY")
        .model(gemini_ai::Models::GEMINI_1_5_PRO)
        .train(&feature)
        .instruction("you are great at telling events in the current place")
        .tell("Bangalore on 24 November 2024")
        .build()
        .output();
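
Audio input works the same way as the image example; only the Kind variant and the file path change.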

    // Same builder as the image example, but with an audio file attached.
    let builder = GeminiContentGenBuilder::new()
        .env("GEMINI_API_KEY")
        .model(gemini_ai::Models::GEMINI_1_5_PRO_002)
        // .memory(gemini_ai::Memorys::Json)
        .no_memory()
        .kind(gemini_ai::Kind::Audio("statics/OIP.mpeg"))  // multimodal input: an audio file
        .instruction("tell hi")
        .text("hi tell character name")
        .max_token(gemini_ai::TokenLen::Default)
        .build()
        .output();
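
The feature list also mentions PDF and video uploads. Here is a sketch of a PDF request, assuming Kind exposes a Pdf variant that mirrors the Image and Audio variants used above; the variant name, file path, and imports are assumptions rather than confirmed API.

    use gemini_ai::{decode_gemini, GeminiContentGenBuilder}; // assumed crate-root re-exports

    // Attach a PDF instead of an image or audio file; the rest of the chain is unchanged.
    let builder = GeminiContentGenBuilder::new()
        .env("GEMINI_API_KEY")
        .model(gemini_ai::Models::GEMINI_1_5_PRO_002)
        .no_memory()
        .kind(gemini_ai::Kind::Pdf("statics/report.pdf"))  // assumed variant, analogous to Kind::Audio
        .instruction("summarize the document")
        .text("give a three sentence summary")
        .max_token(gemini_ai::TokenLen::Default)
        .build()
        .output();

    let summary = decode_gemini(&builder);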

Dependencies

~4–15MB
~197K SLoC