# Gemini Bridge
Gemini Bridge is a Rust crate for interacting with the Google Gemini API. It aims to provide a simple and efficient way to communicate with the API.
## Features
- Easy-to-use interface for interacting with the Gemini API
- Asynchronous support for non-blocking operations
- Comprehensive error handling
## Installation

Add `gemini_bridge` to your `Cargo.toml`:

```toml
[dependencies]
gemini_bridge = "0.1.15" # example version
```
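The usage examples below are asynchronous, so you also need an async runtime. The snippet assumes tokio; the crate itself does not mandate a particular executor.

```toml
[dependencies]
tokio = { version = "1", features = ["full"] } # assumed runtime for the examples below
```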
## Usage

### Text Generation
```rust
// Import path follows the original `extern crate gemini_rs;`; adjust to your crate layout.
use gemini_rs::{Content, GeminiRs, Part, RequestBody};

#[tokio::main] // the async examples assume the tokio runtime
async fn main() {
    let gen = GeminiRs::new_text_generator(
        "api_key_xxxxxxx".to_owned(),
        "gemini-1.5-flash-latest".to_owned(),
    );
    let res = gen
        .generate_content(RequestBody {
            contents: vec![Content {
                // `role` mirrors the chat example below
                role: Some(String::from("user")),
                parts: vec![Part {
                    text: "send me a test response".to_owned(),
                }],
            }],
            safety_settings: None,
            generation_config: None,
        })
        .await;
}
```
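The call returns a `Result`. On success the response exposes a `candidates` vector (the chat test below asserts on it); the inner `content.parts[..].text` shape in this sketch is an assumption modeled on the request-side `Content`/`Part` structs, not a documented response type:

```rust
// Sketch: handle `res` from the example above. The `candidates` field is
// asserted on in the chat test below; the `content.parts` shape is an
// assumed mirror of the request structs.
match res {
    Ok(response) => {
        if let Some(candidate) = response.candidates.first() {
            for part in &candidate.content.parts {
                println!("{}", part.text);
            }
        }
    }
    Err(e) => eprintln!("request failed: {:?}", e),
}
```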
### Interactive Chat
You can create an interactive chat with the Gemini API by sending the whole conversation history in a single request, as the following example demonstrates. Take the model's latest response, append it to the history together with your next user message, and send the history back; repeating this loop gives you an interactive chat. Always be aware of the token count and the maximum token count of the model you are using: send only the minimum useful information and limit the size of the history you send (see the trimming sketch after the example).
```rust
#[tokio::test] // assumes the tokio test runtime
async fn test_interactive_chat() {
    let gen = GeminiRs::new_text_generator(
        "api_key_xxxxxxx".to_owned(),
        "gemini-1.5-flash-latest".to_owned(),
    );
    let res = gen
        .generate_content(RequestBody {
            contents: vec![
                Content {
                    role: Some(String::from("user")),
                    parts: vec![Part {
                        text: "Hello".to_owned(),
                    }],
                },
                Content {
                    role: Some(String::from("model")),
                    parts: vec![Part {
                        text: "Great to meet you. What would you like to know?".to_owned(),
                    }],
                },
                Content {
                    role: Some(String::from("user")),
                    parts: vec![Part {
                        text: "I have two dogs in my house. How many paws are in my house?"
                            .to_owned(),
                    }],
                },
                Content {
                    role: Some(String::from("model")),
                    parts: vec![Part {
                        text: "That's a fun question! You have a total of **7 paws** in your house. 🐶🐾 \n"
                            .to_owned(),
                    }],
                },
                Content {
                    role: Some(String::from("user")),
                    parts: vec![Part {
                        text: "Thank you! How did you calculate that? Are you sure?"
                            .to_owned(),
                    }],
                },
            ],
            safety_settings: None,
            generation_config: None,
        })
        .await;
    match res {
        Ok(response) => {
            println!("{:?}", response);
            assert!(!response.candidates.is_empty());
        }
        Err(e) => panic!("Error: {:?}", e),
    }
}
```
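To stay within the model's token limit, you can trim the history before each request. A minimal sketch with a hypothetical helper (`trim_history` and the fixed `max_turns` cutoff are illustrations, not part of the crate):

```rust
// Hypothetical helper: keep only the most recent `max_turns` entries so
// the request stays within the model's context window.
fn trim_history(mut history: Vec<Content>, max_turns: usize) -> Vec<Content> {
    if history.len() > max_turns {
        history.drain(..history.len() - max_turns);
    }
    history
}
```

A fixed entry count is a crude proxy for token usage; a real application might estimate tokens per message instead.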
This crate has not yet been tested with models other than `gemini-1.5-flash-latest`.
## TODO
- Implement Content Generation Method
- Implement Conversation
- Implement Stream
- Implement Embeddings
- Add examples and documentation
- Write tests