3 releases

| Version | Date |
|---|---|
| 0.0.3 | Mar 10, 2023 |
| 0.0.2 | Mar 9, 2023 |
| 0.0.1 | Mar 9, 2023 |
#1181 in Asynchronous · 26KB · 493 lines
# rust-gpt
This crate provides a rusty interface to the OpenAI GPT-3 Completion/Chat API.
The goal of this crate is to provide a simple and idiomatic way of interacting with the GPT-3 API.
The API is still in beta, so this crate is subject to change.
## Features

- [x] Completion endpoint
- [x] Chat endpoint
- [ ] Robust error handling
- [ ] Synchronous API
- [ ] The rest of the OpenAI API
## OpenAI Completion/Chat Rust API

Provides a neat and rusty way of interacting with the OpenAI Completion/Chat API. You can find OpenAI's documentation for the API on their site.

### Example
```rust
use rust_gpt::RequestBuilder;
use rust_gpt::CompletionModel;
use rust_gpt::SendRequest;

#[tokio::main]
async fn main() {
    let req = RequestBuilder::new(CompletionModel::TextDavinci003, "YOUR_API_KEY")
        .prompt("Write a sonnet about a crab named Ferris in the style of Shakespeare.")
        .build_completion();

    let response = req.send().await.unwrap();
    println!("My bot replied with: \"{:?}\"", response);
}
```
General Usage
You will most likely just use the RequestBuilder
to create a request. You can then use the SendRequest
trait to send the request.
Right now only the completion and chat endpoints are supported.
These two endpoints require different parameters, so you will need to use the build_completion
and build_chat
methods respectively.
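The builder flow described above can be pictured with a minimal, self-contained sketch. The struct and methods here are simplified stand-ins for illustration, not the crate's actual internals:

```rust
// Simplified sketch of a consuming builder with a terminal build method,
// mirroring the RequestBuilder -> build_completion flow described above.
struct RequestBuilder {
    model: String,
    api_key: String, // held until build; unused in this sketch
    prompt: Option<String>,
}

struct CompletionRequest {
    model: String,
    prompt: String,
}

impl RequestBuilder {
    fn new(model: impl ToString, api_key: impl std::fmt::Display) -> Self {
        RequestBuilder {
            model: model.to_string(),
            api_key: api_key.to_string(),
            prompt: None,
        }
    }

    // Each setter consumes and returns self, enabling method chaining.
    fn prompt(mut self, prompt: impl ToString) -> Self {
        self.prompt = Some(prompt.to_string());
        self
    }

    // Terminal method: consumes the builder and produces the request.
    fn build_completion(self) -> CompletionRequest {
        CompletionRequest {
            model: self.model,
            prompt: self.prompt.unwrap_or_default(),
        }
    }
}

fn main() {
    let req = RequestBuilder::new("text-davinci-003", "KEY")
        .prompt("Hello")
        .build_completion();
    println!("{} / {}", req.model, req.prompt);
}
```

A `build_chat` analog would simply be a second terminal method producing a different request type from the same builder.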
`RequestBuilder` can take any type that implements `ToString` as the model input and any type that implements `Display` as the API key.
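Because the bound is `ToString`, both a model enum (whose `Display` impl gives it `ToString` for free) and a plain string slice work as the model argument. A sketch with a hypothetical enum standing in for `CompletionModel`:

```rust
use std::fmt;

// Hypothetical model enum; the crate's CompletionModel plays this role.
enum Model {
    TextDavinci003,
}

// Implementing Display also provides ToString via the blanket impl.
impl fmt::Display for Model {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Model::TextDavinci003 => write!(f, "text-davinci-003"),
        }
    }
}

// Accepts anything implementing ToString, like RequestBuilder::new.
fn model_name(model: impl ToString) -> String {
    model.to_string()
}

fn main() {
    // Both an enum variant and a &str satisfy the bound.
    println!("{}", model_name(Model::TextDavinci003));
    println!("{}", model_name("gpt-3.5-turbo"));
}
```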
### Completion

The completion endpoint requires a `prompt` parameter. You can set this with the `prompt` method, which takes any type that implements `ToString`.
### Chat

The chat endpoint is a little more complicated. It requires a `messages` parameter, which is a list of messages. These messages are represented by the `ChatMessage` struct. You can create a `ChatMessage` with the `new` method.
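A sketch of what building such a message list looks like. The struct and `new` signature here are assumptions for illustration; consult the crate docs for the real `ChatMessage`:

```rust
// Hypothetical stand-in for rust_gpt::ChatMessage; the field names
// (role, content) are assumptions based on the OpenAI chat format.
struct ChatMessage {
    role: String,
    content: String,
}

impl ChatMessage {
    fn new(role: impl ToString, content: impl ToString) -> Self {
        ChatMessage {
            role: role.to_string(),
            content: content.to_string(),
        }
    }
}

fn main() {
    // A conversation is just an ordered list of messages.
    let messages = vec![
        ChatMessage::new("system", "You are a helpful assistant."),
        ChatMessage::new("user", "Write a haiku about Ferris."),
    ];
    for m in &messages {
        println!("{}: {}", m.role, m.content);
    }
}
```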
### Additional Notes

The API is still in development, so there may be some breaking changes in the future.
The API is also not fully tested, so there may be some bugs.
There is a little bit of error handling, but it is not very robust.
`serde_json` is used to serialize and deserialize the responses and messages. Since many of these implementations are derived, they may not match the exact JSON responses of the API.
## Dependencies

~6–18MB, ~250K SLoC