# async-openai-wasm

Async Rust library for OpenAI on WASM
## Overview
`async-openai-wasm` is a FORK of [async-openai](https://github.com/64bit/async-openai/) that supports WASM targets by targeting `wasm32-unknown-unknown`. That means >99% of the codebase should be attributed to the original project. Synchronization with the original project is, and will continue to be, done manually whenever `async-openai` releases a new version. Versions are kept in sync with `async-openai` releases: when `async-openai` releases `x.y.z`, `async-openai-wasm` also releases an `x.y.z` version.

`async-openai-wasm` is an unofficial Rust library for OpenAI.
- It's based on the OpenAI OpenAPI spec
- Current features:
  - Assistants (v2)
  - Audio
  - Batch
  - Chat
  - Completions (Legacy)
  - Embeddings
  - Files
  - Fine-Tuning
  - Images
  - Models
  - Moderations
  - Organizations | Administration
  - Realtime API types (Beta)
  - Uploads
- WASM support
- SSE streaming on available APIs (see the chat-streaming sketch below)
- Ergonomic builder pattern for all request objects
- Microsoft Azure OpenAI Service (only for APIs matching the OpenAI spec)
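As referenced above, here is a minimal SSE chat-streaming sketch. It assumes this fork keeps async-openai's chat API (`create_stream` and the builder types), and the model name is only a hypothetical choice; verify the exact types against docs.rs for your version.

```rust
// Minimal sketch of SSE streaming on the chat API, assuming the fork
// mirrors async-openai's types; verify names against your version's docs.
use async_openai_wasm::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;
use std::error::Error;

async fn stream_chat() -> Result<(), Box<dyn Error>> {
    let client = Client::new();
    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini") // hypothetical model choice; any chat model works
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Say hello from WASM!")
            .build()?
            .into()])
        .build()?;

    // Each stream item is an incremental chunk of the completion.
    let mut stream = client.chat().create_stream(request).await?;
    while let Some(result) = stream.next().await {
        for choice in result?.choices {
            if let Some(token) = choice.delta.content {
                print!("{token}");
            }
        }
    }
    Ok(())
}
```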
Note on Azure OpenAI Service (AOS): `async-openai-wasm` primarily implements the OpenAI spec and doesn't try to maintain parity with the AOS spec, just like `async-openai`.
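For the APIs that do match the OpenAI spec, here is a minimal sketch of pointing the client at AOS. It assumes this fork keeps async-openai's `AzureConfig` builder; the resource URL, API version, and deployment name are placeholders.

```rust
// Sketch: an Azure OpenAI Service client. `AzureConfig` is assumed from
// async-openai's API, which this fork mirrors; all values are placeholders.
use async_openai_wasm::{config::AzureConfig, Client};

fn make_azure_client(api_key: &str) -> Client<AzureConfig> {
    let config = AzureConfig::new()
        .with_api_base("https://my-resource.openai.azure.com") // hypothetical resource
        .with_api_version("2024-02-01") // example API version
        .with_deployment_id("my-deployment") // hypothetical deployment
        .with_api_key(api_key);
    Client::with_config(config)
}
```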
## Differences from async-openai
**Added**
- WASM support
- WASM examples
- Realtime API: doesn't bundle a specific WebSocket implementation. You need to convert a client event into a WS message yourself, which is as simple as `your_ws_impl::Message::Text(some_client_event.into_text())`

**Removed**
- Tokio
- Non-WASM examples: please refer to the original project [async-openai](https://github.com/64bit/async-openai/)
- Built-in backoff retries, due to [this issue](https://github.com/ihrwein/backoff/issues/61)
  - Recommended: use `backon` with the `gloo-timers-sleep` feature instead (see the retry sketch below)
- File saving: `wasm32-unknown-unknown` on browsers doesn't have access to the filesystem
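Since built-in retries were removed, here is a hedged sketch of the recommended `backon` approach, assuming backon's 1.x API (enable its `gloo-timers-sleep` feature so sleeping works on `wasm32-unknown-unknown`); `list_models_with_retry` is a hypothetical helper.

```rust
// Sketch: retrying a request with backon instead of the removed built-in
// backoff. Assumes backon 1.x; enable its `gloo-timers-sleep` feature so
// the async sleeps work on wasm32-unknown-unknown.
use async_openai_wasm::Client;
use backon::{ExponentialBuilder, Retryable};
use std::error::Error;

async fn list_models_with_retry() -> Result<(), Box<dyn Error>> {
    let client = Client::new();
    let client_ref = &client;
    // Retry the call with exponential backoff on transient failures.
    let models = (move || async move { client_ref.models().list().await })
        .retry(ExponentialBuilder::default())
        .await?;
    println!("{} models available", models.data.len());
    Ok(())
}
```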
## Usage
The library reads the API key from the environment variable `OPENAI_API_KEY`.

```sh
# On macOS/Linux
export OPENAI_API_KEY='sk-...'

# On Windows PowerShell
$Env:OPENAI_API_KEY='sk-...'
```
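Note that `wasm32-unknown-unknown` has no process environment, so in the browser you will typically supply the key explicitly instead. A minimal sketch, assuming this fork keeps async-openai's `OpenAIConfig` builder:

```rust
// Sketch: constructing a client with an explicit API key rather than the
// OPENAI_API_KEY environment variable (which doesn't exist in a browser).
// `OpenAIConfig` is assumed from async-openai's API, which this fork mirrors.
use async_openai_wasm::{config::OpenAIConfig, Client};

fn make_client(api_key: &str) -> Client<OpenAIConfig> {
    let config = OpenAIConfig::new().with_api_key(api_key);
    Client::with_config(config)
}
```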
- Visit the examples directory to see how to use `async-openai`, and the WASM examples in `async-openai-wasm`.
- Visit [docs.rs/async-openai](https://docs.rs/async-openai) for docs.
## Realtime API
Only the types for the Realtime API are implemented; they can be enabled with the feature flag `realtime`. These types may change when OpenAI releases an official spec for them.

Again, the types don't bundle a specific WebSocket implementation: you need to convert a client event into a WS message yourself, which is as simple as `your_ws_impl::Message::Text(some_client_event.into_text())`.
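As an illustration, here is a minimal sketch of that conversion using `gloo-net` as one possible WASM-friendly WebSocket implementation; any other WS client works the same way, and the event-to-text step is the `into_text()` call from the crate's `realtime` types.

```rust
// Sketch: sending a Realtime client event over your own WebSocket stack.
// gloo-net is only one possible choice on wasm32-unknown-unknown.
use futures::SinkExt;
use gloo_net::websocket::{futures::WebSocket, Message, WebSocketError};

// `event_text` is the JSON text produced by a client event's `into_text()`,
// e.g. `let event_text = some_client_event.into_text();`.
async fn send_client_event(
    ws: &mut WebSocket,
    event_text: String,
) -> Result<(), WebSocketError> {
    ws.send(Message::Text(event_text)).await
}
```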
## Image Generation Example
```rust
use async_openai_wasm::{
    types::{CreateImageRequestArgs, ImageResponseFormat, ImageSize},
    Client,
};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // Create client; reads the OPENAI_API_KEY environment variable for the API key.
    let client = Client::new();

    let request = CreateImageRequestArgs::default()
        .prompt("cats on sofa and carpet in living room")
        .n(2)
        .response_format(ImageResponseFormat::Url)
        .size(ImageSize::S256x256)
        .user("async-openai-wasm")
        .build()?;

    let response = client.images().create(request).await?;

    // Download and save images to the ./data directory.
    // Each url is downloaded and saved in a dedicated Tokio task.
    // The directory is created if it doesn't exist.
    let paths = response.save("./data").await?;

    paths
        .iter()
        .for_each(|path| println!("Image file path: {}", path.display()));

    Ok(())
}
```
*(Example generated image: scaled up for README, actual size 256x256.)*
## Contributing
This repo only accepts issues and PRs related to WASM support. For anything else, please visit the original project [async-openai](https://github.com/64bit/async-openai/).

This project adheres to the [Rust Code of Conduct](https://www.rust-lang.org/policies/code-of-conduct).
## Complementary Crates
- `openai-func-enums` provides procedural macros that make it easier to use this library with the OpenAI API's tool-calling feature. It also provides derive macros you can add to existing clap application subcommands for natural-language use of command-line tools, supports OpenAI's parallel tool calls, and lets you choose between running multiple tool calls concurrently or on their own OS threads.
## Why async-openai-wasm
Because I wanted to develop and release a crate that depends on the `wasm` feature in the `experiments` branch of async-openai, but the pace of stabilizing the `wasm` feature differed from what I expected.
## License
The additional modifications are licensed under the MIT license. The original project is also licensed under the MIT license.