# async-openai-wasm

Async Rust library for OpenAI on WASM.

`async-openai-wasm` is a FORK of [`async-openai`](https://github.com/64bit/async-openai/) that supports WASM targets by targeting `wasm32-unknown-unknown`. That means >99% of the codebase should be attributed to the original project. Synchronization with the original project is and will be done manually whenever `async-openai` releases a new version. Versions are kept in sync with `async-openai` releases: when `async-openai` releases `x.y.z`, `async-openai-wasm` also releases an `x.y.z` version.
## Overview

`async-openai-wasm` is an unofficial Rust library for OpenAI.

- It's based on the OpenAI OpenAPI spec
- Current features:
- Assistants (v2)
- Audio
- Batch
- Chat
- Completions (Legacy)
- Embeddings
- Files
- Fine-Tuning
- Images
- Models
- Moderations
- Organizations | Administration (partially implemented)
- Realtime (Beta) (partially implemented)
- Uploads
- Responses (partially implemented)
- WASM support
- Reasoning model support: supports models like DeepSeek R1 through broader support for OpenAI-compatible endpoints; see `examples/reasoning`
- SSE streaming on available APIs
- Ergonomic builder pattern for all request objects
- Microsoft Azure OpenAI Service (only for APIs matching OpenAI spec)
- Bring your own custom types for Request or Response objects
Note on Azure OpenAI Service (AOS): `async-openai-wasm` primarily implements the OpenAI spec and doesn't try to maintain parity with the AOS spec, just like `async-openai`.
## Differences from `async-openai`

Added:

- WASM support
- WASM examples
- Realtime API: does not bundle a specific WebSocket implementation. You need to convert a client event into a WS message yourself, which is simply `your_ws_impl::Message::Text(some_client_event.into_text())`
- Broader support for OpenAI-compatible endpoints
- Reasoning model support

Removed:

- Tokio
- Non-WASM examples: please refer to the original project [async-openai](https://github.com/64bit/async-openai/)
- Built-in backoff retries: removed due to [this issue](https://github.com/ihrwein/backoff/issues/61)
  - Recommendation: use `backon` with its `gloo-timers-sleep` feature instead
- File saving: `wasm32-unknown-unknown` on browsers doesn't have access to the filesystem
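To pull the crate into a WASM project, the `Cargo.toml` entry might look like the following. This is a sketch: the wildcard version requirement is illustrative only, and you should pin to the latest release published on crates.io.

```toml
[dependencies]
# Wildcard shown for illustration; pin to the latest crates.io release,
# which tracks async-openai's version numbers.
async-openai-wasm = "*"
```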
## Usage

The library reads the API key from the environment variable `OPENAI_API_KEY`:

```sh
# On macOS/Linux
export OPENAI_API_KEY='sk-...'
```

```powershell
# On Windows PowerShell
$Env:OPENAI_API_KEY='sk-...'
```
- Visit the examples directory to see how to use `async-openai`, and the WASM examples in `async-openai-wasm`.
- Visit [docs.rs/async-openai](https://docs.rs/async-openai) for docs.
## Realtime API

Only the types for the Realtime API are implemented; they can be enabled with the feature flag `realtime`. These types were written before OpenAI released official specs.

Again, the types do not bundle a specific WebSocket implementation. You need to convert a client event into a WS message yourself, which is simply `your_ws_impl::Message::Text(some_client_event.into_text())`.
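Assuming a standard Cargo setup, enabling the flag is a one-line change in `Cargo.toml` (a sketch; the wildcard version requirement is illustrative, pin to the latest release):

```toml
[dependencies]
# Enables the Realtime API types; version shown is illustrative.
async-openai-wasm = { version = "*", features = ["realtime"] }
```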
## Image Generation Example

```rust
use async_openai_wasm::{
    types::{CreateImageRequestArgs, ImageSize, ImageResponseFormat},
    Client,
};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // create client, reads OPENAI_API_KEY environment variable for API key.
    let client = Client::new();

    let request = CreateImageRequestArgs::default()
        .prompt("cats on sofa and carpet in living room")
        .n(2)
        .response_format(ImageResponseFormat::Url)
        .size(ImageSize::S256x256)
        .user("async-openai-wasm")
        .build()?;

    let response = client.images().create(request).await?;

    // Download and save images to ./data directory.
    // Each url is downloaded and saved in dedicated Tokio task.
    // Directory is created if it doesn't exist.
    let paths = response.save("./data").await?;

    paths
        .iter()
        .for_each(|path| println!("Image file path: {}", path.display()));

    Ok(())
}
```
## Dynamic Dispatch for Different Providers

For any struct that implements the `Config` trait, you can wrap it in a smart pointer and cast the pointer to a `dyn Config` trait object; then your client can accept any wrapped configuration type.

For example:

```rust
use async_openai_wasm::{Client, config::Config, config::OpenAIConfig};

let openai_config = OpenAIConfig::default();
// You can use `std::sync::Arc` to wrap the config as well
let config = Box::new(openai_config) as Box<dyn Config>;
let client: Client<Box<dyn Config>> = Client::with_config(config);
```
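The underlying trait-object pattern can be sketched with the standard library alone. The `Config`, `Client`, and provider structs below are simplified stand-ins, not the crate's actual types; the point is only to show why a `Box<dyn Config>` lets one client type hold any provider's configuration.

```rust
// Generic sketch of trait-object dispatch. All names here are
// stand-ins, NOT async-openai-wasm's real API.
trait Config {
    fn api_base(&self) -> String;
}

struct OpenAIConfig;
struct AzureConfig;

impl Config for OpenAIConfig {
    fn api_base(&self) -> String {
        "https://api.openai.com/v1".to_string()
    }
}

impl Config for AzureConfig {
    fn api_base(&self) -> String {
        "https://example.azure.com".to_string()
    }
}

// Forward the trait through the box so Box<dyn Config> itself
// satisfies the `C: Config` bound on the client.
impl Config for Box<dyn Config> {
    fn api_base(&self) -> String {
        self.as_ref().api_base()
    }
}

// A client generic over any Config, including boxed trait objects.
struct Client<C> {
    config: C,
}

impl<C: Config> Client<C> {
    fn with_config(config: C) -> Self {
        Client { config }
    }
}

fn main() {
    // Configs for different providers share one runtime type,
    // so they can live in the same collection or client.
    let configs: Vec<Box<dyn Config>> = vec![Box::new(OpenAIConfig), Box::new(AzureConfig)];
    for config in configs {
        let client = Client::with_config(config);
        println!("{}", client.config.api_base());
    }
}
```

The blanket `impl Config for Box<dyn Config>` is the key step: without it, the boxed trait object would not satisfy the generic bound on `Client`.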
## Contributing

This repo will only accept issues and PRs related to WASM support. For other issues and PRs, please visit the original project [async-openai](https://github.com/64bit/async-openai/).
This project adheres to the Rust Code of Conduct.
## Complementary Crates

- `openai-func-enums` provides procedural macros that make it easier to use this library with the OpenAI API's tool calling feature. It also provides derive macros you can add to existing clap application subcommands for natural-language use of command-line tools. It also supports OpenAI's parallel tool calls and allows you to choose between running multiple tool calls concurrently or on their own OS threads.
## Why This Fork?

Because I wanted to develop and release a crate that depends on the wasm feature in the `experiments` branch of async-openai, but the pace of stabilizing the wasm feature was different from what I expected.
## License

The additional modifications are licensed under the MIT license. The original project is also licensed under the MIT license.