# cloudflare/workers-rs

Write Cloudflare Workers in 100% Rust via WebAssembly
Work-in-progress ergonomic Rust bindings to Cloudflare Workers environment. Write your entire worker in Rust!
Read the Notes and FAQ.

```rust
use worker::*;

#[event(fetch)]
pub async fn main(mut req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    console_log!(
        "{} {}, located at: {:?}, within: {}",
        req.method().to_string(),
        req.path(),
        req.cf().unwrap().coordinates().unwrap_or_default(),
        req.cf().unwrap().region().unwrap_or("unknown region".into())
    );

    if !matches!(req.method(), Method::Post) {
        return Response::error("Method Not Allowed", 405);
    }

    if let Some(file) = req.form_data().await?.get("file") {
        return match file {
            FormEntry::File(buf) => {
                Response::ok(&format!("size = {}", buf.bytes().await?.len()))
            }
            _ => Response::error("`file` part of POST form must be a file", 400),
        };
    }

    Response::error("Bad Request", 400)
}
```
The project uses wrangler for running and publishing your Worker.

Use `cargo generate` to start from a template:

```sh
cargo generate cloudflare/workers-rs
```
There are several templates to choose from. You should see a new project layout with a `src/lib.rs`. Start there! Use any local or remote crates and modules (as long as they compile to the `wasm32-unknown-unknown` target).

Once you're ready, run your worker locally:

```sh
npx wrangler dev
```
Finally, go live:
```sh
# configure your routes, zones & more in your worker's `wrangler.toml` file
npx wrangler deploy
```

If you would like to have `wrangler` installed on your machine, see instructions in the wrangler repository.
`worker` 0.0.21 introduced an `http` feature flag which starts to replace custom types with widely used types from the `http` crate.

This makes it much easier to use crates which use these standard types, such as `axum` and `hyper`.

This currently does a few things:
- Introduce `Body`, which implements `http_body::Body` and is a simple wrapper around `web_sys::ReadableStream`.
- The `req` argument when using the `[event(fetch)]` macro becomes `http::Request<worker::Body>`.
- The expected return type for the fetch handler is `http::Response<B>` where `B` can be any `http_body::Body<Data=Bytes>`.
- The argument for `Fetcher::fetch_request` is `http::Request<worker::Body>`.
- The return type of `Fetcher::fetch_request` is `Result<http::Response<worker::Body>>`.
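Opting in is a one-line change to the `worker` dependency in `Cargo.toml`; a minimal sketch (the version number here is illustrative, use whatever version your project is on):

```toml
[dependencies]
worker = { version = "0.0.21", features = ["http"] }
```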
The end result is being able to use frameworks like `axum` directly (see example):

```rust
use axum::{routing::get, Router};
use tower_service::Service; // provides `call` on the axum Router
use worker::*;

pub async fn root() -> &'static str {
    "Hello Axum!"
}

fn router() -> Router {
    Router::new().route("/", get(root))
}

#[event(fetch)]
async fn fetch(
    req: HttpRequest,
    _env: Env,
    _ctx: Context,
) -> Result<http::Response<axum::body::Body>> {
    Ok(router().call(req).await?)
}
```
We also implement `try_from` between `worker::Request` and `http::Request<worker::Body>`, and between `worker::Response` and `http::Response<worker::Body>`. This allows you to convert your code incrementally if it is tightly coupled to the original types.
Parameterize routes and access the parameter values from within a handler. Each handler function takes a `Request` and a `RouteContext`. The `RouteContext` has shared data, route params, `Env` bindings, and more.

```rust
use serde::{Deserialize, Serialize};
use worker::*;

#[event(fetch)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    // Create an instance of the Router, which can use parameters (/user/:name) or wildcard values
    // (/file/*pathname). Alternatively, use `Router::with_data(D)` and pass in arbitrary data for
    // routes to access and share using the `ctx.data()` method.
    let router = Router::new();

    // useful for JSON APIs
    #[derive(Deserialize, Serialize)]
    struct Account {
        id: u64,
        // ...
    }

    router
        .get_async("/account/:id", |_req, ctx| async move {
            if let Some(id) = ctx.param("id") {
                let accounts = ctx.kv("ACCOUNTS")?;
                return match accounts.get(id).json::<Account>().await? {
                    Some(account) => Response::from_json(&account),
                    None => Response::error("Not found", 404),
                };
            }
            Response::error("Bad Request", 400)
        })
        // handle files and fields from multipart/form-data requests
        .post_async("/upload", |mut req, _ctx| async move {
            let form = req.form_data().await?;
            if let Some(entry) = form.get("file") {
                match entry {
                    FormEntry::File(file) => {
                        let bytes = file.bytes().await?;
                    }
                    FormEntry::Field(_) => return Response::error("Bad Request", 400),
                }
                // ...
                if let Some(permissions) = form.get("permissions") {
                    // permissions == "a,b,c,d"
                }
                // or call `form.get_all("permissions")` if using multiple entries per field
            }
            Response::error("Bad Request", 400)
        })
        // read/write binary data
        .post_async("/echo-bytes", |mut req, _ctx| async move {
            let data = req.bytes().await?;
            if data.len() < 1024 {
                return Response::error("Bad Request", 400);
            }
            Response::from_bytes(data)
        })
        .run(req, env)
        .await
}
```
All "bindings" to your script (Durable Object & KV namespaces, Secrets, Variables and Version) are accessible from the `env` parameter provided to both the entrypoint (`main` in this example), and to the route handler callback (in the `ctx` argument), if you use the `Router` from the `worker` crate.

```rust
use worker::*;

#[event(fetch, respond_with_errors)]
pub async fn main(req: Request, env: Env, _ctx: worker::Context) -> Result<Response> {
    utils::set_panic_hook();

    let router = Router::new();

    router
        .on_async("/durable", |_req, ctx| async move {
            let namespace = ctx.durable_object("CHATROOM")?;
            let stub = namespace.id_from_name("A")?.get_stub()?;
            // `fetch_with_str` requires a valid Url to make a request to the DO. But we can make one up!
            stub.fetch_with_str("http://fake_url.com/messages").await
        })
        .get("/secret", |_req, ctx| {
            Response::ok(ctx.secret("CF_API_TOKEN")?.to_string())
        })
        .get("/var", |_req, ctx| {
            Response::ok(ctx.var("BUILD_NUMBER")?.to_string())
        })
        .post_async("/kv", |_req, ctx| async move {
            let kv = ctx.kv("SOME_NAMESPACE")?;
            kv.put("key", "value")?.execute().await?;
            Response::empty()
        })
        .run(req, env)
        .await
}
```
For more information about how to configure these bindings, see:
- https://developers.cloudflare.com/workers/cli-wrangler/configuration#keys
- https://developers.cloudflare.com/workers/learning/using-durable-objects#configuring-durable-object-bindings
- https://developers.cloudflare.com/workers/runtime-apis/bindings/version-metadata/
To define a Durable Object using the `worker` crate you need to implement the `DurableObject` trait on your own struct. Additionally, the `#[durable_object]` attribute macro must be applied to the struct definition.

```rust
use worker::{durable_object, DurableObject, State, Env, Result, Request, Response};

#[durable_object]
pub struct Chatroom {
    users: Vec<User>,
    messages: Vec<Message>,
    state: State,
    env: Env, // access `Env` across requests, use inside `fetch`
}

impl DurableObject for Chatroom {
    fn new(state: State, env: Env) -> Self {
        Self {
            users: vec![],
            messages: vec![],
            state,
            env,
        }
    }

    async fn fetch(&self, _req: Request) -> Result<Response> {
        // do some work when a worker makes a request to this DO
        Response::ok(&format!("{} active users.", self.users.len()))
    }
}
```
You'll need to "migrate" your worker script when it's published so that it is aware of this new Durable Object, and include a binding in your `wrangler.toml`.

- Include the Durable Object binding type in your `wrangler.toml` file:

```toml
# ...

[durable_objects]
bindings = [
  { name = "CHATROOM", class_name = "Chatroom" } # the `class_name` uses the Rust struct identifier name
]

[[migrations]]
tag = "v1" # Should be unique for each entry
new_classes = ["Chatroom"] # Array of new classes
```
Durable Objects can use SQLite for persistent storage, providing a relational database interface. To enable SQLite storage, you need to use `new_sqlite_classes` in your migration and access the SQL storage through `state.storage().sql()`.

```rust
use worker::{durable_object, DurableObject, State, Env, Result, Request, Response, SqlStorage};

#[durable_object]
pub struct SqlCounter {
    sql: SqlStorage,
}

impl DurableObject for SqlCounter {
    fn new(state: State, _env: Env) -> Self {
        let sql = state.storage().sql();
        // Create table if it does not exist
        sql.exec("CREATE TABLE IF NOT EXISTS counter(value INTEGER);", None)
            .expect("create table");
        Self { sql }
    }

    async fn fetch(&self, _req: Request) -> Result<Response> {
        #[derive(serde::Deserialize)]
        struct Row {
            value: i32,
        }

        // Read current value
        let rows: Vec<Row> = self
            .sql
            .exec("SELECT value FROM counter LIMIT 1;", None)?
            .to_array()?;
        let current = rows.get(0).map(|r| r.value).unwrap_or(0);
        let next = current + 1;

        // Update counter
        self.sql.exec("DELETE FROM counter;", None)?;
        self.sql
            .exec("INSERT INTO counter(value) VALUES (?);", vec![next.into()])?;

        Response::ok(format!("SQL counter is now {}", next))
    }
}
```
Configure your `wrangler.toml` to enable SQLite storage:

```toml
# ...

[durable_objects]
bindings = [
  { name = "SQL_COUNTER", class_name = "SqlCounter" }
]

[[migrations]]
tag = "v1" # Should be unique for each entry
new_sqlite_classes = ["SqlCounter"] # Use new_sqlite_classes for SQLite-enabled objects
```
- For more information about migrating your Durable Object as it changes, see the docs here: https://developers.cloudflare.com/workers/learning/using-durable-objects#durable-object-migrations-in-wranglertoml
As queues are in beta, you need to enable the `queue` feature flag.

Enable it by adding it to the `worker` dependency in your `Cargo.toml`:

```toml
worker = { version = "...", features = ["queue"] }
```

```rust
use worker::*;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Debug, Clone, Deserialize)]
pub struct MyType {
    foo: String,
    bar: u32,
}

// Consume messages from a queue
#[event(queue)]
pub async fn main(message_batch: MessageBatch<MyType>, env: Env, _ctx: Context) -> Result<()> {
    // Get a queue with the binding 'my_queue'
    let my_queue = env.queue("my_queue")?;

    // Deserialize the message batch
    let messages = message_batch.messages()?;

    // Loop through the messages
    for message in messages {
        // Log the message and metadata
        console_log!(
            "Got message {:?}, with id {} and timestamp: {}",
            message.body(),
            message.id(),
            message.timestamp().to_string()
        );

        // Send the message body to the other queue
        my_queue.send(message.body()).await?;

        // Ack individual message
        message.ack();

        // Retry individual message
        message.retry();
    }

    // Retry all messages
    message_batch.retry_all();

    // Ack all messages
    message_batch.ack_all();

    Ok(())
}
```
You'll need to ensure you have the correct bindings in your `wrangler.toml`:

```toml
# ...

[[queues.consumers]]
queue = "myqueue"
max_batch_size = 10
max_batch_timeout = 30

[[queues.producers]]
queue = "otherqueue"
binding = "my_queue"
```
workers-rs has experimental support for Workers RPC. For now, this relies on JavaScript bindings and may require some manual usage of `wasm-bindgen`.
Not all features of RPC are supported yet (or have not been tested), including:
- Function arguments and return values
- Class instances
- Stub forwarding
Writing an RPC server with `workers-rs` is relatively simple. Simply export methods using `wasm-bindgen`. These will be automatically detected by `worker-build` and made available to other Workers. See example.

Creating types and bindings for invoking another Worker's RPC methods is a bit more involved. You will need to write more complex `wasm-bindgen` bindings and some boilerplate to make interacting with the RPC methods more idiomatic. See example.

With manually written bindings, it should be possible to support non-primitive argument and return types using `serde-wasm-bindgen`.
There are many routes that can be taken to describe RPC interfaces. Under the hood, Workers RPC uses Cap'n Proto. A possible future direction is for Wasm guests to include Cap'n Proto serde support and speak directly to the RPC protocol, bypassing JavaScript. This would likely involve defining the RPC interface in Cap'n Proto schema and generating Rust code from that.

Another popular interface schema in the WebAssembly community is WIT. This is a lightweight format designed for the WebAssembly Component model. workers-rs includes an experimental code generator which allows you to describe your RPC interface using WIT and generate JavaScript bindings, as shown in the rpc-client example. The easiest way to use this code generator is via a build script, as shown in the example. This code generator is pre-alpha, with no support guarantee, and implemented only for primitive types at this time.
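As a sketch of what such an interface description might look like, here is a hypothetical WIT fragment limited to primitive types (the interface and function names are illustrative, not taken from the rpc-client example):

```wit
// Hypothetical RPC surface described in WIT; only primitive types,
// matching the code generator's current limitations.
interface calculator {
    add: func(a: u32, b: u32) -> u32;
    mul: func(a: u32, b: u32) -> u32;
}
```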
To test your Rust worker locally, the best approach is to use Miniflare. However, because Miniflare is a Node package, you will need to write your end-to-end tests in JavaScript or TypeScript in your project. The official documentation for writing tests using Miniflare is available here. Because that documentation focuses on JavaScript / TypeScript codebases, you will need the following configuration to make it work with your Rust-based, Wasm-generated worker:

```sh
npm install --save-dev wrangler miniflare
```
Make sure that your worker is built before running your tests by calling the following in your build chain:

```sh
wrangler deploy --dry-run
```

By default, this should build your worker under the `./build/` directory at the root of your project.
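One way to guarantee the build step runs before the tests is npm's `pre` hook convention in `package.json` (the script names and test file path here are illustrative):

```json
{
  "scripts": {
    "pretest": "wrangler deploy --dry-run",
    "test": "node test.mjs"
  }
}
```

With this wiring, `npm test` always rebuilds the worker before dispatching requests at it.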
To instantiate the Miniflare testing instance in your tests, make sure to configure its `scriptPath` option to the relative path of where your JavaScript worker entrypoint was generated, and its `modulesRules` so that it is able to resolve the `*.wasm` file imported from that JavaScript worker:

```js
// test.mjs
import assert from "node:assert";
import { Miniflare } from "miniflare";

const mf = new Miniflare({
  scriptPath: "./build/worker/shim.mjs",
  modules: true,
  modulesRules: [
    { type: "CompiledWasm", include: ["**/*.wasm"], fallthrough: true },
  ],
});

const res = await mf.dispatchFetch("http://localhost");
assert(res.ok);
assert.strictEqual(await res.text(), "Hello, World!");
```
As D1 databases are in alpha, you'll need to enable the `d1` feature on the `worker` crate.

```toml
worker = { version = "x.y.z", features = ["d1"] }
```

```rust
use serde::Deserialize;
use worker::*;

#[derive(Deserialize)]
struct Thing {
    thing_id: String,
    desc: String,
    num: u32,
}

#[event(fetch, respond_with_errors)]
pub async fn main(request: Request, env: Env, _ctx: Context) -> Result<Response> {
    Router::new()
        .get_async("/:id", |_, ctx| async move {
            let id = ctx.param("id").unwrap();
            let d1 = ctx.env.d1("things-db")?;
            let statement = d1.prepare("SELECT * FROM things WHERE thing_id = ?1");
            let query = statement.bind(&[id.into()])?;
            let result = query.first::<Thing>(None).await?;
            match result {
                Some(thing) => Response::from_json(&thing),
                None => Response::error("Not found", 404),
            }
        })
        .run(request, env)
        .await
}
```
It is exciting to see how much is possible with a framework like this, by expanding the options developers have when building on top of the Workers platform. However, there is still much to be done. Expect a few rough edges, some unimplemented APIs, and maybe a bug or two here and there. It's worth calling out that some things that may have worked in your Rust code might not work here - it's all WebAssembly at the end of the day, and if your code or third-party libraries don't target `wasm32-unknown-unknown`, they can't be used on Workers. Additionally, you've got to leave your threaded async runtimes at home, meaning no Tokio or async_std support. However, async/await syntax is still available and supported out of the box when you use the `worker` crate.
We fully intend to support this crate and continue to build out its missing features, but your help and feedback are a must. We don't like to build in a vacuum, and we're in an incredibly fortunate position to have brilliant customers like you who can help steer us towards an even better product.

So give it a try, leave some feedback, and star the repo to encourage us to dedicate more time and resources to this kind of project.

If this is interesting to you and you want to help out, we'd be happy to get outside contributors started. We know there are improvements to be made, such as compatibility with popular Rust HTTP ecosystem types (we have an example conversion for `Headers` if you want to make one), implementing additional Web APIs, utility crates, and more. In fact, we're always on the lookout for great engineers, and hiring for many open roles - please take a look.
- Can I deploy a Worker that uses `tokio` or `async_std` runtimes?
  - Currently no. All crates in your Worker project must compile to the `wasm32-unknown-unknown` target, which is more limited in some ways than targets for x86 and ARM64. However, you should still be able to use runtime-agnostic primitives from those crates, such as those from `tokio::sync`.
- The `worker` crate doesn't have X! Why not?
  - Most likely, it should; we just haven't had the time to fully implement it or add a library to wrap the FFI. Please let us know you need a feature by opening an issue.
- My bundle size exceeds Workers size limits, what do I do?
  - We're working on solutions here, but in the meantime you'll need to minimize the number of crates your code depends on, or strip as much from the `.wasm` binary as possible. Here are some extra steps you can try: https://rustwasm.github.io/book/reference/code-size.html#optimizing-builds-for-code-size
- Upgrading the worker package to version `0.0.18` and higher
  - While upgrading your worker to version `0.0.18`, the error `error[E0432]: unresolved import crate::sys::IoSourceState` can appear. In this case, upgrade `package.edition` to `edition = "2021"` in `wrangler.toml`:

```toml
[package]
edition = "2021"
```
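On the bundle-size question above, one common starting point (following the general guidance in the rust-wasm book linked there; the exact settings are a suggestion to tune per project, not a `workers-rs` requirement) is a size-oriented release profile in `Cargo.toml`:

```toml
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # link-time optimization across crates
codegen-units = 1 # better optimization at the cost of compile time
strip = true      # strip symbols from the binary
```

Running `wasm-opt -Oz` over the resulting `.wasm` (as the linked guide describes) can shrink it further.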
- Trigger a workflow to create a release PR.
- Review version changes and merge PR.
- A draft GitHub release will be created. Author release notes and publish when ready.
- Crates (`worker-sys`, `worker-macros`, `worker`) will be published automatically.
Your feedback is welcome and appreciated! Please use the issue tracker to talk about potential implementations or make feature requests. If you're interested in making a PR, we suggest opening up an issue to talk about the change you'd like to make as early as possible.
- worker: the user-facing crate, with Rust-familiar abstractions over the Rust<->JS/WebAssembly interop via wrappers and a convenience library over the FFI bindings.
- worker-sys: Rust `extern "C"` definitions for FFI compatibility with the Workers JS Runtime.
- worker-macros: exports the `event` and `durable_object` macros for wrapping the Rust entry point in a `fetch` method of an ES Module, and code generation to create and interact with Durable Objects.
- worker-sandbox: a functioning Cloudflare Worker for testing features and ergonomics.
- worker-build: a cross-platform build command for `workers-rs`-based projects.