
After dealing with Rust internals for a school project, I decided to give it a try in application land, more specifically in web development. Framework benchmarks looked extremely promising and I had a small backend that could benefit from a little speedup. It was written in Elixir, which I believe is the best language for the web, but nothing can beat native code, right? The backend I'm about to build will contain a web server, a GraphQL query processor, and an ORM for PostgreSQL. After a bit of research I chose actix-web, juniper and diesel respectively. Bear in mind that this is a learn-with-me kind of article, so it's not gonna present any best practices, as I have no idea what practices are best yet :)
Getting started
In order to proceed, you will need to install the Rust toolchain, and then creating an empty Rust project is as easy as:
cargo new rust_backend
Now we need to include actix-rt and actix-web in Cargo.toml.
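The manifest itself isn't shown in the original, but the dependency section would look roughly like this (the version numbers are illustrative, yours may differ):

```toml
[dependencies]
actix-rt = "1.0"
actix-web = "2.0"
```

And here is our hello world example: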
```rust
use actix_web::{web, App, HttpServer, Responder};

async fn hello() -> impl Responder {
    "Hello world!"
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().route("/", web::get().to(hello))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
```
A couple of interesting things are happening here. The HttpServer constructor accepts a factory, a closure that returns a new instance of the App every time it's called, because Actix spawns a new App for every thread. The handler doesn't have any arguments in this case, but normally there will be URL parameters, a payload, etc. Now you can run it:
cargo run
And using Apache Benchmark, make sure that it handles a massive amount of requests in parallel (with an effective rate of ~0.08 ms per request on my machine); a sample invocation is sketched below. A good surprise here is that Cowboy shows very similar levels of performance and parallelism.
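The exact flags behind that 0.08 ms figure aren't recorded in the original, but a quick smoke test along these lines is enough to see the concurrency in action:

```
ab -c 100 -n 10000 http://127.0.0.1:8080/
```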
GraphQL
Now it's time to start integrating GraphQL. Adding juniper to deps, and here's the simple schema:
```rust
use juniper::{GraphQLObject, FieldResult, RootNode};

#[derive(GraphQLObject)]
#[graphql(description = "An artist")]
struct Artist {
    id: String,
    name: String,
}

pub struct QueryRoot;

#[juniper::object]
impl QueryRoot {
    fn artists() -> FieldResult<Vec<Artist>> {
        Ok(vec![Artist {
            id: "1".to_string(),
            name: "Ripe".to_string(),
        }])
    }
}

pub struct MutationRoot;

#[juniper::object]
impl MutationRoot {}

pub type Schema = RootNode<'static, QueryRoot, MutationRoot>;

pub fn create() -> Schema {
    Schema::new(QueryRoot {}, MutationRoot {})
}
```
What we did here is define an Artist object and a single root query, artists, that always returns a singleton collection containing one artist, Ripe. Before we continue, let's take a break and listen to them! :)
Everything is pretty straightforward so far, if you're already familiar with GraphQL. If not, check this beautiful manual out.
One interesting point is that Juniper defines fields as non_null by default. In our case that means [Artist!]!, a non-null collection of non-null artists. Other GraphQL backends treat all fields and collection elements as optional by default, and it's a known source of pain for frontenders using TypeScript. With all my love for monadic types, the Maybe that Apollo generates for this case is absolutely useless. In other words, this is a very reasonable default that everyone should adopt.
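To make that concrete, this is roughly the SDL that the schema above introspects to (what you would see in the playground's schema tab), given Juniper's non-null defaults:

```graphql
type Artist {
  id: String!
  name: String!
}

type QueryRoot {
  artists: [Artist!]!
}
```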
Now we need to teach the web server to process GraphQL queries. The updated main.rs will look like this:
```rust
mod schema;

use actix_web::{App, HttpServer, HttpResponse};
use actix_web::web::{Data, Json, post, get, resource};
use juniper::http::{graphiql, GraphQLRequest};

async fn graphiql() -> HttpResponse {
    HttpResponse::Ok()
        .content_type("text/html; charset=utf-8")
        .body(graphiql::graphiql_source("/api"))
}

async fn api(scm: Data<schema::Schema>, req: Json<GraphQLRequest>) -> HttpResponse {
    let res = req.execute(&scm, &());
    let json = serde_json::to_string(&res).unwrap();
    HttpResponse::Ok()
        .content_type("application/json")
        .body(json)
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let factory = || {
        App::new()
            .data(schema::create())
            .service(resource("/api").route(post().to(api)))
            .service(resource("/graphiql").route(get().to(graphiql)))
    };
    HttpServer::new(factory)
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
```
Now we got rid of the dummy handler and added two new ones related to GraphQL instead. One is graphiql, a playground for your GraphQL endpoint; it just renders a single-page app that lets you play with the actual graphql backend: run some queries, introspect the schema, etc. The second handler, api, is where the magic happens. We get two arguments there: a schema instance and a json-decoded request payload (decoded automatically, thanks to actix). I want to keep it dead simple for now, so we ignore potential error conditions and the blocking nature of the graphql executor. So we are doing these simple steps:
- Execute the graphql request
- Encode the result as json
- Send the response to the client
Now it's a good time to run the app and visit the /graphiql route, where we can already ask the api for artists:
```graphql
{
  artists {
    id
    name
  }
}
```
This should return that one artist we hardcoded earlier.
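The exact output isn't reproduced in the original, but given the schema it should be something along the lines of:

```json
{
  "data": {
    "artists": [
      { "id": "1", "name": "Ripe" }
    ]
  }
}
```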
Hooking up the DB
So far, so good, but now we want to fetch the records dynamically from the database. From now on, changes are gonna pile up much faster, so bear with me.
First we define the schema. Normally it's generated automatically from migrations, but I want to use (part of) an existing database, so I just create it manually (schema.rs):
```rust
table! {
    artists (id) {
        id -> Integer,
        name -> Text,
    }
}
```
One caveat in the case of my existing DB was that the id field was defined as BigSerial, which would require defining it as a BigInt, while the juniper maintainers dropped default support for the i64 type for some reason.
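For illustration, this is roughly what the mapping would have looked like had the column stayed BigSerial; a sketch of the problematic variant, not the schema actually used here:

```rust
table! {
    artists (id) {
        // BIGSERIAL maps to diesel's BigInt, i.e. i64 on the Rust side,
        // which juniper (at this version) no longer exposes as a built-in scalar
        id -> BigInt,
        name -> Text,
    }
}
```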
Now we modify the GraphQL object definition and move it to its own module (models.rs):
```rust
#[derive(Queryable, juniper::GraphQLObject)]
#[graphql(description = "An artist")]
pub struct Artist {
    pub id: i32,
    pub name: String,
}
```
This is very close to what we had before, we just extended it with the Queryable derivation. Now we need to create a connection pool and inject it into the app. This is how the updated main function will look:
```rust
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let db_url = "postgresql://USER:PASS@localhost:5432/DB_NAME";
    let manager = ConnectionManager::<PgConnection>::new(db_url);
    let pool = Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    let factory = move || {
        App::new()
            .data(graphql::create_schema())
            .data(pool.clone())
            .service(resource("/api").route(post().to(api)))
            .service(resource("/graphiql").route(get().to(graphiql)))
    };
    HttpServer::new(factory)
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
```
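This assumes diesel has been added to Cargo.toml with its postgres and r2d2 features enabled, and that the pool types are brought into scope roughly like this (a sketch, module paths may differ in the actual project):

```rust
use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};
```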
In order to consume this pool we will pass it into the GraphQL executor as a Context. In reality we might want a more sophisticated Context structure (for example to pass individual dataloaders there), but for now we just assume that the DbPool type implements juniper::Context. This is how it's done:
```rust
pub type DbPool = Pool<ConnectionManager<PgConnection>>;

#[juniper::object(Context = DbPool)]
impl QueryRoot {
    // ...
}

// And the same for MutationRoot
```
Checking out a connection from the pool is a blocking operation, just like a request to the DB itself. It is beneficial to offload this work from the main thread, which is done like this in the main graphql handler:
```rust
async fn api(
    scm: Data<graphql::Schema>,
    pool: Data<Pool<ConnectionManager<PgConnection>>>,
    req: Json<GraphQLRequest>,
) -> Result<HttpResponse, Error> {
    let json = web::block(move || {
        let res = req.execute(&scm, &pool);
        serde_json::to_string(&res)
    })
    .await?;
    Ok(HttpResponse::Ok()
        .content_type("application/json")
        .body(json))
}
```
Finally, the last step is to replace that hardcoded collection of artists with an actual database request:
```rust
fn artists(pool: &DbPool) -> FieldResult<Vec<Artist>> {
    let conn = pool.get().map_err(|_| {
        FieldError::new("Could not open connection to the database", Value::null())
    })?;
    artists
        .limit(1000)
        .load::<Artist>(&conn)
        .map_err(|_| FieldError::new("Error loading artists", Value::null()))
}
```
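For this resolver to compile, the diesel query DSL for the artists table and the juniper error types need to be in scope; something along these lines at the top of the module (a sketch, the exact paths depend on how the project's modules are laid out):

```rust
use diesel::prelude::*;                 // brings limit() and load() into scope
use juniper::{FieldError, FieldResult, Value};
use crate::schema::artists::dsl::*;     // the `artists` table DSL used in the query
```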
Assessing the performance
In order to assess the performance, we're gonna hit the endpoint using Apache Benchmark with the following payload (payload.txt):
{"query": "{artists{id name}}"}
and with the following settings:
ab -p payload.txt -T "application/json" -c 100 -n 500 http://127.0.0.1:8080/api
And this is what we get as the result:
```
Finished 200 requests

Server Software:
Server Hostname:        127.0.0.1
Server Port:            8080

Document Path:          /api
Document Length:        34245 bytes

Concurrency Level:      50
Time taken for tests:   0.174 seconds
Complete requests:      200
Failed requests:        0
Total transferred:      6871200 bytes
Total body sent:        34000
HTML transferred:       6849000 bytes
Requests per second:    1149.91 [#/sec] (mean)
Time per request:       43.482 [ms] (mean)
Time per request:       0.870 [ms] (mean, across all concurrent requests)
Transfer rate:          38580.30 [Kbytes/sec] received
                        190.90 kb/s sent
                        38771.21 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.5      0       2
Processing:     6   37  10.1     40      57
Waiting:        3   37  10.2     40      57
Total:          6   38   9.7     40      58

Percentage of the requests served within a certain time (ms)
  50%     40
  66%     41
  75%     43
  80%     43
  90%     46
  95%     49
  98%     52
  99%     53
 100%     58 (longest request)
```
This is pretty impressive! Unfortunately I don't have a similar small app written in Elixir, so I'm going to take my original app, which does much more than this little Rust exercise, and just comment out all the extra queries in Absinthe. This app runs on Phoenix; I didn't bother stripping out any plugs and middleware that are not related to the GraphQL endpoint. The results are a bit less impressive, but expected:
```
Finished 200 requests

Server Software:        Cowboy
Server Hostname:        127.0.0.1
Server Port:            4000

Document Path:          /api
Document Length:        36245 bytes

Concurrency Level:      50
Time taken for tests:   1.384 seconds
Complete requests:      200
Failed requests:        0
Total transferred:      7298800 bytes
Total body sent:        34000
HTML transferred:       7249000 bytes
Requests per second:    144.50 [#/sec] (mean)
Time per request:       346.013 [ms] (mean)
Time per request:       6.920 [ms] (mean, across all concurrent requests)
Transfer rate:          5149.90 [Kbytes/sec] received
                        23.99 kb/s sent
                        5173.89 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.6      0       2
Processing:    39  310 117.2    307     690
Waiting:       36  310 117.2    307     690
Total:         40  311 117.1    308     690
WARNING: The median and mean for the initial connection time are not within a normal deviation
         These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%    308
  66%    367
  75%    410
  80%    422
  90%    464
  95%    499
  98%    532
  99%    545
 100%    690 (longest request)
```
There is a chance that I messed up the test conditions here; otherwise Rust appears to be significantly faster. I might dig into researching why the hello world app shows a much smaller performance gap; it could be either Absinthe or Ecto dragging it back. However, in my opinion development with Elixir is so much more pleasant that this could potentially compensate for the performance disparity.
Todos:
- Use dataloader when processing relations to avoid N+1 queries
- Use dotenv for a proper database config