
GraphQL Response Caching with Envelop

This article was published on Thursday, August 19, 2021 by Laurin Quast @ The Guild Blog

A Brief Introduction to Caching

Huge GraphQL query operations can slow down your server, as deeply nested selection sets can cause a
lot of subsequent database reads or calls to other remote services. Tools like DataLoader can
reduce the amount of concurrent and subsequent requests via batching and caching during the
execution of a single GraphQL operation. Features like @defer and @stream can help with
streaming slow-to-retrieve result partials to the clients progressively. However, for subsequent
requests we hit the same bottleneck over and over again.

What if we don't need to go through the execution phase at all for subsequent requests that execute
the same query operation with the same variables?

A common practice for reducing slow requests is to leverage caching. There are many types of caching
available. E.g. we could cache whole HTTP responses based on the POST body of the request, or use an
in-memory cache within our GraphQL field resolver business logic in order to hit slow services less
frequently.

Having a cache comes with the drawback of requiring some kind of cache invalidation mechanism.
Expiring the cache via a TTL (time to live) is a widespread practice, but can result in hitting the
cache too often or too rarely. Another popular strategy is to incorporate cache invalidation logic
into the business logic; such logic can quickly become verbose and hard to maintain.
Other systems might use database write log observers for invalidating entities based on updated
database rows.

In a strict REST API environment, caching entities is significantly easier, as each endpoint
represents one resource: a GET request can be cached, and a PATCH request can be used to
automatically invalidate the cache for the corresponding GET request, which is identified via the
HTTP path (/api/user/12).

With GraphQL such things become much harder. First, we usually only have a single
HTTP endpoint /graphql that only accepts POST requests. A query operation execution result can
contain many different types of entities; thus, we need different strategies for caching GraphQL
APIs.

SaaS services like FastQL and GraphCDN started popping up, providing proxies for your existing GraphQL
API that magically add response-based caching. But how does this even work?

How Does GraphQL Response Caching Work?

Caching Query Operations

In order to cache a GraphQL execution result (response) we need to build an identifier based on the
input that can be used to identify whether a response can be served from the cache or must be
executed and then stored within the cache.

Example: GraphQL Query Operation

```graphql
query UserProfileQuery($id: ID!) {
  user(id: $id) {
    __typename
    id
    login
    repositories
    friends(first: 2) {
      __typename
      id
      login
    }
  }
}
```

Example: GraphQL Variables

{"id":"1"}
Enter fullscreen modeExit fullscreen mode

Usually those inputs are the Query operation document and the variables for such an operation
document.

Thus, a response cache can store the execution result under a cache key that is built from those
inputs:

```
OperationCacheKey (e.g. SHA1) = hash(GraphQLOperationString, Stringify(GraphQLVariables))
```

Under some circumstances it is also required to cache based on the request initiator. E.g. a user
requesting their profile should not receive the cached profile of another user. In such a scenario,
building the operation cache key should also include a part that uniquely identifies the
requestor. This could be a user ID extracted from an authorization token.

```
OperationCacheKey (e.g. SHA1) = hash(GraphQLOperationString, Stringify(GraphQLVariables), RequestorId)
```
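
A minimal sketch of how such a cache key could be built with Node.js crypto. The function name `buildOperationCacheKey` and its argument shape are illustrative, not the plugin's internal API:

```ts
import { createHash } from 'crypto'

// Illustrative helper: hash the operation document, the stringified variables
// and (optionally) an identifier of the requestor into a single cache key.
function buildOperationCacheKey(
  operation: string,
  variables: Record<string, unknown> = {},
  requestorId?: string
): string {
  return createHash('sha1')
    .update(operation)
    .update(JSON.stringify(variables))
    .update(requestorId ?? '')
    .digest('hex')
}

// e.g. buildOperationCacheKey(userProfileQuery, { id: '1' }, 'user-1')
```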

This allows us to identify recurring operations with the same variables and serve them from the cache
for subsequent requests. If we can serve a response from the cache, we don't need to parse the
GraphQL operation document and can furthermore skip the expensive execution phase. That results
in significant speed improvements.

But in order to make our cache smart we still need a suitable cache invalidation mechanism.

Invalidating Cached GraphQL Query Operations

Let's take a look at a possible execution result for the GraphQL operation.

Example: GraphQL Execution Result

{"data":{"user":{"__typename":"User","id":"1","login":"dotan","repositories":["codegen"],"friends":[{"__typename":"User","id":"2","login":"urigo"},{"__typename":"User","id":"3","login":"n1ru4l"}]}}}
Enter fullscreen modeExit fullscreen mode

Many frontend frameworks cache GraphQL operation results in a normalized cache. The identifier for
storing the single entities of a GraphQL operation result within the cache is usually the id field
of object types for schemas that use globally unique IDs, or a compound of the __typename and id
fields for schemas that use non-global ID fields.

Example: Normalized GraphQL Client Cache

{"User:1":{"__typename":"User","id":"1","login":"dotan","repositories":["codegen"],"friends":["$$ref:User:2","$$ref:User:3"]},"User:2":{"__typename":"User","id":"2","login":"urigo"},"User:3":{"__typename":"User","id":"3","login":"n1ru4l"}}
Enter fullscreen modeExit fullscreen mode

Interestingly, the same strategy for constructing cache keys on the client can also be used on the
backend for tracking which GraphQL operations contain which entities. That allows invalidating
GraphQL query operation results based on entity IDs.

For the execution result above, the entity IDs that could be used for invalidating the operation are
the following: User:1, User:2 and User:3.

We also keep a register that maps entities to the operation cache keys that reference them.

| Entity | Operation cache keys that reference the entity |
| ------ | ---------------------------------------------- |
| User:1 | OperationCacheKey1, OperationCacheKey2, ...    |
| User:2 | OperationCacheKey2, OperationCacheKey3, ...    |
| User:3 | OperationCacheKey3, OperationCacheKey1, ...    |

This allows us to keep track of which GraphQL operations must be invalidated once a certain entity
becomes stale.
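
A minimal sketch of such a register kept in memory. The names `operationsByEntity`, `trackEntities` and `collectStaleOperations` are illustrative, not the plugin's internals:

```ts
// Maps an entity key like "User:1" to the cache keys of all operations
// whose results contain that entity.
const operationsByEntity = new Map<string, Set<string>>()

function trackEntities(operationCacheKey: string, entityKeys: string[]) {
  for (const entityKey of entityKeys) {
    let operations = operationsByEntity.get(entityKey)
    if (!operations) {
      operations = new Set()
      operationsByEntity.set(entityKey, operations)
    }
    operations.add(operationCacheKey)
  }
}

// Once an entity becomes stale, look up every operation that references it.
function collectStaleOperations(entityKey: string): string[] {
  return [...(operationsByEntity.get(entityKey) ?? [])]
}

// trackEntities('OperationCacheKey1', ['User:1', 'User:2', 'User:3'])
// collectStaleOperations('User:1') // ['OperationCacheKey1']
```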

The remaining question is, how can we track an entity becoming stale?

As mentioned before, listening to a database write log is a possible option, but the implementation
is very specific and differs based on the chosen database. Time to live is also an option, but
a very inaccurate one.

Another solution is to add invalidation logic to our GraphQL mutation resolvers. According to the
GraphQL specification, mutations are meant to modify our GraphQL graph.

A common pattern when sending mutations from clients is to select and return the affected/mutated
entities in the selection set.

For our example from above, the following mutation could add a new repository to the repositories
field of the user entity.

Example: GraphQL Mutation

```graphql
mutation RepositoryAddMutation($userId: ID, $repositoryName: String!) {
  repositoryAdd(userId: $userId, repositoryName: $repositoryName) {
    user {
      id
      repositories
    }
  }
}
```

Example: GraphQL Mutation Execution Result

{"data":{"repositoryAdd":{"user":{"id":"1","repositories":["codegen","envelop"]}}}}
Enter fullscreen modeExit fullscreen mode

Similar to how we build entity identifiers from the execution result of query operations for
identifying which entities are referenced in which operations, we can extract the entity identifiers
from the mutation operation result for invalidating the affected operations.

In this specific case, all operations that select User:1 should be invalidated.
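
A minimal sketch of how entity identifiers could be collected from a result object, assuming __typename and id are part of the selection. The function name `collectEntityRecords` is illustrative:

```ts
type EntityRecord = { typename: string; id: string }

// Illustrative helper: recursively walk a (query or mutation) result and
// collect every object that carries both a __typename and an id field.
function collectEntityRecords(value: unknown, records: EntityRecord[] = []): EntityRecord[] {
  if (Array.isArray(value)) {
    for (const item of value) collectEntityRecords(item, records)
  } else if (value && typeof value === 'object') {
    const node = value as Record<string, unknown>
    if (typeof node.__typename === 'string' && typeof node.id === 'string') {
      records.push({ typename: node.__typename, id: node.id })
    }
    for (const child of Object.values(node)) collectEntityRecords(child, records)
  }
  return records
}

// collectEntityRecords(executionResult.data)
// -> e.g. [{ typename: 'User', id: '1' }]
```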

Such an implementation assumes that all mutations select the affected entities by default and,
furthermore, that all modifications of the underlying entities go through the GraphQL gateway via
mutations. In a scenario where we have actors that are not GraphQL services, or services that operate
directly on the database, we can use this approach in a hybrid model with other methods such as
listening to database write logs.

Envelop Response Cache

The Envelop response cache plugin now provides primitives and a reference in-memory store
implementation for adopting such a cache, with all the features mentioned above, with any GraphQL
server.

The goal of the response cache plugin is to show how such mechanisms are implemented and,
furthermore, to give developers the building blocks for constructing their own global cache with
their cloud provider of choice.

Adding a response cache to an existing envelop GraphQL server setup is as easy as adding the plugin:

```ts
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache()
  ]
})
```

If you need to invalidate imperatively, you can do so by providing the cache instance to the plugin:

```ts
import { envelop } from '@envelop/core'
import { createInMemoryCache, useResponseCache } from '@envelop/response-cache'
import { emitter } from './event-emitter'

const cache = createInMemoryCache()

emitter.on('invalidate', entity => {
  cache.invalidate([
    {
      typename: entity.type,
      id: entity.id
    }
  ])
})

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache({ cache })
  ]
})
```

The caching behavior can be fully customized. A TTL can be provided globally, or more granularly per
type or per schema coordinate.

```ts
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache({
      // cache operations for 1 hour by default
      ttl: 60 * 1000 * 60,
      ttlPerType: {
        // cache operations containing the Stock object type for 500ms
        Stock: 500
      },
      ttlPerSchemaCoordinate: {
        // cache operations containing the Query.rocketCoordinates selection for 100ms
        'Query.rocketCoordinates': 100
      },
      // never cache responses that include a RefreshToken object type.
      ignoredTypes: ['RefreshToken']
    })
  ]
})
```

Need to cache based on the user? No problem.

```ts
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache({
      // context is the GraphQL context that would be used for execution
      session: context => (context.user ? String(context.user.id) : null),
      // never serve cache for admin users
      enabled: context => (context.user ? isAdmin(context.user) === false : true)
    })
  ]
})
```

Don't want to automatically invalidate based on mutations? Also, configurable!

```ts
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'

const getEnveloped = envelop({
  plugins: [
    // ... other plugins ...
    useResponseCache({
      // some might prefer invalidating only based on a database write log
      invalidateViaMutation: false
    })
  ]
})
```

Want a global cache on Redis? Build a cache that implements the Cache interface and share it with
the community!

```ts
export type Cache = {
  /** set a cache response */
  set(
    /** id/hash of the operation */
    id: string,
    /** the result that should be cached */
    data: ExecutionResult,
    /** array of entity records that were collected during execution */
    entities: Iterable<CacheEntityRecord>,
    /** how long the operation should be cached */
    ttl: number
  ): PromiseOrValue<void>
  /** get a cached response */
  get(id: string): PromiseOrValue<Maybe<ExecutionResult>>
  /** invalidate operations via typename or id */
  invalidate(entities: Iterable<CacheEntityRecord>): PromiseOrValue<void>
}
```
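
As a rough illustration, here is a minimal sketch of what a Redis-backed implementation of that interface could look like, using ioredis. The key naming (`op:`, `entity:`) and the `CacheEntityRecord` shape are assumptions for this example, not the official Redis cache package:

```ts
import Redis from 'ioredis'
import type { ExecutionResult } from 'graphql'

// Assumed shape, mirroring the Cache type above.
type CacheEntityRecord = { typename: string; id?: number | string }

const redis = new Redis()

const entityKey = (entity: CacheEntityRecord) =>
  entity.id === undefined ? entity.typename : `${entity.typename}:${entity.id}`

export const redisCache = {
  async set(
    id: string,
    data: ExecutionResult,
    entities: Iterable<CacheEntityRecord>,
    ttl: number
  ) {
    // store the execution result (with a TTL when one is finite) ...
    if (Number.isFinite(ttl)) {
      await redis.set(`op:${id}`, JSON.stringify(data), 'PX', ttl)
    } else {
      await redis.set(`op:${id}`, JSON.stringify(data))
    }
    // ... and remember which entities reference this operation
    for (const entity of entities) {
      await redis.sadd(`entity:${entityKey(entity)}`, id)
    }
  },
  async get(id: string) {
    const cached = await redis.get(`op:${id}`)
    return cached ? (JSON.parse(cached) as ExecutionResult) : undefined
  },
  async invalidate(entities: Iterable<CacheEntityRecord>) {
    for (const entity of entities) {
      // delete every cached operation that references the entity
      const operationIds = await redis.smembers(`entity:${entityKey(entity)}`)
      if (operationIds.length > 0) {
        await redis.del(...operationIds.map(opId => `op:${opId}`))
      }
      await redis.del(`entity:${entityKey(entity)}`)
    }
  }
}
```

Such a cache could then be passed to the plugin just like the in-memory one: `useResponseCache({ cache: redisCache })`.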

More information about all possible configuration options can be found on
the response cache docs on the Plugin Hub.

What Is Next?

We are excited to explore new directions and make enterprise solutions accessible for all kinds of
developers.

What if the response cache could be used as a proxy on edge cloud functions distributed around the
world, which would allow using envelop as an HTTP proxy in front of your existing GraphQL server?
This is something we would love to explore further (or even see contributions and projects from
other open-source developers).

We also want to make other practices, such as rate limiting based on operation cost calculation as
used by huge corporations like Shopify, available as envelop plugins.

Do you have any ideas, want to contribute, or want to report issues? Start a GitHub discussion/issue
or contact us via the chat!
