Sangria is a Scala GraphQL implementation.
Here is how you can add it to your SBT project:
```scala
libraryDependencies += "org.sangria-graphql" %% "sangria" % "4.0.0"
```
You can find an example application that uses akka-http with sangria here:
https://github.com/sangria-graphql/sangria-akka-http-example
I would also recommend that you check out https://github.com/sangria-graphql/sangria-playground. It is an example of a GraphQL server written with the Play framework and Sangria. It also serves as a playground, where you can interactively execute GraphQL queries and play with some examples.
Apollo Client is a full-featured, simple-to-use GraphQL client with convenient integrations for popular view layers. Apollo Client is an easy way to get started with Sangria, as the two are 100% compatible.
If you want to use sangria with the react-relay framework, then you also need to include sangria-relay:
```scala
libraryDependencies += "org.sangria-graphql" %% "sangria-relay" % "2.0.0"
```
Sangria-relay Playground (https://github.com/sangria-graphql/sangria-relay-playground) is a nice place to start if you would like to see it in action.
I would also recommend that you check out the “Videos” section of the community page. It has a lot of nice introduction videos.
Example usage:
```scala
import sangria.ast.Document
import sangria.parser.QueryParser
import sangria.renderer.QueryRenderer

import scala.util.{Failure, Success}

val query =
  """
    query FetchLukeAndLeiaAliased(
          $someVar: Int = 1.23
          $anotherVar: Int = 123) @include(if: true) {
      luke: human(id: "1000") @include(if: true) {
        friends(sort: NAME)
      }

      leia: human(id: "10103\n \u00F6 ö") {
        name
      }

      ... on User {
        birth{day}
      }

      ...Foo
    }

    fragment Foo on User @foo(bar: 1) {
      baz
    }
  """

// Parse GraphQL query
QueryParser.parse(query) match {
  case Success(document) =>
    // Pretty rendering of the GraphQL query as a `String`
    println(document.renderPretty)
  case Failure(error) =>
    println(s"Syntax error: ${error.getMessage}")
}
```
Alternatively you can use the `graphql` macro, which will ensure that your query is syntactically correct at compile time:
```scala
import sangria.macros._

val queryAst: Document =
  graphql"""
    {
      name
      friends {
        id
        name
      }
    }
  """
```
You can also parse and render the GraphQL input values independently from a query document:
```scala
import sangria.renderer.QueryRenderer
import sangria.macros._
import sangria.ast

val parsed: ast.Value =
  graphqlInput"""
    {
      id: "1234345"
      version: 2 # changed 2 times
      deliveries: [
        {id: 123, received: false, note: null, state: OPEN}
      ]
    }
  """

println(parsed.renderPretty)
```
Here is an example of GraphQL schema DSL:
```scala
import sangria.schema._

val EpisodeEnum = EnumType(
  "Episode",
  Some("One of the films in the Star Wars Trilogy"),
  List(
    EnumValue("NEWHOPE",
      value = TestData.Episode.NEWHOPE,
      description = Some("Released in 1977.")),
    EnumValue("EMPIRE",
      value = TestData.Episode.EMPIRE,
      description = Some("Released in 1980.")),
    EnumValue("JEDI",
      value = TestData.Episode.JEDI,
      description = Some("Released in 1983."))))

val Character: InterfaceType[Unit, TestData.Character] =
  InterfaceType(
    "Character",
    "A character in the Star Wars Trilogy",
    () => fields[Unit, TestData.Character](
      Field("id", StringType,
        Some("The id of the character."),
        resolve = _.value.id),
      Field("name", OptionType(StringType),
        Some("The name of the character."),
        resolve = _.value.name),
      Field("friends", OptionType(ListType(OptionType(Character))),
        Some("The friends of the character, or an empty list if they have none."),
        resolve = ctx => DeferFriends(ctx.value.friends)),
      Field("appearsIn", OptionType(ListType(OptionType(EpisodeEnum))),
        Some("Which movies they appear in."),
        resolve = _.value.appearsIn map (e => Some(e)))))

val Human = ObjectType(
  "Human",
  "A humanoid creature in the Star Wars universe.",
  interfaces[Unit, Human](Character),
  fields[Unit, Human](
    Field("id", StringType,
      Some("The id of the human."),
      resolve = _.value.id),
    Field("name", OptionType(StringType),
      Some("The name of the human."),
      resolve = _.value.name),
    Field("friends", OptionType(ListType(OptionType(Character))),
      Some("The friends of the human, or an empty list if they have none."),
      resolve = ctx => DeferFriends(ctx.value.friends)),
    Field("appearsIn", OptionType(ListType(OptionType(EpisodeEnum))),
      Some("Which movies they appear in."),
      resolve = _.value.appearsIn map (e => Some(e))),
    Field("homePlanet", OptionType(StringType),
      Some("The home planet of the human, or null if unknown."),
      resolve = _.value.homePlanet)))

val Droid = ObjectType(
  "Droid",
  "A mechanical creature in the Star Wars universe.",
  interfaces[Unit, Droid](Character),
  fields[Unit, Droid](
    Field("id", StringType,
      Some("The id of the droid."),
      tags = ProjectionName("_id") :: Nil,
      resolve = _.value.id),
    Field("name", OptionType(StringType),
      Some("The name of the droid."),
      resolve = ctx => Future.successful(ctx.value.name)),
    Field("friends", OptionType(ListType(OptionType(Character))),
      Some("The friends of the droid, or an empty list if they have none."),
      resolve = ctx => DeferFriends(ctx.value.friends)),
    Field("appearsIn", OptionType(ListType(OptionType(EpisodeEnum))),
      Some("Which movies they appear in."),
      resolve = _.value.appearsIn map (e => Some(e))),
    Field("primaryFunction", OptionType(StringType),
      Some("The primary function of the droid."),
      resolve = _.value.primaryFunction)))

val ID = Argument("id", StringType, description = "id of the character")

val EpisodeArg = Argument("episode", OptionInputType(EpisodeEnum),
  description = "If omitted, returns the hero of the whole saga. If provided, returns the hero of that particular episode.")

val Query = ObjectType[CharacterRepo, Unit](
  "Query",
  fields[CharacterRepo, Unit](
    Field("hero", Character,
      arguments = EpisodeArg :: Nil,
      resolve = ctx => ctx.ctx.getHero(ctx.argOpt(EpisodeArg))),
    Field("human", OptionType(Human),
      arguments = ID :: Nil,
      resolve = ctx => ctx.ctx.getHuman(ctx arg ID)),
    Field("droid", Droid,
      arguments = ID :: Nil,
      resolve = Projector((ctx, f) => ctx.ctx.getDroid(ctx arg ID).get))))

val StarWarsSchema = Schema(Query)
```
The `resolve` argument of a `Field` expects a function of type `Context[Ctx, Val] => Action[Ctx, Res]`. As you can see, the result of the `resolve` is an `Action` type which can take different shapes. Here is the list of supported actions:

- `Value` - a simple value result. If you want to indicate an error, you need to throw an exception
- `TryValue` - a `scala.util.Try` result
- `FutureValue` - a `Future` result
- `PartialValue` - a partially successful result with a list of errors
- `PartialFutureValue` - a `Future` of a partially successful result
- `DeferredValue` - used to return a `Deferred` result (see the Deferred Values and Resolver section for more details)
- `DeferredFutureValue` - the same as `DeferredValue` but allows you to return a `Deferred` inside of a `Future`
- `UpdateCtx` - allows you to transform a `Ctx` object. The transformed context object would be available for nested sub-objects and subsequent sibling fields in case of mutation (since execution of mutation queries is strictly sequential). You can find an example of its usage in the Authentication and Authorisation section.

Normally the library is able to automatically infer the `Action` type, so that you don't need to specify it explicitly.
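To make this concrete, here is a small hedged sketch (the `Product` type and `ProductRepo` context are invented for illustration) showing how plain values, `Future`s, and `Try`s are implicitly lifted to the corresponding actions:

```scala
import scala.concurrent.Future
import scala.util.Try
import sangria.schema._

// Hypothetical value and context types, for illustration only
case class Product(id: String, price: BigDecimal)

class ProductRepo {
  def loadPrice(id: String): Future[BigDecimal] = Future.successful(BigDecimal(10))
}

val ProductType = ObjectType("Product", fields[ProductRepo, Product](
  // a plain value is implicitly lifted to a `Value` action
  Field("id", StringType, resolve = _.value.id),
  // a `Future` result becomes a `FutureValue` action
  Field("price", BigDecimalType, resolve = c => c.ctx.loadPrice(c.value.id)),
  // a `scala.util.Try` result becomes a `TryValue` action
  Field("idAsInt", IntType, resolve = c => Try(c.value.id.toInt))))
```

In all three fields the `Action` type is inferred from what the function returns; no explicit annotation is needed.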
Sangria also introduces the concept of projections. If you are fetching your data from the database (like, let's say, MongoDB), then it can be very helpful to know which fields are needed for the query ahead of time in order to make an efficient projection in the DB query.
`Projector` allows you to do precisely this. It wraps a `resolve` function and enhances it with the list of projected fields (limited by depth). The `ProjectionName` field tag allows you to customize projected field names (this is helpful if your DB field names are different from the GraphQL field names). The `ProjectionExclude` field tag, on the other hand, allows you to exclude a field from the list of projected field names.
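As a rough sketch (the `Item` type, `ItemRepo` repository, and its projection parameter are all hypothetical), a projecting field might look like this:

```scala
import sangria.schema._

// Hypothetical: a repository that can restrict the DB query to specific fields
case class Item(id: String, name: String)

class ItemRepo {
  def fetchItems(projectedFields: Vector[String]): List[Item] = Nil
}

val ItemType = ObjectType("Item", fields[ItemRepo, Item](
  // project the "id" GraphQL field as the "_id" DB field
  Field("id", StringType, tags = ProjectionName("_id") :: Nil, resolve = _.value.id),
  Field("name", StringType, resolve = _.value.name)))

val items: Field[ItemRepo, Unit] =
  Field("items", ListType(ItemType),
    // collect projected field names one level deep
    resolve = Projector(1, (ctx: Context[ItemRepo, Unit], projected) =>
      // `projected` holds the names requested by the query,
      // with `ProjectionName` renames applied (e.g. "id" -> "_id")
      ctx.ctx.fetchItems(projected.map(_.name))))
```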
Many schema elements, like `ObjectType`, `Field` or `Schema` itself, take two type parameters: `Ctx` and `Val`:
- `Val` - represents values that are returned by the `resolve` function and given to the `resolve` function as a part of the `Context`. In the schema example, `Val` can be a `Human`, `Droid`, `String`, etc.
- `Ctx` - represents some contextual object that flows across the whole execution (and doesn't change in most of the cases). It can be provided to the execution by the user in order to help fulfill the GraphQL query. A typical example of such a context object is a service or repository object that is able to access a database. In the example schema, some of the fields (like `droid` or `human`) make use of it in order to access the character repository.

After a schema is defined, the library tries to discover all of the supported GraphQL types by traversing the schema. Sometimes you have a situation where not all GraphQL types are explicitly reachable from the root of the schema. For instance, if the example schema had only the `hero` field in the `Query` type, then it would not be possible to automatically discover the `Human` and the `Droid` types, since only the `Character` interface type is referenced inside of the schema.
If you have a similar situation, then you need to provide additional types like this:
```scala
val HeroOnlyQuery = ObjectType[CharacterRepo, Unit](
  "HeroOnlyQuery",
  fields[CharacterRepo, Unit](
    Field("hero", TestSchema.Character,
      arguments = TestSchema.EpisodeArg :: Nil,
      resolve = ctx => ctx.ctx.getHero(ctx.argOpt(TestSchema.EpisodeArg)))))

val heroOnlySchema = Schema(HeroOnlyQuery,
  additionalTypes = TestSchema.Human :: TestSchema.Droid :: Nil)
```
Alternatively you can use `manualPossibleTypes` on the `Field` and `InterfaceType` to achieve the same effect.
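One way this can look (a hedged sketch, reusing the `Character`, `Human` and `Droid` types from the schema example above, and assuming the `withPossibleTypes` helper that fills in the manual possible types):

```scala
// Registering Human and Droid as possible types of the Character interface
// makes them discoverable even when only Character is reachable from the root
val CharacterWithImpls = Character.withPossibleTypes(() => List(Human, Droid))
```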
In some cases you need to define a GraphQL schema that contains recursive types or has circular references in the object graph. Sangria supports such schemas by allowing you to provide a no-arg function that creates `ObjectType` fields instead of an eager list of fields. Here is an example of interdependent types:
```scala
case class A(b: Option[B], name: String)
case class B(a: A, size: Int)

lazy val AType: ObjectType[Unit, A] = ObjectType("A", () => fields[Unit, A](
  Field("name", StringType, resolve = _.value.name),
  Field("b", OptionType(BType), resolve = _.value.b)))

lazy val BType: ObjectType[Unit, B] = ObjectType("B", () => fields[Unit, B](
  Field("size", IntType, resolve = _.value.size),
  Field("a", AType, resolve = _.value.a)))
```
In most cases you also need to define (at least one of) these types with a `lazy val`.
You can render a schema or an introspection result in human-readable form (SDL syntax) with `SchemaRenderer`. Here is an example:
```scala
SchemaRenderer.renderSchema(SchemaDefinition.StarWarsSchema)
```
For a StarWars schema it will produce the following results:
```graphql
interface Character {
  id: String!
  name: String
  friends: [Character]
  appearsIn: [Episode]
}

type Droid implements Character {
  id: String!
  name: String
  friends: [Character]
  appearsIn: [Episode]
  primaryFunction: String
}

enum Episode {
  NEWHOPE
  EMPIRE
  JEDI
}

type Human implements Character {
  id: String!
  name: String
  friends: [Character]
  appearsIn: [Episode]
  homePlanet: String
}

type Query {
  hero(episode: Episode): Character!
  human(id: String!): Human
  droid(id: String!): Droid!
}
```
Defining a schema with `ObjectType`, `InputObjectType` and `EnumType` can become quite verbose. They provide maximum flexibility, but sometimes you just have a simple case class which you would like to expose via a GraphQL API.
For this, sangria provides a set of macros that are able to derive GraphQL types from normal Scala classes, case classes and enums:
- `deriveObjectType[Ctx, Val]` - constructs an `ObjectType[Ctx, Val]` with fields found in the `Val` class (case class accessors and members annotated with `@GraphQLField`)
- `deriveContextObjectType[Ctx, Target, Val]` - constructs an `ObjectType[Ctx, Val]` with fields found in the `Target` class (case class accessors and members annotated with `@GraphQLField`). You also need to provide it a function `Ctx => Target` which the macro will use to get an instance of the `Target` type from a user context.
- `deriveInputObjectType[T]` - constructs an `InputObjectType[T]` with fields found in the `T` case class (only supports case class accessors)
- `deriveEnumType[T]` - constructs an `EnumType[T]` with values found in the `T` enumeration. It supports Scala `Enumeration` as well as sealed hierarchies of case objects.

You need the following import to use them:
```scala
import sangria.macros.derive._
```
The use of these macros is completely optional; they just provide a bit of convenience when you need it. The schema definition DSL is the primary way to define a schema.
You can also influence the derivation by either providing a list of settings to the macro or using `@GraphQL*` annotations (these are `StaticAnnotation`s and are only used to customize macro code generation - they are erased at runtime). This provides a very flexible way to derive GraphQL types based on your domain model - you can customize almost any aspect of the resulting GraphQL type (change names, add descriptions, add fields, deprecate fields, etc.).
In order to discover other GraphQL types, the macros use implicits. So if you derive interdependent types, make sure to make them implicitly available in the scope.
`deriveObjectType` and `deriveContextObjectType` support arbitrary case classes as well as normal classes/traits.
Here is an example:
```scala
case class User(id: String, permissions: List[String], password: String)

val UserType = deriveObjectType[MyCtx, User](
  ObjectTypeName("AuthUser"),
  ObjectTypeDescription("A user of the system."),
  RenameField("id", "identifier"),
  DocumentField("permissions", "User permissions",
    deprecationReason = Some("Will not be exposed in future")),
  ExcludeFields("password"),
  AddFields(
    Field("reverse_id", StringType, resolve = _.value.id.reverse)))
```
It will generate an `ObjectType` which is equivalent to this one:
```scala
ObjectType("AuthUser", "A user of the system.", fields[MyCtx, User](
  Field("identifier", StringType, resolve = _.value.id),
  Field("permissions", ListType(StringType),
    description = Some("User permissions"),
    deprecationReason = Some("Will not be exposed in future"),
    resolve = _.value.permissions),
  Field("reverse_id", StringType, resolve = _.value.id.reverse)))
```
You can also use class methods as GraphQL fields. This will also correctly generate the appropriate `Argument`s.
Let’s look at the example:
```scala
case class User(firstName: String, lastName: Option[String])

trait Mutation {
  @GraphQLField
  def addUser(firstName: String, lastName: Option[String]) = {
    val user = User(firstName, lastName)

    add(user)

    user
  }

  // ...
}

case class MyCtx(mutation: Mutation)

implicit val UserType = deriveObjectType[MyCtx, User]()

val MutationType = deriveContextObjectType[MyCtx, Mutation, Unit](_.mutation)
```
The resulting mutation type would be equivalent to this one:
```scala
val FirstNameArg = Argument("firstName", StringType)
val LastNameArg = Argument("lastName", OptionInputType(StringType))

val MutationType = ObjectType("Mutation", fields[MyCtx, Unit](
  Field("addUser", UserType,
    arguments = FirstNameArg :: LastNameArg :: Nil,
    resolve = c => c.ctx.mutation.addUser(c.arg(FirstNameArg), c.arg(LastNameArg)))))
```
You can also define a method argument of type `Context[Ctx, Val]` - it will not be treated as a GraphQL argument; instead, the field execution context will be provided to the method through it.
Default values of method arguments would be ignored. If you would like to provide a default value for an `Argument`, please use `@GraphQLDefault` instead.
Instead of using the `@GraphQLField` annotation, you can also provide the `IncludeMethods` setting as an argument to the macro.
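For example, with the `Mutation` trait and `MyCtx` from the earlier example (a hedged sketch; with this setting, `addUser` would no longer need the annotation):

```scala
// Include `addUser` via a macro setting instead of the @GraphQLField annotation
val MutationType = deriveContextObjectType[MyCtx, Mutation, Unit](
  _.mutation,
  IncludeMethods("addUser"))
```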
`deriveInputObjectType` supports only case classes. Here is an example:
```scala
case class User(id: String, permissions: List[String], password: String)

val UserType = deriveInputObjectType[User](
  InputObjectTypeName("AuthUser"),
  InputObjectTypeDescription("A user of the system."),
  DocumentInputField("permissions", "User permissions"),
  RenameInputField("id", "identifier"),
  ExcludeInputFields("password"))
```
It will generate an `InputObjectType` which is equivalent to this one:
```scala
InputObjectType[User]("AuthUser", "A user of the system.", List(
  InputField("identifier", StringType),
  InputField("permissions", ListInputType(StringType),
    description = "User permissions")))
```
You can use `@GraphQLDefault` as well as normal Scala default values to provide a default value for an `InputField`. The `@GraphQLDefault` annotation will be used as the default if both are defined.
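A short hedged sketch (the `ArticleFilter` class is invented for illustration) combining both approaches:

```scala
import sangria.macros.derive._

case class ArticleFilter(
  tag: String,
  // the plain Scala default becomes the GraphQL default value
  limit: Int = 10,
  // when both are present, the annotation is used as the default
  @GraphQLDefault(0) offset: Int = 5)

val ArticleFilterType = deriveInputObjectType[ArticleFilter]()
```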
`deriveEnumType` supports Scala `Enumeration`s as well as sealed hierarchies of case objects.
First, let's look at an `Enumeration` example:
```scala
object Color extends Enumeration {
  val Red, LightGreen, DarkBlue = Value
}

val ColorType = deriveEnumType[Color.Value](
  IncludeValues("Red", "DarkBlue"))
```
It will generate an `EnumType` which is equivalent to this one:
```scala
EnumType("Color", values = List(
  EnumValue("Red", value = Color.Red),
  EnumValue("DarkBlue", value = Color.DarkBlue)))
```
And here is an example of a sealed hierarchy of case objects:
```scala
sealed trait Fruit

case object RedApple extends Fruit
case object SuperBanana extends Fruit
case object MegaOrange extends Fruit

sealed abstract class ExoticFruit(val score: Int) extends Fruit

case object Guave extends ExoticFruit(123)

val FruitType = deriveEnumType[Fruit](
  EnumTypeName("Foo"),
  EnumTypeDescription("It's foo"))
```
It will generate an `EnumType` which is equivalent to this one:
```scala
EnumType("Foo", Some("It's foo"), List(
  EnumValue("RedApple", value = RedApple),
  EnumValue("SuperBanana", value = SuperBanana),
  EnumValue("MegaOrange", value = MegaOrange),
  EnumValue("Guave", value = Guave)))
```
It is important to use `deriveEnumType` in the same source file where you have defined your sealed trait, after all trait children are defined! Otherwise the macro will not be able to find all of the enum values.
Sometimes you need to model recursive and interdependent types. The macro needs a little bit of help: you must replace fields that use recursive types and define them manually.
Here is an example of an `ObjectType`:
```scala
case class A(id: Int, b: B)
case class B(name: String, a: A, b: B)

implicit lazy val AType = deriveObjectType[Unit, A](
  ReplaceField("b", Field("b", BType, resolve = _.value.b)))

implicit lazy val BType: ObjectType[Unit, B] = deriveObjectType(
  ReplaceField("a", Field("a", AType, resolve = _.value.a)),
  ReplaceField("b", Field("b", BType, resolve = _.value.b)))
```
An example of an `InputObjectType`:
```scala
case class A(id: Int, b: Option[B])
case class B(name: String, a: A, b: Option[B])

implicit lazy val AType: InputObjectType[A] = deriveInputObjectType[A](
  ReplaceInputField("b", InputField("b", OptionInputType(BType))))

implicit lazy val BType: InputObjectType[B] = deriveInputObjectType[B](
  ReplaceInputField("a", InputField("a", AType)),
  ReplaceInputField("b", InputField("b", OptionInputType(BType))))
```
You can use the following annotations to change different aspects of the resulting GraphQL types:
- `@GraphQLName` - use a different name for a type, field, enum value or an argument
- `@GraphQLDescription` - provide a description for a type, field, enum value or an argument
- `@GraphQLDeprecated` - deprecate an `ObjectType` field or an enum value
- `@GraphQLFieldTags` - provide field tags for an `ObjectType` field
- `@GraphQLExclude` - exclude a field, enum value or an argument
- `@GraphQLField` - include a member of a class (`val` or `def`) in the resulting `ObjectType`. This will also create the appropriate `Argument` list if the method takes some arguments
- `@GraphQLDefault` - provide a default value for an `InputField` or an `Argument`
Here is an example:
```scala
@GraphQLName("AuthUser")
@GraphQLDescription("A user of the system.")
case class User(
  @GraphQLDescription("User ID.")
  id: String,

  @GraphQLName("userPermissions")
  @GraphQLDeprecated("Will not be exposed in future")
  permissions: List[String],

  @GraphQLExclude
  password: String)

val UserType = deriveObjectType[MyCtx, User]()

val UserInputType = deriveInputObjectType[User](
  InputObjectTypeName("UserInput"))
```
As you can see, `InputObjectTypeName` is also used in this case. Macro settings always take precedence over the annotations.
If you have an introspection result (coming from remote server, for instance) or an SDL-based schema definition, then you can create an executable in-memory schema representation out of it.
If you have already got a full introspection result from a server, you can recreate an in-memory representation of the schema with `IntrospectionSchemaMaterializer`. This feature has a lot of potential for client-side tools: testing, mocking, creating proxy/facade GraphQL servers, etc.
Here is a simple example of how you can use this feature (using circe in this particular example):
```scala
import io.circe._
import sangria.marshalling.circe._

val introspectionResults: Json = ??? // coming from another server or a file

val clientSchema: Schema[Any, Any] =
  Schema.buildFromIntrospection(introspectionResults)
```
It takes the result of a full introspection query (loaded from a server, file, etc.) and recreates the schema definition with stubs for resolve methods. You can customize a lot of aspects of the materialization by providing a custom `IntrospectionSchemaBuilder` implementation (you can also extend the `DefaultIntrospectionSchemaBuilder` class). This means that you can, for instance, plug in some generic field resolution logic or provide generic logic for custom scalars. Without these customizations, the materialized schema would only be able to execute introspection queries.
In addition to normal query syntax, GraphQL allows you to define the schema itself. This is how the syntax looks:
```graphql
interface Character {
  id: Int!
  name: String!
}

"""The human character"""
type Human implements Character {
  id: Int!
  name: String!
  height: Float
}

"""The root query type"""
type Query {
  "The main hero of the saga"
  hero: Character
}

schema {
  query: Query
}
```
You can recreate an in-memory representation of the schema with `AstSchemaMaterializer` (just like with the introspection-based one). This feature has a lot of potential for client-side tools: testing, mocking, creating proxy/facade GraphQL servers, etc.
Here is a simple example of how you can use this feature:
```scala
val ast =
  graphql"""
    schema {
      query: Hello
    }

    type Hello {
      bar: Bar
    }

    type Bar {
      isColor: Boolean
    }
  """

val clientSchema: Schema[Any, Any] =
  Schema.buildFromAst(ast)
```
It takes a schema AST (in this example the `graphql` macro is used, but you can also use `QueryParser.parse` to parse the schema dynamically) and recreates the schema definition with stubs for resolve methods. You can customize a lot of aspects of the materialization by providing a custom `AstSchemaBuilder` implementation (you can also extend the `DefaultAstSchemaBuilder` class). This means that you can, for instance, plug in some generic field resolution logic or provide generic logic for custom scalars. Without these customizations, the materialized schema would only be able to execute introspection queries.
`AstSchemaBuilder` and `DefaultAstSchemaBuilder` provide a lot of flexibility in how you build the schema based on the SDL AST, but they are also a quite low-level API and hard to work with directly. Sangria provides a higher-level API that is based on `resolve` functions defined in the scope of the whole schema, based on directives and other SDL elements. `ResolverBasedAstSchemaBuilder` or `AstSchemaBuilder.resolverBased` allow you to do this. They take a list of `AstSchemaResolver`s as an argument. Each resolver might contribute specific logic to the generated schema.
Let's look at a simple example that primarily uses JSON as a data type (quite typical for a GraphQL API that aggregates other JSON APIs). The schema definition looks like this:
```scala
val schemaAst =
  gql"""
    enum Color { Red, Green, Blue }

    interface Fruit {
      id: ID!
    }

    type Apple implements Fruit {
      id: ID!
      color: Color
    }

    type Banana implements Fruit {
      id: ID!
      length: Int
    }

    type Query @addSpecial {
      fruit: Fruit
      bananas: [Fruit] @generateBananas(count: 3)
    }
  """
```
It contains several interesting elements which we need to implement in the schema builder:
- `Fruit` is an interface type, so we need to properly define an instance check in order to select the appropriate `ObjectType` based on the `type` JSON field (in this example)
- the `@generateBananas` directive defines the resolution logic for the `bananas` field. We need to define a resolver for it and generate a simple list of size `count`.
- the `@addSpecial` directive needs to add a new field to the `Query` type.

Given all these requirements, we can define a schema builder like this:
```scala
val CountArg = Argument("count", IntType)

val GenerateBananasDir = Directive("generateBananas",
  arguments = CountArg :: Nil,
  locations = Set(DL.FieldDefinition))

val AddSpecialDir = Directive("addSpecial",
  locations = Set(DL.Object))

val builder = AstSchemaBuilder.resolverBased[Unit](
  // Requirement #1 - provides appropriate instance check based on the `type` JSON field
  InstanceCheck.field[Unit, JsValue],

  // Requirement #2 - defines the resolution logic based on the `@generateBananas` directive
  DirectiveResolver(GenerateBananasDir, c =>
    (1 to c.arg(CountArg)) map (id =>
      JsObject(
        "type" -> JsString("Banana"),
        "id" -> JsString(id.toString),
        "length" -> JsNumber(id * 10)))),

  // Requirement #3 - add an extra field based on the `@addSpecial` directive
  DirectiveFieldProvider(AddSpecialDir, c =>
    MaterializedField(StandaloneOrigin,
      Field("specialFruit", c.objectType("Banana"),
        resolve = ResolverBasedAstSchemaBuilder.extractFieldValue[Unit, JsValue])) :: Nil),

  // Requirement #4 - provides default behaviour for all other fields
  FieldResolver.defaultInput[Unit, JsValue])
```
Now that we have the schema builder, we can define the schema itself:
```scala
val schema = Schema.buildFromAst(schemaAst,
  builder.validateSchemaWithException(schemaAst))
```
Here we are also using `builder.validateSchemaWithException` to validate the AST and ensure that all directives are known and correct (`builder.validateSchema` will just return a list of violations).
Now we are ready to execute the schema against a query and provide it with an initial root JSON value:
```scala
val query =
  gql"""
    {
      fruit {
        id
        ... on Apple {color}
        ... on Banana {length}
      }

      bananas {
        ... on Banana {length}
      }

      specialFruit {id}
    }
  """

val initialData =
  """
  {
    "fruit": {
      "type": "Apple",
      "id": "1",
      "color": "Red"
    },
    "specialFruit": {
      "type": "Apple",
      "id": "42",
      "color": "Blue"
    }
  }
  """.parseJson

Executor.execute(schema, query, root = initialData)
```
The result of an execution would be JSON like this one:
```json
{
  "data": {
    "fruit": {
      "id": "1",
      "color": "Red"
    },
    "bananas": [
      {"length": 10},
      {"length": 20},
      {"length": 30}
    ],
    "specialFruit": {
      "id": "42"
    }
  }
}
```
`ResolverBasedAstSchemaBuilder` provides a lot of features. All supported resolvers are subtypes of `AstSchemaResolver`, so you can check these if you would like to learn about other features, like providing additional types with `AdditionalTypes`, handling scalar values with `ScalarResolver`, etc.
Sometimes it might be very useful to analyze the schema AST document in order to collect information about some of the used directives in advance (before building the schema). This can be achieved with `ResolverBasedAstSchemaBuilder.resolveDirectives`. Here is a small example:
```scala
val ValueArg = Argument("value", IntType)

val NumDir = Directive("num",
  arguments = ValueArg :: Nil,
  locations = Set(DirectiveLocation.Schema, DirectiveLocation.Object))

val collectedValue = schemaAst.analyzer.resolveDirectives(
  GenericDirectiveResolver(NumDir,
    resolve = c => Some(c arg ValueArg))).sum
```
In this example, the resulting `collectedValue` will contain the sum of all numbers collected via the `@num` directive.
For another example of SDL-based schema materialization, please see the next section. I would also recommend looking at the Materializer class in the graphql-toolbox project. It contains a much bigger and more comprehensive example.
If you have previously worked with apollo-tools and would like to know how SDL-based schema definition translates to sangria, then `ResolverBasedAstSchemaBuilder` is a good place to start. Some apollo-tools examples use exact type/field name matches in order to define the resolve functions. You can achieve the same in sangria by using `FieldResolver`:
```scala
val builder = resolverBased[Any](
  FieldResolver.map(
    "Query" -> Map(
      "posts" -> (context => ...)),
    "Mutation" -> Map(
      "upvotePost" -> (context => ...)),
    "Post" -> Map(
      "author" -> (context => ...),
      "comments" -> (context => ...))),
  AnyFieldResolver.defaultInput[Any, JsValue])
```
That said, the general recommendation is to use `DirectiveResolver` instead of an explicit `FieldResolver` where possible, since it provides a more robust mechanism for defining the resolution logic.
Schema materialization can be overwhelming at first, so let's go through a small example that combines a static schema defined with Scala code and dynamic SDL-based schema extensions.
First we need to define some model classes and a repository that we will use in this example:
```scala
case class Article(id: String, title: String, text: String, author: Option[String])

class Repo {
  def loadArticle(id: String): Option[Article] =
    Some(Article(id, s"Test Article #$id", "blah blah blah...", Some("Bob")))

  def loadComments: List[JsValue] =
    List(JsObject(
      "text" -> JsString("First!"),
      "author" -> JsObject(
        "name" -> JsString("Jane"),
        "lastComment" -> JsObject(
          "text" -> JsString("Boring...")))))
}
```
In order to demonstrate different approaches, we represent `Article` as a case class and `comments` as a JSON value (using spray-json in this example).
Now let’s define static part of the schema:
```scala
val ArticleType = deriveObjectType[Repo, Article]()

val IdArg = Argument("id", StringType)

val QueryType = ObjectType("Query", fields[Repo, Unit](
  Field("article", OptionType(ArticleType),
    arguments = IdArg :: Nil,
    resolve = c => c.ctx.loadArticle(c arg IdArg))))

val staticSchema = Schema(QueryType)
```
Nothing special is going on here - just a standard schema definition. It becomes more interesting when we add schema extensions into the mix:
```scala
val extensions =
  gql"""
    extend type Article {
      comments: [Comment]! @loadComments
    }

    type Comment {
      text: String!
      author: CommentAuthor!
    }

    type CommentAuthor {
      name: String!
      lastComment: Comment
    }
  """

val schema = staticSchema.extend(extensions, builder)
```
This code will extend the `Article` GraphQL type and add a `comments` field. Also notice that the `Comment` and `CommentAuthor` types are mutually recursive. In order to simplify the builder logic, we also use the `@loadComments` directive. The only missing piece of the puzzle is the `builder` itself:
```scala
val LoadCommentsDir = Directive("loadComments",
  locations = Set(DirectiveLocation.FieldDefinition))

val builder = AstSchemaBuilder.resolverBased[Repo](
  DirectiveResolver(LoadCommentsDir, _.ctx.ctx.loadComments),
  FieldResolver.defaultInput[Repo, JsValue])
```
As you can see, we are using the `@loadComments` directive to define a special `resolve` function that loads all of the comments. In general, it is the recommended approach to handle field logic in the builder (an alternative would be to rely on the field/type names with `FieldResolver { case (TypeName("Article"), FieldName("comments")) => ... }`, which is quite fragile).
All other fields are defined in terms of the `resolveJson` function. It just adapts the contextual value (which is JSON in our example) to the field's return type. This implementation is by no means complete - it just shows a short example. For a production application you would need to improve this logic according to the needs of the application.
Now we are ready to execute a query against our new shiny schema:
```scala
val query =
  gql"""
    {
      article(id: "42") {
        title
        text
        comments {
          text
          author {
            name
            lastComment {
              text
            }
          }
        }
      }
    }
  """

Executor.execute(schema, query, new Repo)
```
The result of the execution would look like this:
```json
{
  "data": {
    "article": {
      "title": "Test Article #42",
      "text": "blah blah blah...",
      "comments": [{
        "text": "First!",
        "author": {
          "name": "Jane",
          "lastComment": {
            "text": "Boring..."
          }
        }
      }]
    }
  }
}
```
Here is an example of how you can execute the example schema:
```scala
import sangria.execution.Executor

Executor.execute(TestSchema.StarWarsSchema, queryAst,
  userContext = new CharacterRepo,
  deferredResolver = new FriendsResolver,
  variables = vars)
```
The result of the execution is a `Future` of the marshaled GraphQL result (see the Result Marshalling and Input Unmarshalling section for more details).
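The concrete marshaled type is determined by the marshalling integration you import. For example, with the spray-json integration (a hedged sketch, assuming `schema`, `queryAst` and an execution context are in scope):

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import spray.json._

import sangria.execution.Executor
import sangria.marshalling.sprayJson._

// The imported marshaller decides the result type: here a spray-json JsValue
val result: Future[JsValue] = Executor.execute(schema, queryAst)
```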
In some situations, you may need to make a static query analysis and postpone the actual execution of the query. Later on, you may need to execute this query several times. A typical example is subscription queries: you first validate and prepare a query, and then you execute it several times for every event. This is precisely what `PreparedQuery` allows you to do.
Let’s look at the example:
```scala
val preparedQueryFuture =
  Executor.prepare(StarWarsSchema, query, new CharacterRepo,
    deferredResolver = new FriendsResolver)

preparedQueryFuture.map { preparedQuery =>
  preparedQuery.execute(userContext = someCustomCtx, root = event)
}
```
`Executor.prepare` will return you a `Future` with a prepared query which you can execute several times later, possibly providing different `userContext` or `root` values. In addition to `execute`, `PreparedQuery` also gives you a lot of information about the query itself: the operation, the root `QueryType`, top-level fields with arguments, etc.
The `Future` of the marshaled result is not the only possible result of a query execution. By importing a different implementation of `ExecutionScheme` you can change the result type of an execution. Here is an example:
```scala
import sangria.execution.ExecutionScheme.Extended

val result: Future[ExecutionResult[Ctx, JsValue]] =
  Executor.execute(schema, query)
```
The `Extended` execution scheme gives you the result of the execution together with additional information about the execution itself (like, for instance, the list of exceptions that happened during the execution).
The following execution schemes are available:

- `Default` - the default one. Returns a `Future` of the marshaled result
- `Extended` - returns a `Future` containing an `ExecutionResult`
- `Stream` - returns a stream of results. Very useful for subscription and batch queries, where the result is an `Observable` or a `Source`
- `StreamExtended` - returns a stream of `ExecutionResult`s

Please use this feature with caution! It might be removed in future releases or have big semantic changes, in particular in the way variables are inferred and merged.
Batch executor allows you to execute several inter-dependent queries and get the execution results as a stream. Dependencies are expressed via variables and the `@export` directive. It provides the following features:

- Ability to provide multiple `operationNames` when executing a GraphQL query document. All operations would be executed in the order inferred from the dependencies between queries.
- Support for the `@export(as: "foo")` directive. This directive allows you to save the results of a query execution and then use them as a variable in a different query within the same document. This provides a way to define data dependencies between queries.
- When the `@export` directive is used, the variables would be automatically inferred by the execution engine, so you don't need to declare them explicitly (this behaviour can be controlled with the `inferVariableDefinitions` flag).

Batch executor implementation is inspired by this talk:
You can use any execution scheme with the batch executor, but `Stream` and `StreamExtended` are recommended, since they will return execution results for all of the queries as a stream.
In the example below we are using monix for streaming and spray-json for data serialization. Also notice that we need to explicitly add the `BatchExecutor.ExportDirective` directive and use `BatchExecutor.executeBatch` instead of the standard executor:
```scala
import monix.execution.Scheduler.Implicits.global
import sangria.execution.ExecutionScheme.Stream
import sangria.marshalling.sprayJson._
import sangria.streaming.monix._

val schema = Schema(..., directives = BuiltinDirectives :+ BatchExecutor.ExportDirective)

val result: Observable[JsValue] =
  BatchExecutor.executeBatch(schema, query,
    operationNames = List("StoryComments", "NewsFeed"))
```
You can also use the `BatchExecutor.OperationNameExtension` middleware to include an operation name in the execution results (as an extension). This will make it easier for a client to distinguish between different execution results coming from the same response stream.
Here is an example of a request that will execute 2 queries in batch:
```
GET /graphql?batchOperations=[StoryComments,NewsFeed]

query NewsFeed {
  feed {
    stories {
      id @export(as: "ids")
      actor
      message
    }
  }
}

query StoryComments {
  stories(ids: $ids) {
    comments {
      actor
      message
    }
  }
}
```
In this example the `NewsFeed` query would be executed first, and all story comments would be loaded in a separate query execution step (`StoryComments`). This allows a client to load all required data with one efficient request to the server, but the data would be sent back to the client in chunks.
Sangria provides a lot of generic tools to work with a GraphQL query and schema. Aside from the actual query execution, you may need to do different things like analyze a query for breaking changes, introspection usage, or deprecated field usage, validate a query/schema without executing it, etc. This section describes some of the tools that will help you with these tasks.
Query validation consists of validation rules. You can pick and choose which rules you would like to use for query validation. You can even create your own validation rules and validate queries against them. The list of standard validation rules is available in `QueryValidator.allRules`. In order to validate queries against a list of rules, you need to use `RuleBasedQueryValidator`. The default query validator, which uses all standard rules, is available under `QueryValidator.default`. Here is an example of how you can use it:
```scala
val violations = QueryValidator.default.validateQuery(schema, query)
```
You can also customise the list of validation rules when you are executing the query by providing a custom query validator like this:
```scala
Executor.execute(schema, query, queryValidator = ...)
```
For instance, it can be useful to disable validation for a production setup, where you have validated all possible queries upfront and would like to save on CPU cycles during the execution.
Query validation can also be used for SDL validation. Let's say we have the following type definitions:
```graphql
type User @auth(token: "TEST") {
  name: String
  isAdmin: Boolean @permission(name: "ADMIN")
}
```
We can validate it against a stub schema like this:
```scala
val validationSchema = Schema.buildStubFromAst(
  gql"""
    directive @permission(name: String) on FIELD_DEFINITION
    directive @auth(token: String!) on OBJECT
  """)

val errors: Vector[Violation] =
  QueryValidator.default.validateQuery(validationSchema,
    gql"""
      type User @auth(token: "TEST") {
        name: String
        isAdmin: Boolean @permission(name: "ADMIN")
      }
    """)
```
If we make a mistake somewhere (misplace the directive or forget to provide a directive argument):
```scala
val errors: Vector[Violation] =
  QueryValidator.default.validateQuery(validationSchema,
    gql"""
      type User @auth {
        name: String @auth(token: "TEST")
        isAdmin: Boolean
      }
    """)
```
we will get the following validation errors:
```
Directive 'auth' may not be used on field definition. (line 3, column 22):
  name: String @auth(token: "TEST")
               ^

Field 'auth' argument 'token' of type 'String!' is required but not provided. (line 2, column 17):
  type User @auth {
            ^
```
`QueryValidator` is also able to validate an `InputDocument`. Here is a small example:
```scala
val schema = Schema.buildStubFromAst(
  gql"""
    enum Color { RED GREEN BLUE }

    input Foo {
      baz: Color!
    }

    input Config {
      foo: String
      bar: Int
      list: [Foo]
    }
  """)

val inp =
  gqlInpDoc"""
    {
      foo: "bar"
      bar: "foo"
      list: [
        {baz: RED}
        {baz: FOO_BAR}
        {test: 1}
        {}
      ]
    }

    {
      doo: "hello"
    }
  """

val errors = QueryValidator.default.validateInputDocument(schema, inp, "Config")
```
In contrast to standard validation, you also need to provide the input type against which you would like to validate the input document. Validation will result in the following errors:
```
At path 'bar' Int value expected (line 4, column 13):
  bar: "foo"
  ^

At path 'list[1].baz' Enum value 'FOO_BAR' is undefined in enum type 'Color'. Known values are: RED, GREEN, BLUE. (line 5, column 13):
  list: [
  ^
(line 7, column 16):
  {baz: FOO_BAR}
        ^

At path 'list[2].test' Field 'test' is not defined in the input type 'Foo'. (line 5, column 13):
  list: [
  ^

...
```
Just like query validation, schema validation consists of validation rules. You can pick and choose which rules you would like to use for schema validation. You can even create your own validation rules and validate a schema against them. The list of standard validation rules is available in `SchemaValidationRule.default`.
You can also customise the list of validation rules when you are creating a schema by providing a custom list of rules like this:
```scala
Schema(QueryType, validationRules = ...)
```
Query reducers provide a lot of useful analysis tools, but they require 2 bits of information in addition to a query: `operationName` and `variables`. Still, you can execute query reducers against a query without executing it by using `Executor.prepare`. `prepare` will not execute the query; instead it will prepare the query for execution, ensuring that the query is validated and all query reducers are successful.
Here is how you can ensure that query complexity does not exceed the threshold:
```scala
val variables: Json = ...

val prepared = query.operations.keySet.map { operationName =>
  Executor.prepare(
    schema,
    query,
    operationName = operationName,
    variables = variables,
    queryReducers = QueryReducer.rejectComplexQueries(1000, (_, _) => TooExpansiveQuery) :: Nil)
}

val validated = Future.sequence(prepared).map(_ => Done)
```
You will need to initialize all required variable values with some stubs. All variable values that represent things that may potentially increase query complexity, like list limits, should be set to values that represent the worst-case scenario (like the max limit).
If you don't have the variables or don't want to work with the stub values, then you can use another mechanism that does not require variables. `QueryReducerExecutor.reduceQueryWithoutVariables` provides a convenient way to achieve this. Its signature is similar to `Executor.prepare`, but it does not require `variables` and is designed to validate and execute query reducers for queries that are being analyzed ahead of time (e.g. in the context of persistent queries).
`AstVisitor` provides an easy way to traverse and possibly transform all `AstNode`s in a query. Here is how basic usage looks:
```scala
val queryWithoutComments =
  AstVisitor.visit(query, AstVisitor {
    case _: Comment => VisitorCommand.Delete
  })
```
This visit will create a new query `Document` that does not contain any comments.
`AstVisitor` also provides several variations of the `visit` function that allow you to visit AST nodes with type info (from the schema definition) and state.
If you would like to analyse a field (and all of its selections/nested fields) inside of a `resolve` function, you can access it with `Context.astFields` and then use `AstVisitor` to analyse it. You may also consider using the projections feature for this.
Sangria provides several high-level tools to analyse an AST `Document` without reliance on a schema. They are defined in `DocumentAnalyzer` (which you can also access via `query.analyzer`).
Here is an example of how you can separate query operations:
```scala
val query =
  gql"""
    query One {
      ...A
    }

    query Two {
      foo
      bar
      ...B
      ...C
    }

    fragment A on T {
      field
      ...C
    }

    fragment B on T {
      fieldX
    }

    fragment C on T {
      fieldC
    }
  """

query.analyzer.separateOperations.values.foreach { document =>
  println(document.renderPretty)
}
```
`separateOperations` will create two `Document`s that look like this:
```graphql
query One {
  ...A
}

fragment A on T {
  field
  ...C
}

fragment C on T {
  fieldC
}
```
and
```graphql
query Two {
  foo
  bar
  ...B
  ...C
}

fragment B on T {
  fieldX
}

fragment C on T {
  fieldC
}
```
While `DocumentAnalyzer` does not rely on schema information to analyse the query `Document`, `SchemaBasedDocumentAnalyzer` uses the schema to provide much deeper query analysis. It can be used to discover things like deprecated field usage, introspection usage, and variable usage.
Here is an example of how you can find all deprecated field and enum value usages:
```scala
schema.analyzer(query).deprecatedUsages
```
Schema comparator provides an easy way to compare schemas with each other. You can, of course, compare unrelated schemas and get all of the differences as a list. Where it becomes really useful is when you compare different versions of the same schema.
In this example I compare a schema loaded from the staging environment against the schema from the production environment:
```scala
val prodSchema = Schema.buildFromIntrospection(
  loadSchema("http://prod.my-company.com/graphql"))

val stagingSchema = Schema.buildFromIntrospection(
  loadSchema("http://staging.my-company.com/graphql"))

val changes: Vector[SchemaChange] = stagingSchema compare prodSchema
```
Given this list of changes, we can do a few interesting things with it. For instance, we can stop the deployment to production if the staging environment contains breaking changes (you can run this somewhere in your CI environment):
```scala
val breakingChanges = changes.filter(_.breakingChange)

if (breakingChanges.nonEmpty) {
  val rendered = breakingChanges.map(change => s" * ${change.description}").mkString("\n", "\n", "")

  throw new IllegalStateException(s"Staging environment has breaking changes in GraphQL schema! $rendered")
}
```
You can also create release notes for all of the changes:
```scala
val releaseNotes =
  if (changes.nonEmpty) {
    val rendered = changes.map { change =>
      val breaking = if (change.breakingChange) " (breaking change)" else ""

      s" * ${change.description}$breaking"
    }.mkString("\n", "\n", "")

    s"Release Notes: $rendered"
  } else "No Changes"
```
As described in the previous section, you can handle subscription queries with prepared queries. This approach provides a lot of flexibility, but it also means that you need to manually analyze subscription fields and appropriately execute the query for every event.
Stream-based subscriptions provide a much easier and, in many respects, superior approach to handling subscription queries. In order to use it, you first need to choose one of the available stream implementations:
- `sangria.streaming.akkaStreams._` - akka-streams implementation based on `Source[T, NotUsed]`: `"org.sangria-graphql" %% "sangria-akka-streams" % "1.0.2"`. Requires an implicit `akka.stream.Materializer` to be available in scope.
- `sangria.streaming.rxscala._` - RxScala implementation based on `Observable[T]`: `"org.sangria-graphql" %% "sangria-rxscala" % "1.0.0"`. Requires an implicit `scala.concurrent.ExecutionContext` to be available in scope.
- `sangria.streaming.monix._` - monix implementation based on `Observable[T]`: `"org.sangria-graphql" %% "sangria-monix" % "2.0.0"`. Requires an implicit `monix.execution.Scheduler` to be available in scope.
- `sangria.streaming.future._` - very simple implementation based on `Future[T]`, which is treated as a stream with a single element. Requires an implicit `scala.concurrent.ExecutionContext` to be available in scope.

You can also easily create your own integration by implementing and providing an implicit instance of the `SubscriptionStream[S]` type class.
If you prefer a hands-on approach, then you can take a look at the sangria-subscriptions-example project. It demonstrates most of the concepts that are described in this section.
After you have imported a concrete stream implementation, you can define subscription type fields with `Field.subs`. Here is an example that uses monix:
```scala
import monix.execution.Scheduler.Implicits.global
import monix.reactive.Observable
import sangria.streaming.monix._

val SubscriptionType = ObjectType("Subscription", fields[Unit, Unit](
  Field.subs("userEvents", UserEventType, resolve = _ =>
    Observable(
      UserCreated(1, "Bob"),
      UserNameChanged(1, "John")).map(action(_))),

  Field.subs("messageEvents", MessageEventType, resolve = _ =>
    Observable(
      MessagePosted(userId = 20, text = "Hello!")).map(action(_)))))
```
Please note that every element in a stream should be an `Action[Ctx, Val]`. An `action` helper function is used in this case to transform every element of a stream into an `Action`. Also, it is important that either all fields of a `SubscriptionType` or none of them are created with the `Field.subs` function (otherwise it would not be possible to merge them into a single stream).
Now you can execute subscription queries and get back a stream of query execution results like this:
```scala
import monix.execution.Scheduler.Implicits.global
import sangria.streaming.monix._
import sangria.execution.ExecutionScheme.Stream

val schema = Schema(QueryType, subscription = Some(SubscriptionType))

val query =
  graphql"""
    subscription {
      userEvents {
        id
        __typename

        ... on UserCreated {
          name
        }
      }

      messageEvents {
        __typename

        ... on MessagePosted {
          user {
            id
            name
          }

          text
        }
      }
    }
  """

val stream: Observable[JsValue] = Executor.execute(schema, query)
```
We are importing `ExecutionScheme.Stream` to instruct the executor to return a stream of results instead of a `Future` of a single result. The stream will emit the following elements (the order may be different):
```json
{"data": {"userEvents": {"id": 1, "__typename": "UserCreated", "name": "Bob"}}}
{"data": {"messageEvents": {"__typename": "MessagePosted", "user": {"id": 20, "name": "Test User"}, "text": "Hello!"}}}
{"data": {"userEvents": {"id": 1, "__typename": "UserNameChanged", "name": "John"}}}
```
Only the top-level subscription fields have special semantics associated with them (in this respect it is similar to the mutation queries).The execution engine merges the requested field streams into a single stream which is then returned as a result of the execution.All other fields (2nd level, 3rd level, etc.) have normal semantics and would be fully resolved.
Please note that the semantics of subscription queries are not standardized or fully defined at the moment. They may change in the future, so use this feature with caution.
In the example schema, you probably noticed that some of the resolve functions return `DeferFriends`. It is defined like this:
```scala
case class DeferFriends(friends: List[String]) extends Deferred[List[Character]]
```
The defer mechanism allows you to postpone the execution of particular fields and then batch them together in order to optimise object retrieval. This can be very useful when you want to avoid an N+1 problem. In the example schema, all of the characters have a list of friends, but they only have their IDs. You need to fetch the friends from somewhere in order to progress the query execution. Retrieving every friend one-by-one would be very inefficient, since you potentially need to access an external database in order to do so. The defer mechanism allows you to batch all these friend list retrieval requests into one efficient request to the DB. In order to do it, you need to implement a `DeferredResolver` that will get a list of deferred values:
```scala
class FriendsResolver extends DeferredResolver[Any] {
  def resolve(deferred: Vector[Deferred[Any]], ctx: Any, queryState: Any)(implicit ec: ExecutionContext) =
    // Here goes your resolution logic
}
```
The `resolve` function gives you a list of `Deferred[A]` values and expects you to return a `Future[B]` with the list of resolved values.
It is important to note that the resulting list must have the same size as the list of deferred values. This allows the executor to figure out the relation between deferred values and results. The order of the results also plays an important role. (The Fetch API, which is described below, uses the `HasId` type class to match the entities, so this contract/restriction is only relevant for `DeferredResolver`.)
After you have defined a `DeferredResolver[T]`, you can provide it to an executor like this:
```scala
Executor.execute(schema, query, deferredResolver = new FriendsResolver)
```
`DeferredResolver` will do its best to batch as many deferred values as possible. Let's look at this example query to see how it works:
```graphql
{
  hero {
    friends {
      friends {
        friends {
          friends {
            name
          }
        }
      }

      more: friends {
        friends {
          friends {
            name
          }
        }
      }
    }
  }
}
```
During the execution of this query, the number of produced `Deferred` values grows exponentially. Still, the `DeferredResolver.resolve` method would be called only 4 times by the executor, because the query has only 4 levels of fields that return deferred values (`friends` in this case).
It is quite a common requirement to transform the resolved deferred value before it is used by the execution engine. This can be easily achieved with the `map` method on the `DeferredValue` action. Here is an example:
```scala
Field("products", ListType(ProductType),
  resolve = c => DeferredValue(fetcherProd.deferSeqOpt(c.value.products))
    .map(fetchedProducts => ...)),
```
In some cases you need to report errors that happened during deferred value resolution while still preserving the successful result. You can do it by using `mapWithErrors` instead of `map`.
Functions like `defer` or `deferSeq` will trigger internal errors if one of the deferred values cannot be resolved (for example, tags with ids `1`, `2`, and `3` are requested but the deferred resolver only returns tags with ids `1` and `3`), whereas their `Opt` counterparts (`deferOpt`, `deferSeqOpt`, etc.), although having the same signatures, will silently ignore the missing values.
In some cases you may need to have some state inside of a `DeferredResolver` for every query execution. This, for instance, is necessary when you implement a cache inside of the resolver.
Internally, an executor manages the `DeferredResolver` state and provides it via the `queryState` argument to the `resolve` method. You can provide an initial state by overriding the `initialQueryState` method:
```scala
class MyResolver[Ctx] extends DeferredResolver[Ctx] {
  def initialQueryState: Any = TrieMap[String, Any]()

  def resolve(deferred: Vector[Deferred[Any]], ctx: Ctx, queryState: Any)(implicit ec: ExecutionContext) =
    // resolve deferred values by using cache from `queryState`
}
```
As was mentioned before, `DeferredResolver` will do its best to collect and batch as many `Deferred` values as possible. This means that it will even wait for a `Future` to produce some values in order to find out whether they produce some deferred values.
In some cases this is not desired. You can override the following methods in order to customize this behaviour and define independentdeferred value groups:
- `includeDeferredFromField` - a function that decides whether deferred values from a particular field should be collected or processed independently.
- `groupDeferred` - provides a way to group deferred values in batches that would be processed independently. Useful for separating cheap and expensive deferred values.

`DeferredResolver` provides a very flexible mechanism to batch the retrieval of objects from external services or databases, but it exposes a very low-level, unsafe, but efficient API for this. You certainly can use it directly, especially in non-trivial cases, but most of the time you will probably work with isolated entity objects which you would like to load by ID or by some relation to other entities. This is where `Fetcher` comes into play.
`Fetcher` provides a high-level API for deferred value resolution and is implemented as a specialized version of `DeferredResolver`. This API provides the following features:

- fetching of entities by ID or by relation to other entities
- optional caching on a per-query basis
- ability to limit the batch size with `maxBatchSize`
- fallback to an existing `DeferredResolver`
Examples in this section will use the following data model of products and categories:
| ID | Name              |
|----|-------------------|
| 1  | Rusty sword       |
| 2  | Health potion     |
| 3  | Small mana potion |
| ID | Name      | Parent | Products |
|----|-----------|--------|----------|
| 1  | Root      |        |          |
| 2  | Equipment | 1      | [1]      |
| 3  | Potions   | 1      | [2, 3]   |
As you can see, the product table (which also can be a document in a document DB or just JSON which is returned from an external service call) just has product information. A category, on the other hand, also contains 2 relations: to the products within this category and to a parent category. First let's look at how we can fetch entities by ID, and then we will look at how we can use the relation information for this.
First of all you need to define a Fetcher:
```scala
val products = Fetcher((ctx: MyCtx, ids: Seq[Int]) => ctx.loadProductsById(ids))
val categories = Fetcher((ctx: MyCtx, ids: Seq[Int]) => ctx.loadCategoriesById(ids))
```
Now you should be able to define a `DeferredResolver` based on these fetchers:
```scala
val resolver: DeferredResolver[MyCtx] = DeferredResolver.fetchers(products, categories)
```
Every time you need to load a particular entity by ID, you can use the fetcher to create a `Deferred` value for you:
```scala
Field("category", CategoryType,
  arguments = Argument("id", IntType) :: Nil,
  resolve = c => categories.defer(c.arg[Int]("id")))

Field("categoryMaybe", OptionType(CategoryType),
  arguments = Argument("id", IntType) :: Nil,
  resolve = c => categories.deferOpt(c.arg[Int]("id")))

Field("productsWithinCategory", ListType(ProductType),
  resolve = c => products.deferSeqOpt(c.value.products))
```
The deferred resolution mechanism will take care of the rest and will fetch products and categories in the most efficient way.
Fetched entities can be further transformed as part of the deferred resolution mechanism by wrapping the deferred value in `DeferredValue` and using its `map` method:
```scala
Field("categoryName", OptionType(StringType),
  resolve = c => DeferredValue(categories.deferOpt(c.value.categoryId)).map(_.name))
```
In order to extract the ID from entities, the Fetch API uses the `HasId` type class:
```scala
case class Product(id: String, name: String)

object Product {
  implicit val hasId = HasId[Product, String](_.id)
}
```
If you don’t want to define an implicit instance, you can also provide it directly to the fetcher like this:
```scala
Fetcher((ctx: MyCtx, ids: Seq[String]) => ctx.loadProductsById(ids))(HasId(_.id))
```
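`HasId` is an ordinary type class; a stripped-down version of the idea in plain Scala (hypothetical names, not sangria's implementation) shows why it lets a fetcher match results back to requested ids regardless of order:

```scala
// A minimal HasId-style type class: knows how to extract an id from an entity.
trait EntityId[T, Id] {
  def id(value: T): Id
}

object EntityId {
  def apply[T, Id](f: T => Id): EntityId[T, Id] = (value: T) => f(value)
}

case class Product(id: String, name: String)

object Product {
  implicit val productId: EntityId[Product, String] = EntityId(_.id)
}

// A fetch result can be matched back to the requested ids generically,
// without relying on the order of the returned entities.
def byId[T, Id](entities: Seq[T])(implicit hid: EntityId[T, Id]): Map[Id, T] =
  entities.map(e => hid.id(e) -> e).toMap

val index = byId(Seq(Product("1", "Rusty sword"), Product("2", "Health potion")))
```

This is why, unlike a raw `DeferredResolver`, a fetcher does not depend on the order of the fetched entities.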
The Fetch API is also able to fetch entities based on their relation to other entities. In our example, a category has 2 relations, so let's define these relations:
```scala
val byParent = Relation[Category, Int]("byParent", c => Seq(c.parent))
val byProduct = Relation[Category, Int]("byProduct", c => c.products)
```
You need to use `Fetcher.rel` to define a `Fetcher` that supports relations:
```scala
val categories = Fetcher.rel(
  (repo, ids) => repo.loadCategories(ids),
  (repo, ids) => repo.loadCategoriesByRelation(ids))
```
In the case of the relation batch function, `ids` would be of type `RelationIds[Res]`, which contains the list of IDs for every relation type.
Now you should be able to use the category fetcher to create `Deferred` values like this:
```scala
Field("categoriesByProduct", ListType(CategoryType),
  arguments = Argument("productId", IntType) :: Nil,
  resolve = c => categories.deferRelSeq(byProduct, c.arg[Int]("productId")))

Field("categoryChildren", ListType(CategoryType),
  resolve = c => categories.deferRelSeq(byParent, c.value.id))
```
The Fetch API supports caching. You just need to define a fetcher with `Fetcher.caching` or `Fetcher.relCaching`, and all of the entities will be cached on a per-query basis. This means that every query execution gets its own isolated cache instance.
You can provide an alternative cache implementation via `FetcherConfig`:
```scala
val cache = FetcherCache.simple

val categories = Fetcher(
  config = FetcherConfig.caching(cache),
  fetch = (ctx, ids) => ctx.loadCategoriesById(ids))
```
The `FetcherCache` will cache not only the entities themselves, but also the relation information between entities.
In some cases you may want to split bigger batches into a set of batches of a particular size. You can do this by providing an appropriate `FetcherConfig`:
```scala
val categories = Fetcher(
  config = FetcherConfig.maxBatchSize(10),
  fetch = (repo, ids) => repo.loadCategories(ids))
```
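The effect of `maxBatchSize` can be sketched with the standard `grouped` method (plain Scala, not the Fetcher internals): a big list of requested ids is split into chunks, and each chunk would become one call to the fetch function.

```scala
// 25 requested ids with a maximum batch size of 10.
val ids = (1 to 25).toSeq
val maxBatchSize = 10

// Split into batches; each batch would correspond to one fetch call.
val batches = ids.grouped(maxBatchSize).toSeq

val batchSizes = batches.map(_.size)
```

Instead of a single call with 25 ids, the fetch function would be invoked three times with batches of 10, 10, and 5 ids.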
If you already have an existing `DeferredResolver`, you can still use it in combination with fetchers:
```scala
DeferredResolver.fetchersWithFallback(new ExistingDeferredResolver, products, categories)
```
The `includeDeferredFromField` and `groupDeferred` calls would always be delegated to the fallback resolver.
GraphQL is a very flexible data query language. Unfortunately, with flexibility also comes a danger of misuse by malicious clients. Since typical GraphQL schemas contain recursive types and circular dependencies, clients are able to send infinitely deep queries which may have a high impact on server performance. That's why it's important to analyze query complexity before executing a query. Sangria provides two mechanisms to protect your GraphQL server from malicious or expensive queries, which are described in the next sections.
Query complexity analysis makes a rough estimation of the query complexity before it is executed. The complexity is a `Double` number that is calculated according to the simple rule described below.
Every field in the query gets a default score of `1` (including `ObjectType` nodes). The "complexity" of the query is the sum of all field scores.
So the following example query:
```graphql
query Test {
  droid(id: "1000") {
    id
    serialNumber
  }

  pets(limit: 20) {
    name
    age
  }
}
```
will have complexity `6`. You probably noticed that this score is a bit unfair, since the `pets` field is actually a list which can contain a max of 20 elements in the response.
You can customize the field score with the `complexity` argument in order to solve these kinds of issues:
```scala
Field("pets", OptionType(ListType(PetType)),
  arguments = Argument("limit", IntType) :: Nil,
  complexity = Some((ctx, args, childScore) => 25.0D + args.arg[Int]("limit") * childScore),
  resolve = ctx => ...)
```
Now the query will get a score of `68`, which is a much better estimation.
In order to analyze the complexity of a query, you need to add a corresponding `QueryReducer` to the `Executor`. In this example, `rejectComplexQueries` will reject all queries with a complexity higher than `1000`:
```scala
val rejectComplexQueries = QueryReducer.rejectComplexQueries[Any](1000,
  (c, ctx) => new IllegalArgumentException(s"Too complex query"))

Executor.execute(schema, query, queryReducers = rejectComplexQueries :: Nil)
```
If you just want to estimate the complexity and then perform different actions, then there is another helper function for this:
```scala
val complReducer = QueryReducer.measureComplexity[MyCtx] { (c, ctx) =>
  // do some analysis
  ctx
}
```
The complexity of a full introspection query (used by tools like GraphiQL) is around `100`.
There is also another, simpler mechanism to protect against malicious queries: limiting the query depth. It can be done by providing the `maxQueryDepth` argument to the `Executor`:
```scala
val executor = Executor(
  schema = MySchema,
  maxQueryDepth = Some(7))
```
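Query depth limiting counts nesting levels of the selection sets. A small sketch of the idea on a toy query tree (plain Scala, hypothetical types, not sangria's AST):

```scala
// Toy representation of a selection set: a field with nested sub-fields.
case class FieldNode(name: String, children: List[FieldNode] = Nil)

// Depth of a query: the longest chain of nested fields.
def depth(fields: List[FieldNode]): Int =
  if (fields.isEmpty) 0
  else 1 + fields.map(f => depth(f.children)).max

// { hero { friends { name } } } has depth 3.
val query = List(
  FieldNode("hero", List(
    FieldNode("friends", List(
      FieldNode("name"))))))

val queryDepth = depth(query)
val allowed = queryDepth <= 7 // a limit of 7 would accept this query
```

A query nesting deeper than the configured maximum would be rejected before execution.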
Bad things can happen during query execution. When errors happen, the `Future` would be resolved with an exception. Sangria allows you to distinguish between different types of errors that happen before the actual query execution:
- `QueryReducingError` - an error happened in a query reducer. If you are throwing exceptions within a custom `QueryReducer`, then they would be wrapped in a `QueryReducingError`.
- `QueryAnalysisError` - signifies issues in the query or variables. This means that the client has made an error. If you are exposing a GraphQL HTTP endpoint, then you may want to return a 400 status code in this case.
- `ErrorWithResolver` - unexpected errors before query execution.

All of the mentioned exception classes expose a `resolveError` method which you can use to render an error in a GraphQL-compliant format.
Let's see how you can handle these errors in a small example. In most cases it makes a lot of sense to return a 400 HTTP status code if the query validation failed:
```scala
executor.execute(query, ...)
  .map(Ok(_))
  .recover {
    case error: QueryAnalysisError => BadRequest(error.resolveError)
    case error: ErrorWithResolver => InternalServerError(error.resolveError)
  }
```
This code will produce status code 400 in case of any error caused by client (query validation, invalid operation name, error in query reducer, etc.).
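The mapping from error type to HTTP status can be sketched without a web framework. These are simplified stand-ins for the error hierarchy, not sangria's actual classes: the more specific client-error type is matched first.

```scala
// Simplified stand-ins for the error hierarchy described above.
trait ErrorWithResolver extends RuntimeException
class QueryAnalysisError extends ErrorWithResolver

// Client errors (bad query, bad variables) map to 400;
// other execution-setup errors map to 500.
def statusFor(error: Throwable): Int = error match {
  case _: QueryAnalysisError => 400 // match the more specific type first
  case _: ErrorWithResolver => 500
  case _ => 500
}

val clientError = statusFor(new QueryAnalysisError)
val serverError = statusFor(new ErrorWithResolver {})
```

The order of the cases matters: since `QueryAnalysisError` is itself an `ErrorWithResolver`, it must be matched before the more general case.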
When some unexpected error happens in the `resolve` function, sangria handles it according to the rules defined in the spec. If an exception implements the `UserFacingError` trait, then the error message would be visible in the response. Otherwise the error message is obfuscated and the response will contain `"Internal server error"`.
In order to define custom error handling mechanisms, you need to provide an `ExceptionHandler` to the `Executor`. Here is an example:
```scala
val exceptionHandler = ExceptionHandler {
  case (m, e: IllegalStateException) => HandledException(e.getMessage)
}

Executor(schema, exceptionHandler = exceptionHandler).execute(doc)
```
This example provides an error `message` (which would be shown instead of "Internal server error").
You can also add additional fields in the error object like this:
```scala
val exceptionHandler = ExceptionHandler {
  case (m, e: IllegalStateException) =>
    HandledException(e.getMessage,
      Map(
        "foo" -> m.arrayNode(Seq(
          m.scalarNode("bar", "String", Set.empty),
          m.scalarNode("1234", "Int", Set.empty))),
        "baz" -> m.scalarNode("Test", "String", Set.empty)))
}
```
You can also provide a list of handled errors to `HandledException`. This will result in several error elements in the execution result.
In addition to handling errors coming from the `resolve` function, `ExceptionHandler` also allows you to handle `Violation`s and `UserFacingError`s:

- `onException` - all unexpected exceptions coming from the `resolve` functions
- `onViolation` - handles violations (things like validation errors, argument/variable coercion, etc.)
- `onUserFacingError` - handles standard sangria errors (errors like an invalid operation name, max query depth, etc.)

Here is an example of handling a violation, changing the message and adding extra fields:
```scala
val exceptionHandler = ExceptionHandler(
  onViolation = {
    case (m, v: UndefinedFieldViolation) =>
      HandledException("Field is missing!!! D:",
        Map(
          "fieldName" -> m.scalarNode(v.fieldName, "String", Set.empty),
          "errorCode" -> m.scalarNode("OOPS", "String", Set.empty)))
  })
```
GraphQL query execution needs to know how to serialize the result of execution and how to deserialize arguments/variables. The specification itself does not define the data format; instead it uses abstract concepts like map and list. Sangria does not hard-code the serialization mechanism. Instead, it provides two traits for this:
- `ResultMarshaller` - knows how to serialize results of execution
- `InputUnmarshaller[Node]` - knows how to deserialize the arguments/variables

At the moment Sangria provides implementations for these libraries:
- `sangria.marshalling.queryAst._` - native Query Value AST serialization
- `sangria.marshalling.sprayJson._` - spray-json serialization: `"org.sangria-graphql" %% "sangria-spray-json" % "1.0.2"`
- `sangria.marshalling.playJson._` - play-json serialization: `"org.sangria-graphql" %% "sangria-play-json" % "2.0.1"`
- `sangria.marshalling.circe._` - circe serialization: `"org.sangria-graphql" %% "sangria-circe" % "1.3.0"`
- `sangria.marshalling.argonaut._` - argonaut serialization: `"org.sangria-graphql" %% "sangria-argonaut" % "1.0.1"`
- `sangria.marshalling.json4s.native._` - json4s-native serialization: `"org.sangria-graphql" %% "sangria-json4s-native" % "1.0.1"`
- `sangria.marshalling.json4s.jackson._` - json4s-jackson serialization: `"org.sangria-graphql" %% "sangria-json4s-jackson" % "1.0.1"`
- `sangria.marshalling.msgpack._` - MessagePack serialization: `"org.sangria-graphql" %% "sangria-msgpack" % "2.0.0"`
- `sangria.marshalling.ion._` - Amazon Ion serialization: `"org.sangria-graphql" %% "sangria-ion" % "2.0.0"`
In order to use one of these, just import it and the result of execution will be of the correct type:
```scala
import sangria.marshalling.sprayJson._

val result: Future[JsValue] =
  Executor.execute(TestSchema.StarWarsSchema, queryAst,
    variables = vars,
    userContext = new CharacterRepo,
    deferredResolver = new FriendsResolver)
```
Default values should now have an instance of the `ToInput` type-class, which is defined for all supported input types like Scala map-like data structures, different JSON ASTs, etc. It even supports things like `Writes` from play-json or `JsonFormat` from spray-json by default. This means that you can use your domain objects (like `User` or `Apple`) as a default value for input fields or arguments as long as you have `Writes` or `JsonFormat` defined for them.

The mechanism is very extensible: you just need to define an implicit `ToInput[T]` for a class you want to use as a default value.
`FromInput` provides high-level and low-level ways to deserialize arbitrary input objects, just like `ToInput`.

In order to use this feature, you need to provide a type parameter to the `InputObjectType`:
```scala
case class Article(title: String, text: Option[String])

val ArticleType: InputObjectType[Article] =
  InputObjectType[Article]("Article", List(
    InputField("title", StringType),
    InputField("text", OptionInputType(StringType))))

val arg: Argument[Article] = Argument[Article @@ InputObjectResult]("article", ArticleType)
```
This code will not compile unless you define an implicit instance of `FromInput` for the `Article` case class:
```scala
implicit val manual: FromInput[Article] = new FromInput[Article] {
  val marshaller = CoercedScalaResultMarshaller.default

  def fromResult(node: marshaller.Node) = {
    val ad = node.asInstanceOf[Map[String, Any]]

    Article(
      title = ad("title").asInstanceOf[String],
      text = ad.get("text").flatMap(_.asInstanceOf[Option[String]]))
  }
}
```
As you can see, you need to provide a `ResultMarshaller` for the desired format and then use a marshaled value to create a domain object based on it. Many instances of `FromInput` are already provided out-of-the-box. For instance, `FromInput[Map[String, Any]]` supports the map-like data-structure format. All supported JSON libraries also provide `FromInput[JsValue]` so that you can use a JSON AST instead of working with `Map[String, Any]`.
Moreover, libraries like sangria-play-json and sangria-spray-json already provide support for codecs like `Reads` and `JsonFormat`. This means that your domain objects are automatically supported as long as you have `Reads` or `JsonFormat` defined for them. For instance, this example should compile and work just fine without an explicit `FromInput` declaration:
```scala
import sangria.marshalling.playJson._
import play.api.libs.json._

case class Article(title: String, text: Option[String])

implicit val articleFormat = Json.format[Article]

val ArticleType: InputObjectType[Article] =
  InputObjectType[Article]("Article", List(
    InputField("title", StringType),
    InputField("text", OptionInputType(StringType))))

val arg: Argument[Article] = Argument[Article @@ InputObjectResult]("article", ArticleType)
```
A subset of the GraphQL grammar that handles input objects is also available as a standalone feature.
This feature allows you to parse and render any `ast.Value` independently from a GraphQL query. You can also use the `graphqlInput` macro for this:
```scala
import sangria.renderer.QueryRenderer
import sangria.macros._
import sangria.ast

val parsed: ast.Value =
  graphqlInput"""
    {
      id: "1234345"
      version: 2 # changed 2 times
      deliveries: [
        {id: 123, received: false, note: null, state: OPEN}
      ]
    }
  """

val rendered: String = QueryRenderer.render(parsed, QueryRenderer.PrettyInput)

println(rendered)
```
It will produce the following output:
```
{
  id: "1234345"
  version: 2
  deliveries: [{
    id: 123
    received: false
    note: null
    state: OPEN
  }]
}
```
A proper `InputUnmarshaller` and `ResultMarshaller` are available for it, so you can use `ast.Value` as a variable or as a result of GraphQL query execution.
In addition to parsing, you can also deserialize an `InputDocument` based on the `FromInput` type class. Here is an example:
```scala
case class Comment(author: String, text: Option[String])
case class Article(
  title: String,
  text: Option[String],
  tags: Option[Vector[String]],
  comments: Vector[Option[Comment]])

val ArticleType: InputObjectType[Article] = ???

val document =
  QueryParser.parseInputDocumentWithVariables(
    """
      {
        title: "foo",
        tags: null,
        comments: []
      }

      {
        title: "Article 2",
        text: "contents 2",
        tags: ["spring", "guitars"],
        comments: [{
          author: "Me"
          text: $comm
        }]
      }
    """)

val vars = scalaInput(Map("comm" -> "from variable"))

val articles: Vector[Article] = document.to(ArticleType, vars)
```
As a result of this deserialization, you will get the following list of `articles`:
```scala
Vector(
  Article("foo", Some("Hello World!"), None, Vector.empty),
  Article("Article 2", Some("contents 2"), Some(Vector("spring", "guitars")),
    Vector(Some(Comment("Me", Some("from variable"))))))
```
As a natural extension of the `ResultMarshaller` and `InputUnmarshaller` abstractions, sangria allows you to convert between different formats at will.
Here is, for instance, how you can convert circe `Json` into spray-json `JsValue`:
```scala
import sangria.marshalling.circe._
import sangria.marshalling.sprayJson._
import sangria.marshalling.MarshallingUtil._

val circeJson = Json.array(
  Json.empty,
  Json.int(123),
  Json.array(Json.obj("foo" -> Json.string("bar"))))

val sprayJson = circeJson.convertMarshaled[JsValue]
```
If your favorite library is not supported yet, then it's pretty easy to create an integration library yourself. All marshalling libraries depend on and implement `sangria-marshalling-api`. You can include it together with the testkit like this:
```scala
libraryDependencies ++= Seq(
  "org.sangria-graphql" %% "sangria-marshalling-api" % "1.0.4",
  "org.sangria-graphql" %% "sangria-marshalling-testkit" % "1.0.3" % "test")
```
After you have implemented the actual integration code, you can verify that it's semantically correct with the help of the testkit. The testkit provides a set of ScalaTest-based tests that verify an implementation of a marshalling library (so that you don't need to write the tests yourself). Here is an example from the spray-json integration library that uses the testkit tests:
```scala
class SprayJsonSupportSpec extends WordSpec with Matchers with MarshallingBehaviour with InputHandlingBehaviour with ParsingBehaviour {
  object JsonProtocol extends DefaultJsonProtocol {
    implicit val commentFormat = jsonFormat2(Comment.apply)
    implicit val articleFormat = jsonFormat4(Article.apply)
  }

  "SprayJson integration" should {
    import JsonProtocol._

    behave like `value (un)marshaller`(SprayJsonResultMarshaller)

    behave like `AST-based input unmarshaller`(sprayJsonFromInput[JsValue])
    behave like `AST-based input marshaller`(SprayJsonResultMarshaller)

    behave like `case class input unmarshaller`
    behave like `case class input marshaller`(SprayJsonResultMarshaller)

    behave like `input parser`(ParseTestSubjects(
      complex = """{"a": [null, 123, [{"foo": "bar"}]], "b": {"c": true, "d": null}}""",
      simpleString = "\"bar\"",
      simpleInt = "12345",
      simpleNull = "null",
      list = "[\"bar\", 1, null, true, [1, 2, 3]]",
      syntaxError = List("[123, FOO BAR")))
  }
}
```
Sangria supports generic middleware that can be used for different purposes, like performance measurement, metrics collection, security enforcement, etc., on a field and query level. Moreover, it makes it much easier for people to share standard middleware in libraries. Middleware allows you to define callbacks before/after a query and each field.
Here is a small example of its usage:
```scala
class FieldMetrics extends Middleware[Any] with MiddlewareAfterField[Any] with MiddlewareErrorField[Any] {
  type QueryVal = TrieMap[String, List[Long]]
  type FieldVal = Long

  def beforeQuery(context: MiddlewareQueryContext[Any, _, _]) = TrieMap()

  def afterQuery(queryVal: QueryVal, context: MiddlewareQueryContext[Any, _, _]) =
    reportQueryMetrics(queryVal)

  def beforeField(queryVal: QueryVal, mctx: MiddlewareQueryContext[Any, _, _], ctx: Context[Any, _]) =
    continue(System.currentTimeMillis())

  def afterField(queryVal: QueryVal, fieldVal: FieldVal, value: Any, mctx: MiddlewareQueryContext[Any, _, _], ctx: Context[Any, _]) = {
    val key = ctx.parentType.name + "." + ctx.field.name
    val list = queryVal.getOrElse(key, Nil)

    queryVal.update(key, list :+ (System.currentTimeMillis() - fieldVal))
    None
  }

  def fieldError(queryVal: QueryVal, fieldVal: FieldVal, error: Throwable, mctx: MiddlewareQueryContext[Any, _, _], ctx: Context[Any, _]) = {
    val key = ctx.parentType.name + "." + ctx.field.name
    val list = queryVal.getOrElse(key, Nil)
    val errors = queryVal.getOrElse("ERROR", Nil)

    queryVal.update(key, list :+ (System.currentTimeMillis() - fieldVal))
    queryVal.update("ERROR", errors :+ 1L)
  }
}

val result = Executor.execute(schema, query, middleware = new FieldMetrics :: Nil)
```
It will record the execution time of all fields in a query and then report it in some way.
Middleware supports two types of state that you can use within a middleware instance:

- `QueryVal` - an instance of this type is created at the beginning of the query execution and then propagated to all other middleware methods
- `FieldVal` - an instance of this type may be returned from `beforeField` and will be given as an argument to `afterField` and `fieldError` for the same field

These two types of state provide a way to avoid shared mutable state in case some intermediate value needs to be propagated between different methods of a middleware.
`afterField` also allows you to transform field values by returning `Some` with a transformed value. You can also throw an exception from `beforeField` or `afterField` in order to indicate a field error.
`beforeField` returns a `BeforeFieldResult`, which allows you to add a `MiddlewareAttachment`. This attachment can then be used in the resolve function via `Context.attachment`/`Context.attachments`.
In case several middleware objects are defined for the same execution, `beforeField` is called in the order in which the middleware is defined. `afterField`, on the other hand, is called in reverse order. To demonstrate this, let's look at this middleware as an example:
```scala
case class Suffixer(suffix: String) extends Middleware[Any] with MiddlewareAfterField[Any] {
  type QueryVal = Unit
  type FieldVal = Unit

  def beforeQuery(context: MiddlewareQueryContext[Any, _, _]) = ()
  def afterQuery(queryVal: QueryVal, context: MiddlewareQueryContext[Any, _, _]) = ()
  def beforeField(cache: QueryVal, mctx: MiddlewareQueryContext[Any, _, _], ctx: Context[Any, _]) = continue

  def afterField(cache: QueryVal, fromCache: FieldVal, value: Any, mctx: MiddlewareQueryContext[Any, _, _], ctx: Context[Any, _]) =
    value match {
      case s: String => Some(s + suffix)
      case _ => None
    }
}
```
It just adds a suffix to a string. When some field in a schema returns the value `"v"` and we execute a query like this:
```scala
Executor.execute(schema, query, middleware = Suffixer(" s1") :: Suffixer(" s2") :: Nil)
```
then the result would be `"v s2 s1"`. Here is a diagram that shows how the different middleware methods are called:
In order to enable generic classification of fields, every field contains a generic list of `FieldTag`s, which provide user-defined meta-information about the field (to highlight a few examples: `Permission("ViewOrders")`, `Authorized`, `Measured`, `Cached`, etc.). You can find another example of `FieldTag` and `Middleware` usage in the Authentication and Authorisation section.
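For illustration, a tag is just a case object or case class extending `FieldTag`, attached via a field's `tags` list. This is a minimal sketch; the `Cached` tag and the `Product` type are made-up examples, and the tags carry no behavior by themselves until a middleware or query reducer interprets them:

```scala
import sangria.execution.FieldTag
import sangria.schema._

// hypothetical tags for this sketch
case object Cached extends FieldTag
case class Permission(name: String) extends FieldTag

val ProductType = ObjectType("Product", fields[Unit, Unit](
  Field("price", IntType,
    // pure meta-information: middleware or query reducers decide what these mean
    tags = Permission("ViewPrices") :: Cached :: Nil,
    resolve = _ => 100)))
```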
The GraphQL spec allows free-form extensions to be added to the GraphQL response. These are quite useful for things like debug and profiling information, for example. Sangria provides a special middleware trait, `MiddlewareExtension`, which gives a `Middleware` an easy way to add extensions to the GraphQL response.
Here is an example of a very simple middleware that adds a formatted query in the response:
```scala
object Formatted extends Middleware[Any] with MiddlewareExtension[Any] {
  type QueryVal = Unit

  def beforeQuery(context: MiddlewareQueryContext[Any, _, _]) = ()
  def afterQuery(queryVal: QueryVal, context: MiddlewareQueryContext[Any, _, _]) = ()

  def afterQueryExtensions(queryVal: QueryVal, context: MiddlewareQueryContext[Any, _, _]): Vector[Extension[_]] = {
    import sangria.marshalling.queryAst._

    Vector(Extension(
      ObjectValue(Vector(
        ObjectField("formattedQuery", StringValue(context.queryAst.renderPretty)))): Value))
  }
}
```
Now you can use it by just adding it in the list of middleware during the execution:
```scala
Executor.execute(schema, query, middleware = Formatted :: Nil)
```
Here is an example of execution result JSON:
```json
{
  "data": {
    "human": {
      "name": "Luke Skywalker"
    }
  },
  "extensions": {
    "formattedQuery": "{\n  human(id: \"1000\") {\n    name\n  }\n}"
  }
}
```
With middleware, Sangria provides a very convenient way to instrument GraphQL query execution and introduce profiling logic. Out-of-the-box, Sangria provides a simple mechanism to log slow queries and show profiling information. To use it, you need to add the `sangria-slowlog` dependency:
```scala
libraryDependencies += "org.sangria-graphql" %% "sangria-slowlog" % "3.0.0"
```
The library provides a middleware that logs instrumented query information if execution exceeds a specific threshold. An example:
```scala
import sangria.slowlog.SlowLog
import scala.concurrent.duration._

Executor.execute(schema, query, middleware = SlowLog(logger, threshold = 10 seconds) :: Nil)
```
If a query takes more than 10 seconds to execute, then you will see similar info in the logs:
```
# [Execution Metrics] duration: 12362ms, validation: 0ms, reducers: 0ms
#
# $id = "1000"
query Test($id: String!) {
  # [Query] count: 1, time: 2ms
  #
  # $id = "1000"
  human(id: $id) {
    # [Human] count: 1, time: 0ms
    name

    # [Human] count: 1, time: 11916ms
    appearsIn

    # [Human] count: 1, time: 358ms
    friends {
      # [Droid] count: 2, min: 0ms, max: 0ms, mean: 0ms, p75: 0ms, p95: 0ms, p99: 0ms
      # [Human] count: 2, min: 0ms, max: 0ms, mean: 0ms, p75: 0ms, p95: 0ms, p99: 0ms
      name
    }
  }
}
```
`sangria-slowlog` has full support for GraphQL fragments and polymorphic types, so you will always see metrics for concrete types.
In addition to logging, `sangria-slowlog` also supports GraphQL extensions. The extensions add profiling info to the response under the `extensions` top-level field. In its most basic form, you can use it like this (this approach also disables the logging):
```scala
Executor.execute(schema, query, middleware = SlowLog.extension :: Nil)
```
After the middleware is added, you will see the following JSON in the response:
```json
{
  "data": {
    "human": {
      "name": "Luke Skywalker",
      "appearsIn": ["NEWHOPE", "EMPIRE", "JEDI"],
      "friends": [
        {"name": "Han Solo"},
        {"name": "Leia Organa"},
        {"name": "C-3PO"},
        {"name": "R2-D2"}
      ]
    }
  },
  "extensions": {
    "metrics": {
      "executionMs": 362,
      "validationMs": 0,
      "reducersMs": 0,
      "query": "# [Execution Metrics] duration: 362ms, validation: 0ms, reducers: 0ms\n#\n# $id = \"1000\"\nquery Test($id: String!) {\n  # [Query] count: 1, time: 2ms\n  #\n  # $id = \"1000\"\n  human(id: $id) {\n    # [Human] count: 1, time: 0ms\n    name\n\n    # [Human] count: 1, time: 216ms\n    appearsIn\n\n    # [Human] count: 1, time: 358ms\n    friends {\n      # [Droid] count: 2, min: 0ms, max: 0ms, mean: 0ms, p75: 0ms, p95: 0ms, p99: 0ms\n      # [Human] count: 2, min: 0ms, max: 0ms, mean: 0ms, p75: 0ms, p95: 0ms, p99: 0ms\n      name\n    }\n  }\n}",
      "types": {
        "Human": {
          "friends": {"count": 1, "minMs": 358, "maxMs": 358, "meanMs": 358, "p75Ms": 358, "p95Ms": 358, "p99Ms": 358},
          "appearsIn": {"count": 1, "minMs": 216, "maxMs": 216, "meanMs": 216, "p75Ms": 216, "p95Ms": 216, "p99Ms": 216},
          "name": {"count": 3, "minMs": 0, "maxMs": 0, "meanMs": 0, "p75Ms": 0, "p95Ms": 0, "p99Ms": 0}
        },
        "Query": {
          "human": {"count": 1, "minMs": 2, "maxMs": 2, "meanMs": 2, "p75Ms": 2, "p95Ms": 2, "p99Ms": 2}
        },
        "Droid": {
          "name": {"count": 2, "minMs": 0, "maxMs": 0, "meanMs": 0, "p75Ms": 0, "p95Ms": 0, "p99Ms": 0}
        }
      }
    }
  }
}
```
All `SlowLog` methods accept an `addExtentions` argument which allows you to include these extensions along the way.
With some small tweaking, you can also include a "Profile" button in GraphiQL. On the server you just need to conditionally include the `SlowLog.extension` middleware to make it work. Here is an example of how this integration might look:
Sometimes it can be helpful to perform some analysis on a query before executing it. An example is complexity analysis: it aggregates the complexity of all fields in the query and then rejects the query without executing it if the complexity is too high. Another example is gathering all `Permission` field tags and then fetching extra user auth data from an external service if the query contains protected fields. This needs to be done before the query starts to execute.
Sangria provides a mechanism for this kind of query analysis with `QueryReducer`. A query reducer implementation will go through all of the fields in the query and aggregate them into a single value. The `Executor` will then call `reduceCtx` with this aggregated value, which gives you an opportunity to perform some logic and update the `Ctx` before the query is executed.
Out-of-the-box, sangria comes with several `QueryReducer`s for common use-cases:

- `QueryReducer.measureComplexity` - measures the complexity of the query
- `QueryReducer.rejectComplexQueries` - rejects queries with complexity above a provided threshold
- `QueryReducer.collectTags` - collects `FieldTag`s based on a partial function
- `QueryReducer.measureDepth` - measures max query depth
- `QueryReducer.rejectMaxDepth` - rejects queries that are deeper than a provided threshold
- `QueryReducer.hasIntrospection` - verifies whether the query contains introspection fields
- `QueryReducer.rejectIntrospection` - rejects queries that contain introspection fields. This may be useful for production environments where introspection can potentially be abused.

Here is a small example of `QueryReducer.collectTags`:
```scala
val fetchUserProfile = QueryReducer.collectTags[MyContext, String] {
  case Permission(name) => name
} { (permissionNames, ctx) =>
  if (permissionNames.nonEmpty) {
    val userProfile: Future[UserProfile] = externalService.getUserProfile()

    userProfile.map(profile => ctx.copy(profile = Some(profile)))
  } else ctx
}

Executor.execute(schema, query,
  userContext = new MyContext,
  queryReducers = fetchUserProfile :: Nil)
```
This allows you to avoid fetching a user profile if it's not needed, based on the query fields. You can find more information about the `QueryReducer` that analyses query complexity in the Query Complexity Analysis section.
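As a quick sketch of the rejection-style reducers, a depth limit can be attached in the same way. This assumes the `schema`, `query`, and `MyContext` from the surrounding examples, and the threshold of 15 is illustrative:

```scala
import sangria.execution._

// rejects any query nested deeper than 15 levels before execution starts
val depthLimit = QueryReducer.rejectMaxDepth[MyContext](15)

Executor.execute(schema, query,
  userContext = new MyContext,
  queryReducers = depthLimit :: Nil)
```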
Sangria supports all standard GraphQL scalars like `String`, `Int`, `ID`, etc. In addition, sangria introduces the following built-in scalar types:

- `Long` - a 64 bit integer value which is represented as a `Long` in Scala code
- `BigInt` - similar to the `Int` scalar, but allows you to transfer big integer values and represents them in code with Scala's `BigInt` class
- `BigDecimal` - similar to the `Float` scalar, but allows you to transfer big decimal values and represents them in code with Scala's `BigDecimal` class

You can also create your own custom scalar types. The input and output of a scalar type should always be a value that the GraphQL grammar supports, like string, number, boolean, etc. Here is an example of a `DateTime` (from joda-time) scalar type implementation:
```scala
case object DateCoercionViolation extends ValueCoercionViolation("Date value expected")

def parseDate(s: String) = Try(new DateTime(s, DateTimeZone.UTC)) match {
  case Success(date) => Right(date)
  case Failure(_) => Left(DateCoercionViolation)
}

val DateTimeType = ScalarType[DateTime]("DateTime",
  coerceOutput = (d, caps) =>
    if (caps.contains(DateSupport)) d.toDate
    else ISODateTimeFormat.dateTime().print(d),
  coerceUserInput = {
    case s: String => parseDate(s)
    case _ => Left(DateCoercionViolation)
  },
  coerceInput = {
    case ast.StringValue(s, _, _, _, _) => parseDate(s)
    case _ => Left(DateCoercionViolation)
  })
```
Some marshalling formats natively support `java.util.Date`, so we check for marshaller capabilities here and either return a `Date` or a `String` in ISO format.
Sometimes you want to use a standard scalar type, but add validation on top of it, possibly represented by a different Scala type. Examples include a `UserId` value class that represents a `String`-based user ID, or `Int Refined Positive` from the refined Scala library.
This is exactly what scalar aliases allow you to do. Here is how you can define a scalar alias for the scenarios mentioned above:
```scala
implicit val UserIdType = ScalarAlias[UserId, String](
  StringType, _.id, id => Right(UserId(id)))

implicit val PositiveIntType = ScalarAlias[Int Refined Positive, Int](
  IntType, _.value, i => refineV[Positive](i).left.map(RefineViolation))
```
You can use `UserIdType` and `PositiveIntType` in all places where you can use scalar types. In introspection results they will appear as just `String` and `Int`, but behind the scenes values will be validated and transformed into the corresponding Scala types.
A GraphQL schema allows you to declare fields and enum values as deprecated. When you execute a query, you can provide your own implementation of the `DeprecationTracker` trait to the `Executor` in order to track usages of deprecated fields and enum values (you can, for instance, log all usages or send metrics to graphite):
```scala
trait DeprecationTracker {
  def deprecatedFieldUsed[Ctx](ctx: Context[Ctx, _]): Unit
  def deprecatedEnumValueUsed[T, Ctx](enum: EnumType[T], value: T, userContext: Ctx): Unit
}
```
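A minimal sketch of an implementation that simply logs every usage. The `LoggingDeprecationTracker` name and the `log` function are our own; check how your sangria version expects the tracker to be passed to the `Executor`:

```scala
import sangria.execution._
import sangria.schema.{Context, EnumType}

// logs every usage of a deprecated field or enum value
class LoggingDeprecationTracker(log: String => Unit) extends DeprecationTracker {
  def deprecatedFieldUsed[Ctx](ctx: Context[Ctx, _]): Unit =
    log(s"Deprecated field used: ${ctx.parentType.name}.${ctx.field.name}")

  def deprecatedEnumValueUsed[T, Ctx](enum: EnumType[T], value: T, userContext: Ctx): Unit =
    log(s"Deprecated enum value '$value' used for enum '${enum.name}'")
}

// e.g. Executor.execute(schema, query, deprecationTracker = new LoggingDeprecationTracker(println))
```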
Even though sangria does not provide security primitives explicitly, it's pretty straightforward to implement them in different ways. Security is a pretty common requirement for modern web applications, so this section demonstrates several possible approaches to handling authentication and authorisation.
First let’s define some basic infrastructure for this example:
```scala
case class User(userName: String, permissions: List[String])

trait UserRepo {
  /** Gives back a token or sessionId or anything else that identifies the user session */
  def authenticate(userName: String, password: String): Option[String]

  /** Gives `User` object with his/her permissions */
  def authorise(token: String): Option[User]
}

trait ColorRepo {
  def colors: List[String]
  def addColor(color: String): Unit
}
```
In order to indicate an auth error, we need to define some exceptions:
```scala
case class AuthenticationException(message: String) extends Exception(message)
case class AuthorisationException(message: String) extends Exception(message)
```
We also want the user to see proper error messages in a response, so let’s define an error handler for this:
```scala
val errorHandler = ExceptionHandler {
  case (m, AuthenticationException(message)) => HandledException(message)
  case (m, AuthorisationException(message)) => HandledException(message)
}
```
Now that we defined a base for a secure application, let’s create a context class, which will provide GraphQL schema with all necessary helper functions:
```scala
case class SecureContext(token: Option[String], userRepo: UserRepo, colorRepo: ColorRepo) {
  def login(userName: String, password: String) =
    userRepo.authenticate(userName, password) getOrElse (
      throw new AuthenticationException("UserName or password is incorrect"))

  def authorised[T](permissions: String*)(fn: User => T) =
    token.flatMap(userRepo.authorise).fold(throw AuthorisationException("Invalid token")) { user =>
      if (permissions.forall(user.permissions.contains)) fn(user)
      else throw AuthorisationException("You do not have permission to do this operation")
    }

  def ensurePermissions(permissions: List[String]): Unit =
    token.flatMap(userRepo.authorise).fold(throw AuthorisationException("Invalid token")) { user =>
      if (!permissions.forall(user.permissions.contains))
        throw AuthorisationException("You do not have permission to do this operation")
    }

  def user =
    token.flatMap(userRepo.authorise).fold(throw AuthorisationException("Invalid token"))(identity)
}
```
Now we should be able to execute queries:
```scala
Executor.execute(schema, queryAst,
  userContext = new SecureContext(token, userRepo, colorRepo),
  exceptionHandler = errorHandler)
```
As a last step, we need to define a schema. You can do it in two different ways:

- implement the security checks in the `resolve` function itself
- use `Middleware` and `FieldTag`s to ensure that the user has permissions to access fields

```scala
val UserNameArg = Argument("userName", StringType)
val PasswordArg = Argument("password", StringType)
val ColorArg = Argument("color", StringType)

val UserType = ObjectType("User", fields[SecureContext, User](
  Field("userName", StringType, resolve = _.value.userName),
  Field("permissions", OptionType(ListType(StringType)),
    resolve = ctx => ctx.ctx.authorised("VIEW_PERMISSIONS") { _ =>
      ctx.value.permissions
    })))

val QueryType = ObjectType("Query", fields[SecureContext, Unit](
  Field("me", OptionType(UserType), resolve = ctx => ctx.ctx.authorised()(user => user)),
  Field("colors", OptionType(ListType(StringType)),
    resolve = ctx => ctx.ctx.authorised("VIEW_COLORS") { _ =>
      ctx.ctx.colorRepo.colors
    })))

val MutationType = ObjectType("Mutation", fields[SecureContext, Unit](
  Field("login", OptionType(StringType),
    arguments = UserNameArg :: PasswordArg :: Nil,
    resolve = ctx => UpdateCtx(ctx.ctx.login(ctx.arg(UserNameArg), ctx.arg(PasswordArg))) { token =>
      ctx.ctx.copy(token = Some(token))
    }),
  Field("addColor", OptionType(ListType(StringType)),
    arguments = ColorArg :: Nil,
    resolve = ctx => ctx.ctx.authorised("EDIT_COLORS") { _ =>
      ctx.ctx.colorRepo.addColor(ctx.arg(ColorArg))
      ctx.ctx.colorRepo.colors
    })))

def schema = Schema(QueryType, Some(MutationType))
```
As you can see in this example, we are using the context object to authorise the user with the `authorised` function. An interesting thing to notice here is that the `login` field uses the `UpdateCtx` action in order to make the login information available to sibling mutation fields. This makes queries like this possible:
```graphql
mutation LoginAndMutate {
  login(userName: "admin", password: "secret")

  withMagenta: addColor(color: "magenta")
  withOrange: addColor(color: "orange")
}
```
Here we log in and add colors in the same GraphQL query. It will produce a result like this one:
```json
{
  "data": {
    "login": "a4d7fc91-e490-446e-9d4c-90b5bb22e51d",
    "withMagenta": ["red", "green", "blue", "magenta"],
    "withOrange": ["red", "green", "blue", "magenta", "orange"]
  }
}
```
If the user does not have sufficient permissions, they will see a result like this:
```json
{
  "data": {
    "me": {
      "userName": "john",
      "permissions": null
    },
    "colors": ["red", "green", "blue"]
  },
  "errors": [{
    "message": "You do not have permission to do this operation",
    "field": "me.permissions",
    "locations": [{"line": 3, "column": 25}]
  }]
}
```
An alternative approach is to use middleware. This can provide a more declarative way to define field permissions.
First let’s defineFieldTag
s:
```scala
case object Authorised extends FieldTag
case class Permission(name: String) extends FieldTag
```
This allows us to define a schema like this:
```scala
val UserType = ObjectType("User", fields[SecureContext, User](
  Field("userName", StringType, resolve = _.value.userName),
  Field("permissions", OptionType(ListType(StringType)),
    tags = Permission("VIEW_PERMISSIONS") :: Nil,
    resolve = _.value.permissions)))

val QueryType = ObjectType("Query", fields[SecureContext, Unit](
  Field("me", OptionType(UserType), tags = Authorised :: Nil, resolve = _.ctx.user),
  Field("colors", OptionType(ListType(StringType)),
    tags = Permission("VIEW_COLORS") :: Nil,
    resolve = _.ctx.colorRepo.colors)))

val MutationType = ObjectType("Mutation", fields[SecureContext, Unit](
  Field("login", OptionType(StringType),
    arguments = UserNameArg :: PasswordArg :: Nil,
    resolve = ctx => UpdateCtx(ctx.ctx.login(ctx.arg(UserNameArg), ctx.arg(PasswordArg))) { token =>
      ctx.ctx.copy(token = Some(token))
    }),
  Field("addColor", OptionType(ListType(StringType)),
    arguments = ColorArg :: Nil,
    tags = Permission("EDIT_COLORS") :: Nil,
    resolve = ctx => {
      ctx.ctx.colorRepo.addColor(ctx.arg(ColorArg))
      ctx.ctx.colorRepo.colors
    })))

def schema = Schema(QueryType, Some(MutationType))
```
As you can see, the security constraints are now defined as the fields' `tags`. In order to enforce these security constraints, we need to implement a `Middleware` like this:
```scala
object SecurityEnforcer extends Middleware[SecureContext] with MiddlewareBeforeField[SecureContext] {
  type QueryVal = Unit
  type FieldVal = Unit

  def beforeQuery(context: MiddlewareQueryContext[SecureContext, _, _]) = ()
  def afterQuery(queryVal: QueryVal, context: MiddlewareQueryContext[SecureContext, _, _]) = ()

  def beforeField(queryVal: QueryVal, mctx: MiddlewareQueryContext[SecureContext, _, _], ctx: Context[SecureContext, _]) = {
    val permissions = ctx.field.tags.collect { case Permission(p) => p }
    val requireAuth = ctx.field.tags contains Authorised
    val securityCtx = ctx.ctx

    if (requireAuth)
      securityCtx.user

    if (permissions.nonEmpty)
      securityCtx.ensurePermissions(permissions)

    continue
  }
}
```
If you want to use GraphQL federation, you can use sangria to provide a service subgraph. For that, use the sangria-federated library, which supports Federation v1 and v2.
There are quite a few helpers available which you may find useful in different situations.
Sometimes you would like to work with the results of an introspection query. This can be necessary in some client-side tools, for instance. Instead of working directly with JSON (or other raw representations), you can parse it into a set of case classes that allow you to easily work with the whole schema introspection.
You can find the parser function in `sangria.introspection.IntrospectionParser`.
Sometimes it can be very useful to know the type of a query operation. For example, you need it if you want to return a different response for subscription queries. `ast.Document` exposes `operationType` and `operation` for this.
Sangria has been used in production by several companies for years and is capable of handling a lot of traffic.
Sangria is indeed a fast library. If you want to get the most out of it, here are some guidelines and tips.
Make sure that you only compute the schema once. The easiest way is to use a singleton object to define the schema:
```scala
object GraphQLSchema {
  val QueryType = ???
  val MutationType = ???

  val schema = Schema(QueryType, Some(MutationType))
}
```
If you compute the schema on each request, you will lose a lot of performance.
If you are using a web server, make sure that you load the schema before the first request. This way, all classes will be loaded before the web server starts, and the first request will not be slower than the others.
```scala
object Main extends App {
  val schema = GraphQLSchema.schema

  // start the server
}
```
Use the `parasitic` ExecutionContext (expert)

Sangria uses `Future` to handle asynchronous operations. When you execute a query, you need to pass an `ExecutionContext`.
One way to improve performance is to use the `scala.concurrent.ExecutionContext.parasitic` ExecutionContext. But be careful: this ExecutionContext will propagate everywhere, including into the `DeferredResolver`, where it might not be the best option, because you might end up using the wrong thread pool for IO operations.
To avoid this, you can pack an ExecutionContext into your context object and use it in the `DeferredResolver`:
```scala
object DeferredReferenceResolver extends DeferredResolver[Context] {
  def resolve(deferred: Vector[Deferred[Any]], ctx: Context, queryState: Any)(implicit ec: ExecutionContext): Vector[Future[Any]] =
    resolveInternal(deferred, ctx)

  private def resolveInternal(deferred: Vector[Deferred[Any]], ctx: Context): Vector[Future[Any]] = {
    // for IO, uses the non-parasitic ExecutionContext carried in the context
    implicit val ec: ExecutionContext = ctx.executionContext
    ???
  }
}

object GraphQLSchema {
  val schema = Schema(QueryType, Some(MutationType))

  def execute(ctx: Context, query: Document): Future[Json] = {
    // just for internal execution, uses the parasitic ExecutionContext
    implicit val ec = scala.concurrent.ExecutionContext.parasitic

    Executor.execute(schema, query,
      userContext = ctx,
      deferredResolver = DeferredReferenceResolver)
  }
}
```