Motivation
Implement custom JS resolvers in GraphQL that will help the user execute arbitrary business logic in addition to using the auto-generated resolvers.
User Impact
Users can directly use these JS resolvers instead of writing another NodeJS server to wrap around Dgraph. This will allow them to process and transform data at the server end. This can be used in a range of cases, like:
- Applying auth rules on fields. Based on the query and JWT values, the user could decide to hide some fields when returning the result.
- Applying some pre or post-processing logic before calling the auto-generated resolver.
- An example of pre-processing logic would be to automatically add created_at or updated_at fields for a type.
- A post-processing step might be used to calculate the count or average and return the final result to the user.
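The pre-processing case above could look like the following sketch (the hook shape and the field names created_at/updated_at are assumptions, not an existing API):

```javascript
// Hypothetical pre-processing hook: stamp timestamp fields on a mutation's
// input object before handing it to the auto-generated resolver.
function withTimestamps(input, isNewObject) {
  const now = new Date().toISOString();
  if (isNewObject) {
    input.created_at = now; // only set on add, never on update
  }
  input.updated_at = now;
  return input;
}
```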
Implementation
Instead of executing a query via one of the auto-generated resolvers, we could also allow the resolver to be a JS function. This function can make HTTP calls to arbitrary endpoints or run a DQL query/mutation, transform the result, and give us back the response to work with.
type User {
id: ID!
firstName: String!
lastName: String!
updatedAt: DateTime!
fullName: String
followersCount: Int
followers: [User]
}
type Query {
getCustomUser(firstName: String!): User @custom(js: "fetchUser")
}
type Mutation {
updateUserLastName(id: ID!, lastName: String!): User @custom(js: "updateLastName")
}
JS resolver
Custom mutation
function updateLastName(parent, args, context, info) {
// similar to context.dgraph.graphql, we would also have context.dgraph.dql
// which would allow you to run DQL queries and mutations on the underlying
// Dgraph instance using dgraph-js
var now = new Date().toISOString()
var data = context.dgraph.graphql({
query: `
mutation($id: ID!, $name: String!, $now: DateTime!) {
updateUser(filter: {
ids: [$id],
},
set: {
lastName: $name,
updatedAt: $now
}
) {
firstName
lastName
updatedAt
}
}
`,
variables: {
id: args.id,
name: args.lastName,
now: now,
}
})
return data
}
Custom query
function fetchUser(parent, args, context, info) {
// similar to context.dgraph.graphql, we would also have context.dgraph.dql
// which would allow you to run DQL queries and mutations on the underlying
// Dgraph instance using dgraph-js
var data = context.dgraph.graphql({
query: `
query($firstName: String!) {
getUser(firstName: $firstName) {
firstName
lastName
followers {
id
}
}
}
`,
variables: {
firstName: args.firstName
}
})
data.fullName = data.firstName + " " + data.lastName
data.followersCount = data.followers.length
return data
}
Arguments (similar to Apollo resolvers, so that users have to change minimal code)
- parent : Empty for custom queries and mutations. Would later hold the parent object when we support resolving custom fields.
- args : GraphQL arguments for the request.
- context : Contains auth info of the user (custom claims) and also provides access to calling internal GraphQL resolvers or DQL query/mutation.
- info : Query AST and execution information
So a custom query can call a predefined resolver like getUser or queryUser and then transform the result before returning it to the user. Similar things are possible for mutations. This would allow us to define mutations like updateUserName, updateUserLocation etc., where validation can be done first to make sure that we only allow updating certain properties, and then we fall back to calling an internal resolver.
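The validation step could be a simple whitelist check before falling back to the internal resolver. A sketch (the field list and function name are hypothetical):

```javascript
// Hypothetical whitelist of properties that custom update mutations may set.
const UPDATABLE_FIELDS = ["lastName", "location"];

// Throw before calling the internal resolver if the update touches
// anything outside the whitelist.
function validateUpdateArgs(setObj) {
  for (const field of Object.keys(setObj)) {
    if (!UPDATABLE_FIELDS.includes(field)) {
      throw new Error("cannot update field: " + field);
    }
  }
  return setObj;
}
```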
The JS resolvers would be stored as data inside Dgraph through an HTTP API.
Execution
Since hooks will be written in JS, we need a way to execute them.
Solution 1: Execute JS in a separate NodeJS server (preferred)
Run a NodeJS server in sandbox mode and send the JS code to it via RPC to execute it there. NodeJS already has a sandbox mode (the built-in vm module). This gives us support for running ES6 and also the ability to import and use external libraries within the JS code. The only limitation is that we have to make network calls, but those should be fast since the NodeJS server will typically run on the same machine.
Example code of how this might work: GitHub - arijitAD/Golang_Node_Executor (executes NodeJS via gRPC from a Golang client).
Solution 2: Use a Go library to execute JS
Example code: a sample program that takes the input to the JS function, executes it, and prints the output.
Note: It is also possible to pass a Golang struct as an input param and retrieve it back.
package main

import (
	"fmt"

	"github.com/robertkrimen/otto"
)

func main() {
	vm := otto.New()
	if _, err := vm.Run(`function JSHook(name) {
		if (name === "Arijit") {
			name = "Friends"
		}
		name = 'hello, ' + name + '!'
		return name;
	}`); err != nil {
		panic(err)
	}

	output, err := vm.Call("JSHook", nil, "Arijit")
	if err != nil {
		panic(err)
	}
	fmt.Println(output) // hello, Friends!

	output, err = vm.Call("JSHook", nil, "Friends")
	if err != nil {
		panic(err)
	}
	fmt.Println(output) // hello, Friends!
}
Otto limitations
- Doesn’t have a good solution for importing external libraries.
- Cannot issue fetch requests, which is a non-starter.
- Doesn’t support ES6; only supports ES5.
- It is an old library and is not actively maintained.
Validating and Storing Resolvers
Once hooks are validated, we can store them in memory and as a key in Badger, similar to the schema.
Otto allows us to validate JS. In the case of the NodeJS server, we can expose a validation endpoint.
filename := "" // A filename is optional
src := `
(function(){
console.log("Hello, World.");
return;
})();
`
// Parse some JavaScript (using github.com/robertkrimen/otto/parser),
// yielding a *ast.Program and/or an ErrorList
program, err := parser.ParseFile(nil, filename, src, 0)
Unknowns/limitations
- Resolving a field through a JS function. We’ll only support custom queries and mutations for now. We can, of course, later support resolving fields as well in batch mode. Single mode won’t make much sense.
- The set of libraries that the user can use within their JS code would be limited, and their versions would be fixed and controlled by us.
- How do we store the JS functions inside Dgraph as metadata which isn’t affected by DROP_ALL and DROP_DATA operations?
- Support for other languages like Rust, Go etc. by exposing a gRPC interface.