Implement custom JS resolvers in GraphQL

Wouldn’t having these pre/post hooks, working similarly to the custom resolvers, be a simpler and more generic solution? It would require the timeout to be configurable per hook, but it should satisfy most use cases and would not be tied to any particular programming language or platform.

The user can already use a Lambda function and execute the pre, GraphQL resolver, and post logic by using a custom query/mutation. Are you expecting anything more there that can’t be done right now?

That would be easier to support, yes, but I am afraid it won’t be fast enough if the pre and post hooks have to be executed as HTTP calls to remote servers. Having them execute in memory, or on a Node server running locally, would be more performant.


Wait, what?
Can you provide an example of that capability?

We are still trialling this, but it seems we could provide an architecture where the user supplies us JS code. This code could be provided as part of a pre or post hook, or could be used to resolve a custom query/mutation. We would pass the data to a Node server running locally over gRPC and execute the code with the data. The final transformed response returned by the post hook can then be returned to the user as part of their GraphQL API.

This is also similar to what a user would do if they wanted to write some business logic over what the GraphQL server provides.


Right now with the custom directive, you cannot leave the middle part (the GraphQL resolver) to Dgraph and only do a pre/post hook. The downside is that the function has to fetch all the data that may ever be requested, return it, and let Dgraph then limit what it wants based on the query.

I cannot create just a pre hook to work on the request and then turn it back over to Dgraph to process normally. Nor can I let Dgraph process as normal and then catch the result with a post hook to transform the final data.

With a custom directive I have to do the entire thing. That is sort of the point of getting these hooks.
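The split being asked for can be sketched as a pipeline, where either hook is optional and the middle step stays with Dgraph. This is an illustration of the idea, not Dgraph's API; the function names and shapes are assumptions:

```javascript
// Illustrative sketch: pre/post hooks wrapping the built-in resolver,
// so a hook author only supplies the pieces they need.
async function resolveWithHooks(request, { pre, post }, builtinResolver) {
  // 1. Pre hook may rewrite the incoming request; if absent, pass through.
  const req = pre ? await pre(request) : request;
  // 2. Dgraph's own resolver does the actual query/mutation work.
  const resp = await builtinResolver(req);
  // 3. Post hook may transform the final response; if absent, pass through.
  return post ? await post(resp) : resp;
}

// Example: only a post hook, leaving the middle part to the built-in resolver
const builtin = async (req) => ({ data: { title: req.title } });

resolveWithHooks(
  { title: "hello" },
  { post: async (resp) => ({ data: { title: resp.data.title.toUpperCase() } }) },
  builtin
).then((r) => console.log(r.data.title)); // logs "HELLO"
```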

You already answered, though, why doing it inline with Dgraph would be more performant than calling an external script.

Then having the two options might be a good idea; it would certainly serve well in a serverless environment. It would be up to the user to decide whether to take the performance penalty for the convenience.

We are trying to execute JS code in a Node server. We will send the code to the Node server via a Golang client over gRPC. Here is some sample code we wrote to test it out.


Would implementing this gRPC interface in a generic way be possible? It would be interesting to see where the community would take an interface like that. I would particularly love to make it work with Deno (although they don’t have support for gRPC yet).

We are planning to serialize the data as bytes, transfer it, and reconstruct it at the client side. We are currently more inclined towards NodeJS than Deno due to its stability and library support. But since the service to execute the JS code will be separate, it will be easy to implement the same in Deno or any other language. Users just have to implement the gRPC server stub.
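The byte-level round trip described above can be sketched without the gRPC stub itself. JSON is used here purely for illustration; the actual wire encoding is not specified in the thread:

```javascript
// Sketch of the serialize/reconstruct step: the client serializes the
// payload to bytes for the wire, and the JS side rebuilds the object
// before handing it to the user's hook code.
function encodePayload(obj) {
  return Buffer.from(JSON.stringify(obj), "utf8"); // bytes on the wire
}

function decodePayload(bytes) {
  return JSON.parse(bytes.toString("utf8")); // reconstructed at the client side
}

const wire = encodePayload({ query: "getPost", args: { id: "0x1" } });
const payload = decodePayload(wire);
console.log(payload.args.id);
```

Because the contract is just "bytes in, bytes out", any runtime that can implement the gRPC server stub (Deno, or another language entirely) could do the same reconstruction.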


For this, we can make queries to Dgraph in the JS hooks and validate the request. Once that is done, we can move ahead with the update request. I don’t see a use case where the user would want to split the request.

In this case, we can use the info argument, which stores information about the query AST.


type Post {
  id: ID
  title: String
  text: String
  datePublished: DateTime
  author: Author
}

type Mutation {
  addPost(input: [AddPostInput!]!): AddPostPayload
  newPost(title: String, text: String): addPost @JSHooks....
}

If a user sends a mutation request for newPost, then its pre-hook will have the logic to add the additional fields like datePublished and author. Since, in the schema, we want to map it to addPost, we would parse the AST accordingly and update all the custom mutations to their mapped mutations. In this case newPost would be converted to addPost, and we would use the updated AST to do the mutation.
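The AST rewriting itself is more involved; this sketch shows only the data-shaping half of such a pre-hook. The hook signature and the `context.author` source (e.g. a JWT claim) are assumptions, while the field and mutation names mirror the schema above:

```javascript
// Hedged sketch of the newPost pre-hook: map the custom mutation onto
// addPost, filling in the fields the caller did not supply.
function newPostPreHook(args, context) {
  return {
    mutation: "addPost", // the mapped mutation from the schema
    input: [{
      title: args.title,
      text: args.text,
      datePublished: new Date().toISOString(), // added by the hook
      author: context.author,                  // e.g. taken from the JWT
    }],
  };
}

const mapped = newPostPreHook(
  { title: "Hello", text: "First post" },
  { author: { username: "alice" } }
);
console.log(mapped.mutation); // "addPost"
```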

Coming back around to this to explain in detail. I think it could be doable from what I am seeing so far in the RFC, depending on one factor.


type User {
  username: String! @id
  isActive: Boolean!
}

So I know that the pre hook would not have access to the uid from Dgraph, because the request has not hit Badger yet to get the uid.

What I am looking to do, for reasons not fully explained here, is to have the uid available in the post hook. In the translation of GraphQL to DQL, the uid is within reach very easily. It would be nice if it could be passed through to the post hook.

More about the use case: I want to run an RDF set script for view/edit history tracking when certain actions take place, from the post hook. Having the uid would make the links easier to build than needing an upsert to get the uid in the post script. It just seems like extra work when the uid was so close already.
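With the uid in hand, the history-tracking RDF becomes a string template; without it, an upsert would be needed first just to look the uid up. The `history.*` predicate names below are invented for illustration only:

```javascript
// Illustrative: build an RDF N-Quad set for a history entry, linking a
// blank node directly to the known uid. Predicate names are hypothetical.
function historyRdf(uid, field, oldValue, newValue) {
  const ts = new Date().toISOString();
  return [
    `_:h <dgraph.type> "History" .`,
    `_:h <history.of> <${uid}> .`, // direct link, since the uid is known
    `_:h <history.field> "${field}" .`,
    `_:h <history.old> "${oldValue}" .`,
    `_:h <history.new> "${newValue}" .`,
    `_:h <history.at> "${ts}" .`,
  ].join("\n");
}

const rdf = historyRdf("0x2a", "isActive", "true", "false");
console.log(rdf);
```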

And to sum up my idea with a related thought: would it be possible to do an asynchronous action in a post script that the script does not wait on before returning to the user?

Using the example above, I would want to send the mutation with the RDF set, but I don’t really care about waiting for it to complete before returning the response to the user. I know we are talking about really small wait times, but every microsecond matters. I know JavaScript will handle it; I guess I’m just checking whether there will be a hard kill on any pending processes on return.
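In plain JavaScript, the fire-and-forget pattern looks like the sketch below. Whether the hook runtime keeps the pending work alive after the response is returned is exactly the open question; the function names here are illustrative:

```javascript
// Sketch of a fire-and-forget post hook: kick off the history mutation
// but return the response without awaiting it.
let historySent = false;

async function sendHistoryMutation(rdf) {
  // Stand-in for the real network call to Dgraph.
  await new Promise((resolve) => setTimeout(resolve, 10));
  historySent = true;
}

async function postHook(response) {
  // Deliberately NOT awaited: the user gets the response immediately,
  // and the write completes in the background (if the runtime allows it).
  sendHistoryMutation("...rdf set...").catch((err) => console.error(err));
  return response;
}

postHook({ ok: true }).then((r) => {
  console.log("returned before history write:", !historySent);
});
```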

Had a bunch of calls with @pawan and @mrjn.

I think we might tweak the way the JavaScript is written and try to look a bit more like a web worker, so that we can make use of existing tools like webpack to include libraries.

A simple JavaScript file will look like this:

async function getVirtualName({parent, graphql, args}) {
  const [arg1, arg2] = args;
  const data1 = await fetch(`${}`);
  const data2 = await graphql(`query { foo { bar } }`);
  return { ...data1, ...data2 };
}

self.addEventListener("Query.getFullName", event => event.respondWith(getFullName(event)))
self.addEventListener("Mutation.updateName", event => event.respondWith(updateName(event)))
self.addEventListener("User.virtualName", event => event.respondWith(getVirtualName(event)))

Alternatively, we can provide a function to get rid of the boilerplate at the end, though TypeScript will complain a bit (registerResolvers here is just an illustrative name):

registerResolvers({
  "Query.getFullName": getFullName,
  "Mutation.updateName": updateName,
  "User.virtualName": virtualName,
})

@pawan @abhimanyusinghgaur @arijit

Quick question: do you think it’s worth it to force batching at this stage itself? Instead of parent, you would always accept parents, and root resolvers would accept [null] in the parents list.

As a result, the various functions must always return an array with exactly the same length as the parents.
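The batched shape being proposed can be sketched as follows; the resolver body and field names are illustrative, but the contract (a `parents` array in, an array of exactly the same length out, `[null]` for root resolvers) is the one described above:

```javascript
// Batched resolver sketch: parents in, one result per parent out.
// Result i corresponds to parents[i]; root resolvers receive [null].
async function virtualName({ parents, args }) {
  return parents.map((parent) =>
    parent === null
      ? null // root invocation: nothing to derive the name from
      : `${parent.firstName} ${parent.lastName}`
  );
}

virtualName({
  parents: [
    { firstName: "Ada", lastName: "Lovelace" },
    { firstName: "Alan", lastName: "Turing" },
  ],
  args: {},
}).then((names) => console.log(names));
```

Forcing this shape up front means a field requested for N parents costs one resolver call instead of N, which is the usual motivation for batching.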

Using GraphQL JS resolvers in the same context, parent refers to the object of received data. This data may contain properties that are arrays, depending on the schema and the query itself. Look at a regular GraphQL request and look at the data object. From my understanding, that would be the parent in the post hook, and in a pre hook it would be null because no data has been received yet.

It may be interesting how this plays out with subgraphs (subqueries in a GraphQL layer over an RDBMS), though. In subquery-level resolvers, the parent is the actual parent object at query time, allowing the child to be queried based on the parent. This is not really needed in a graph database, but the interesting question it leads to is whether the hooks will be fired at every level. And if so, will the parent be the parent of the higher level? Again, this is not a requirement given the nature of a graph database; I’m just thinking about how lower-level hooks may or may not work.

If I have a field that is computed with a post hook, does that hook still get fired if the type is queried at a lower level and not the root?

What’s the status of this?
What’s the flow for an RFC to become an item on the roadmap?
At what stage will there be an issue/PR to track this? Is there already, maybe?

I think most people don’t realize that RFCs should be voted on. Could you discuss this in the monthly call, and write a blog article about this process? <3

This RFC needs to be updated but we have basically decided that we’ll not be starting another Node server for this.

There is a draft PR for this. What happens there is that Alpha would accept a lambda_url. Custom queries/mutations and fields can be resolved from this lambda_url, which can run the JS code for you. We would make it easier for the user to achieve this in Slash GraphQL, where they can give us the JS resolver and then tie it to a custom query/mutation or field.
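A rough sketch of that flow: Alpha POSTs the resolver name and arguments to the lambda server, which dispatches to the user's JS and replies. The request/response shape below is an assumption for illustration, not Dgraph's exact protocol:

```javascript
// Hypothetical lambda server dispatch: a registry of user resolvers,
// keyed the same way as the event-listener style shown earlier.
const resolvers = {
  "Query.greet": async ({ args }) => `Hello, ${args.name}!`,
};

// Handle one request body forwarded by Alpha to the lambda_url.
async function handleLambdaRequest(body) {
  const { resolver, args } = body;
  const fn = resolvers[resolver];
  if (!fn) throw new Error(`no resolver registered for ${resolver}`);
  return fn({ args });
}

handleLambdaRequest({ resolver: "Query.greet", args: { name: "Dgraph" } })
  .then((result) => console.log(result)); // logs "Hello, Dgraph!"
```

Since the lambda server is just an HTTP endpoint, it avoids the earlier design's dedicated Node-over-gRPC process entirely.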

Usually, there is an RFC only for items that are already on the roadmap and where we have plans to work on them soon after the RFC is finalized. This one took a while to be finalized.

Yes, sure, we’ll do that. We’ll mention it in the monthly call and see how we can communicate this better.


You mention a roadmap here…
Is this visible to us non-core-team commoners? The project roadmap that used to be on GitHub is long obsolete, and new stuff is being added by the day. I feel like moving to Discuss has been a step BACKWARDS for project transparency and for keeping the community as a whole informed of what’s brewing.

Another systemic issue I see coming up is the documentation, or lack thereof.
It seems that many new features don’t get covered: @secret, @lambda, and the other auto-generation directives all lack documentation.
I see that unit tests are part of the PR, so why not change the definition of done to also include documentation for a newly developed feature?

Tagging @mrjn as well, as this is higher level stuff 🙂

edits: It seems the PR owner requested reviewers moments before I published this post, so I’m removing a comment about the PR being in limbo.

That’s some very useful feedback, @davidLeonardi. I can understand the frustration of not being able to see what’s on the roadmap. We do have a quarterly roadmap which is not exposed to users right now. I’ll talk to the team and see how we can have a roadmap that is visible to the community from the next quarter.

Another very good point. This is something we have struggled with a bit. Two very good technical writers have joined our team, and together with the engineers they will help us do better at this.


Thanks for the feedback. We are actively sharing new individual features as RFCs with the community. We have fallen behind in updating the consolidated roadmap post, Product Roadmap 2020, though. I have taken an action item to update it to reflect the RFCs we are pursuing in 2020.

Fully agree. We already have this framework in place but, as you correctly called out, we are not doing a good job of it. We have been actively working on the identified gaps and addressing them over the last few weeks. Thanks for calling this out and highlighting where we need to improve.