Since we're currently having issues with Lambda, I have a few questions that I hope somebody can answer.
1. Is it possible to have multiple lambda servers, or multiple lambda sources, each assigned to one specific namespace?
On our dedicated cluster we are able to upload one lambda source per namespace. So far I have not found any information on whether the same is possible with the community version. If it is, how would we set this up? Would we run multiple lambda servers, or could we have multiple lambda sources and have alpha route requests to the right source?
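For context, the only community setup I'm aware of wires a single lambda server to alpha via the `--graphql` superflag (flag name as of v21.03; the port, image, and script path below are placeholders), which is exactly why I'm asking whether per-namespace routing is possible at all:

```shell
# Hypothetical single-lambda setup (community edition, flag names as of v21.03).
# The lambda server loads one script; per-namespace scripts are the open question.

# 1. Run the open-source lambda server (github.com/dgraph-io/dgraph-lambda)
docker run -d --name dgraph-lambda -p 8686:8686 \
  -v /path/to/script.js:/app/script/script.js \
  dgraph/dgraph-lambda

# 2. Point alpha at it
dgraph alpha --graphql "lambda-url=http://localhost:8686/graphql-worker"
```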
2. What is the idea behind ACL and the X-Dgraph-AccessToken?
When ACL is activated, only namespace 0 is exposed to public GraphQL requests. To access a schema in a different namespace, you first have to obtain an `X-Dgraph-AccessToken`. According to the docs, the only way to do so is by running the `login` mutation against alpha's `/admin` endpoint.
Whenever we make requests against the `/admin` endpoint, we need an Admin API Key (sent via the `DG-Auth` header) in order to connect. So if the frontend wants to talk to any namespace other than 0, we would have to bundle the Admin API Key into the frontend source! Hence my question: how is this intended to be used?
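To make the flow concrete, here is a hedged sketch of how a client currently has to obtain the token. The endpoint, Admin API Key, credentials, and namespace below are placeholder assumptions, not values from our setup:

```javascript
// Sketch: obtain an X-Dgraph-AccessToken by running the `login` mutation
// against alpha's /admin endpoint. All values are placeholders.
function buildLoginRequest({ endpoint, adminApiKey, userId, password, namespace }) {
  const mutation = `
    mutation Login($userId: String, $password: String, $namespace: Int) {
      login(userId: $userId, password: $password, namespace: $namespace) {
        response { accessJWT refreshJWT }
      }
    }`;
  return {
    url: `${endpoint}/admin`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // This is the problem: the Admin API Key has to travel with the client.
        "DG-Auth": adminApiKey,
      },
      body: JSON.stringify({
        query: mutation,
        variables: { userId, password, namespace },
      }),
    },
  };
}

// Usage (network call omitted): the JWT comes back in
// data.login.response.accessJWT and is then sent as the
// X-Dgraph-AccessToken header on subsequent /graphql requests.
const req = buildLoginRequest({
  endpoint: "https://example.dgraph.io", // placeholder
  adminApiKey: "ADMIN_API_KEY",          // placeholder
  userId: "groot",
  password: "password",
  namespace: 1,
});
// fetch(req.url, req.options) ...
```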
Lambda and ACL
As mentioned above, with ACL enabled we need the `X-Dgraph-AccessToken` to query data from namespaces other than 0. Unfortunately, this header is never forwarded into custom lambda resolvers, so we have to run the `login` mutation inside every single custom resolver that needs data from any namespace other than 0. This is awkward, and it also means the internal fetch wrappers such as `dql`, which are passed as arguments to every custom resolver, cannot be used. Instead we have to write a custom fetch wrapper, which obviously adds bundle size. Given that we only have 500 KB of disk space and 128 MB of memory, this is unnecessary overhead.
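This is roughly the workaround we ended up with: a small hand-rolled fetch wrapper inside the lambda script that logs in first and forwards the JWT itself. Everything below (credentials, endpoints, the resolver and query names) is a hypothetical sketch, not our production code:

```javascript
// Sketch of the workaround: since X-Dgraph-AccessToken is not forwarded into
// lambda resolvers, each resolver has to log in itself and attach the token.
// All names and values are placeholders.

function withAccessToken(headers, token) {
  // Merge the ACL token into an existing header set without mutating it.
  return { ...headers, "X-Dgraph-AccessToken": token };
}

async function loginForToken(endpoint, userId, password, namespace) {
  const res = await fetch(`${endpoint}/admin`, {
    method: "POST",
    // Add a DG-Auth header here too if the cluster requires an Admin API Key.
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `mutation($u: String, $p: String, $ns: Int) {
        login(userId: $u, password: $p, namespace: $ns) {
          response { accessJWT }
        }
      }`,
      variables: { u: userId, p: password, ns: namespace },
    }),
  });
  const json = await res.json();
  return json.data.login.response.accessJWT;
}

// Hypothetical resolver: the built-in `graphql`/`dql` helpers can't be used
// because they don't carry the token, so we call fetch directly.
async function myFieldResolver({ parent }) {
  const token = await loginForToken("http://localhost:8080", "groot", "password", 1);
  const res = await fetch("http://localhost:8080/graphql", {
    method: "POST",
    headers: withAccessToken({ "Content-Type": "application/json" }, token),
    body: JSON.stringify({ query: "{ queryThing { id } }" }),
  });
  return (await res.json()).data;
}
```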
I’d be very interested in how other people use Dgraph with ACL and multi-tenancy!
Thank you in advance!