I hope this is the right category for this question.
I was recently experimenting with the @recurse directive in combination with expand(_all_). I found that doing so very quickly gets you into trouble, since (at least on the Shared Cluster instance) the chance of hitting a timeout error is pretty high. This obviously depends on the number of leaves the root node you want to query has.
However, I found that
{
  expRec(func: uid(0x1)) @recurse(loop: false, depth: 2) {
    uid
    expand(_all_)
  }
}
leads to substantially more timeouts than doing
{
  expAll(func: uid(0x1)) {
    uid
    expand(_all_) {
      uid
      expand(_all_)
    }
  }
}
which, at least in my opinion, should produce exactly the same results and take roughly the same amount of time. Since I cannot really give any examples here (my types are far too big and too nested to paste them), I know it might be tricky to reproduce.
Motivation
The motivation behind this lies in the mutation response from a custom lambda, in combination with Relay. E.g. if I want to update a type User
, the return type has to be User
as well. If, like in my case, you have a huge object, it becomes pretty hard to maintain which fields should be returned by the lambda and which should not. So I have tried to simply return as many fields as possible for a given type, in order not to run into any Relay errors.
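One way I could imagine reducing the maintenance burden, rather than hand-writing the field list in every lambda: generate the User selection set once and reuse it in the resolver's graphql() call. This is only a sketch; the field names (name, email, friends), the depth, and the resolver name are assumptions, not my actual schema.

```javascript
// Hypothetical sketch: generate a nested GraphQL selection set for User
// so a lambda mutation can always return a Relay-complete object without
// a hand-maintained field list. Field/edge names are made up.
const USER_SCALARS = ["id", "name", "email"];
const USER_EDGES = ["friends"]; // each edge assumed to point at User again

// Build "{ id name email friends { ... } }" down to `depth` edge levels.
function userSelection(depth) {
  const scalars = USER_SCALARS.join(" ");
  if (depth <= 0) return `{ ${scalars} }`;
  const edges = USER_EDGES
    .map((e) => `${e} ${userSelection(depth - 1)}`)
    .join(" ");
  return `{ ${scalars} ${edges} }`;
}

// In a real Dgraph lambda, this would be spliced into the custom
// resolver's query (resolver name is hypothetical), e.g.:
//
// self.addGraphQLResolvers({
//   "Mutation.updateUserCustom": async ({ args, graphql }) => {
//     const q = `query ($id: ID!) {
//       getUser(id: $id) ${userSelection(2)}
//     }`;
//     const res = await graphql(q, { id: args.id });
//     return res.data.getUser;
//   },
// });
```

That keeps the returned shape in one place, at the cost of still having to decide on a fixed depth, which is exactly the same trade-off as the recurse depth above.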
Is there anyone else having the same issues?