That doesn’t seem like a good practice to me. If you are doing something sporadic, you should use the Live Loader, as mentioned. Otherwise, you should respect the limits of the language. This message comes from the JS runtime, not Dgraph, so there is nothing to do but look for another way out, such as splitting the payload into smaller pieces.
Web applications in general do not go beyond the basic limits of the engine. If you want something heavy-duty, use Go, C++, or something similar.
I don’t know the actual size of the payload; it is dynamic for every request. I think there is no right way to split a tree into chunks, because a tree can be arbitrarily wide or deep. I hope I’m wrong.
You can do a lot of things. For example, send several transactions and track the data context locally. Or you can send several uncommitted mutations within one open transaction and commit all of them at the end. This would preserve the blank-node UID context across the pieces.
Why do you need to send such big payloads? Is there a particular reason?
@lbwa I’ve accomplished something like what you’re describing in a prior project (golang-based).
The design featured an updater service that accepted bulk mutations from other services. It queued them up and sent the bulk mutation when either a size limit (~5 KB) was reached or a timeout expired.