What I want to do
I got a message about the JS Array maximum length being exceeded when trying to commit a large payload through dgraph-js.
My question is: what is the best practice for committing a large payload? What should I do to solve this problem?
What I did
Thanks for any help
Right now the answer would be to convert it to RDF and then use the live-loader which was built for use cases like this.
Note, you can also live/bulk load JSON formatted data as well.
That doesn’t seem like a good practice to me. If you are doing something sporadic, you should use Live Loader as mentioned. Otherwise, you should respect the limits of the language. This message comes from the JS runtime, not Dgraph, so there is nothing to do but look for another way out, such as reducing the payload into smaller pieces.
Web applications in general do not go beyond the basic limits of the engine. If you want something heavier, use Go, C++, or something similar.
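The suggestion above, reducing the payload into smaller pieces, can be sketched as a size-aware chunker that splits the records before any single mutation approaches the engine's limits. The function name and byte limit below are illustrative assumptions, not part of dgraph-js:

```javascript
// Sketch: split a large array of JSON records into batches whose
// serialized size stays under a limit, so no single mutation string
// approaches the JS engine's maximum length.
function chunkBySize(records, maxBytes) {
  const batches = [];
  let current = [];
  let currentBytes = 2; // account for the surrounding "[]"
  for (const record of records) {
    const size = Buffer.byteLength(JSON.stringify(record)) + 1; // +1 for comma
    if (current.length > 0 && currentBytes + size > maxBytes) {
      batches.push(current);
      current = [];
      currentBytes = 2;
    }
    current.push(record);
    currentBytes += size;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

Each batch could then be sent as its own mutation (or fed to Live Loader), at the cost of losing single-transaction atomicity across batches.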
You are correct. I also felt that it’s not a good practice. I’ll try to find another way to commit this data. Maybe multiple chunks, just like you said, or a stream connection like How to Use Live Loader in a more convenient way? - #9 by MichelDiz?
For the chunking solution
I don’t know the actual size of the payload; it’s dynamic for every request. I think there is no right way to split a tree into chunks, because a tree can be arbitrarily wide or deep. I hope I’m wrong.
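One possible way around the tree-shape problem (an assumption on my part, not something dgraph-js provides): flatten the tree into parent/child records first, since a flat list can always be split into batches regardless of width or depth. The `_:nodeN` blank-node ids and the `name`/`children` fields are illustrative:

```javascript
// Sketch: flatten a tree of arbitrary shape into a flat list of
// records, each pointing at its parent via a blank-node id.
// A flat list can then be chunked by size, however wide or deep
// the original tree was.
function flattenTree(root) {
  const records = [];
  const stack = [{ node: root, parentUid: null }];
  let nextId = 0;
  while (stack.length > 0) {
    const { node, parentUid } = stack.pop();
    const uid = `_:node${nextId++}`; // blank node, resolved on commit
    records.push({ uid, name: node.name, parent: parentUid });
    for (const child of node.children || []) {
      stack.push({ node: child, parentUid: uid });
    }
  }
  return records;
}
```

The catch is that blank-node ids only resolve within one transaction, which is where the multi-transaction pattern discussed below in the thread comes in.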
For the streaming solution
As you said in How to Use Live Loader in a more convenient way? - #9 by MichelDiz, maybe a stream connection is the final answer.
Does dgraph-js support stream connections, or should I implement my own stream connection on top of the JS runtime?
You can do a lot of things. For example, send several transactions and control the data context locally, or send several uncommitted transactions and commit all of them at the end. This would preserve the blank-node UID context.
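The "several uncommitted transactions, commit at the end" pattern could be sketched like this. The `client` is any object with a dgraph-js-style `newTxn()` method; the mutation shape (`setJson`, `commitNow`) and the helper name are assumptions for illustration, not the exact dgraph-js API:

```javascript
// Sketch: apply one mutation per chunk, each in its own transaction,
// without committing; only commit everything once every mutation has
// been accepted, and discard everything on failure.
async function mutateThenCommitAll(client, chunks) {
  const txns = [];
  try {
    for (const chunk of chunks) {
      const txn = client.newTxn();
      txns.push(txn);
      await txn.mutate({ setJson: chunk, commitNow: false });
    }
    // All mutations accepted; commit each transaction.
    for (const txn of txns) await txn.commit();
  } catch (err) {
    // Roll back anything left uncommitted (best effort).
    for (const txn of txns) await txn.discard();
    throw err;
  }
}
```

Note this is not atomic across transactions: a crash between commits can leave some chunks committed and others not.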
What is your need to send big payloads? Is there a reason?
Nope, everything is transaction based.
I use Dgraph to store the results of analyzing package/application dependencies. The result can be very small or very big.
@lbwa I’ve accomplished something like what you’re describing in a prior project (golang-based).
The design featured an updater service which accepted bulk mutations from other services. It queued them up and sent the bulk mutation when either a limit (~5kb) was reached or a timeout expired.
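The updater design described above (queue mutations, flush as one bulk mutation when a size limit or timeout is hit) could look like this in JavaScript. The class name, the `send` callback standing in for the actual dgraph-js call, and the default limits are illustrative assumptions based on the ~5kb/timeout description:

```javascript
// Sketch: queue incoming mutations and flush them as one bulk
// mutation when either a byte limit (~5 KB here) is reached or a
// timer expires, whichever comes first.
class MutationBatcher {
  constructor(send, { maxBytes = 5 * 1024, flushMs = 1000 } = {}) {
    this.send = send;       // called with the batched mutations
    this.maxBytes = maxBytes;
    this.flushMs = flushMs;
    this.queue = [];
    this.bytes = 0;
    this.timer = null;
  }
  add(mutation) {
    this.queue.push(mutation);
    this.bytes += Buffer.byteLength(JSON.stringify(mutation));
    if (this.bytes >= this.maxBytes) {
      this.flush();         // size limit reached
    } else if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.flushMs);
    }
  }
  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    this.bytes = 0;
    this.send(batch);       // one bulk mutation per flush
  }
}
```

This keeps every individual request small while still amortizing round trips, which is essentially the same trade-off the updater service made.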
Brilliant idea. Thanks for sharing.