Mutation Performance!

I wonder if there is a performance bottleneck with mutations in dgraph4j.
And in dgraph4j, is batch mutation different from "dgraph bulk" or "graph live"?

I think you have a typo there: "graph live" vs "dgraph live"?

They use Dgo. Their methods may differ somewhat, but in general they are about 90% similar in approach.

I’m sorry about that. Then is there a size limit on the JSON?

            DgraphProto.Mutation build = DgraphProto.Mutation.newBuilder()
                    .setSetJson(ByteString.copyFromUtf8(json.toString()))
                    .build();

Hmm, I’m not sure about that case. In general (I’m not a Java dev), the JSON size limit in Java should be the available memory. The best thing to do in any language is to break the data into chunks and manage the insertion.
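
To illustrate the chunking idea, here is a minimal sketch of splitting a large record list into fixed-size batches before mutating. The `BatchChunker` class, the batch size of 1000, and the commented-out mutation calls are all illustrative assumptions, not dgraph4j requirements; tune the batch size against your own cluster.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchChunker {
    // Split records into consecutive batches of at most batchSize elements.
    static <T> List<List<T>> chunk(List<T> records, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < records.size(); i += batchSize) {
            batches.add(records.subList(i, Math.min(i + batchSize, records.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 2500; i++) records.add(i);

        for (List<Integer> batch : chunk(records, 1000)) {
            // For each batch you would build one small mutation, e.g.:
            // Mutation mu = Mutation.newBuilder()
            //         .setSetJson(ByteString.copyFromUtf8(toJson(batch)))
            //         .build();
            // and commit it in its own transaction.
            System.out.println("batch of " + batch.size());
        }
    }
}
```

Keeping each transaction small also means a failed batch can be retried on its own instead of redoing the whole load.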

The live loader and bulk loader each have their own algorithm for this. Bulk is faster because it converts the data directly into raw Badger data. The live loader breaks the data into small chunks and sends them in small batches, one per transaction.

You should never push big chunks of data into a server without knowing whether every part of the pipeline can handle the load. Test first, then see what you can do.

  1. Check if your client can handle big JSON.
  2. Check if your cluster has enough resources available.
  3. Using compression to send the data can help.
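
On point 3, a generic sketch of gzipping a JSON payload before sending it over the wire (this shows the size savings only; whether and how compression is applied end to end depends on your transport setup). `GzipJson` and the sample payload are assumptions for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipJson {
    // Gzip a JSON string; repetitive JSON (repeated keys) compresses well.
    static byte[] gzip(String json) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Build a repetitive JSON array like a typical batch of records.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 1000; i++) sb.append("{\"name\":\"user\"},");
        sb.append("{}]");
        String json = sb.toString();

        byte[] compressed = gzip(json);
        System.out.println(json.length() + " -> " + compressed.length + " bytes");
    }
}
```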

Thanks!

Does the live loader only support files (*.rdf.gz or *.json.gz)? What about a JSON string?

Not sure what you mean by JSON string.
Live and bulk both accept JSON. You can send RDF or JSON, gzipped or not.

I am performing mutations through dgraph4j, and I’d like to know the performance threshold for JSON. How many predicates can I use in a JSON string?

[
    {},
    {},
    .
    . "how many?"
    .
    {}
]

Sorry, but I can’t help you with that; we don’t currently have Java devs available to do profiling on this.

Java certainly has IDEs and tools that can profile RAM usage, e.g. VisualVM or Java Flight Recorder.
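
As a quick sanity check without a full profiler, you can also watch heap usage from inside the client process while a batch load runs. `HeapCheck` is just an illustrative name; the `Runtime` calls are standard JDK:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Heap currently in use vs. the maximum the JVM may grow to (-Xmx).
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("heap used: " + usedMb + " MB of max " + maxMb + " MB");
    }
}
```

Logging this before and after building a large JSON payload gives a rough idea of how close you are to the limit, though a real profiler is still the better tool.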

Ok, thanks!