Regarding very low write throughput in Dgraph

Hi, we have set up a 3-node Dgraph cluster (12 CPUs and 13 GB RAM per node; all disks are SSDs).
But write throughput is very low when loading data into Dgraph.

We are following the steps below to update data in Dgraph:
1) Check whether a particular subject/object exists; if yes, fetch its UID.
2) If it exists, update the attributes of the subject/object; if not, create the subject and object.
3) Check whether the predicate exists; if yes, fetch its facets.
4) Update the predicate with the new facets.
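The four steps above can usually be collapsed into a single Dgraph upsert block (available since Dgraph v1.1), which removes the extra query round trip per record and tends to help write throughput a lot. A sketch only — the `xid` predicate (used as an external key), the `linked_to` edge, and the `weight` facet are placeholder names, not anything from your schema:

```
upsert {
  query {
    # Look up subject and object by an external-key predicate.
    a as var(func: eq(xid, "subject-1"))
    b as var(func: eq(xid, "object-1"))
  }

  mutation {
    set {
      # If a variable matched no node, uid() allocates a new one,
      # so lookup-or-create happens in one request.
      uid(a) <xid> "subject-1" .
      uid(b) <xid> "object-1" .
      # Facets on the predicate go in parentheses.
      uid(a) <linked_to> uid(b) (weight=0.9) .
    }
  }
}
```

For this to be fast, the key predicate you filter on (here `xid`) needs an index (e.g. `xid: string @index(exact) .` in the schema).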

The throughput for the above is 2K records/min, which is very low. We are not sure what we are missing here; can someone please help?
The total data size is only ~5 GB (we loaded it over a 12-hour window via a streaming process).

Also, at times we have faced multiple issues due to which we had to clean and re-install the Dgraph setup:
1) Caused by: io.grpc.StatusRuntimeException: UNKNOWN: Uid: [110601] cannot be greater than lease: [10000]
2) io.dgraph.DgraphException: startTs mismatch (the ludicrous option was set when we got this issue)
3) "message": "cannot retrieve postings from list with key 0: readTs: 1539081 less than minTs: 3702387 for key:", "extensions": { "code": "ErrorInvalidRequest" }
4) At times CPU is at 100% even though no query or mutation is running in Dgraph.
5) While proposing delta with MaxAssigned: 3710010 and num txns: 1179163. Error=Server overloaded with pending proposals. Please retry later. Retrying…

If you "re-install" Dgraph, it cleans up the leased UIDs, timestamps, and other values. This means that if you then try to upsert or mutate a UID that doesn't exist, it will throw that "cannot be greater than lease" error.

This sometimes happens when you delete everything in the Zero folders and then try to query some data in Alpha, or the other way around: you delete everything in Alpha but don't clean the Zero instances. It is a kind of "collision", as far as I can tell. You should clean everything together and always use the same Zero instance you started with. Some users do a bulk load and then delete the Zero instance afterwards; that will, for sure, cause these issues.
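To make the point concrete, here is a sketch of a consistent wipe-and-restart, assuming default ports and the default `zw/` (Zero), `p/` and `w/` (Alpha) directories in the current working directory — flag names and defaults vary between Dgraph versions, so check `dgraph zero --help` and `dgraph alpha --help` for yours:

```shell
# Wipe Zero and Alpha state together -- never just one of them,
# or the UID leases and timestamps go out of sync.
rm -rf zw/ p/ w/

# Start Zero first; it hands out UID leases and transaction timestamps.
dgraph zero --my=localhost:5080 &

# Then start Alpha pointed at that same Zero instance, and keep
# using that Zero for the lifetime of the cluster.
dgraph alpha --my=localhost:7080 --zero=localhost:5080 &
```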

Maybe it is some background task, like snapshots.

How about trying out ludicrous mode?
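For reference, in the v20.x releases ludicrous mode was enabled with a startup flag on both Zero and Alpha. It trades transactional guarantees for write speed (which may itself explain the startTs mismatch you saw), and it was removed in later versions, so verify the flag exists in your build first:

```shell
# v20.x only; removed in later releases. Enable on both processes.
dgraph zero  --ludicrous_mode
dgraph alpha --ludicrous_mode --zero=localhost:5080
```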