Sorry, I think we have some cross-conversation here, and it's getting me a bit confused. I need details on what makes you think something is wrong (based on the proposal of this topic), and now you're talking about performance and asking about configs. I need context to be able to help you; I can't help in the dark.
I saw in another topic that you're using HDDs. Switching to higher-IOPS hardware is always good, no doubt about it.
BUT, if you're running the whole setup on a single machine simulating an HA cluster, you're going to have a bad time extracting maximum performance from Dgraph, because all the nodes are competing for one machine's resources while you're demanding performance from them. Since I don't have context and you said you're not using Docker, I have to assume you're running like this (HA on a single machine). Please confirm if that's not the case.
Ideally, each node should be physically isolated if you need a cluster with maximum performance (or, at the very least, each node should have its own disk). Each node needs a significant amount of memory, cores, and high IOPS. If you still find a bottleneck under those conditions, please let us know.
Running on HDDs works just fine, but it doesn't work miracles, unless you set up Dgraph to be RAM-first. For example, if you set the flag `badger.tables` to `ram` and keep `badger.vlog` at its default (`mmap`), you're telling Dgraph to keep its LSM tables in memory. This is good if you have a lot of RAM available.
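A rough sketch of what that launch could look like (the Zero address and data paths here are placeholders for your own setup; the `--badger.tables` and `--badger.vlog` flags are the ones mentioned above):

```shell
# Start an Alpha with LSM tables kept in RAM and the value log mmap'ed
# (mmap is already the vlog default; shown explicitly for clarity).
dgraph alpha \
  --badger.tables=ram \
  --badger.vlog=mmap \
  --zero=localhost:5080   # placeholder: address of your Zero node
```

Keep in mind this trades memory for disk I/O, so size your RAM accordingly.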
But RAM can be more expensive than SSDs or NVMe, right?
So, do that: isolate the nodes, try adding SSDs or NVMe drives, and redo the tests.