What are the simplest ways to improve write throughput in a live Dgraph cluster? I’m looking for rules of thumb to follow as the write volume we must accommodate increases. For instance, which of these might make a difference?
- Avoiding certain types of indexes that take a long time to build
- Splitting large mutations into chunks
- Increasing the memory allocated to each server
- Having many relatively small predicates (would this help the cluster to execute multi-predicate writes in parallel?)
- Increasing the number of servers in the cluster (when would more help and when would the returns diminish?)
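To clarify what I mean by splitting large mutations into chunks, here is a rough sketch (the chunk size of 10,000 is a guess, and the commented-out pydgraph calls are just to illustrate where each batch would be sent, not a tested integration):

```python
def chunks(triples, size=10_000):
    """Yield successive fixed-size batches from a list of N-Quad strings.

    The idea is to keep each mutation/transaction small rather than
    sending one huge mutation; the right size would need tuning
    against the actual cluster.
    """
    for i in range(0, len(triples), size):
        yield triples[i:i + size]


# Each chunk would then go out as its own mutation, e.g. roughly:
#   txn = client.txn()                              # pydgraph client assumed
#   txn.mutate(set_nquads="\n".join(chunk))
#   txn.commit()
```

Is this kind of batching actually what helps, or does the per-transaction commit overhead eat the gains?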