Thanks @dmai for pointing that out!
Disabling compression saved around 1GB. Looking at the flags, disabling the cache has a much bigger impact. By disabling both, I was able to get memory usage during export (and queries) similar to before, as seen in the table below.
| | `--badger.compression_level=0 --cache_mb 0` |
|---|---|
This was tested on a single-node machine; the cache does not seem to have an impact on export times there, but multi-node clusters may perform differently. I used the live loader, as the bulk loader ignored the compression flag.
Using both of those options, I am able to keep running Dgraph on low-memory machines.
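For reference, the alpha invocation I'm talking about looks roughly like this; the zero address and data directory below are placeholders, not values from my setup:

```shell
# Sketch: run dgraph alpha with Badger compression and the cache disabled.
# --zero address and -p/-w paths are placeholders for your own deployment.
dgraph alpha \
  --badger.compression_level=0 \
  --cache_mb 0 \
  --zero=localhost:5080 \
  -p /data/p -w /data/w
```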
I would like to add that Dgraph v20.07.1 (and .0) has been impressively stable for me memory-wise, running smoothly for months with predictable memory usage.
It seems that cache/compression also adds a layer of unpredictability to RAM usage during queries, and I'd love to see a mention in the deploy docs about running Dgraph on low memory (at the cost of performance).
This is a common use case for me when running apps/microservices where extra performance doesn't matter: development machines, unit tests, or applications where I'd like to keep Dgraph's scaling potential for the future, but don't want to run 32GB nodes just yet.