Offer an option to use less memory when developing

Hi, when developing I run a heavy IDE, other databases, a front-end compiler, a web browser with a million tabs, etc., but I'm only developing against a very small graph, mostly to run unit tests. Compared to e.g. PostgreSQL, Dgraph uses a lot of memory in this scenario, and I don't care about throughput here.

I would like to be able to specify a much smaller cache size so that Dgraph won't use a full GB of memory on my development machine. Maybe this could be hidden behind a --dev flag that also disables cluster features, so people aren't tempted to use it in production?

2020/08/13 19:55:51 LRU memory (--lru_mb) must be at least 1024 MB. Currently set to: 500.000000
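For reference, this is roughly the command that produces the error above (a sketch of my local setup; exact flags and ports may differ between Dgraph versions):

dgraph alpha --lru_mb 500 --zero localhost:5080

Bumping --lru_mb up to 1024 makes the check pass, but that is exactly the memory I'd rather not commit on my laptop.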

Hey @hkeide, this is an interesting feature request, but I don't think it's easy to implement. We would have to keep track of all the objects created by Dgraph and limit the memory they use. Adding such a cache would introduce another layer of complexity in the system (the current caches don't cover everything).

However, if you're running a single-node Dgraph Alpha/Zero cluster, you shouldn't see very high memory usage. If that's not the case, we can try to optimize Dgraph for low-memory use cases.
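For reference, a minimal single-node setup looks roughly like this (a sketch with default ports and directories; adjust for your environment and Dgraph version):

dgraph zero
dgraph alpha --lru_mb 1024 --zero localhost:5080

That keeps everything on one machine with the smallest accepted cache size.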


Isn't lru_mb a cache size flag? Can't it just allow values lower than 1024?

I had a similar issue and found no solution other than restarting Dgraph from time to time, since its memory keeps growing and never shrinks.

That's interesting, @myo. I assumed Alpha started out taking 1 GB of memory, but I just tested it and apparently that's not the case, so I suspect it's the Go garbage collector that lets it grow to 1 GB after a few hours. Restarting the server regularly in dev seems like a workable workaround for now.
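Since Alpha is a Go process, another thing I might try is lowering GOGC so the garbage collector runs more often and keeps the heap smaller. This is a general Go runtime knob, not a documented Dgraph option, so treat it as an experiment; something like:

GOGC=25 dgraph alpha --lru_mb 1024 --zero localhost:5080

The Go default is GOGC=100; lower values trade extra CPU for a smaller heap. I'm not sure how much it helps resident memory in practice, but on a dev machine that trade-off seems acceptable.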