It seems that, under a constant load of queries and mutations, the memory used by the Alpha nodes is extremely high.
(there is some related discussion in Can Dgraph do 10 Billion Nodes? as well)
In our case, handling about 6M nodes requires 30GB of memory on each of the 3 Alphas running in the cluster. This is too much, and it keeps increasing as the amount of data grows.
As shown in the picture below, the peak was reached with about 6M nodes in the database. Memory dropped when I dropped the data from the database, which shows that memory usage keeps climbing as more data is stored.
All details and tests can be found here:
Is there any possibility that someone could take a look at this and do memory profiling or testing in your test environment?
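In case it helps, here is a minimal sketch of how I could pull a heap profile from one of our Alphas for inspection, assuming the Alpha exposes the standard Go pprof endpoints on its HTTP port (the `localhost:8080` address and the output file name are just placeholders for our setup):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Address of one Alpha's HTTP port; adjust to the actual deployment.
	const alphaAddr = "http://localhost:8080"

	// Fetch a heap profile from the standard Go pprof endpoint.
	resp, err := http.Get(alphaAddr + "/debug/pprof/heap")
	if err != nil {
		log.Fatalf("fetching heap profile: %v", err)
	}
	defer resp.Body.Close()

	// Save it so it can be inspected later with `go tool pprof heap.pb.gz`.
	out, err := os.Create("heap.pb.gz")
	if err != nil {
		log.Fatalf("creating output file: %v", err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		log.Fatalf("writing profile: %v", err)
	}
	log.Println("heap profile written to heap.pb.gz")
}
```

If this is the right way to go about it, I can capture such profiles at different data volumes and attach them, so the dominant allocations can be compared as the data grows.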
This is a show stopper for us, as we will have far more than 6M nodes, and needing 200GB of RAM to handle only 20M nodes would be too expensive and would make no sense.