Currently, when posting a new schema to Dgraph, RAM usage spikes unpredictably high, which can lead to swapping and unresponsive servers.
Are there any considerations being made to reduce RAM usage during such operations? Swapping is a no-go in my opinion; Dgraph should instead spread the operations out over a longer time frame to reduce RAM usage.
I am looking into it and will discuss it with the team.
How big is the schema? How much existing data do you have?
While doing a schema update, Dgraph may need to reindex some predicates. This involves opening and writing to many files simultaneously.
We limited the number of threads open simultaneously with this PR, capping it at 1/8th of the maximum file descriptor limit.
Our schema has roughly 50 different types, 12 interfaces and ~600 fields.
Do you think that providing an option like, for example, `--limit-ram-usage-to 2gb` would be possible?
@rajas it occurs when we have an empty database and post a large schema for the first time.