Currently, when posting a new schema to Dgraph, RAM usage spikes unpredictably high, which can lead to swapping and unresponsive servers.
Are there any plans to reduce RAM usage during such operations? Swapping is a no-go in my opinion; Dgraph should spread the work over a longer time frame to keep RAM usage down.
How big is the schema? How big is the existing dataset?
While doing a schema update, Dgraph may need to reindex some predicates. This involves opening and writing to many files simultaneously.
We limited this concurrency with this PR, which caps the number of simultaneously open threads at 1/8th of the maximum file descriptor limit.
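For illustration only, here is a minimal Go sketch of that general technique (not Dgraph's actual code): a buffered channel acts as a counting semaphore sized to 1/8th of the soft `RLIMIT_NOFILE` limit, so at most that many workers hold a file open at once. All names and the file count are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
	"syscall"
)

func main() {
	// Read the process's file descriptor limit (Linux).
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		panic(err)
	}
	maxOpen := int(rl.Cur / 8) // cap concurrency at 1/8th of the soft fd limit
	if maxOpen < 1 {
		maxOpen = 1
	}

	// Buffered channel as a counting semaphore.
	sem := make(chan struct{}, maxOpen)
	var wg sync.WaitGroup

	files := make([]string, 100) // hypothetical list of index files to rewrite
	for i := range files {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot before opening a file
			defer func() { <-sem }() // release the slot when done
			// ... open, write, and close the index file here ...
			_ = id
		}(i)
	}
	wg.Wait()
	fmt.Printf("reindexed with at most %d files open at once\n", maxOpen)
}
```

The point of sizing the semaphore from `Getrlimit` rather than a fixed constant is that the cap adapts to whatever ulimit the operator has configured, leaving the remaining 7/8ths of descriptors for the rest of the process.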