Badger compaction results in OOM and loss of data

@mrjn I reviewed some of the latest changes, and I don’t think they will solve the problem. The root cause is that badger picks an arbitrary number of tables for compaction, which may be 3, or may be 800 at a time, who knows.

Compactions make heavy use of memory (all the newly built tables are kept in memory simultaneously before being flushed to disk), file descriptors, and other resources.
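To give a rough sense of scale (this is a back-of-the-envelope estimate, not badger internals; if I recall correctly the default `MaxTableSize` is 64 MB, adjust for your own options), here is what keeping all newly built tables in memory costs when a compaction picks up a large batch:

```go
package main

import "fmt"

// compactionMemEstimate is a rough worst-case estimate of the memory held by a
// single compaction that keeps every newly built table in memory before
// flushing to disk. Purely illustrative arithmetic, not badger code.
func compactionMemEstimate(numTables int, tableSizeBytes int64) int64 {
	return int64(numTables) * tableSizeBytes
}

func main() {
	const mb = 1 << 20
	tableSize := int64(64 * mb) // assumed default MaxTableSize of 64 MB

	for _, n := range []int{3, 100, 800} {
		est := compactionMemEstimate(n, tableSize)
		fmt.Printf("%4d tables -> ~%d MB held in memory\n", n, est/mb)
	}
}
```

With 800 tables that is on the order of 50 GB held at once, which is exactly where the OOMs come from.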

That’s why you’re seeing so many compaction-related problems with different manifestations.

Until badger can limit the number of tables it processes at once, it will keep losing data as it does right now (many of our users are complaining about this).

This can be solved either by (a) having badger estimate how much memory and how many other resources it has available and sizing compactions accordingly, or by (b) letting the user specify the maximum number of tables to compact at once.

(b) would be good enough for now; (a) is more of an optimisation.
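To illustrate what (b) could look like, here is a minimal sketch. `MaxTablesPerCompaction` and `pickTablesForCompaction` are hypothetical names I’m using for illustration only; badger does not expose either of them today.

```go
package main

// Options sketches a hypothetical user-facing knob: cap how many tables a
// single compaction may touch. This field does not exist in badger; it only
// illustrates the shape of proposal (b).
type Options struct {
	MaxTablesPerCompaction int // 0 means "no limit" (current behaviour)
}

type table struct {
	id   uint64
	size int64
}

// pickTablesForCompaction selects candidate tables but never returns more
// than opt.MaxTablesPerCompaction of them, so the memory and file-descriptor
// cost of one compaction stays bounded.
func pickTablesForCompaction(opt Options, candidates []table) []table {
	limit := opt.MaxTablesPerCompaction
	if limit <= 0 || len(candidates) <= limit {
		return candidates
	}
	return candidates[:limit]
}

func main() {
	opt := Options{MaxTablesPerCompaction: 10}
	candidates := make([]table, 800) // e.g. a level with 800 overlapping tables
	picked := pickTablesForCompaction(opt, candidates)
	_ = picked // one compaction run now processes at most 10 tables
}
```

The remaining candidates would simply be picked up by subsequent compaction runs, so the total work is the same, just spread out so each run stays within a predictable resource budget.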