Badger running out of memory during compaction

Hey, normally my application uses around 40% of the available system memory (most of that by Badger), but when compaction starts it runs out of memory. What exactly are the memory requirements for using Badger? Is there any way to decrease usage during compaction?

You can set NumCompactors to 1 if memory is an issue. This would slow down write throughput, but would decrease memory usage, because only one compaction would happen at a time.
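For illustration, a minimal sketch of setting that option, assuming badger v2 where `Options` uses the `With...` builder methods and a single compactor is still allowed (newer versions may require at least 2); the path is a placeholder:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Limit Badger to a single concurrent compaction to reduce the
	// memory spike, at the cost of write throughput.
	opts := badger.DefaultOptions("/tmp/badger").
		WithNumCompactors(1)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```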

Even with that set to 1, it still runs out of memory when compaction starts. As I said, memory usage normally hovers at around 30-40% (mostly Badger), so shouldn't there be enough available?

When compaction starts, more memory is used. Setting NumCompactors to 1 will just make the increase in memory smaller.

How much memory does your system have? That information will help give context to “40% of system memory”. If you have a small amount of RAM (such as 512MB), Badger may not run well.

There is a relatively low amount of RAM available on the system: 1GB. The reason I chose Badger (as opposed to just loading the entire dataset into memory) was that I thought I would be able to run it on low-budget servers for a couple of my hobby projects.

Some more info: the heap alloc normally hovers at around 300MB (95% of that from Badger).
Another 100MB is used by the OS and various other things, which leaves about 600MB free.
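(For context, those numbers are the Go runtime's own accounting; a minimal sketch of how heap alloc can be watched from inside the process, using `runtime.MemStats` with an arbitrary polling interval:)

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Print the live heap size every 10 seconds; HeapAlloc is the
	// figure quoted above (bytes of allocated, still-reachable heap).
	var m runtime.MemStats
	for range time.Tick(10 * time.Second) {
		runtime.ReadMemStats(&m)
		fmt.Printf("heap alloc: %d MiB\n", m.HeapAlloc/(1<<20))
	}
}
```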

If you can tell us a way to reproduce this issue, we can try it in Docker with a 1GB RAM limit. By reproduce, I mean the actual code that produces a relevant data set, so we can figure out what’s causing such a big memory spike.

Alternatively, you can capture a heap profile just before the crash happens, so we know what’s allocating. Otherwise, it’s hard to know what could be going on.
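One way to capture such a profile, sketched with the standard `runtime/pprof` package (the output file name and trigger are up to you):

```go
package main

import (
	"log"
	"os"
	"runtime"
	"runtime/pprof"
)

// dumpHeapProfile writes the current heap profile to path. Call it
// when memory usage gets close to the limit, e.g. from a goroutine
// watching runtime.MemStats.
func dumpHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	runtime.GC() // update allocation statistics before profiling
	return pprof.WriteHeapProfile(f)
}

func main() {
	if err := dumpHeapProfile("heap.pprof"); err != nil {
		log.Fatal(err)
	}
}
```

The resulting file can then be inspected with `go tool pprof heap.pprof`.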
