I ran into an issue where my application would crash during compaction because Badger panics in doCompact when the operation would exceed MaxLevels:
y.AssertTrue(l+1 < s.kv.opt.MaxLevels) // Sanity check.
Considering the response to this issue: Assertion error when compaction · Issue #620 · dgraph-io/badger · GitHub
I was wondering whether this is really intended (“unpredictable” panics aren’t very nice) or if I’m doing something wrong.
Use-case: I have fairly small keys (~32 bytes) with fairly small values (<256 bytes) for a task queue system. My operations are limited to insert, move/rename (from “queued” to “active”), followed by delete (“completed”). The number of entries in this queue varies quite a bit, but generally it would be somewhere between 10k and 100k. Apart from some tasks having a higher priority than others, the queue is mostly FIFO.
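For concreteness, the access pattern looks roughly like the sketch below (the "queued/"/"active/" key prefixes and helper names are made up for illustration, not my exact schema):

package queue

import (
	badger "github.com/dgraph-io/badger"
)

// enqueue inserts a new task under the "queued/" prefix.
func enqueue(db *badger.DB, id, payload []byte) error {
	return db.Update(func(txn *badger.Txn) error {
		return txn.Set(append([]byte("queued/"), id...), payload)
	})
}

// activate "moves" a task from "queued/" to "active/": copy the value to the
// new key and delete the old one, since there is no native rename.
func activate(db *badger.DB, id []byte) error {
	return db.Update(func(txn *badger.Txn) error {
		queuedKey := append([]byte("queued/"), id...)
		item, err := txn.Get(queuedKey)
		if err != nil {
			return err
		}
		payload, err := item.ValueCopy(nil)
		if err != nil {
			return err
		}
		if err := txn.Set(append([]byte("active/"), id...), payload); err != nil {
			return err
		}
		return txn.Delete(queuedKey)
	})
}

// complete deletes a finished task.
func complete(db *badger.DB, id []byte) error {
	return db.Update(func(txn *badger.Txn) error {
		return txn.Delete(append([]byte("active/"), id...))
	})
}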
I noticed that with the default parameters seeking/iteration would start taking longer, so I unscientifically tweaked some of the parameters to values I thought would better fit my workload.
E.g.:
opts.MaxTableSize = 256 << 15 // 8 MB
opts.LevelOneSize = 256 << 9  // 128 KB
opts.ValueLogMaxEntries = 32000
opts.LevelSizeMultiplier = 2
opts.NumMemtables = 2
opts.NumLevelZeroTables = 2
opts.NumLevelZeroTablesStall = 4
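As a rough back-of-the-envelope on what these settings imply (my understanding, not verified against Badger's code, is that level 1's target size is LevelOneSize and each deeper level multiplies it by LevelSizeMultiplier, with level 0 counted in tables rather than bytes):

// Rough LSM capacity estimate implied by the settings above. The sizing rule
// (level N target = LevelOneSize * LevelSizeMultiplier^(N-1)) is my
// assumption of how Badger sizes levels, not something I've confirmed.
func estimateLSMCapacity(levelOneSize int64, multiplier int64, maxLevels int) int64 {
	var total int64
	size := levelOneSize
	for level := 1; level < maxLevels; level++ {
		total += size
		size *= multiplier
	}
	return total
}

// estimateLSMCapacity(256<<9, 2, 7) is roughly 8 MB in total (using the
// default MaxLevels of 7), so with the values above the tree would run out of
// levels fairly quickly once tombstones and old versions accumulate -- which
// is presumably when the assertion fires.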
Which leads me to the following questions:
- Does the MaxLevels assertion make sense on compaction? Do I have to pick a value for MaxLevels myself, and based on what metric should I do this?
- Is tweaking these options on an existing database supported, or will this cause unforeseen issues?
- Are there any other tips for my use-case, e.g. for reducing memory usage or for picking parameters that discard deleted data more aggressively?