I am running a custom benchmark against BadgerDB. Under high write throughput, the DB “stutters”: all read and write operations block for 10 or more seconds at a time.
- The workload is write heavy.
- Each key is 32 bytes, and each value is 1 megabyte.
- Once written, keys are not modified.
- Keys have an expiration time (TTL).
- I batch writes using transactions. Each transaction contains 100 key-value pairs, so the aggregate value size per batch is ~100 MB (sketched below).
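
To make the write pattern concrete, each batch looks roughly like this (a simplified sketch rather than my exact benchmark code; the v4 module path and the `writeBatch` helper are just illustrative):

```go
package bench

import (
    "time"

    badger "github.com/dgraph-io/badger/v4"
)

// writeBatch commits one batch as described above: 100 key-value pairs
// (32-byte keys, ~1 MB values), each entry given a TTL, all in a single
// read-write transaction.
func writeBatch(db *badger.DB, keys, vals [][]byte, ttl time.Duration) error {
    txn := db.NewTransaction(true) // true = read-write transaction
    defer txn.Discard()
    for i := range keys {
        e := badger.NewEntry(keys[i], vals[i]).WithTTL(ttl)
        if err := txn.SetEntry(e); err != nil {
            return err
        }
    }
    return txn.Commit()
}
```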
I am currently running with mostly default options:
```go
opts := badger.DefaultOptions(directory)
opts.Compression = options.None
```
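
For completeness, this is roughly how the DB is opened with those options (assuming the v4 module path; adjust the import to whichever version you run):

```go
package bench

import (
    badger "github.com/dgraph-io/badger/v4"
    "github.com/dgraph-io/badger/v4/options"
)

// openDB opens Badger with block compression disabled and everything
// else left at its default value.
func openDB(directory string) (*badger.DB, error) {
    opts := badger.DefaultOptions(directory)
    opts.Compression = options.None
    return badger.Open(opts)
}
```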
On my current hardware I can sustain an average write throughput of ~100 MB/s, but the performance is not smooth: the DB does all of its work in short, frenetic bursts. For two or three seconds it accepts a large amount of work, then the entire system locks up for ~10 seconds, during which all read and write calls are fully blocked.
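
The read-side blocking is easy to see with a cheap probe running alongside the writers (again an illustrative sketch, and `probeKey` is just an arbitrary key): a point lookup that normally returns almost instantly takes ~10 seconds during a stall.

```go
package bench

import (
    "log"
    "time"

    badger "github.com/dgraph-io/badger/v4"
)

// readProbe repeatedly issues a cheap point lookup and logs whenever a
// single read takes longer than a second. During a stall these reads
// block for ~10 s, right alongside the writers.
func readProbe(db *badger.DB, probeKey []byte) {
    for {
        start := time.Now()
        _ = db.View(func(txn *badger.Txn) error {
            _, err := txn.Get(probeKey)
            // A missing key is fine for a probe; only surface real errors.
            if err != nil && err != badger.ErrKeyNotFound {
                return err
            }
            return nil
        })
        if d := time.Since(start); d > time.Second {
            log.Printf("read blocked for %s", d)
        }
        time.Sleep(100 * time.Millisecond)
    }
}
```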
Although the overall write throughput is acceptable, the jitter has serious consequences for my intended use case: I don’t need single-digit-microsecond latencies, but waiting 10+ seconds for a read is a problem.
Has anybody observed this sort of behavior before? Are there ways I could configure Badger to reduce this stop-and-go stuttering?