Compaction uses a lot of memory and is frequent

What version of Go are you using (go version)?

$ go version
go version go1.17 linux/amd64

What operating system are you using?

linux/amd64

What version of Badger are you using?

badger v3

Does this issue reproduce with the latest master?

yes

Steps to Reproduce the issue

I ran a test using badgerdb as the storage engine of geth (replacing leveldb). The results showed that badgerdb has a serious memory-jitter problem, and over time geth's performance gradually slows down. My guess is that this is closely related to compaction, because the pprof heap profiles I captured show compaction occupying 3G-7G of memory at peak.

What Badger options were set?

    opts.BlockCacheSize = 512 << 20
    opts.IndexCacheSize = 128 << 20
    opts.MaxLevels = 7
    opts.MemTableSize = 67108864
    opts.SyncWrites = false
    opts.NumCompactors = 10
    opts.NumLevelZeroTables = 20
    opts.NumLevelZeroTablesStall = 40
    opts.ValueThreshold = 128
    opts = opts.WithCompression(options.Snappy) // WithCompression returns a new Options value; the result must be reassigned
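For reference, a minimal sketch of how these same options could be assembled with badger v3's builder-style API (the `/tmp/badger-geth` path is an assumption for illustration; each `With*` method returns a new Options value, so the calls must be chained or reassigned):

```go
// Sketch only: assumes github.com/dgraph-io/badger/v3.
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
	"github.com/dgraph-io/badger/v3/options"
)

func main() {
	opts := badger.DefaultOptions("/tmp/badger-geth"). // hypothetical data dir
		WithBlockCacheSize(512 << 20).
		WithIndexCacheSize(128 << 20).
		WithMaxLevels(7).
		WithMemTableSize(64 << 20). // 67108864 bytes
		WithSyncWrites(false).
		WithNumCompactors(10).
		WithNumLevelZeroTables(20).
		WithNumLevelZeroTablesStall(40).
		WithValueThreshold(128).
		WithCompression(options.Snappy)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```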

What did you do?

What did you expect to see?

I expected to be able to control the negative effects of compaction (memory spikes and gradual slowdown).

What did you see instead?

Have you tried something closer to the default settings? Dgraph uses a default of 2 compactors, for example. Any difference?

(By the way, each compactor thread runs a ticker at 50ms, possibly running doCompact() on every tick, though it will probably miss ticks if it's busy. Check out levels.go:levelsController.runCompactor(); that is the goroutine that is spun up once per the NumCompactors option.)

At the beginning I used the default NumCompactors value, which showed serious memory jitter. I tried increasing the number of compactors to reduce the impact of compaction, but it had no effect.

More compactors == more memory usage.

What are you trying to accomplish @Leo_Chen ?