Badger panics after DB reaches 1.1T

panic: Base level can’t be zero.

DB size:

$ du -h /data/badger/
1.1T	/data/badger/

Badger settings:

badgerOptions := badger.DefaultOptions("/data/badger")
badgerDB, err := badger.Open(badgerOptions)

Badger version: v3.2103.2

Stack trace:

[2021-11-18 09:00:24.588 UTC] info (logutil/zap_raft.go:77) 21650 tables out of 24770 opened in 3s
[2021-11-18 09:00:25.000 UTC] info (logutil/zap_raft.go:77) All 24770 tables opened in 3.412s
[2021-11-18 09:00:25.015 UTC] info (logutil/zap_raft.go:77) Discard stats nextEmptySlot: 0
[2021-11-18 09:00:25.037 UTC] info (logutil/zap_raft.go:77) Set nextTxnTs to 3887394350 
[2021-11-18 09:00:25.092 UTC] info (logutil/zap_raft.go:77) Deleting empty file: /data/badger/000696.vlog
panic: Base level can't be zero.
goroutine 24954 [running]:
github.com/dgraph-io/badger/v3.(*levelsController).fillTablesL0ToLbase(0x0, 0x0)
	/Users/cae/go/pkg/mod/ +0x8f1
github.com/dgraph-io/badger/v3.(*levelsController).fillTablesL0(0xfa31c8, 0xc000034068)
	/Users/cae/go/pkg/mod/ +0x25
github.com/dgraph-io/badger/v3.(*levelsController).doCompact(0xc000268000, 0x3, {0x0, 0x3ff6666666666666, 0x3ff05af864031d71, {0x0, 0x0, 0x0}, {0x0, {0xc00320a6c0, ...}, ...}})
	/Users/cae/go/pkg/mod/ +0x2e5
github.com/dgraph-io/badger/v3.(*levelsController).runCompactor.func2({0x0, 0x3ff6666666666666, 0x3ff05af864031d71, {0x0, 0x0, 0x0}, {0x0, {0xc00320a640, 0x7, 0x7}, ...}})
	/Users/cae/go/pkg/mod/ +0x78
github.com/dgraph-io/badger/v3.(*levelsController).runCompactor.func3()
	/Users/cae/go/pkg/mod/ +0x158
github.com/dgraph-io/badger/v3.(*levelsController).runCompactor(0xc000268000, 0x3, 0xc001a90090)
	/Users/cae/go/pkg/mod/ +0x3a9
created by github.com/dgraph-io/badger/v3.(*levelsController).startCompact
	/Users/cae/go/pkg/mod/ +0x53


Changing the default to WithMaxLevels(8) fixes it, though I'm not sure whether only temporarily.
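For reference, a minimal sketch of what that change looks like with the options from this thread (error handling abbreviated; treat this as an illustration, not the exact code from the report):

```go
package main

import (
	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Raise MaxLevels from the default 7 to 8 so the LSM tree gets one
	// more (roughly 10x larger) bottom level before compaction runs out
	// of room.
	badgerOptions := badger.DefaultOptions("/data/badger").WithMaxLevels(8)

	badgerDB, err := badger.Open(badgerOptions)
	if err != nil {
		panic(err)
	}
	defer badgerDB.Close()
}
```

One caveat, as far as I understand it: once tables have been written to the deeper level, reopening the store with a smaller MaxLevels will fail, so the increase is effectively one-way.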

I think this is a permanent fix, and it should allow the DB to grow to ~11.1 TiB. Some feedback from the Dgraph team on this would be nice, especially on the performance implications.
As described in the linked thread, the documentation for Badger and Dgraph should be extended to cover this behavior in more detail.


I'll possibly outgrow 11.1T. How far can I go with MaxLevels?

I think there is no technical limit, only a practical one. A DB of 11 TiB is already hard to manage; I can only imagine what you would do at 100 TiB or more.

Instead of the max level count you can also increase the level size multiplier (WithLevelSizeMultiplier, defaults to 10), but I am not sure how that would impact performance.