Moved from GitHub badger/1361
Posted by C-ollins:
What version of Go are you using (go version)?
$ go version
go version go1.13.6 darwin/amd64
What operating system are you using?
Android
What version of Badger are you using?
v2.0.3
Does this issue reproduce with the latest master?
Yes
What Badger options were set?
opts.LevelSizeMultiplier = 10
opts.TableLoadingMode = options.FileIO
opts.ValueLogLoadingMode = options.FileIO
opts.MaxLevels = 2
opts.MaxTableSize = 32 << 20
opts.NumCompactors = 1
opts.NumLevelZeroTables = 1
opts.NumLevelZeroTablesStall = 2
opts.NumMemtables = 1
opts.BloomFalsePositive = 0.01
opts.BlockSize = 4 * 1024
opts.SyncWrites = false
opts.NumVersionsToKeep = 1
opts.CompactL0OnClose = false
opts.KeepL0InMemory = false
opts.VerifyValueChecksum = false
opts.MaxCacheSize = 20 << 20
opts.ZSTDCompressionLevel = 1
opts.Compression = options.None
opts.ValueLogFileSize = 50 << 20
opts.ValueLogMaxEntries = 100000
opts.ValueThreshold = 15
opts.LogRotatesToFlush = 1
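For reference, a minimal sketch of how the options above might be applied when opening the DB with the badger v2 API; the directory path, error handling, and program structure are placeholders, not taken from the original report:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

// openDB applies the options listed above to badger's defaults.
func openDB(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions(dir)
	opts.LevelSizeMultiplier = 10
	opts.TableLoadingMode = options.FileIO
	opts.ValueLogLoadingMode = options.FileIO
	opts.MaxLevels = 2
	opts.MaxTableSize = 32 << 20
	opts.NumCompactors = 1
	opts.NumLevelZeroTables = 1
	opts.NumLevelZeroTablesStall = 2
	opts.NumMemtables = 1
	opts.BloomFalsePositive = 0.01
	opts.BlockSize = 4 * 1024
	opts.SyncWrites = false
	opts.NumVersionsToKeep = 1
	opts.CompactL0OnClose = false
	opts.KeepL0InMemory = false
	opts.VerifyValueChecksum = false
	opts.MaxCacheSize = 20 << 20
	opts.ZSTDCompressionLevel = 1
	opts.Compression = options.None
	opts.ValueLogFileSize = 50 << 20
	opts.ValueLogMaxEntries = 100000
	opts.ValueThreshold = 15
	opts.LogRotatesToFlush = 1
	return badger.Open(opts)
}

func main() {
	// "/tmp/badger-sync" is a placeholder path for this sketch.
	db, err := openDB("/tmp/badger-sync")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```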
What did you do?
Based on my comment, I've tweaked the Badger config using ideas I found while searching through some issues, and I've been able to reduce the amount of time the app stays frozen. The app does a bunch of writes while downloading blockchain data, and it usually runs normally until I get this log:
07:54:04 DEBUG: Storing value log head: {Fid:0 Len:30 Offset:77385529}
07:54:04 DEBUG: Flushing memtable, mt.size=40444755 size of flushChan: 0
07:54:06 INFO: Got compaction priority: {level:0 score:1 dropPrefix:[]}
07:54:06 INFO: Running for level: 0
07:54:07 DEBUG: LOG Compact. Added 492163 keys. Skipped 0 keys. Iteration took: 886.757475ms
07:54:07 DEBUG: Discard stats: map[]
07:54:07 INFO: LOG Compact 0->1, del 1 tables, add 1 tables, took 1.104381668s
07:54:07 INFO: Compaction for level: 0 DONE
Then it slows down from saving ~4000 block headers per second to about 2000 block headers every 5 seconds. It gets to 290k blocks and then freezes the app indefinitely. I've been able to reduce this freeze time to ~2 minutes by using the options I added above. I'm not sure what data is being saved that triggers the freeze every time at 290k blocks, but this wasn't occurring when I used these options with the v1.5.4 release. My major problem with my v1.5.4 implementation is that it usually runs out of memory and panics instead of returning an error.
Here's a log I took when the speed dropped: slow writes - Pastebin.com. You can see the timestamp difference for the connected blocks before and after the Badger log came in.
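For illustration, here is a rough sketch of the kind of write workload described above (batched block-header writes during sync). The function names, key scheme, and use of WriteBatch are assumptions made for this sketch, not the reporter's actual code:

```go
package main

import (
	"encoding/binary"

	badger "github.com/dgraph-io/badger/v2"
)

// storeHeaders writes a batch of block headers keyed by height.
// Flush blocks while memtables fill and L0 stalls, which is where a
// write-path freeze like the one reported would surface.
func storeHeaders(db *badger.DB, headers map[uint32][]byte) error {
	wb := db.NewWriteBatch()
	defer wb.Cancel()
	for height, raw := range headers {
		if err := wb.Set(headerKey(height), raw); err != nil {
			return err
		}
	}
	return wb.Flush()
}

// headerKey builds a hypothetical "hdr" prefix plus big-endian height key.
func headerKey(height uint32) []byte {
	key := make([]byte, 7)
	copy(key, "hdr")
	binary.BigEndian.PutUint32(key[3:], height)
	return key
}
```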