We are trying to index blockchain data into BadgerDB. Initially the writes were pretty fast and we were able to index 6 million blocks within a day (the chain's latest blocks are close to 13 million), after which the write speed decreased drastically, falling to around 15 seconds to index a single block. We observed that DB compactions were taking 1 minute on average. By this point the size of the DB had reached 600 GB.
We have done the same level of writes on LevelDB as well, and we were able to write 12 million blocks in a span of 2.5 days; the size of the DB reached 2 TB after the sync.
Our BadgerDB configuration is the default options. We tried tweaking it to reduce memory usage, but that too resulted in slower writes.
What configuration changes can be made to increase the write speed of the DB?
You may wish to share the configurations you attempted here, along with the results for each.
The disks backing Badger need to be low-latency, but if you ran a similar test with LevelDB (a very similar read/write pattern) on the same disks, I would expect roughly similar access performance in Go.
edit: the equivalent LevelDB configuration that succeeded in your tests would also be interesting, e.g. compression settings.
Right now it is on the default configuration, taking an average of 10 seconds to write a single block.
The configuration below was taking 5 seconds to write a single block even without any existing data, so we did not go ahead with it.
opts.MemTableSize = 1 << 20          // 1 MB
opts.BaseTableSize = 1 << 10         // 1 KB
opts.NumMemtables = 3
opts.NumLevelZeroTables = 3
opts.NumLevelZeroTablesStall = 6
opts.BlockCacheSize = 10 << 20       // 10 MB
opts.ValueThreshold = 1 << 10        // 1 KB
We also tried the configuration below, which reduced our memory usage at the cost of write speed, but since our priority was write speed we did not go ahead with it either. It is not a major change, just fewer tables in memory.
opts.MemTableSize = 1 << 20          // 1 MB
opts.BaseTableSize = 1 << 20         // 1 MB
opts.NumCompactors = 2
opts.NumMemtables = 1
opts.NumLevelZeroTables = 1
opts.NumLevelZeroTablesStall = 2
opts.BlockCacheSize = 10 << 20       // 10 MB
opts.ValueThreshold = 1 << 10        // 1 KB
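(For completeness, these field overrides are applied on top of badger.DefaultOptions before opening the DB, roughly like this; a sketch assuming badger v3:)

package main

import (
	badger "github.com/dgraph-io/badger/v3"
)

// openLowMemory opens Badger with the low-memory overrides listed above.
func openLowMemory(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions(dir)
	opts.MemTableSize = 1 << 20
	opts.BaseTableSize = 1 << 20
	opts.NumCompactors = 2
	opts.NumMemtables = 1
	opts.NumLevelZeroTables = 1
	opts.NumLevelZeroTablesStall = 2
	opts.BlockCacheSize = 10 << 20
	opts.ValueThreshold = 1 << 10
	return badger.Open(opts)
}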
We are using GP3 SSDs on AWS with 3000 IOPS as the disks for both LevelDB and BadgerDB.
The configuration we use for LevelDB:
BlockCacheCapacity: 128 * opt.MiB,
WriteBuffer: 1024 * opt.MiB,
DisableSeeksCompaction: true,
OpenFilesCacheCapacity: 1024,
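For reference, these are goleveldb options and are passed in roughly like this (a sketch):

package main

import (
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/opt"
)

// openLevelDB opens the LevelDB instance with the options listed above.
func openLevelDB(dir string) (*leveldb.DB, error) {
	return leveldb.OpenFile(dir, &opt.Options{
		BlockCacheCapacity:     128 * opt.MiB,
		WriteBuffer:            1024 * opt.MiB,
		DisableSeeksCompaction: true,
		OpenFilesCacheCapacity: 1024,
	})
}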
You may wish to try: (one at a time)
- more compactors
- more levels
- no compression
I am not sure if Snappy is the default in LevelDB, but it is in Badger. I have seen other posts here where more levels helped very large databases. The idea with more compactors is that compactions may be smaller and run more concurrently.
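A rough sketch of those option changes, assuming badger v3 option names (apply one at a time):

package main

import (
	badger "github.com/dgraph-io/badger/v3"
	"github.com/dgraph-io/badger/v3/options"
)

// tunedOptions applies the three suggestions on top of the defaults.
func tunedOptions(dir string) badger.Options {
	opts := badger.DefaultOptions(dir)
	opts.NumCompactors = 8          // more compactors (default is 4)
	opts.MaxLevels = 10             // more levels (default is 7)
	opts.Compression = options.None // no compression (Snappy is the default)
	return opts
}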
Also, you may want to peek at the index cache and block cache metrics to see whether they are helping you at all with your write-heavy pattern.
Tried this configuration, doubling the compactors and levels:
opts.BlockCacheSize = 10 << 20   // 10 MB
opts.NumCompactors = 8           // default is 4
opts.MaxLevels = 14              // default is 7
This did not help with the write speeds; it took 40-60 seconds to write a single block.
It also did not let us revert to the old number of levels, presumably because tables had already been placed on levels beyond the default 7:
panic: runtime error: index out of range [13] with length 7
We also tried turning compression off; this did not have a major impact either, taking 25-30 seconds to write a block.
One more thing that we noticed was that it took 30 minutes just to open the DB.
@abbas-unmarshal Is there a reason for using such small sizes for memtables? Your writes would become slow if badger cannot push data to lower levels.
I have a couple of suggestions to increase the write speed:
- Set the memtable size back to 64 MB (the default value).
- Use more compactors. More compactors mean Badger can move data to lower levels at a faster speed and it won't block writes.
- Use BatchWriter instead of the Txn. Your write speed also depends on how you're doing the writes: if you commit for every small entry, your write speed will be lower. The batch writer is much, much faster.
- Disable compression/encryption and disable the cache. This is very important. Compression/encryption are CPU-intensive processes and they will slow down your writes.
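A minimal sketch of the batched pattern, assuming this refers to Badger's WriteBatch API (db.NewWriteBatch); the key/value layout is a placeholder:

package main

import (
	badger "github.com/dgraph-io/badger/v3"
)

// indexBlock writes all key/value pairs for one block in a single batch
// instead of committing a transaction per entry.
func indexBlock(db *badger.DB, kvs map[string][]byte) error {
	wb := db.NewWriteBatch()
	defer wb.Cancel() // ensures cleanup if Flush is never reached

	for k, v := range kvs {
		if err := wb.Set([]byte(k), v); err != nil {
			return err
		}
	}
	return wb.Flush() // commits everything queued above
}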
I am happy to help you optimize for write speed if you can share a sample program that replicates your use case. There might be other options that can help speed things up.
Initially we kept the memtable size small to reduce RAM usage, then reverted it to the default because that gave better performance.
We have also tried 2x the number of compactors and disabling compression, but neither had a significant impact on performance.
Regarding BatchWriter: we are already using it for storing data wherever required.
Can you share some CPU profiles and logs? We can try to figure out why the Badger writes are slow.
If you have a test that can reproduce the behavior, that would be very helpful.
What is the average value size?
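To capture a CPU profile around the indexing loop, something like this should do (a minimal sketch using runtime/pprof; the output path and the workload function are placeholders):

package main

import (
	"os"
	"runtime/pprof"
)

// profileIndexing runs the given workload (e.g. the block-indexing loop)
// with CPU profiling enabled and writes the profile to cpu.prof.
func profileIndexing(run func() error) error {
	f, err := os.Create("cpu.prof")
	if err != nil {
		return err
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		return err
	}
	defer pprof.StopCPUProfile()

	return run()
}

The resulting file can then be inspected with go tool pprof.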