Hello folks,
We are building a service that uses badger as its backend database. We noticed that under high write throughput, compaction runs very frequently. The logs show:
```
badger 2021/02/01 18:37:24 INFO: [2] [E] LOG Compact 5->6 (1, 4 -> 5 tables with 2 splits). [163035 . 158279 158265 158268 165641 .] -> [165731 165736 165739 165729 165730 .], took 2.402s
badger 2021/02/01 18:37:24 INFO: L0 was stalled for 14.419s
badger 2021/02/01 18:37:25 INFO: [0] [E] LOG Compact 0->0 (5, 0 -> 1 tables with 1 splits). [165596 165477 165602 165604 165608 . .] -> [165737 .], took 4.724s
badger 2021/02/01 18:37:26 INFO: [2] [E] LOG Compact 5->6 (1, 3 -> 3 tables with 1 splits). [162786 . 159487 159493 164558 .] -> [165745 165756 165758 .], took 2.293s
```
When this happens, both inserts and reads are blocked. Is there a way, in code, to check whether the database is currently blocked? Our plan is to run this check before starting a transaction, so that we don't end up with a lot of HTTP requests stuck waiting for a response.
We are using badger v3.
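To make it concrete, below is a minimal sketch of the kind of check we have in mind. It assumes (we are not sure this is the right API for this) that `db.Levels()` reports per-level table counts via `LevelInfo.NumTables`, and that writes stall once level 0 reaches `Options.NumLevelZeroTablesStall` tables:

```go
package stallcheck

import (
	badger "github.com/dgraph-io/badger/v3"
)

// writesLikelyStalled reports whether level 0 already holds stallLimit or
// more tables, which is the condition under which badger stalls incoming
// writes. For our setup, stallLimit would be opts.NumLevelZeroTablesStall.
func writesLikelyStalled(db *badger.DB, stallLimit int) bool {
	for _, level := range db.Levels() {
		if level.Level == 0 && level.NumTables >= stallLimit {
			return true
		}
	}
	return false
}
```

The idea is to call `writesLikelyStalled(db, stallLimit)` in the HTTP handler before starting a transaction and fail fast instead of letting the request block. Is polling `db.Levels()` like this reliable, or is there a more direct signal we should use?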
EDIT:
We are using the default Badger options.
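Roughly, we open the database like this (the data directory path here is a placeholder, everything else is left at its default):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Default options; "/data/badger" stands in for our real data directory.
	db, err := badger.Open(badger.DefaultOptions("/data/badger"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```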