How to know if log compaction is happening

Hello folks,

We are building a service that uses Badger as its backend database. We noticed that under high write throughput, log compaction happens a lot. The logs show:

badger 2021/02/01 18:37:24 INFO: [2] [E] LOG Compact 5->6 (1, 4 -> 5 tables with 2 splits). [163035 . 158279 158265 158268 165641 .] -> [165731 165736 165739 165729 165730 .], took 2.402s
badger 2021/02/01 18:37:24 INFO: L0 was stalled for 14.419s
badger 2021/02/01 18:37:25 INFO: [0] [E] LOG Compact 0->0 (5, 0 -> 1 tables with 1 splits). [165596 165477 165602 165604 165608 . .] -> [165737 .], took 4.724s
badger 2021/02/01 18:37:26 INFO: [2] [E] LOG Compact 5->6 (1, 3 -> 3 tables with 1 splits). [162786 . 159487 159493 164558 .] -> [165745 165756 165758 .], took 2.293s

When this happens, both inserts and reads are blocked. Is there a way, in code, to know whether the database is blocked? Our plan is to run this check before starting a transaction, so that we don't end up with a lot of HTTP requests waiting for a response.

We are using badger v3 with the default options.

No, there’s no way to know when writes get blocked. In the background, Badger is trying to push those writes out to L0, but L0 is blocked because it is full.

One way to detect this would be to have a thread outside Badger write to it periodically (say, every second) and see whether the write goes through. That signal can then be used to decide how to handle incoming transactions.

Update: This is the code you need to determine whether L0 is full. It’s not exposed yet, but it could be exposed if you need it. Feel free to send a PR.


Thanks for the tip! Is it the same code to check if a read will be blocked?

Actually, we have no problems with writes blocking because we plan to batch inserts periodically. The problem is for reading clients.

The setup will be a single server, where both batches of writes and single reads happen concurrently.

Can you clarify what the possible causes of blocking (for reads and writes) are? Or is blocking normal in our situation?


Reads should not be blocking at all. Writes have to be throttled by Badger to avoid big skews in the LSM tree structure. If you don’t avoid those skews, writes actually end up becoming even slower over time.