We are building a service that uses Badger as its backend database. We noticed that under high write throughput, log compaction happens a lot. The logs show:
When this happens, both inserts and reads are blocked. Is there a way to check in code whether the database is blocked? Our plan is to run this check before starting a transaction, so that we don't end up with a lot of HTTP requests waiting for a response.
No, there’s no way to know when writes get blocked. In the background, Badger is trying to push those writes out to L0, but L0 is blocked because it is already full.
One way to detect this would be to have a goroutine outside Badger write to it periodically (say, every second) and check whether the write goes through. The result can then be used to decide how to handle incoming transactions; see the sketch below.
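Something like this, for example. A minimal sketch of that probe; the key name, tick interval, timeout, and helper names here are all assumptions, not part of Badger's API:

```go
// A sketch of a write probe for Badger. A background goroutine writes a
// throwaway key every second and flips a health flag depending on whether
// the write completes within a deadline.
package health

import (
	"sync/atomic"
	"time"

	badger "github.com/dgraph-io/badger/v3"
)

var (
	writable int32 = 1 // 1 while probe writes succeed, 0 once one stalls
	inFlight int32     // guards against stacking up stuck probes
)

// ProbeWrites writes a throwaway key every second. If the write does not
// complete within the timeout, the DB is flagged as blocked until a later
// probe succeeds. Run it as: go ProbeWrites(db).
func ProbeWrites(db *badger.DB) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for range ticker.C {
		// If the previous probe is still stuck inside Badger, don't start
		// another one; just keep reporting "blocked".
		if !atomic.CompareAndSwapInt32(&inFlight, 0, 1) {
			atomic.StoreInt32(&writable, 0)
			continue
		}

		done := make(chan error, 1)
		go func() {
			defer atomic.StoreInt32(&inFlight, 0)
			done <- db.Update(func(txn *badger.Txn) error {
				// Any key works; it just has to go through the write path.
				return txn.Set([]byte("!health/probe"), []byte{1})
			})
		}()

		select {
		case err := <-done:
			if err != nil {
				atomic.StoreInt32(&writable, 0)
			} else {
				atomic.StoreInt32(&writable, 1)
			}
		case <-time.After(250 * time.Millisecond):
			// Probe is stuck, most likely behind a stalled L0 flush.
			atomic.StoreInt32(&writable, 0)
		}
	}
}

// Writable is what the HTTP handlers would consult before opening a
// transaction, e.g. to return a 503 instead of queueing up requests.
func Writable() bool {
	return atomic.LoadInt32(&writable) == 1
}
```

Your HTTP handlers would then call Writable() before starting a transaction and fail fast while it returns false.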
Update: This is the code you’d need to determine whether L0 is full or not. It’s not exposed yet, but it could be if you need it. Feel free to send a PR.
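For what it’s worth, later Badger releases do expose per-level table counts through the public API, so the check can be approximated from the outside. A sketch, assuming badger v3’s db.Levels() and that you know the NumLevelZeroTablesStall value the DB was opened with (the threshold at which Badger stalls writes); the internal stall check itself remains unexported:

```go
// Approximates "is L0 full?" using public API only. Assumes badger v3,
// where db.Levels() reports per-level table counts.
package health

import badger "github.com/dgraph-io/badger/v3"

// Level0NearStall reports whether level 0 holds at least threshold tables,
// where threshold should match the DB's NumLevelZeroTablesStall option.
func Level0NearStall(db *badger.DB, threshold int) bool {
	for _, info := range db.Levels() {
		if info.Level == 0 {
			return info.NumTables >= threshold
		}
	}
	return false
}
```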
Reads should not be blocking in the first place. Writes have to be throttled by Badger to avoid big skews in the LSM tree structure; if you don’t avoid those skews, writes actually end up becoming even slower over time.
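If the stalls bite too often, the thresholds that drive this throttling are settable when opening the DB. A sketch of the relevant knobs; the values below are placeholders, not recommendations:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Placeholder values: these options decide when L0 compaction kicks
	// in and when Badger starts stalling writes.
	opts := badger.DefaultOptions("/tmp/badger-demo").
		WithNumCompactors(4).           // parallel compaction workers
		WithNumLevelZeroTables(8).      // L0 table count that triggers compaction
		WithNumLevelZeroTablesStall(20) // L0 table count at which writes stall
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```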