Probably yes, but I can’t test it in my current setup.
Steps to Reproduce the issue
I use Badger as an in-memory key-value store. Entries are 3 KB on average, and there are approximately 400k of them serving about 150k read requests/minute, but they also change frequently: about 15k writes/minute.
I’ve spent some time with the documentation and sources, but failed to reproduce the issue locally or to find an option combination that gets rid of it. Setting WithSyncWrites(false), for example, fixes the latency spikes, but memory usage keeps growing until OOM. Tweaking MaxTableSize or NumCompactors affects response time, but doesn’t fix the issue completely. Interestingly, read performance during a spike degrades as the write load increases; as far as I understand, this is somehow related to periodic compaction of entries, but I have no idea how to approach the problem.
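For reference, the options I’ve been experimenting with look roughly like the sketch below. The concrete values and the WithInMemory call are just how I’d summarize my setup (assuming the badger v2 options API), not an exact copy of my config:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Roughly the combination of options described above; the table size and
	// compactor count are placeholder values, not the ones from my real setup.
	opts := badger.DefaultOptions("").
		WithInMemory(true).         // pure in-memory store, no files on disk
		WithSyncWrites(false).      // hides the write spikes, but memory grows until OOM
		WithMaxTableSize(64 << 20). // tweaking this only shifts the spike around
		WithNumCompactors(2)        // same story for the number of compactors

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```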
The desired scenario for me is a stable read time under 100ms. I don’t care about write performance, since writes always happen asynchronously. I also don’t care much about higher memory or CPU usage, as long as it’s stable and predictable.
Which configuration should I use for my use case?
Thank you for the awesome work! Sorry for any grammar mistakes, I’m not a native speaker.
> Interestingly, read performance during a spike degrades as the write load increases; as far as I understand, this is somehow related to periodic compaction of entries, but I have no idea how to approach the problem.
Compactions are background tasks and they shouldn’t affect the reads. Badger has snapshot isolation so your reads shouldn’t be blocked by anything (except the last write that hasn’t finished yet).
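For example, a plain read-only transaction like the sketch below runs against a consistent snapshot; if you time it, the latency should stay flat even while compactions run in the background (the key name here is a placeholder, not something from your workload):

```go
package main

import (
	"log"
	"time"

	badger "github.com/dgraph-io/badger/v2"
)

// readOne times a single key lookup. The View transaction reads from a
// snapshot, so background compactions should not block it.
func readOne(db *badger.DB) {
	start := time.Now()
	err := db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("some-key")) // placeholder key
		if err != nil {
			return err
		}
		return item.Value(func(val []byte) error {
			// use val here
			return nil
		})
	})
	log.Printf("read took %s (err: %v)", time.Since(start), err)
}
```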
@hqhs Can you try out the master version of badger? It has some bugs, but it also includes the in-memory bug fix. If you can run your application on the master version of badger and check whether the latency improves, that would be useful.
What are our next steps? Which Badger metrics could help investigate this? I’m thinking about turning on block/mutex profiling in an isolated environment using Go’s pprof; a rough sketch is below.
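Roughly what I have in mind for the profiling setup (a sketch; the port and the full sampling rates are arbitrary choices for an isolated test, not production settings):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers on the default mux
	"runtime"
)

func main() {
	// Record every blocking and mutex-contention event. These rates are fine for
	// an isolated test run; a real deployment would use coarser sampling.
	runtime.SetBlockProfileRate(1)
	runtime.SetMutexProfileFraction(1)

	// Expose the profiles on a side port; inspect them with e.g.
	//   go tool pprof http://localhost:6060/debug/pprof/block
	//   go tool pprof http://localhost:6060/debug/pprof/mutex
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... start the badger workload here ...
	select {}
}
```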