Looking for ValueLog GC Insight for an Interesting Badger Use Case

I’m currently developing a project where we need the ability to resend messages to clients who have briefly disconnected from our service. For argument’s sake, let’s say we need to keep a 10-minute data cache, arranged by topic id. Data replay requests will come in and ask for items anywhere within that time window.

I have this working, using BadgerDB in managed mode. Rather than using a composite key for each item, I’m setting the max versions of the key to infinity and using the topic id as the key. When someone asks for data on a topic, I create a new key iterator which returns all the versions of the key. It works fantastically! Badger is performing very well, and most replay requests are fulfilled in microseconds.
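To make the read path concrete, here’s roughly what a replay lookup looks like — a simplified sketch assuming Badger v2’s managed-mode API; readTopic and its arguments are illustrative, not my exact code:

// Simplified replay read in managed mode (assumes github.com/dgraph-io/badger/v2).
// Error handling trimmed for brevity.
func readTopic(db *badger.DB, topicID []byte, readTs uint64) ([][]byte, error) {
	txn := db.NewTransactionAt(readTs, false) // read-only transaction at our own timestamp
	defer txn.Discard()

	iopts := badger.DefaultIteratorOptions
	iopts.AllVersions = true // visit every retained version of the key

	it := txn.NewKeyIterator(topicID, iopts)
	defer it.Close()

	var msgs [][]byte
	for it.Rewind(); it.Valid(); it.Next() {
		val, err := it.Item().ValueCopy(nil)
		if err != nil {
			return nil, err
		}
		msgs = append(msgs, val)
	}
	return msgs, nil
}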

I’m using the managed db features to discard data that is outside the cache time window (and also setting expiration on the items themselves). Again, this works well.
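The write and cleanup side is essentially this — again a sketch; my timestamps are derived from wall-clock time, and the helper names are illustrative:

// Write one message as a new version of the topic key, with a 10-minute TTL.
func writeMessage(db *badger.DB, topicID, payload []byte, ts uint64) error {
	txn := db.NewTransactionAt(ts, true)
	defer txn.Discard()

	e := badger.NewEntry(topicID, payload).WithTTL(10 * time.Minute)
	if err := txn.SetEntry(e); err != nil {
		return err
	}
	return txn.CommitAt(ts, nil)
}

// Periodically advance the discard timestamp so versions older than the
// cache window become eligible for cleanup.
func dropOldVersions(db *badger.DB, windowStartTs uint64) {
	db.SetDiscardTs(windowStartTs)
}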

The issue I am facing is with the value log GC. With this approach, if I do not do frequent value log GCs, application memory seems to get out of whack, continually growing until a GC is performed.

This is not a huge issue in and of itself; I just run GCs periodically. However, the GC process itself seems very processor intensive. If I’m understanding the value logs correctly, with the pattern I’m using, each individual value log file will become invalid all at once. There would be no need to rewrite it, as all the values within would be discardable.

Is there a way to optimize the GC for this scenario? I’m using 128MB for the value log file size, hoping that a smaller file would help Badger recognize this scenario. Even with that, each successful GC takes around 4-5 seconds. Not a huge deal, except that with the rate data is coming in, I am basically in perpetual value log GCs, tying up an entire CPU.

For reference, my dbConfig options look like:

dbOpts := badger.DefaultOptions(dbName).
		WithNumVersionsToKeep(math.MaxInt32). // using these version to track messages per channel
		WithSyncWrites(false).
		WithMaxCacheSize(2 << 30). // 2GB cache to accommodate encryption latency
		WithValueLogFileSize(128 << 20).
		WithTruncate(true).
		WithNumCompactors(int(txnConcurrency)).
		WithLogger(log.StandardLogger()).
		WithEncryptionKey(randEncryptionKey()).
		WithEncryptionKeyRotationDuration(time.Hour)

Any suggestions?

Hey @salsorrentino, the value log GC tries to look up the value in the LSM tree, which causes the CPU spike (badger/value.go at 9459a240bdf3cf1bf59a172558b37455c742d3bf · dgraph-io/badger · GitHub).

This shouldn’t be the case. The disk usage of Badger depends on the value log GC; memory utilization should not be affected by it. Do you have any memory profiles that I can look at? The memory usage should be bounded if you have set the bloom filter cache:
badger/options.go at 9459a240bdf3cf1bf59a172558b37455c742d3bf · dgraph-io/badger · GitHub
This should reduce the amount of memory used by bloom filters (about 5 MB per SST) which grows linearly with the number of tables.

GC is an expensive operation and we currently don’t have a way to limit its CPU consumption. We’re working on improving the GC. I see that you’ve set NumCompactors to txnConcurrency. Compaction is an equally expensive process, and you can try to reduce its CPU consumption by reducing the number of compactors. The default number of compactors is 2.
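For example, something along these lines on top of your existing options (the 1GB bloom filter cache size here is just an illustration, not a recommendation):

dbOpts := badger.DefaultOptions(dbName).
	// ...your existing options...
	WithMaxBfCacheSize(1 << 30). // bound bloom-filter memory (it otherwise grows ~5 MB per SST)
	WithNumCompactors(2)         // back to the default of 2 compactors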

Thanks for the response. I did do a memory profile, though it did not reveal much. The reported memory seemed fine (a couple of GB) though the system was reporting total process memory of ~20GB. Still trying to figure out that discrepancy. I must have a leak somewhere, probably in some cgo lib.

As for the GC performance, I look forward to the optimizations :slight_smile: With the amount of data I’m processing on a single db, I don’t see how I can get around a constant GC loop:

At peak, a single host in my fleet is processing around 43MB/sec, which works out to around 80k records/sec. Badger is performing extremely well at that load and only consuming around 2-3 cores on the host.
Steady state is ~20k records/second, or 10.6MB/second. I know it can handle a lot more, as I did have it consuming >120k records/second at one point, but my record ingestion has slowed down a bit.

At this rate, if I pause GCs for any significant length of time, I may never catch up, and eventually run out of disk space.

My GC Loop looks like this:

for {
	// RunValueLogGC returns nil when a value log file was actually rewritten,
	// so keep going while there is work to do and back off otherwise.
	if db.RunValueLogGC(0.5) == nil {
		time.Sleep(100 * time.Millisecond)
	} else {
		time.Sleep(30 * time.Second)
	}
}

While at steady state, this usually yields ~10 consecutive GCs before it takes a 30-second break.
If I bump up the ratio to 0.99, it will do fewer consecutive GCs, and each successful GC is likely to be more fruitful, if I’m understanding the ratio correctly. Ideally, I’d like to be able to pass a ratio of 1.0 in hopes the LSM could be optimized for finding completely dead value log files… though I think that is very specific to my use case. I would imagine that most Badger users will have a mixed bag of valid/invalid data in every value log.

Anyway, I’m going to keep digging into the memory issue, though I’ve been able to stabilize application memory at ~9GB which is totally acceptable for me.

I did take your advice on the config; for reference, my new config is:

WithNumVersionsToKeep(math.MaxInt32). // using these version to track messages per channel
WithSyncWrites(false).
WithMaxCacheSize(2 << 30). // 2GB cache to accommodate encryption latency
WithValueLogFileSize(128 << 20).
WithTruncate(true).
WithLogger(log.StandardLogger()).
WithEncryptionKey(randEncryptionKey()).
WithEncryptionKeyRotationDuration(time.Hour).
WithMaxBfCacheSize(1 << 30).
WithTableLoadingMode(options.LoadToRAM)

I’ve got the alloc profile and am looking into that. I think I’ve found the issue… I don’t think it has anything to do with Badger, but more with the way I’m writing values to it. I’ll let you know if I find anything.

Looking at the memory profile: memory in use is about 2GB, which is what I would expect based on my config. The allocated memory is much larger, of course. I’m not an expert at reading memory profiles, but I would think a lot of the allocated memory is virtual. The top offenders there are bytes.makeSlice and y.XORBlock, both coming from logFile.encodeEntry.

This could be because of the Go runtime. The Go GC doesn’t release memory back to the OS quickly. You can use runtime.ReadMemStats (from the standard runtime package) to get memory statistics from the runtime.
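For example (standard library only, nothing Badger-specific):

// Compare what the Go runtime is actually using against what it has obtained
// from (and released back to) the OS. Needs "fmt" and "runtime" imports.
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Printf("HeapAlloc=%dMB HeapIdle=%dMB HeapReleased=%dMB Sys=%dMB\n",
	m.HeapAlloc>>20, m.HeapIdle>>20, m.HeapReleased>>20, m.Sys>>20)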

Another issue I see is that we create a new buffer every time we encrypt a key/value pair.

We could use a sync.Pool to reuse the memory here.
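Something along these lines (illustrative only, not the actual change to Badger):

// Reuse encode/encryption buffers instead of allocating a fresh one per entry.
// Needs "bytes" and "sync" imports.
var encBufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func withPooledBuffer(f func(buf *bytes.Buffer) error) error {
	buf := encBufPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer encBufPool.Put(buf)
	return f(buf)
}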

That makes sense. Though what I find interesting is that if I leave the db unattended (value log GC-wise), application memory will grow until it consumes about 90% of system memory, but stops short of causing an OOM condition. Then it seems the runtime will swoop in and evict some unused memory. While IMO this is fine, it does make it hard to identify real operational issues. Memory is something we’re always looking at on our dashboards, and this behavior makes it look like there is a problem. When I do frequent value log GCs, on the other hand, it appears the runtime is better able to keep resident memory at a lower value.

Maybe this is tied to the encryption in some weird way, not sure. Unfortunately encryption at rest is a hard requirement for me. Looking forward to the enhancement you’re proposing.

Thanks for all your help! Great product!
