Reduce GC overhead by using custom memory allocator

Moved from GitHub ristretto/8

Posted by mangalaman93:

ben-manes commented :

You will likely want to use a SeqLock when reading from / writing into the byte value. See Java's StampedLock API and its internal algorithm notes for details.
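For context, a SeqLock lets readers proceed without blocking by retrying whenever a writer was active during their read. A minimal Go sketch of the idea (hypothetical `seqLockBuf` type, single writer assumed; note that the unsynchronized buffer copy is intentionally racy, as in all seqlocks, and Go's race detector will flag it):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// seqLockBuf guards a small byte buffer with a sequence counter, in the
// spirit of Java's StampedLock optimistic reads: the counter is odd while a
// write is in progress, and readers retry if it changed around their copy.
type seqLockBuf struct {
	seq atomic.Uint64
	buf [8]byte
}

// Write mutates the buffer; the odd sequence tells readers a write is in flight.
func (s *seqLockBuf) Write(b [8]byte) {
	s.seq.Add(1) // now odd: write in progress
	s.buf = b
	s.seq.Add(1) // even again: write complete
}

// Read spins until it observes the same even sequence before and after its copy.
func (s *seqLockBuf) Read() [8]byte {
	for {
		before := s.seq.Load()
		if before&1 == 1 {
			continue // writer active, retry
		}
		b := s.buf
		if s.seq.Load() == before {
			return b
		}
	}
}

func main() {
	var s seqLockBuf
	s.Write([8]byte{1, 2, 3})
	fmt.Println(s.Read())
}
```

Readers never take a lock, so the hot read path stays allocation- and contention-free; the cost is that a reader may copy torn data and must discard it when the sequence check fails.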

souvikhaldar commented :

What is the GC overhead here?

souvikhaldar commented :

Are you planning on using memory ballast for reducing GC cycles? @mangalaman93

erikdubbelboer commented :

Memory ballast doesn’t make sense for ristretto and if it would make sense it’s still something the application should do by itself, not the library.

erikdubbelboer commented :

A custom memory allocator doesn't make sense either, since ristretto only stores interface{} values. You could build a library on top of ristretto that serializes everything you put in into []byte and uses a custom allocator, but that's not ristretto's use case: ristretto also lets you store unserializable data such as channels.

souvikhaldar commented :

My intention here is to understand the GC overhead that this issue tries to reduce with a memory allocator. @mangalaman93
When I see someone trying to reduce unnecessary GC cycles to keep the heap size in check (which isn't that important), I can't help but suggest using memory ballast. It needs further fine-tuning, and it's better not to do it at the library level, but it's doable and helpful nonetheless. Also, I think it is going to be part of the Go runtime soon. @erikdubbelboer Can you suggest an alternative?

erikdubbelboer commented :

I don't think this issue was created to reduce the number of GC cycles. That's not something a custom memory allocator would change.

What this issue is about is having fewer pointers for the GC to check, so each cycle runs faster. With a custom memory allocator you can put everything into a single []byte, so the GC only has to check one pointer (to the []byte) instead of one pointer per stored object.
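The idea can be sketched in a few lines of Go (hypothetical `arenaCache` type; real implementations such as bigcache also handle hash collisions, deletion, eviction, and compaction, all omitted here):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// arenaCache stores every value in one contiguous []byte and indexes it by
// key hash. The GC scans a single slice pointer plus a pointer-free
// map[uint64]uint32, instead of one pointer per cached object.
type arenaCache struct {
	index map[uint64]uint32 // key hash -> offset into buf
	buf   []byte            // sequence of [4-byte length][payload] records
}

func newArenaCache() *arenaCache {
	return &arenaCache{index: make(map[uint64]uint32)}
}

func hashKey(key string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(key))
	return h.Sum64()
}

// Set appends the value to the arena and records its offset.
// Overwritten entries leak their old bytes; a real cache would compact.
func (c *arenaCache) Set(key string, val []byte) {
	off := uint32(len(c.buf))
	var hdr [4]byte
	binary.LittleEndian.PutUint32(hdr[:], uint32(len(val)))
	c.buf = append(c.buf, hdr[:]...)
	c.buf = append(c.buf, val...)
	c.index[hashKey(key)] = off
}

// Get reads the length header at the stored offset and returns a
// subslice of the arena (valid until the next Set reallocates buf).
func (c *arenaCache) Get(key string) ([]byte, bool) {
	off, ok := c.index[hashKey(key)]
	if !ok {
		return nil, false
	}
	n := binary.LittleEndian.Uint32(c.buf[off : off+4])
	return c.buf[off+4 : off+4+n], true
}

func main() {
	c := newArenaCache()
	c.Set("hello", []byte("world"))
	v, _ := c.Get("hello")
	fmt.Println(string(v))
}
```

The trade-off is exactly the one raised earlier in the thread: everything must be serializable to []byte, which rules out values like channels that ristretto currently accepts.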

souvikhaldar commented :

I see. I would love to learn more about it. Is this what you are talking about? Allocation efficiency in high-performance Go services | Segment Blog

If not, can you suggest some reading?

erikdubbelboer commented :

Both bigcache and use different techniques to avoid having too many pointers.

proyb6 commented :

@erikdubbelboer I think you know VictoriaMetrics has a fastcache repo as well; is it a good example for reference reading?

erikdubbelboer commented :

Yes, fastcache is also very good. I didn't mention it because it uses a slightly more complex approach to lowering GC overhead.

martinmr commented :

@mangalaman93 is this issue still relevant?

mangalaman93 commented :

I think so, because the way Ristretto stores data is all pointer-based. The more data there is, the more visible the GC effects will become.