I’m working on a project that will use Badger for caching. When a record expires, however, I need to update it and publish it to Kafka. My first thought was to run a function ahead of garbage collection and check each key to see whether it had expired. What I found, however, was that expired keys were not returned when I iterated over the keys. Through research and trial and error I’ve learned that I can see the expired keys if I set the AllVersions iterator option to true. I’m wondering whether that is guaranteed, though, or whether I could still lose keys if compaction runs before I iterate.

My other thought was to manage expiration myself, possibly iterating over the keys in reverse with SinceTs to reduce the number of keys I have to examine (assuming that’s feasible). The downside of managing expiration myself is that I then have to unmarshal each value to read its last-seen timestamp and decide whether the record has expired, needs to be updated, and should be published to Kafka.

Any guidance on the best way to achieve this would be greatly appreciated.
With prefix iteration, you would store each key with its expiry timestamp as a prefix. You can then iterate with the timestamp as the prefix to find all the expired keys.
You would now have something like:
t1~k1 --> v1
t1~k2 --> v2
t2~k3 --> v3
...
tn~km --> vm
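A minimal sketch of that key scheme in plain Go (no Badger dependency; the sorted slice stands in for Badger's lexicographically ordered keyspace). The helper names `makeKey` and `expiredBefore` are my own, and the timestamp is encoded big-endian at a fixed width so that byte-wise key order matches chronological order — a naive decimal string like `t100~k1` would sort incorrectly once timestamps vary in length:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"sort"
)

// makeKey prefixes the user key with an 8-byte big-endian expiry timestamp
// so that lexicographic (Badger) key order equals chronological order.
func makeKey(expiresAt uint64, userKey string) string {
	var ts [8]byte
	binary.BigEndian.PutUint64(ts[:], expiresAt)
	return string(ts[:]) + "~" + userKey
}

// expiredBefore returns every key whose timestamp prefix is < cutoff.
// With Badger this would be an iterator seek from the zero key, stopping
// once the 8-byte prefix reaches the cutoff; here a sorted slice stands in.
func expiredBefore(keys []string, cutoff uint64) []string {
	sort.Strings(keys)
	var cut [8]byte
	binary.BigEndian.PutUint64(cut[:], cutoff)
	var out []string
	for _, k := range keys {
		if k >= string(cut[:]) { // reached keys expiring at/after the cutoff
			break
		}
		out = append(out, k)
	}
	return out
}

func main() {
	keys := []string{
		makeKey(100, "k1"),
		makeKey(100, "k2"),
		makeKey(200, "k3"),
		makeKey(300, "k4"),
	}
	// Everything expiring before t=250: k1, k2, k3.
	for _, k := range expiredBefore(keys, 250) {
		fmt.Printf("%s expired\n", k[9:]) // strip 8-byte timestamp + '~'
	}
}
```

On each pass you would scan from the start of the keyspace up to the current time's prefix, re-publish each expired record to Kafka, and delete (or rewrite) the scanned keys.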
Hope this helps!