Need suggestions on data modeling using BadgerDB


I am planning to use BadgerDB as a key-value store.
I need suggestions on how to model the data inside it. The value type I plan to store is an object (struct). Each tenant can have up to 1M value objects, and each object contains 30 attributes. Value objects are updated very frequently.

  1. Shall I store the entire object as a byte array? Will there be a performance cost in serializing and deserializing the value multiple times?
  2. Shall I store each attribute of the object as a separate value, with a key prefixed like objid:attrname?
  3. I also have a requirement to find all the key-value pairs modified in a time window. How can I get this info?
  4. Shall I create a separate BadgerDB instance for each tenant, or use prefix-based keys?

–Madhu C S

You can try both and see what works better in your case. It’s hard to give a suggestion from the outside.

You can store the edit time as a key-value pair as well, and then iterate to find all the keys that were modified.

Each Badger instance would have RAM and processing overhead. So, better to have just one instance. But, again, it depends on your usage.

Do you suggest batching sets of objects and then writing them together in one transaction? What should the batch size be?
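For reference, the batching itself can be sketched without Badger: split the pending writes into chunks and commit each chunk in one transaction (Badger also ships `DB.NewWriteBatch`, which sizes batches for you, and a transaction that grows too large fails with `ErrTxnTooBig`, which effectively caps the batch). The `kv` type and `chunk` helper below are illustrative; the right size is workload-dependent and worth measuring:

```go
package main

import "fmt"

// kv is a pending write (hypothetical type for this sketch).
type kv struct{ key, val []byte }

// chunk splits pending writes into batches of at most `size` pairs.
// Each batch would then be committed in a single Badger transaction.
func chunk(pairs []kv, size int) [][]kv {
	var batches [][]kv
	for size < len(pairs) {
		batches = append(batches, pairs[:size])
		pairs = pairs[size:]
	}
	return append(batches, pairs)
}

func main() {
	pairs := make([]kv, 10)
	for i := range pairs {
		pairs[i] = kv{[]byte(fmt.Sprintf("k%d", i)), []byte("v")}
	}
	batches := chunk(pairs, 4)
	fmt.Println(len(batches)) // 3 batches: 4 + 4 + 2
}
```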