Moved from GitHub badger/1368
Posted by eloff:
I’m considering running BadgerDB on HDDs for a use case where I only insert items and then occasionally scan the entire database. Given the limited random-access performance of an HDD, these scans would be orders of magnitude more efficient if they could walk the value log sequentially in file order.
Am I right that there isn’t currently a way to do this or did I miss something?
Could such an iterator be implemented easily in BadgerDB? From what I know of the architecture I think so, but I’m not familiar with the internals.
Interestingly, if there were a feature like the one proposed in dgraph-io/badger#1367 (“Feature Request: add a hook to customize SST compaction”), one could perhaps abuse that callback to not only scan the entire value log in order, but to compact it at the same time.