QUESTION: Best way to create snapshots from badger

Hello, we are currently working on an implementation of badger with the raft algorithm, and we were wondering whether there is a suggested approach for creating snapshots from badger.

The current implementation we are looking at comes from Cete, but we are not sure whether it is optimal, especially considering that our database will grow beyond 100 million records.

There, one goroutine is responsible for reading the key-value pairs from badger:

...

db.View(func(txn *badger.Txn) error {
    opts := badger.DefaultIteratorOptions
    opts.PrefetchSize = 100
    
    it := txn.NewIterator(opts)
    defer it.Close()

    for it.Rewind(); it.Valid(); it.Next() {
        item := it.Item()
        key := string(item.Key())

        var value []byte
        if err := item.Value(func(val []byte) error {
            // val is only valid inside this closure, so copy it out
            value = append([]byte{}, val...)
            return nil
        }); err != nil {
            return err
        }

        ...

        ch <- &protobuf.KeyValuePair{
            Key:   key,
            Value: append([]byte{}, value...),
        }
    }

    return nil
})

...

While another goroutine writes the items to disk:

...

for {
	kvp := <-ch
	// a nil pair signals that the reader goroutine is done
	// (receiving from a closed channel yields nil here)
	if kvp == nil {
		break
	}

	// encode the pair length-delimited, so the stream can be decoded
	// message by message when the snapshot is restored
	buff := proto.NewBuffer([]byte{})
	err := buff.EncodeMessage(kvp)
	if err != nil {
		return err
	}

	_, err = sink.Write(buff.Bytes())
	if err != nil {
		return err
	}
}

...

Both functions run as part of the raft implementation by hashicorp.
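For context, this is roughly how the two pieces above could be wired together inside hashicorp/raft's FSMSnapshot.Persist. This is only a sketch based on the snippets above, not the actual Cete code: the fsmSnapshot type, the import path of the generated protobuf package, and the badger version are assumptions.

package store

import (
	badger "github.com/dgraph-io/badger/v3"
	"github.com/golang/protobuf/proto"
	"github.com/hashicorp/raft"

	protobuf "example.com/yourapp/protobuf" // placeholder for the generated messages
)

type fsmSnapshot struct {
	db *badger.DB
}

func (s *fsmSnapshot) Persist(sink raft.SnapshotSink) error {
	ch := make(chan *protobuf.KeyValuePair)

	// reader goroutine: the db.View iteration shown above, sending each
	// pair on ch and closing the channel once the iteration is finished
	go func() {
		defer close(ch)
		// ... iterate over the badger transaction and send on ch ...
	}()

	// writer: the loop shown above, encoding each pair and streaming it
	// into the raft snapshot sink
	for kvp := range ch {
		buff := proto.NewBuffer([]byte{})
		if err := buff.EncodeMessage(kvp); err != nil {
			sink.Cancel()
			return err
		}
		if _, err := sink.Write(buff.Bytes()); err != nil {
			sink.Cancel()
			return err
		}
	}

	return sink.Close()
}

// Release is called by raft once it is done with the snapshot.
func (s *fsmSnapshot) Release() {}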

We have observed that the Backup() / Load() functions allow for incremental backups, which could also be used as database snapshots.
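If coupling the snapshot format to badger's own backup stream is acceptable, Backup() could write straight into the raft sink, without the intermediate channel and the per-pair protobuf encoding, and Load() could be used on the restore side. A minimal sketch, assuming the badger v3 signatures (Backup(w, since) and Load(r, maxPendingWrites)) and the hashicorp/raft SnapshotSink and FSM.Restore types; persistWithBackup and restoreWithLoad are just hypothetical helper names:

package store

import (
	"io"

	badger "github.com/dgraph-io/badger/v3"
	"github.com/hashicorp/raft"
)

// persistWithBackup streams a full badger backup into the raft snapshot sink.
func persistWithBackup(db *badger.DB, sink raft.SnapshotSink) error {
	// since = 0 asks for a full backup; the returned version could be kept
	// if incremental backups are wanted later on
	if _, err := db.Backup(sink, 0); err != nil {
		sink.Cancel()
		return err
	}
	return sink.Close()
}

// restoreWithLoad replays a backup stream into badger, e.g. from FSM.Restore.
func restoreWithLoad(db *badger.DB, rc io.ReadCloser) error {
	defer rc.Close()
	// 256 is just an example value for maxPendingWrites
	return db.Load(rc, 256)
}

This would keep the snapshot path to a single pass over the database and let badger handle the encoding, at the cost of tying the snapshot format to badger's backup stream.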

I would therefore like to ask whether this is the right approach, or whether there is a better implementation.

Thank you!