Used pogreb-bench to compare badger v1.5 against badger v2:
v1.5 (serial transactions):
Number of keys: 1000000
Minimum key size: 16, maximum key size: 64
Minimum value size: 128, maximum value size: 512
Concurrency: 2
Running badgerdb benchmark...
Put: 20.065 sec, 49839 ops/sec
Get: 2.813 sec, 355552 ops/sec
Put + Get time: 22.877 sec
File size: 1.94GB
v2 (serial transactions):
Number of keys: 1000000
Minimum key size: 16, maximum key size: 64
Minimum value size: 128, maximum value size: 512
Concurrency: 2
Running badgerdb benchmark...
Put: 28.065 sec, 35632 ops/sec
Get: 2.860 sec, 349689 ops/sec
Put + Get time: 30.924 sec
File size: 2.41GB
v2 (with WriteBatch):
Number of keys: 1000000
Minimum key size: 16, maximum key size: 64
Minimum value size: 128, maximum value size: 512
Concurrency: 2
Running badgerdb benchmark...
Put: 4.076 sec, 245349 ops/sec
Get: 2.851 sec, 350804 ops/sec
Put + Get time: 6.926 sec
File size: 1.30GB
So v2 is slower than v1.5 when committing a transaction per put, but that is more than offset by WriteBatch: on puts it is roughly 5x faster than v1.5 (where WriteBatch is unavailable) and roughly 7x faster than v2 with a transaction per put.
We should make it clear in the documentation that there are much faster ways to load a lot of data into badger than committing a transaction per key.
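For reference, a minimal sketch of the two write paths compared above, assuming the v2 module path, default options, and a placeholder directory (/tmp/badger-bench); the exact WriteBatch.Set signature has changed between releases (v1.x took an extra meta byte), so check the version in use:

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Placeholder path for illustration only.
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-bench"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Slow path: one transaction commit per key, so every put goes through
	// the full commit path on its own.
	for i := 0; i < 1000; i++ {
		key := []byte(fmt.Sprintf("txn-key-%06d", i))
		err := db.Update(func(txn *badger.Txn) error {
			return txn.Set(key, []byte("value"))
		})
		if err != nil {
			log.Fatal(err)
		}
	}

	// Fast path: WriteBatch groups many writes into large internal batches
	// and commits them together.
	wb := db.NewWriteBatch()
	defer wb.Cancel()
	for i := 0; i < 1000; i++ {
		key := []byte(fmt.Sprintf("batch-key-%06d", i))
		if err := wb.Set(key, []byte("value")); err != nil {
			log.Fatal(err)
		}
	}
	// Flush commits whatever is still pending in the batch.
	if err := wb.Flush(); err != nil {
		log.Fatal(err)
	}
}
```

The key counts and key/value contents here are placeholders, not the pogreb-bench workload; the point is only the structural difference between the two loops.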