Does Badger use data compression?

It is important to test a storage engine before using it in a production environment,
so I tested Badger against Java LevelDB. With the same logic, LevelDB took about ten percent of the disk space compared with Badger.
Why does this happen? Does LevelDB use data compression?
Also, while inserting data, Badger used up all my memory, while Java LevelDB only used about 30% of it. That is quite difficult to understand, don't you think?

I have checked Badger, and it is obvious that the memtable and the immutable memtables are not the reason.

Badger doesn’t use compression. But the design of separating keys from values results in a much smaller LSM tree, so we don’t need to pay the cost of compression and decompression – and it performs better. Badger is a much more concurrent system than LevelDB and is built for great read-write throughput and performance.

You can tweak Badger's memory usage by decreasing the number of memtables, the number of compactors, etc., by playing with badger.Options.
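For illustration, here is a sketch of how those options might be tuned. It assumes the `badger.DefaultOptions` builder API from the dgraph-io/badger Go package (the exact option names and defaults may differ between Badger versions, so treat this as a configuration sketch, not a recommendation):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Start from the defaults and dial the memory-heavy settings down.
	// Fewer memtables and fewer compaction goroutines lower peak RAM use,
	// at some cost to write throughput.
	opts := badger.DefaultOptions("/tmp/badger").
		WithNumMemtables(2).  // fewer in-memory tables held before flushing
		WithNumCompactors(2)  // fewer concurrent compaction goroutines

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```

Profiling with a realistic workload is the only reliable way to pick values, since the right trade-off depends on your write rate and key/value sizes.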


Badger is a simple, efficient, and persistent key-value database. It is designed to give high performance for both reads and writes.
It provides concurrent access to the database through transactions, with serializable snapshot isolation (SSI) guaranteed.
It uses a log-structured merge (LSM) tree along with a value log that separates keys from values, which reduces both write amplification and the size of the tree.
This allows the LSM tree to be served entirely from RAM, while values are served from the SSD.
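A minimal sketch of the transactional API described above, assuming the dgraph-io/badger Go package (`Update` runs a read-write transaction, `View` a read-only one; the path and key/value are placeholders):

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Read-write transaction: the key lands in the LSM tree,
	// the value in the value log.
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("answer"), []byte("42"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read-only transaction over a consistent snapshot.
	err = db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("answer"))
		if err != nil {
			return err
		}
		return item.Value(func(val []byte) error {
			fmt.Printf("answer = %s\n", val)
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Because of SSI, concurrent `Update` transactions that conflict will cause one of them to return an error at commit time, which the caller is expected to retry.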