Encryption at Rest in Dgraph and Badger

We built “Encryption at Rest” in Badger v2. Encryption is complex, but important. With this blog post, we not only want to introduce this feature to our users, but also dive into the details of how we implemented encryption in Badger, so that readers can gain enough understanding to introduce AES encryption in their own systems.

Background

In 2008, Microsoft released Transparent Data Encryption (TDE) to provide Encryption at Rest in SQL Server. Since then, TDE has become an expectation, if not a requirement, for databases. Oracle supports it, MongoDB supports it, and PostgreSQL is considering adding it in v14. TDE is important because it is a stepping stone towards compliance with security standards like HIPAA and PCI DSS, which require the protection of data at rest. With data protection regulations such as GDPR and the sheer mass of data that companies collect and accumulate, the protection and control of information has become increasingly important.

Dgraph faces similar expectations. But instead of building encryption directly into Dgraph, we decided to offload that complexity onto Badger. This benefits not only Dgraph users but also the wider community that actively uses Badger.

Badger is implemented as an embeddable library, which makes it especially powerful for building more specialized or complex systems on top of it. Layering systems on top of Badger provides a number of significant benefits. Most importantly, it provides separation of concerns. For example:

  • Badger provides fast access to data by key on a single server, and because it manages the actual data on disk, it is in charge of data security.
  • Dgraph builds a graph database on top of Badger, and is also in charge of scaling to multiple distributed servers.

This means that we were able to add encryption in Badger with minimal changes to Dgraph, and Dgraph gained encryption in the process. The same benefit can accrue to other layered systems that use Badger. Furthermore, because Badger is a smaller, independent system, new features like encryption can be built, tested, and verified more easily and with more confidence.

This is similar to the Internet Protocol Suite, which provides high-level protocols like TCP on top of lower-level protocols like IP. This allows other high-level protocols (e.g. UDP, DCCP, SCTP, RSVP) to be built on top of IP. When improvements are made to IP (such as the move from IPv4 to IPv6), the higher-level protocols can all take advantage of them easily.

What is Badger?

BadgerDB is a key-value database (KVDB) designed for performance and resilience. In benchmarks run on today’s typical cloud server hardware, Badger performs strongly compared to other popular KVDBs. Badger can run concurrent ACID transactions using multiversion concurrency control (MVCC), providing serializable snapshot isolation (SSI) guarantees. Badger provides resilience in the event of process, filesystem, or even hardware crashes.

In addition to speed and resilience, Badger’s rich feature set and ease of use have made it a popular library. It has quickly become the most active KVDB written in Go. On GitHub, Badger has over 7.3K stars and is used in over 850 projects, including Uber’s Jaeger and IPFS. Most notable is UsenetExpress, which uses Badger to store petabytes of data.

Advanced Encryption Standard

Badger uses the highly regarded AES encryption algorithm, standardized by the US NIST and used by MongoDB, SQLite, and many other databases and systems. Even RocksDB, the KVDB used by Dgraph before Badger was created, has since been enhanced to support AES. In fact, AES has become an industry standard: it is among the most secure and widely used algorithms for encrypting data. Wide use is critical for an encryption standard because it ensures that potential security flaws are found and fixed.

AES is symmetric; the same encryption key is used for both encrypting and decrypting data. Badger supports key rotation to further secure access to data. This allows Badger to be used in systems that need to meet the various data protection regulations and requirements.

The Need for Key Rotation

Encrypting and decrypting data requires access to an encryption key. AES keys come in three sizes: 128, 192, and 256 bits. Which size you use is actually not very important, as a brute-force crack of even a 128-bit key would take the fastest computer in the world over a hundred quadrillion (10 to the 17th power) years. Even if you could get 10 million supercomputers to work together on the crack, it would still take longer than the current age of the universe.

Much more important is maintaining the security of the key to avoid key leaks. Obviously, encryption keys must be stored securely by the user and should not be easy to guess. However, there are other sources of key leaks. For example, researchers have demonstrated “side channel” attacks that can crack even the most secure (256-bit) AES keys by measuring the electromagnetic radiation coming from a computer, using a device that costs around $200 and fits into a jacket pocket. Computers doing encryption must be physically secure to prevent this.

Even without physical access, if the same key is used too often, there are known ways for attackers to determine the value of the key by analyzing large amounts of data encrypted with it. To avoid this, the key must be changed regularly, which is referred to as key rotation. However, when the key is changed, existing encrypted data must be decrypted using the old key and then re-encrypted using the new key. These computations would significantly reduce performance.

One Key to Rule Them All, Many Keys to Find Them

Instead of using the AES encryption key directly to encrypt data, Badger uses two types of keys:

  • A user-provided AES encryption key, called the master key, is used to encrypt auto-generated data keys.
  • Data keys are used to encrypt and decrypt data on disk. Each encrypted data key is stored on disk along with the encrypted data.

The length of your master key must be 16, 24, or 32 bytes. This determines which variant of AES encryption is used (AES-128, AES-192, or AES-256, respectively).
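To make the two-level scheme concrete, here is a minimal envelope-encryption sketch in Go. It is illustrative only: the wrapDataKey function and the choice of AES-GCM for wrapping the data key are assumptions for this example, not Badger’s actual on-disk format.

package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "fmt"
)

// wrapDataKey generates a fresh data key and encrypts ("wraps") it with
// the master key. Only the wrapped form would ever be written to disk.
func wrapDataKey(masterKey []byte) (dataKey, wrapped []byte, err error) {
    dataKey = make([]byte, 32) // the key that actually encrypts data
    if _, err := rand.Read(dataKey); err != nil {
        return nil, nil, err
    }
    block, err := aes.NewCipher(masterKey) // key length picks AES-128/192/256
    if err != nil {
        return nil, nil, err
    }
    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return nil, nil, err
    }
    nonce := make([]byte, gcm.NonceSize())
    if _, err := rand.Read(nonce); err != nil {
        return nil, nil, err
    }
    wrapped = append(nonce, gcm.Seal(nil, nonce, dataKey, nil)...)
    return dataKey, wrapped, nil
}

func main() {
    masterKey := []byte("0123456789abcdef") // 16 bytes -> AES-128; demo only, never use a predictable key
    _, wrapped, err := wrapDataKey(masterKey)
    if err != nil {
        panic(err)
    }
    fmt.Printf("wrapped data key (%d bytes): %x\n", len(wrapped), wrapped)
}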

Note that you should never use a predictable string as your master key. If you have a password manager (such as 1Password, LastPass, etc.), you can use its built-in password generator to create a strong encryption key. Even if you don’t have a password manager, you can use a reputable online password generator, such as the one 1Password provides, to generate your master key.

You should rotate your master key on a regular schedule. Fortunately, because the master key is used to encrypt only the data keys, which even in aggregate are much smaller than the data stored in the database, the master key does not need to be rotated as often as the data keys to prevent key leaks. Even better, when the master key is rotated, only the data keys need to be decrypted using the old master key and then re-encrypted using the new master key. This is tremendously faster than re-encrypting all of the data on disk.
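A sketch of that rotation step, using hypothetical wrap and unwrap helpers in the spirit of the envelope example above (this is not Badger’s actual code):

// rotateMasterKey re-wraps only the small data keys; the bulk data on
// disk is never touched. wrap and unwrap are hypothetical helpers that
// encrypt/decrypt a data key with a master key.
func rotateMasterKey(oldMaster, newMaster []byte, wrappedKeys [][]byte) ([][]byte, error) {
    out := make([][]byte, 0, len(wrappedKeys))
    for _, w := range wrappedKeys {
        dataKey, err := unwrap(oldMaster, w) // decrypt with the old master key
        if err != nil {
            return nil, err
        }
        rewrapped, err := wrap(newMaster, dataKey) // re-encrypt with the new one
        if err != nil {
            return nil, err
        }
        out = append(out, rewrapped)
    }
    return out, nil
}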

Avoiding Encrypted Duplicates

When data keys are used to encrypt data stored in the database, the same data will often be encrypted multiple times before the data key’s rotation period expires. Encrypting the same data with the same data key always generates the same encrypted text to be stored in the database. This makes it easier for an attacker to infer the original plaintext.

To reduce the predictability of the original data, Badger incorporates a standard encryption technique that doesn’t use the data key directly to encrypt the data. Instead, Badger generates a random 16-byte array called an Initialization Vector (IV). The data key is used to encrypt the IV and then the encrypted IV is XORed with the original data, and the result is stored on disk. This means that even if the same block is encrypted multiple times, the random value of the IV ensures that the stored text will be different each time.
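Go’s standard library implements exactly this keystream-XOR construction as CTR mode. Below is a minimal sketch (encryptWithIV is our name for the example, not a Badger function) showing that encrypting the same data twice, each time with a fresh IV, produces different ciphertexts:

package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "fmt"
)

// encryptWithIV encrypts one block of data: AES (keyed with the data key)
// turns the random IV into a keystream, which is XORed with the plaintext.
func encryptWithIV(dataKey, plaintext []byte) (iv, ciphertext []byte) {
    block, err := aes.NewCipher(dataKey)
    if err != nil {
        panic(err)
    }
    iv = make([]byte, aes.BlockSize) // fresh random 16-byte IV
    if _, err := rand.Read(iv); err != nil {
        panic(err)
    }
    ciphertext = make([]byte, len(plaintext))
    cipher.NewCTR(block, iv).XORKeyStream(ciphertext, plaintext)
    return iv, ciphertext
}

func main() {
    key := make([]byte, 16)              // demo AES-128 key; use a random key in practice
    data := []byte("the same 4KB block") // stand-in for a block of data
    _, ct1 := encryptWithIV(key, data)
    _, ct2 := encryptWithIV(key, data)
    fmt.Printf("ct1=%x\nct2=%x\n", ct1, ct2) // same key and data, different stored bytes
}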

Storing IVs with Minimal Overhead

Badger is based on LSM trees. An advanced feature of Badger’s LSM tree implementation is that, for values larger than a user-specified threshold size, it can separate the key-value pairs and store the values in a value log (vlog). The LSM tree then stores only the keys and pointers to the values. This separation results in much smaller LSM trees and reduces both the read and write amplification factors typically associated with them. Assuming 16 bytes per key and 16 bytes per value pointer, a single 64MB file can store two million key-value pairs. For many datasets, the entire LSM tree can fit in memory, making the task of searching for a key much faster.

The LSM tree is composed of many equally sized files called SSTables, arranged into a pyramid-like structure. Each lower level in the pyramid is 10-15x the size of the level above it. Each SSTable is further divided into block structures, where each block holds 4KB of data. Badger uses a unique IV to encrypt each of these blocks and stores the IV in plaintext at the end of the encrypted block. The storage overhead of a 16-byte IV on a 4KB block is about 0.4%.

Note that it is OK to store the IVs in plaintext. Suppose an attacker gets access to the IV and the encrypted block. To decrypt the block, they’d need to encrypt the IV with the data key (to XOR the encrypted data back to plaintext). But to get access to the data key, they’d need to decrypt it using the master key. If the master key is kept safe and secure, the data key won’t be accessible, rendering the effort futile. Thus, knowing the IV is not sufficient to decrypt the data.
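Continuing the sketch from the previous section (and reusing its encryptWithIV helper), decryption splits the plaintext IV off the end of the stored block and applies the same keystream XOR:

// decryptStoredBlock is the counterpart sketch: each stored block is
// ciphertext with its 16-byte IV appended in plaintext. CTR decryption
// re-derives the same keystream from the IV, so it is the same XOR.
func decryptStoredBlock(dataKey, stored []byte) []byte {
    n := len(stored) - aes.BlockSize
    ciphertext, iv := stored[:n], stored[n:] // IV sits at the end of the block
    block, err := aes.NewCipher(dataKey)
    if err != nil {
        panic(err)
    }
    plaintext := make([]byte, len(ciphertext))
    cipher.NewCTR(block, iv).XORKeyStream(plaintext, ciphertext)
    return plaintext
}

// Round trip:
//   iv, ct := encryptWithIV(key, []byte("hello, block"))
//   stored := append(ct, iv...)
//   decryptStoredBlock(key, stored) // -> "hello, block"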

Next, we need to encrypt the values stored in the vlog files. Each value is read individually from a vlog file, using the value pointer from the LSM tree, which stores the vlog file id, the offset in the file, and the length of the value. Aggregating multiple values into a block would cause a performance slowdown, because more data would then need to be decrypted to read a single value. Therefore, we decided to encrypt each value individually, keeping the encryption in sync with the access pattern of the value logs. But IVs are supposed to be random for each piece of encrypted data (in this case, each value), so using one IV for the whole value log file isn’t ideal.

How does Badger avoid the bloat of attaching a 16-byte IV to every value? To optimize the encryption of the vlog entries, Badger uses a unique technique. Instead of generating a 16-byte IV and storing it at the end of each value in the vlog, Badger generates a 12-byte IV that is used for all values in a single vlog file. Along with it, Badger attaches a 4-byte unique value that is the offset of the value in the vlog file, which together make up the required 16-byte IV. For decryption, the 4-byte vlog offsets are available from the value pointer stored with each key.
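A rough sketch of this construction (the function name and the byte order of the offset are illustrative assumptions, not necessarily Badger’s exact layout):

package main

import (
    "crypto/rand"
    "encoding/binary"
    "fmt"
)

// vlogIV combines the per-file 12-byte base IV with the 4-byte offset of
// a value to form the full 16-byte IV for that value.
func vlogIV(baseIV []byte, offset uint32) []byte {
    iv := make([]byte, 16)
    copy(iv, baseIV)                            // bytes 0-11: per-file random IV
    binary.BigEndian.PutUint32(iv[12:], offset) // bytes 12-15: value offset
    return iv
}

func main() {
    base := make([]byte, 12)
    if _, err := rand.Read(base); err != nil {
        panic(err)
    }
    // Two values at different offsets in the same vlog file get distinct IVs.
    fmt.Printf("%x\n%x\n", vlogIV(base, 0), vlogIV(base, 4096))
}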

This technique saves 16 bytes of disk space for every value in a vlog file. For example, for a vlog file that contains 10,000 entries, storing an IV with each value would require 160,000 bytes, while this technique requires only 12 bytes for the whole file. For a typical 1GB value log, this adds only 12 bytes of overhead, even lower than the IV overhead in an SSTable.

Key Rotation Revisited

By default, Badger rotates the data keys every ten days, automatically generating new data keys whenever the old data keys expire. The user can change this schedule using the Options.WithEncryptionKeyRotationDuration function. All data keys are stored together in a file and loaded into memory when Badger is opened.

All data keys ever used are always stored. Badger does not try to determine which keys are no longer in use so they can be discarded. Here’s why: each data key is 32B, so even a thousand of these keys consume only 32KB, a small size considering how big Badger DBs can get (TBs/PBs). A thousand keys correspond to ten thousand days, or roughly 27 years worth of data keys, assuming a 10-day rotation cycle. So we felt it was OK to avoid the logical complexity of actively garbage collecting data keys.

While Badger rotates the data keys automatically, it is up to the user to rotate the master key. The user is encouraged to rotate the master encryption key frequently in order to ensure a higher level of security. This is done with the rotate subcommand:

badger rotate --dir=badger_dir --old-key-path=old/path --new-key-path=new/path

Note that currently, the Badger datastore must be offline in order to rotate the master key. Because rotating the master key requires only that the data keys be re-encrypted, this doesn’t take very long and should not be a significant problem. In the future, the requirement that the datastore be offline might be removed.

Enabling Encryption on an Existing Datastore

You can enable encryption on a Badger DB instance using these options:

opts := badger.DefaultOptions("/tmp/badger").
    WithEncryptionKey(masterKey).
    WithEncryptionKeyRotationDuration(dataKeyRotationDuration) // defaults to 10 days

If you have an existing Badger datastore that is not encrypted, enabling encryption on it will not immediately encrypt your existing data all at once. Instead, only new files are encrypted, which will happen as new data is added. As older data gets compacted and newer files are generated, those will also be encrypted over time. Badger can run in this hybrid mode easily, because each SSTable and value log file stores information about the data key used to encrypt it.
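Putting it all together, a minimal end-to-end sketch of opening an encryption-enabled DB and writing a value might look like the following. The hard-coded key is a placeholder (never use a predictable key), and note that newer Badger releases may also require an index cache to be configured when encryption is enabled:

package main

import (
    "log"

    badger "github.com/dgraph-io/badger/v2"
)

func main() {
    masterKey := []byte("0123456789abcdef") // 16 bytes -> AES-128; placeholder only

    opts := badger.DefaultOptions("/tmp/badger").
        WithEncryptionKey(masterKey)

    db, err := badger.Open(opts)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    err = db.Update(func(txn *badger.Txn) error {
        return txn.Set([]byte("hello"), []byte("world")) // stored encrypted on disk
    })
    if err != nil {
        log.Fatal(err)
    }
}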

In order to immediately encrypt all of an existing Badger datastore, you should:

  1. Export your Badger datastore
  2. Start a new instance of Badger with encryption enabled
  3. Import your data into the new Badger datastore.

This can be done using the badger backup and badger restore tools already available. Alternatively, a simple tool could be written using the Stream framework and StreamWriter interface to do this without exporting, at a throughput of up to 1.6Gbps.
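For reference, here is a sketch of what such a tool could look like against Badger v2’s Stream and StreamWriter interfaces (the paths and key are placeholders, and later Badger versions have changed these APIs):

package main

import (
    "context"
    "log"

    badger "github.com/dgraph-io/badger/v2"
    "github.com/dgraph-io/badger/v2/pb"
)

// migrate streams every key-value pair out of an unencrypted DB and
// bulk-writes it into a new, encryption-enabled DB.
func migrate(masterKey []byte) error {
    src, err := badger.Open(badger.DefaultOptions("/path/to/plain-db"))
    if err != nil {
        return err
    }
    defer src.Close()

    dst, err := badger.Open(badger.DefaultOptions("/path/to/encrypted-db").
        WithEncryptionKey(masterKey))
    if err != nil {
        return err
    }
    defer dst.Close()

    writer := dst.NewStreamWriter()
    if err := writer.Prepare(); err != nil {
        return err
    }

    stream := src.NewStream()
    stream.LogPrefix = "migration"
    stream.Send = func(list *pb.KVList) error {
        return writer.Write(list) // bulk-load the batch into the encrypted DB
    }
    if err := stream.Orchestrate(context.Background()); err != nil {
        return err
    }
    return writer.Flush() // finalize the new DB
}

func main() {
    if err := migrate([]byte("0123456789abcdef")); err != nil { // placeholder key
        log.Fatal(err)
    }
}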

Conclusion

Badger supports Encryption at Rest (TDE) using established and proven standards and best practices, implemented in a way that works well with the existing features and benefits of Badger. And because Dgraph is built on top of Badger, you can also use strong encryption with Dgraph.

We are dedicated to adding important new features to Badger and Dgraph. Give Encryption at Rest a try, and let us know how we can make it even better.

We’d like to thank the Dgraph Developer Relations team who worked tirelessly in helping write this blog post. It was an incredible team effort.

Top image: Glittering Milky Way Photo by ESO/B. Tafreshi


This is a companion discussion topic for the original entry at https://blog.dgraph.io/post/encryption-at-rest-dgraph-badger/

In Dgraph’s writings it often mentions UsenetExpress using Badger to store large amounts of data. Are there any talks about how they use Badger, and what type of architecture they are using it in? Are they sharding/federating it, etc.?

Any technical details that you could share about their usage would be interesting.

It would be good to change master keys regularly. How do we do this?

Great piece! Right now, I’m in the process of putting together an MVP for an application that would need to be HIPAA-compliant. Since it’s just a side project at the moment, I need to keep costs low. What’s the best way to test out encryption at rest with Dgraph (since that’s serving as my database/backend API) using the community edition, if that’s even possible?

I’m also interested in backups/restores, but that’s out of scope for this particular post. Thanks!

The Enterprise version does come with a 1 month free-trial license which will allow you full access to all Enterprise features including Encryption-at-rest, Backup / Restores and ACLs.

Okay, and does the same go for backups/restores? This would be another HIPAA-compliance requirement of the app, and unless there’s a way to build a workaround for reading out/writing in all data in bulk, this MVP could be somewhat limited.

@forstmeier yes, the trial license enables all Enterprise features.
