Ask Dgraph Founder Anything

My apologies, sir.

Sure thing. So, from your latest blog:

There are two major things I see here outside of the amazing :star_struck: performance boosts.

> In this change, we also got rid of ludicrous mode, which was riddled with bugs. Concurrent transactions provide the same performance as ludicrous mode, while also providing strict transactional guarantees.

  1. So for users who were relying on ludicrous mode before, how do they get that kind of throughput now? Do they need to spin up multiple concurrent transactions themselves? It always seemed like a nice feature, even though it was “buggy” and was known from the beginning not to be 100% data-safe anyway. (A rough sketch of the pattern I imagine is below.)
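To be concrete, this is the kind of pattern I assume replaces a ludicrous-mode load: a minimal sketch using the Go client (dgo), where the endpoint, payload, and worker count are just placeholders I picked for illustration.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
)

func main() {
	ctx := context.Background()

	// Placeholder endpoint; adjust to your Alpha's gRPC address.
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Instead of one fire-and-forget ludicrous-mode stream, run several
	// independent transactions in parallel, each committing its own batch.
	const workers = 8
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			mu := &api.Mutation{
				SetJson:   []byte(fmt.Sprintf(`{"name": "node-%d"}`, i)),
				CommitNow: true, // each batch is its own committed transaction
			}
			if _, err := dg.NewTxn().Mutate(ctx, mu); err != nil {
				fmt.Println("mutation failed:", err)
			}
		}(i)
	}
	wg.Wait()
}
```

Each goroutine commits its own small transaction with CommitNow, which I assume is what “concurrent transactions” in the blog refers to. Is that the intended pattern, or is the client expected to do something smarter?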

> In v21.12, we have added a flag to forbid any key which has greater than 1000 splits by marking it forbidden in Badger. Once a key is forbidden, that key would continue to drop data and would always return an empty result.

  1. I understand this, but doesn’t it then put a hard limit on the otherwise endless scalability of Dgraph? (A rough way I would try to gauge how close a predicate is to that limit is sketched below.)
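Related to that: is counting the uids under a given type value a reasonable proxy for how close its index key is getting to the split limit? This is what I had in mind, reusing the dg client and imports from the sketch above; the Person type is just an example:

```go
// countType reports how many uids carry a given type value, as a rough
// proxy for how large that value's index posting list might be growing.
// "Person" is just an example type name.
func countType(ctx context.Context, dg *dgo.Dgraph) error {
	const q = `
	{
	  typeCount(func: type(Person)) {
	    total: count(uid)
	  }
	}`
	resp, err := dg.NewReadOnlyTxn().Query(ctx, q)
	if err != nil {
		return err
	}
	fmt.Println(string(resp.Json)) // e.g. {"typeCount":[{"total":123456}]}
	return nil
}
```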

I believe types are stored as actual predicates, so what happens when the dgraph.type predicate overflows? Do I stop having indexing on my types altogether, or just on a specific type? I assume the point of overflow would vary with the length of the values indexed and with the size of the uids actually stored, assuming that posting lists do not reserve 64 bits for every uid but rather use only the space needed on disk.
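To make that last assumption concrete, here is the mental model I have: sorted uids stored as deltas in varints rather than as fixed 64-bit values. This is purely illustrative on my part; I do not actually know what Dgraph’s posting lists do on disk in v21.12.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Illustration only: encode a sorted uid list as deltas in unsigned varints
// and compare against a flat 8 bytes per uid. This is my mental model of
// "just the size needed on disk", not necessarily what Dgraph actually does.
func main() {
	uids := []uint64{0x01, 0x02, 0x10, 0x2000, 0x2001, 0x7fffffff}

	buf := make([]byte, binary.MaxVarintLen64)
	var prev uint64
	packed := 0
	for _, uid := range uids {
		packed += binary.PutUvarint(buf, uid-prev) // small deltas -> few bytes
		prev = uid
	}

	fmt.Printf("fixed 64-bit: %d bytes, delta+varint: %d bytes\n",
		8*len(uids), packed)
}
```

With small, dense uid ranges the deltas pack into a byte or two each, which is why I assume the overflow point depends on the actual uid distribution and value lengths rather than on a fixed count of entries.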

Could this be related to a problem that was recently discovered here:
