How are we tracking on our priorities?

I’m very hopeful that if Hypermode is relying on Dgraph as part of their core solution, similar to the way Dgraph’s customers have relied on it, then Hypermode will begin ‘eating their own dogfood’ with Dgraph. I find that when that happens, the resiliency of a product goes up, simple ‘gotchas’ that were overlooked for years get addressed, and day-to-day operation of the product becomes easier.

For example, the team I work with runs Dgraph on bare metal servers. When those servers take an unexpected power hit, it’s a roll of the dice whether Dgraph will come back online without importing a nightly backup of the full database. Twice we’ve lost data because we didn’t have exports/imports working well enough to recover. We have nightly backups now, but they don’t run while Badger log compaction is occurring, and there’s no easy way to check for that and guarantee a backup was actually created without additional custom tooling (roughly what the sketch below does).
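For what it’s worth, this is the shape of the tooling we ended up writing ourselves: trigger an export through Alpha’s /admin GraphQL endpoint and refuse to call the nightly backup a success until files actually appear on disk. The URL, export format, and directory below are assumptions from our own deployment, and depending on your Dgraph version the export call may run asynchronously and need polling, so treat this as a sketch rather than a recipe.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

// Assumed values from our own deployment; adjust for yours.
const (
	adminURL  = "http://localhost:8080/admin" // Alpha's admin GraphQL endpoint
	exportDir = "/dgraph/export"              // where Alpha writes export files
)

func main() {
	// Ask Alpha for an RDF export via the /admin GraphQL API.
	query := `mutation { export(input: {format: "rdf"}) { response { message code } } }`
	body, _ := json.Marshal(map[string]string{"query": query})

	resp, err := http.Post(adminURL, "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatalf("export request failed: %v", err)
	}
	defer resp.Body.Close()

	var out struct {
		Errors []struct {
			Message string `json:"message"`
		} `json:"errors"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatalf("could not decode admin response: %v", err)
	}
	if len(out.Errors) > 0 {
		// There is no clean way to ask whether compaction is in flight,
		// so all we can do is alert and retry the export later.
		log.Fatalf("export rejected: %s", out.Errors[0].Message)
	}

	// Don't declare the nightly backup a success until files actually exist.
	entries, err := os.ReadDir(exportDir)
	if err != nil || len(entries) == 0 {
		log.Fatalf("no export files found in %s", exportDir)
	}
	fmt.Printf("export OK: %d entries under %s\n", len(entries), exportDir)
}
```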

Another example is planning to scale the database. If you exceed 1.1 TB of data, you have to raise Badger’s maxLevels to 8; we took an outage to figure this out. Why isn’t this handled automatically instead of the database breaking?
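For anyone who hits the same wall, the knob lives in Badger itself (Dgraph passes it through via Alpha’s Badger options; the exact flag spelling depends on your release, so check the docs for your version). A minimal sketch at the Badger API level:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v4"
)

func main() {
	// Badger defaults to 7 LSM levels, which is roughly the 1.1 TB ceiling
	// we ran into; raising MaxLevels to 8 is what got us past it.
	opts := badger.DefaultOptions("/data/badger").WithMaxLevels(8)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()
}
```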

Ratel is our view into the operations of the database, but I can’t view all the tablets within a group in Ratel. Is there another way to view all the tablets? Sure (see the sketch below), but it’s not as easy for an end user as Ratel.
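The “other way” for us is scraping Zero’s /state endpoint. A minimal sketch, assuming Zero’s HTTP port is 6080 and that the response keeps the shape we see today (groups keyed by ID, each with a tablets map keyed by predicate name):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// Assumed Zero address; /state returns the cluster membership as JSON.
const zeroStateURL = "http://localhost:6080/state"

// Decode only the parts we care about: groups keyed by ID, each with a
// tablets map keyed by predicate name.
type clusterState struct {
	Groups map[string]struct {
		Tablets map[string]json.RawMessage `json:"tablets"`
	} `json:"groups"`
}

func main() {
	resp, err := http.Get(zeroStateURL)
	if err != nil {
		log.Fatalf("fetching /state: %v", err)
	}
	defer resp.Body.Close()

	var st clusterState
	if err := json.NewDecoder(resp.Body).Decode(&st); err != nil {
		log.Fatalf("decoding /state: %v", err)
	}
	for gid, group := range st.Groups {
		fmt.Printf("group %s (%d tablets):\n", gid, len(group.Tablets))
		for predicate := range group.Tablets {
			fmt.Printf("  %s\n", predicate)
		}
	}
}
```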

While these are only our experiences, I look forward to Hypermode identifying their own list of improvements to the core operation and day-to-day running of Dgraph, and improving it for the community.
