Continuing the discussion from Query server timestamps in GraphQL?:
What?!? Please tell me you are going to share this!?!
I used it to import 600+ MySQL databases into one Dgraph cluster. My implementation is not a live stream right now, but by using timestamps from MySQL, and eventually timestamps in Dgraph, I hope it can one day soon support users transitioning between our app versions. The docs describe how to import from an .rdf file, so the process would be: client data → .rdf file → live load into Dgraph.
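That first step, client data → .rdf, can be sketched as a small conversion function. This is a minimal sketch with hypothetical table and column names, not my actual import code; it just shows the shape of turning rows into N-Quads:

```python
# Sketch: convert MySQL-style rows into N-Quad RDF lines that the
# live loader can ingest. Table/column names are illustrative only.

def rows_to_rdf(table, rows, pk="id"):
    """Emit one N-Quad per column value, keyed by a blank node per row."""
    lines = []
    for row in rows:
        subject = f"_:{table}.{row[pk]}"      # blank node, e.g. _:user.42
        for col, val in row.items():
            if col == pk:
                continue
            lines.append(f'{subject} <{table}.{col}> "{val}" .')
    return "\n".join(lines)

rows = [{"id": 1, "name": "Ada", "created_at": "2021-03-01 12:00:00"}]
print(rows_to_rdf("user", rows))
```

Write the output to a file and feed it to the live loader, e.g. `dgraph live -f data.rdf`.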
I'm wondering, though: if you are running two identical Dgraph databases, is there a better way to sync? After a sync, are the two exact duplicates, or does the client hold only part of the server's data? If they are exact duplicates after a sync, then it may be possible, with some advanced configuration, to run the client as an alpha of the server, so that when they come into contact the server's alpha(s) get updated from the client's alpha. Hmm… I wonder if there is a better way using gRPC directly instead of GraphQL or the live loader.
If you are running Dgraph client side, then you must also be running a client-side Zero that hands out uids. That is probably the hard part: how would the server's Zero know which chunk of uids the client's Zero has been using, so it doesn't reuse them?
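One way to think about the chunk problem is a lease protocol: some allocator hands out disjoint uid ranges, so an offline client can never mint a uid the server might later reuse. (Dgraph Zero does lease uid ranges to alphas internally; the sketch below is just the idea in miniature, with illustrative names, not Dgraph's actual API.)

```python
# Sketch of a uid chunk lease: a central allocator hands out disjoint
# half-open ranges, so client and server never collide on uids.
# All names here are illustrative, not Dgraph's real interface.

class UidLeaser:
    def __init__(self):
        self.next_uid = 1

    def lease(self, size):
        """Reserve a half-open [start, end) range of uids."""
        start = self.next_uid
        self.next_uid += size
        return start, self.next_uid

leaser = UidLeaser()
client_range = leaser.lease(10_000)   # client Zero allocates only in here
server_range = leaser.lease(10_000)   # server keeps allocating past it
print(client_range, server_range)
```

The open question in my scenario is who plays the role of the allocator once the client goes offline, i.e. how the lease survives the disconnect.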
Kinda reminds me of when I was running client-side MySQL slaves that pushed their changes up whenever they came into contact with the master. For the uid problem there, I had a single slave and a single master: the slave's AUTO_INCREMENT used the even values and the master used the odds. Maybe the team has already dealt with syncing offline/online Dgraph clusters.
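The even/odd trick generalizes to any number of writers: node k of n only uses ids congruent to k mod n. A toy sketch (my own illustration, not anything Dgraph provides):

```python
# Sketch: generalize the even/odd AUTO_INCREMENT trick. Node k of n
# uses ids k, k+n, k+2n, ..., so offline writers can never collide.

def id_stream(node, total_nodes, start=0):
    """Yield the ids reserved for this node."""
    i = start * total_nodes + node
    while True:
        yield i
        i += total_nodes

master = id_stream(1, 2)   # odds:  1, 3, 5, ...
client = id_stream(0, 2)   # evens: 0, 2, 4, ...
print(next(master), next(master), next(client))
```

The catch, compared to the lease approach, is that the set of writers must be fixed up front, which is fine for one master and one slave but awkward for many intermittently connected clients.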