There are 3 related questions inline below:
We are trying to listen for changes to Dgraph data and then do something with that data (e.g., replicate it elsewhere in a different serialization format). We were told that some people have tried listening to the Badger files. Question 1: Is this a good approach?
If we want to copy 100% of the Dgraph data, no more and no less, then I imagine we’d want to deploy exactly one listener per shard, regardless of the number of replicas. Question 2: Is that right?
To accomplish the above, we tried using the Go library github.com/dgraph-io/badger/v2, but we couldn’t get Subscribe to work. Our subscription callback is never called, and the panic in our test program is never hit when we add to or remove from the database (see our test program: Playing around with badger subscription · GitHub).
Question 3: Any idea what we’re doing wrong?
Program output:
2020/03/03 14:03:43 starting
badger 2020/03/03 14:03:43 INFO: All 0 tables opened in 0s
badger 2020/03/03 14:03:43 INFO: Replaying file id: 0 at offset: 0
badger 2020/03/03 14:03:43 INFO: Replay took: 30.650425ms
badger 2020/03/03 14:03:43 DEBUG: Value log discard stats empty
So the database is opened, but the subscription callback is never called.
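For reference, here is a stripped-down sketch of what we expected to work (not our exact gist; the directory path and keys are placeholders, and it assumes badger v2’s `db.Subscribe(ctx, cb, prefixes...)` signature). We subscribe with a prefix that matches the keys we write, run Subscribe in its own goroutine since it blocks until the context is cancelled, and only write after the subscriber has had a moment to register:

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// Placeholder directory; in our real setup this points at the data dir.
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-sub-test"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	ctx, cancel := context.WithCancel(context.Background())
	var wg sync.WaitGroup
	wg.Add(1)

	// Subscribe blocks until ctx is cancelled, so it runs in its own goroutine.
	// The prefix here must match the keys written below.
	go func() {
		defer wg.Done()
		err := db.Subscribe(ctx, func(kvs *badger.KVList) error {
			for _, kv := range kvs.Kv {
				log.Printf("update: key=%s value=%s", kv.Key, kv.Value)
			}
			return nil
		}, []byte("key-"))
		if err != nil && err != context.Canceled {
			log.Printf("subscribe returned: %v", err)
		}
	}()

	// Give the subscriber a moment to register; writes made before the
	// subscription is active are not delivered to the callback.
	time.Sleep(100 * time.Millisecond)

	// Write a key under the subscribed prefix.
	err = db.Update(func(txn *badger.Txn) error {
		return txn.Set([]byte("key-1"), []byte("hello"))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Let the callback fire, then shut down.
	time.Sleep(100 * time.Millisecond)
	cancel()
	wg.Wait()
}
```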