Hello,
We are planning to use Dgraph to analyse a dataset of about 9 billion properties spread across about 200 million nodes (and this is just a first step).
Some of these nodes may be “super nodes”, in the sense that a single node could be connected to about 30 million other nodes (see the schema sketch at the end of this post).
- How do Dgraph / Badger handle that?
- Does anyone have experience with comparable volumes?
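
To make the question concrete, here is a minimal sketch (Go, using the dgo client) of the kind of schema we have in mind. The `follows` predicate, the index choices, and the localhost endpoint are placeholders for illustration, not our real model:

```go
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc"
)

func main() {
	// Connect to a (hypothetical) local Alpha on the default gRPC port.
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// "follows" is the super-node edge: a single subject node could
	// end up with ~30 million outgoing "follows" edges.
	schema := `
		name: string @index(exact) .
		follows: [uid] @count @reverse .
	`
	if err := dg.Alter(context.Background(), &api.Operation{Schema: schema}); err != nil {
		log.Fatal(err)
	}
}
```

If we understand the storage model correctly, that would mean a very large posting list (on the order of 30 million uids) behind a single predicate/subject pair, and we would like to know how Dgraph and Badger cope with that in practice.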