… and I guess what I am really asking about is more like-- Raft in Dgraph vs. ??? in GunDB.
GunDB stores its graph across a swarm of peers.
For the implementation I’m pursuing, this is ideal, because it’s a pretty large graph and a pretty large swarm. I suspect Raft might get slow and whiny at >1k nodes.
Is there a way to avoid replicating the entire database on every server? I’m guessing not, but figured it was worth asking.
The application will look familiar to everyone because it’s not so far off from what I’d originally set out to do with dgraph:
Ingest blockchain data and use the linkages between nodes to learn about the data, for:
- ethereum classic
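To make the linkage idea concrete, here’s a minimal sketch of what I mean by learning from linkages: fold raw transactions into a weighted edge map between addresses. The transaction shape and function name are just illustrative assumptions, not any real client’s API -- the real ingest would come from an Ethereum Classic node.

```javascript
// Hypothetical sketch: build an address-linkage graph from raw
// transaction records. The { from, to } record shape is an
// assumption for illustration only.
function buildLinkageGraph(txs) {
  const edges = new Map(); // "from->to" -> transaction count
  for (const { from, to } of txs) {
    const key = `${from}->${to}`;
    edges.set(key, (edges.get(key) || 0) + 1);
  }
  return edges;
}

// Example usage with made-up addresses:
const txs = [
  { from: '0xA', to: '0xB' },
  { from: '0xA', to: '0xB' },
  { from: '0xB', to: '0xC' },
];
const graph = buildLinkageGraph(txs);
console.log(graph.get('0xA->0xB')); // 2
```

Edge weights like these would be the raw material for the correlation step below; whether they live in GunDB nodes or Dgraph predicates is the open question.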
And then of course it would be just insane if one did not attempt some useful correlations after doing the parsing. And the more real-time-ish it is, the better, likely. I’ll post more details and info as this concept percolates.
And PS: if this is something that has been added to the docs, then great! I’m thumbing through now.
Lastly-- what’s the maximum supported cluster size these days?