Can Dgraph do 10 Billion Nodes?

Hey Igor, Manish already explained in his responses to you what this is about.

Manish Jain 6 days ago
Go GC isn’t the best. In latest versions we are running manual GC.

Manish Jain 5 days ago
I doubt the dataset is using that much. Go can be slow in giving the space back. But the only way to know is to run memory profiles.
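For context, "manual GC" in a Go process usually just means triggering collection (and returning memory to the OS) yourself when the heap gets large, instead of relying only on the automatic pacer. Here is a tiny sketch of the idea, not Dgraph's actual code; the threshold and interval are arbitrary:

```go
package main

import (
	"runtime"
	"runtime/debug"
	"time"
)

// manualGC periodically checks heap usage and, above a threshold, forces a
// collection and asks the runtime to return freed memory to the OS.
// Illustration only; the real thing would live inside the server process.
func manualGC(thresholdBytes uint64, every time.Duration) {
	t := time.NewTicker(every)
	defer t.Stop()
	var ms runtime.MemStats
	for range t.C {
		runtime.ReadMemStats(&ms)
		if ms.HeapInuse > thresholdBytes {
			debug.FreeOSMemory() // runs a GC, then releases memory back to the OS
		}
	}
}

func main() {
	go manualGC(4<<30, 30*time.Second) // e.g. kick in above ~4 GB of heap
	select {}                          // stand-in for the real server's main loop
}
```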

I realize this happens, but you can mitigate it by running a balanced cluster with balanced loads.

This is not accurate. I have already done live loads (which is the same as using a client) several times with larger datasets, with each instance having 21GB of RAM (3 Zeros, 6 Alphas, 129GB of RAM across all instances). It certainly takes a while to load, but it does not consume 200GB of RAM, and I always do balanced loads; I don't send everything to a single instance. As Manish said, the dataset inside Dgraph does not consume that much RAM; anything much beyond 10GB is the result of problems with GC. However, this affects writes only. Queries are safe, and you can use best-effort queries.
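To illustrate that last point, here is a minimal sketch of a best-effort, read-only query using the Go client (dgo v2); the address, predicate and query are assumptions for the example, not from this thread:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc"
)

func main() {
	// Assumes an Alpha listening on localhost:9080.
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Read-only + best-effort: the query does not block on the latest
	// timestamp, which keeps reads cheap even while heavy writes are going on.
	txn := dg.NewReadOnlyTxn().BestEffort()
	resp, err := txn.Query(context.Background(),
		`{ people(func: has(name), first: 5) { uid name } }`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(resp.Json))
}
```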

I talked to some devs and we are going to prioritize an analysis of this.

If your 3 million node inserts end up consuming 15GB of RAM, it must be because your entities carry some heavy data; they can't be small, and they probably have lots of indexes. Inserting 3 million large entities per second is not something a simple cluster can do. I think even MySQL would have a problem with that (MySQL writes top out at around 10K per second).

To do that with MySQL you might need a GraphDB that uses MySQL as a backend, spread over 300 MySQL instances. It would be possible, but whether it would be practical I don't know. It is best to load the data while respecting the write limits of your current cluster configuration, or to use Bulkload, which can get you a billion++ nodes in less than 2 hours on a good instance (e.g. Loading close to 1M edges/sec into Dgraph - Dgraph Blog).
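If you do keep writing through a client (live-load style), one way to respect your cluster's write limits is to batch and throttle mutations on the client side. A rough sketch with dgo v2; the batch size, rate and the Person fields are made up for illustration:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc"
)

type Person struct {
	UID  string `json:"uid,omitempty"`
	Name string `json:"name,omitempty"`
}

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	const batchSize = 1000
	throttle := time.NewTicker(200 * time.Millisecond) // ~5 batches/s; tune for your cluster
	defer throttle.Stop()

	batch := make([]Person, 0, batchSize)
	flush := func() {
		if len(batch) == 0 {
			return
		}
		<-throttle.C // wait so the Alphas are not flooded
		pb, err := json.Marshal(batch)
		if err != nil {
			log.Fatal(err)
		}
		mu := &api.Mutation{SetJson: pb, CommitNow: true}
		if _, err := dg.NewTxn().Mutate(context.Background(), mu); err != nil {
			log.Fatal(err)
		}
		batch = batch[:0]
	}

	for i := 0; i < 100000; i++ {
		batch = append(batch, Person{
			UID:  fmt.Sprintf("_:p%d", i), // one blank node per new entity
			Name: fmt.Sprintf("person-%d", i),
		})
		if len(batch) == batchSize {
			flush()
		}
	}
	flush()
}
```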

Another example

Take the example of this benchmark: GitHub - linuxerwang/dgraph-bench: A benchmark program for dgraph.
It's a bit old (about a year), and it works with 10,000,000 person nodes, with more than 500,000,000 edges in total. It uses only one Alpha and one Zero with 64GB of memory and a 500GB SATA SSD.

That is, 64GB of RAM can easily handle those 10 million nodes and 500 million edges. However, he used Bulkload, which means the GC effect does not happen; it is avoided when using that tool.

In his upgrade from v1.0.9 to v1.0.10, throughput increased by about 50% (One-Hop Friends).

That is, even the simplest Dgraph configuration can handle millions of nodes, and a well-balanced cluster can go much further. It is a matter of planning, at least while we run that analysis.

Cheers.