A use case came up where using Dgraph would be nice. Roughly, I'm wondering what I'd have to be concerned about if we pushed 3 million+ events into it per day.
Mainly I’m worried about how hard it would be on our ops guys.
3M events per day works out to roughly 2K per minute (about 35 per second), which is very acceptable; I don't see anything to worry about there. You can go much further, especially with a cluster of generous specs.
For a performant setup, you need good IOPS, a reasonable amount of memory, and load distribution across all the Alpha nodes to help keep the cluster healthy. I think that's the basic "recipe" for a good balance.
But a simple SSD, 16 GB of RAM, and a few cores per node are enough to get going.
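If it helps, here's a minimal sketch of what batched ingestion with the load spread across Alphas could look like using the official dgo client. The addresses, the `Event` struct, and the batch size are assumptions for illustration, not anything specific to your setup:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
)

// Event is a hypothetical payload; adjust to your real schema.
type Event struct {
	UID   string `json:"uid,omitempty"`
	DType string `json:"dgraph.type,omitempty"`
	Name  string `json:"name,omitempty"`
}

func main() {
	// Hypothetical addresses: one gRPC connection per Alpha node.
	addrs := []string{"alpha1:9080", "alpha2:9080", "alpha3:9080"}
	var backends []api.DgraphClient
	for _, addr := range addrs {
		conn, err := grpc.Dial(addr, grpc.WithInsecure())
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
		backends = append(backends, api.NewDgraphClient(conn))
	}
	// dgo picks one of the underlying connections per request,
	// which spreads the load across the Alphas.
	dg := dgo.NewDgraphClient(backends...)

	// Batch events instead of committing one at a time; at ~35 events/s,
	// a batch of 1000 is roughly one commit every 30 seconds.
	batch := make([]Event, 0, 1000)
	for i := 0; i < 1000; i++ {
		// Distinct blank nodes so each event becomes its own node.
		batch = append(batch, Event{
			UID:   fmt.Sprintf("_:ev%d", i),
			DType: "Event",
			Name:  fmt.Sprintf("event-%d", i),
		})
	}
	payload, err := json.Marshal(batch)
	if err != nil {
		log.Fatal(err)
	}
	mu := &api.Mutation{SetJson: payload, CommitNow: true}
	if _, err := dg.NewTxn().Mutate(context.Background(), mu); err != nil {
		log.Fatal(err)
	}
}
```

With `CommitNow` set, each batch is a single transaction, so you avoid the overhead of one commit per event.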
Well, for performance reasons we limit most queries to 1k nodes per block by default. You can exceed this limit with the pagination parameter, using `first: 100000000` for example.
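For example, raising the cap through dgo could look like this; the `Event` type and `name` predicate are just assumptions about your schema, and the `first` argument is the point here:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
)

func main() {
	// Hypothetical single-Alpha address; adjust as needed.
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// "first" overrides the default per-block result cap.
	const q = `{
	  events(func: type(Event), first: 100000000) {
	    uid
	    name
	  }
	}`
	resp, err := dg.NewReadOnlyTxn().Query(context.Background(), q)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", resp.Json)
}
```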
There's no hard limit on what you can request; it just depends on whether your browser (in the case of Ratel) or your program/app can handle the response.