cd contrib/config/kubernetes/helm
helm install my-release ./ --set alpha.service.type="LoadBalancer"
Use JS Flock and point the endpoint to the Dgraph alpha load balancer IP.
Expected behaviour and actual result.
Pods are OOM-killed repeatedly after the JS Flock app has been running for a while.
No CPU or memory limits appear to be set on the pods, and it is worth investigating what is causing the memory growth.
For the Helm chart, resource limits should definitely be set on the pods; otherwise a pod can consume enough memory to disrupt node services such as the kubelet and leave the node tainted so that other pods cannot be scheduled on it.
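As a rough sketch, requests and limits could be supplied through a custom values file. The key names below (`alpha.resources`, `zero.resources`) and the sizes are assumptions about the chart's structure and the workload, not values taken from the chart:

```yaml
# values-limits.yaml -- illustrative only; check the chart's values.yaml for the
# actual keys before using. Sizes are placeholders, not tuned recommendations.
alpha:
  resources:
    requests:
      memory: "4Gi"
      cpu: "1"
    limits:
      memory: "8Gi"
      cpu: "2"
zero:
  resources:
    requests:
      memory: "1Gi"
      cpu: "500m"
    limits:
      memory: "2Gi"
      cpu: "1"
```

This file could then be passed to the install command above with `-f values-limits.yaml`, so the kubelet can OOM-kill a runaway pod instead of letting it starve the node.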
I also wonder how this behaves with NEGs (GKE container-native load balancing) instead of the LoadBalancer service type; this can be configured through service annotations.
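For reference, standalone NEGs on GKE are enabled through the `cloud.google.com/neg` annotation on a ClusterIP Service. The sketch below is illustrative only: the Service name and selector labels are assumptions about what the chart renders, and the ports are Dgraph Alpha's default HTTP (8080) and gRPC (9080) ports:

```yaml
# Sketch of an Alpha Service exposed through standalone NEGs instead of type LoadBalancer.
# The exposed_ports form creates NEGs that a GCP load balancer can target directly;
# with an Ingress the annotation would be '{"ingress": true}' instead.
apiVersion: v1
kind: Service
metadata:
  name: my-release-dgraph-alpha   # assumed name; use whatever the chart renders
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"8080": {}, "9080": {}}}'
spec:
  type: ClusterIP
  selector:                        # assumed labels; match the chart's pod labels
    app: dgraph
    component: alpha
  ports:
    - name: http
      port: 8080
    - name: grpc
      port: 9080
```

Whether the OOM behavior changes would still come down to the pods' resource limits, since NEG only changes how traffic reaches the pods, not how much memory they can consume.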