Load Balancing in a Kubernetes environment

I am trying to set up Dgraph in an Azure Kubernetes environment. Everything seems to be working: all Alphas and Zeros are healthy, and I can load data into the cluster.

$ kubectl get all -n dgraph
NAME                                READY   STATUS    RESTARTS   AGE
pod/dgraph-alpha-0                  1/1     Running   0          31h
pod/dgraph-alpha-1                  1/1     Running   0          31h
pod/dgraph-alpha-2                  1/1     Running   0          31h
pod/dgraph-ratel-6f795595b5-pww7t   1/1     Running   0          44h
pod/dgraph-zero-0                   1/1     Running   0          44h
pod/dgraph-zero-1                   1/1     Running   0          44h
pod/dgraph-zero-2                   1/1     Running   0          44h

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                         AGE
service/dgraph-alpha          ClusterIP      None           <none>            7080/TCP                        44h
service/dgraph-alpha-public   LoadBalancer   10.3.155.152   xxx.xxx.xxx.xxx   8080:31248/TCP,9080:32623/TCP   44h
service/dgraph-ratel-public   LoadBalancer   10.3.174.75    xxx.xxx.xxx.xxx   8000:31982/TCP                  44h
service/dgraph-zero           ClusterIP      None           <none>            5080/TCP                        44h
service/dgraph-zero-public    LoadBalancer   10.3.226.249   xxx.xxx.xxx.xxx   5080:30917/TCP,6080:30237/TCP   44h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dgraph-ratel   1/1     1            1           44h

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/dgraph-ratel-6f795595b5   1         1         1       44h

NAME                            READY   AGE
statefulset.apps/dgraph-alpha   3/3     44h
statefulset.apps/dgraph-zero    3/3     44h

I have set up a load test that sends 100 requests to the cluster at a time, and I notice that only one of the three Alphas (not the same one every time) shows a large jump in CPU/memory usage; the other two sit idle.

Should the requests be balanced among all 3 Alphas? What is wrong with my setup? Any help is appreciated.
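For reference, a minimal sketch of what such a load test could look like. The endpoint, query, and host are my assumptions, not the actual test; with `ALPHA` left unset (as here) the loop only counts iterations and contacts nothing.

```shell
# Hypothetical shape of the load test: fire N concurrent queries at the
# alpha LoadBalancer's HTTP port (8080). Host, endpoint path, and query
# body are illustrative assumptions.
ALPHA=${ALPHA:-}          # e.g. external IP of service/dgraph-alpha-public
N=100
sent=0
for i in $(seq 1 "$N"); do
  if [ -n "$ALPHA" ]; then
    # Only runs against a real cluster when ALPHA is set.
    curl -s -X POST "http://$ALPHA:8080/query" \
         -H 'Content-Type: application/dql' \
         -d '{ q(func: has(name), first: 1) { uid } }' -o /dev/null &
  fi
  sent=$((sent + 1))
done
wait
echo "dispatched $sent requests"
```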

If you have shard replicas set to 3, this all represents one “group” whose state is replicated amongst all 3 Alphas. There will be an elected leader node, which will have higher CPU/memory usage.

If you have shard replicas set to 1, then this represents 3 groups, and Dgraph will try to balance the predicates between them, eventually getting closer to equal work (for perfectly distributed predicate usage, anyway).
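You can inspect the group layout through Zero's /state endpoint (HTTP port 6080). The sample response below is an invented, heavily abbreviated shape used only to keep the snippet self-contained; member addresses are guesses based on the service names above.

```shell
# Against a live cluster this would be:
#   state=$(curl -s localhost:6080/state)
# Invented, abbreviated sample: with --replicas 3, all three alphas
# should land in a single group.
state='{"groups":{"1":{"members":{
  "1":{"addr":"dgraph-alpha-0.dgraph-alpha.dgraph.svc.cluster.local:7080","leader":true},
  "2":{"addr":"dgraph-alpha-1.dgraph-alpha.dgraph.svc.cluster.local:7080"},
  "3":{"addr":"dgraph-alpha-2.dgraph-alpha.dgraph.svc.cluster.local:7080"}}}}}'

# Count the members of the group (expect 3: one replica group).
members=$(printf '%s' "$state" | grep -c '"addr"')
echo "group 1 has $members members"
```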

@iluminae I have shard replicas set to 3.

exec dgraph zero --my=$(hostname -f):5080 --peer dgraph-zero-0.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080 --idx $idx --replicas 3

We use a load balancer to access the alphas, and the load looks like this.

The leader node has a higher CPU; do you know why one of the other two nodes seems to have no load?
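One way to tell which Alpha is currently the leader (and so expected to run hotter) is Zero's /state response, where the leader member carries `"leader": true`. A sketch, again with an invented one-line sample standing in for the real `curl -s localhost:6080/state` output:

```shell
# Invented sample fragment of a /state member entry; the address is a
# guess based on the StatefulSet/service names in this cluster.
sample='"1":{"addr":"dgraph-alpha-0.dgraph-alpha.dgraph.svc.cluster.local:7080","leader":true}'

# Extract the address of the member flagged as leader.
leader_addr=$(printf '%s' "$sample" \
  | grep -o '"addr":"[^"]*","leader":true' \
  | cut -d'"' -f4)
echo "current leader: $leader_addr"
```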