When looking at the resource usage of each pod in k8s, I found a serious skew in the Alpha pods' resource usage. What causes this kind of skew, and how can it be avoided?
The difference in storage size is likely related to unbalanced predicates. Balancing happens automatically over time, but you can accelerate it with the move-tablet procedure on Zero. See this link https://dgraph.io/docs/deploy/dgraph-zero/#endpoints
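As a rough sketch, moving a tablet is just an HTTP call to Zero's admin port. This assumes Zero's HTTP port is reachable at `localhost:6080` and that the skewed predicate is called `name` — both are placeholders for your cluster:

```sh
# Ask Zero to move the tablet (predicate) "name" to group 2.
# 6080 is Zero's default HTTP port; adjust host/port/predicate for your setup.
curl "localhost:6080/moveTablet?tablet=name&group=2"

# Inspect the resulting tablet-to-group assignment:
curl "localhost:6080/state"
```

The `/state` output shows which group each predicate lives in, so you can verify the move and spot other heavy predicates.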
In that case, the solution is simple: put a load balancer in front of the pods. This works well for HTTP requests. gRPC is harder to balance in K8s, because its long-lived HTTP/2 connections mean connection-level (L4) balancing pins each client to a single pod. NGINX handles gRPC balancing nicely (e.g. in Docker).
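A minimal NGINX sketch of L7 gRPC balancing across Alphas — the upstream hostnames and ports here are placeholders for however your pods are addressed:

```nginx
# Balance gRPC requests (not just connections) across three Alpha pods.
upstream dgraph_alphas {
    server alpha-0:9080;
    server alpha-1:9080;
    server alpha-2:9080;
}

server {
    listen 9080 http2;   # gRPC requires HTTP/2
    location / {
        grpc_pass grpc://dgraph_alphas;
    }
}
```

Because NGINX terminates the HTTP/2 connection and proxies individual gRPC requests, each request can land on a different Alpha, which avoids the pinning problem above.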
Also, if you are using the Live loader, you can pass all the exposed Alpha addresses to it, and it will balance the load across them. All the addresses need to be exposed, though — or you can handle that via an init container or sidecar.
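A sketch of that invocation, assuming three Alphas and one Zero reachable at the placeholder addresses below, and a dataset file `data.rdf.gz`:

```sh
# Give the live loader every Alpha address so it spreads mutations
# across them instead of hammering a single pod.
dgraph live \
  --alpha alpha-0:9080,alpha-1:9080,alpha-2:9080 \
  --zero zero-0:5080 \
  --files data.rdf.gz
```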
After I restarted, the memory consumption seems to be about the same. I will keep observing the usage. Is there a tool that can show the distribution of memory consumption?