Alpha data skew

What I want to do

When looking at the resource usage of each pod in k8s, I found that there is a serious skew in resource usage across the Alpha pods. What causes this kind of skew, and how can it be avoided?

What I did

Imported more than 20 million nodes (I have no way to count the exact amount of data).

Dgraph metadata

dgraph version
Dgraph version   : v20.11.2
Dgraph codename  : tchalla-2
Commit timestamp : 2021-02-23 13:07:17 +0530
Branch           : HEAD
Go version       : go1.15.5
jemalloc enabled : true

The difference in storage size might be related to unbalanced predicates. The automatic balancing happens over time, but you can accelerate it with the move-tablet procedure. See this link: https://dgraph.io/docs/deploy/dgraph-zero/#endpoints
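
For example, moving a tablet through Zero's HTTP endpoint looks roughly like this (a minimal sketch; the predicate name, target group, and port are placeholders based on the defaults):

# Ask Zero to move the "name" predicate to group 2.
# 6080 is Zero's default HTTP port; adjust to your deployment.
curl "http://localhost:6080/moveTablet?tablet=name&group=2"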

Cheers.

@MichelDiz - @musiciansLyf is showing the output of kubectl top pods but cut off the headers - the final column is memory, not storage.
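
For anyone reading along, the missing header row of kubectl top pods looks like this (the namespace flag is only needed if Dgraph runs outside the default namespace):

kubectl top pods -n <namespace>
NAME             CPU(cores)   MEMORY(bytes)
...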

@musiciansLyf I assume this is the leader of your group with 3 pod replicas - it will always do more work.

In that case, the solution is simple. Just put a load balancer in front of the pods. This works well with HTTP requests. gRPC is quite hard to deal with in K8s. NGINX works nicely for gRPC balancing (e.g. in Docker); see the sketch below.
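
A rough sketch of what that NGINX config could look like (the upstream pod names and port 9080 are assumptions based on the Alpha defaults; gRPC proxying needs NGINX 1.13.10+):

upstream dgraph_alphas {
    # The three Alpha pods, each serving gRPC on the default 9080 port.
    server dgraph-alpha-0:9080;
    server dgraph-alpha-1:9080;
    server dgraph-alpha-2:9080;
}

server {
    listen 9080 http2;                   # gRPC runs over HTTP/2
    location / {
        grpc_pass grpc://dgraph_alphas;  # round-robin across the Alphas
    }
}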

Also, if you are using the live loader, you can pass all the exposed Alpha addresses to it, and the live loader will balance the load for you. But all the addresses need to be exposed, or you can do it via an init pod or sidecar.
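
Something like this (a sketch; the pod addresses and file name are assumptions):

# Pass every exposed Alpha address to the live loader so it spreads
# the mutations across all of them. 5080 is Zero's default gRPC port.
dgraph live -f data.rdf.gz \
  --alpha dgraph-alpha-0:9080,dgraph-alpha-1:9080,dgraph-alpha-2:9080 \
  --zero dgraph-zero-0:5080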

Yes, dgraph-alpha-2 is the Alpha leader. Can you please tell me how to check the specific usage of each PVC?
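
(For reference, one way to check this is sketched below; it assumes the Alpha data volume is mounted at /dgraph, which is the chart default.)

# Requested size and bound volume of each PVC:
kubectl get pvc

# Actual disk usage inside a pod:
kubectl exec -it dgraph-alpha-2 -- df -h /dgraph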

But there were no queries or mutations at the time, so why would its memory usage be so high?

You said you “imported more than 20 million nodes”; the RAM takes some time to be freed.

Uhhh, the nodes were imported about a month ago, but the memory usage is still high now.

So I don’t know what it is. Take the pods down and restart them. If the memory climbs again, something else might be going on.
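
For example, assuming the Alphas run as a StatefulSet named dgraph-alpha (the name is a guess based on the pod names above):

# Restart the Alpha pods one at a time.
kubectl rollout restart statefulset dgraph-alpha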

After I restarted, the memory consumption seems to be the same. I will continue to observe the usage later. Is there a tool that can observe the distribution of memory consumption?
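
(One option: Dgraph exposes Go’s standard pprof endpoints on the Alpha HTTP port, so you can pull a heap profile. This sketch assumes the default HTTP port 8080 and uses the pod name from this thread:)

# In one terminal, forward the Alpha HTTP port to localhost.
kubectl port-forward dgraph-alpha-2 8080:8080

# In another terminal, fetch and explore the heap profile.
go tool pprof http://localhost:8080/debug/pprof/heap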

When will the sharding strategy support sharding by predicate?

Dunno, it is on the roadmap tho.

Okay, please update this thread once it’s supported.

To be clear, when you say this you mean “Sharding at predicate level” right? - You won’t see this in the short term, probably mid 2022.


Alright, got it.