Data inconsistency between nodes

I have a 6-node cluster (3 Zeros, 3 Alphas) with a single group.
I run Dgraph version 20.11.3 on a Kubernetes cluster.

I found some data inconsistency between nodes: the result differs depending on which node I query. Two Alphas return the same data, one returns different data:

for ip in 10.220.21.176 10.220.10.18 10.220.41.131; do curl -X POST -H 'Content-Type: application/dql' -s -d @dgraph_query.json http://$ip:8080/query | jq . | wc -l; done
    3291
    1585
    3291

The /state endpoint shows the same output on all nodes, and there are no errors in the logs.
The only difference on the inconsistent node is "forceGroupId": true in its status. What does that mean, by the way?

How is this even possible? How can I fix this situation, and how can I diagnose it in the future if I don't see any errors?

Possibly important info: the data was populated with dgraph bulk, which I ran on each Alpha. Is the output of this command deterministic? Maybe I should have run it on only one node and copied the p directory to all the others? Could this have caused the situation?

A lot of questions, but I'm quite new to Dgraph and trying to understand what's going on. 🙂

Thank you!


I believe you bulk load to one Alpha and then, when you join the other Alpha nodes to the cluster, the data is replicated from the original Alpha node. But that might not be true, and you may need to copy the p directory.

I do believe that bulk loading (even the same data set) to multiple Alphas is not correct. The bulk loader generates UIDs for any blank nodes in the data, and those UIDs can differ between runs if you bulk load the same data multiple times. This probably explains the data you are seeing.
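
For example (a minimal sketch with made-up predicates; the exact bulk-loader flags can vary between versions), the same file loaded twice can map the same blank node to different UIDs:

    # data.rdf (tiny example with blank nodes):
    #   _:alice <name> "Alice" .
    #   _:alice <knows> _:bob .
    #   _:bob   <name> "Bob" .
    #
    # schema.txt:
    #   name: string .
    #   knows: [uid] .
    #
    # The UIDs for _:alice and _:bob are assigned during this run, so a
    # second, independent run over the same file can pick different UIDs,
    # and the two resulting p directories will not be equivalent.
    dgraph bulk -f data.rdf -s schema.txt --zero=localhost:5080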

It is interesting that the nodes all joined the same cluster despite a possible mismatch of data; I wonder whether there could be a check for identical data before allowing the join. The only difference is most likely the UIDs that were assigned, so even if you counted predicates the nodes would appear identical. Maybe some kind of hash should be generated on bulk import that is unique to that specific ingest: bulk loading the same data on another Alpha would not produce the same hash, but copying the p directory would. That is, if you do have to copy the p directory to the new Alphas. (Copying the p directory sounds familiar.)
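
As a crude stand-in for such a hash, you could fingerprint the copied p directory on each Alpha right after copying it and before starting the Alphas (the path is illustrative; this only works for literal copies, since two independent bulk loads of the same data will generally not be byte-identical):

    # Produce a single checksum for the whole p directory; run the same
    # command on every Alpha and compare the resulting hashes.
    find /dgraph/p -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum

Identical hashes mean the Alphas start from byte-identical data; a mismatch like the one described above would show up immediately.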


Agreed - the bulk load instructions say to run it once and copy the files to the appropriate places.

https://dgraph.io/docs/master/deploy/fast-data-loading/bulk-loader/#for-bigger-datasets
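
Roughly, for a single-group cluster, that looks something like the sketch below (flags, paths, and volume layout are illustrative; check the linked docs for your version):

    # 1. Run the bulk loader exactly once, against a fresh Zero, producing
    #    a single shard.
    dgraph bulk -f data.rdf.gz -s schema.txt \
      --map_shards=1 --reduce_shards=1 --zero=zero-0:5080

    # 2. Copy the single output shard (out/0/p) into every Alpha's data
    #    directory *before* the Alphas are started for the first time. On
    #    k8s this usually means getting it onto each Alpha's persistent
    #    volume, e.g. via an init container or a temporary pod.
    cp -r out/0/p /path/to/alpha-N-volume/p   # repeat for each Alpha volume

    # 3. Start the Alphas against the same Zeros the bulk loader used.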

Alternatively, for small amounts of data, you can let Dgraph replication handle the copying:

https://dgraph.io/docs/master/deploy/fast-data-loading/bulk-loader/#for-small-datasets


Thank you very much, I’ll try to do the migration once again, this time properly.
What is a bit strange to me is that there is no repair procedure when the snapshot is created on the leader. I'd expect such a situation to be fixed automatically, or at least manually (as in Cassandra, for example).

Anyway - thanks again for your interest!


Yeah, the snapshot is only sent over if a follower node is detected to be too far behind. I think the commit timestamps were badly confused by the loading pattern: the nodes assume they are talking about the exact same commits, and here they were not.

Not sure whether this was clear from the docs, but bulk loading is done against a fresh Zero cluster, and those same Zeros must remain the ones the Alphas connect to after they start up.

If you dropped the other Alphas and then added new, empty Alphas, replication would rebuild them. You would just have to pick which one you want to keep.
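
Something along these lines, though I have not verified this exact sequence on k8s, so treat it as a sketch of the idea rather than a recipe (StatefulSet, PVC, and Zero endpoint names are hypothetical):

    # Keep the Alpha you trust, drop the others and their stale data.
    kubectl scale statefulset dgraph-alpha --replicas=1
    kubectl delete pvc datadir-dgraph-alpha-1 datadir-dgraph-alpha-2

    # Ask Zero to forget the removed members (group 1, Raft ids 2 and 3 here).
    curl "http://dgraph-zero-0:6080/removeNode?group=1&id=2"
    curl "http://dgraph-zero-0:6080/removeNode?group=1&id=3"

    # Bring empty Alphas back; they should receive a snapshot from the leader.
    kubectl scale statefulset dgraph-alpha --replicas=3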

Yes, I was aware of the need to clear the zeros and did it.
Thank you.


That was my idea too, to just clear the data on the “bad” node, but it's a replica set on a k8s cluster, so it's a bit tricky (it's not a test cluster).
I'd rather do the bulk loading once again, but properly this time.


@amaster507 @gkocur @iluminae, I am trying to execute a Dgraph bulk load with a smaller dataset. As per the documentation, you don't need to copy the p folder to all the Alpha nodes; you can copy it just to the first Alpha, and Dgraph's replication will take care of copying the data to the remaining Alpha nodes.
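
Roughly what I am attempting (pod names and paths are illustrative):

    # Bulk load once against a fresh Zero, producing a single shard.
    dgraph bulk -f data.rdf -s schema.txt --reduce_shards=1 --zero=zero-0:5080

    # Copy out/0/p into the first Alpha's volume only, start that Alpha,
    # then start the remaining two Alphas with empty volumes, expecting
    # replication to stream the data to them.
    cp -r out/0/p /data/alpha-0/p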

But it is not working. Do you have any comments on how to proceed?

I am using the latest Dgraph version, v23.0.0.