Auto deleting predicates

As mentioned in the issue panic-allocator-can-not-allocate-more-than-32-buffers:

After adjusting the Badger cache to 100G, the reduce phase can successfully reach 100%, but it still returns the same allocator error.

The output p/ directory (126G in total) contains 2108 SST files and one 2G vlog file.

Then I started Alpha, but could not find any of the bulk-loaded predicates.
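To be concrete, the check I mean is a DQL schema query like the minimal sketch below (run against the Alpha's query endpoint); it did not list any of the bulk-loaded predicates:

```
# List every predicate (and its type/index settings) currently in the schema.
schema {}
```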

The logs show that Dgraph automatically deleted all the predicates.

After that, all of the data was lost.

Can anybody help me out? I have already spent a week on this issue.

Current dgraph version: v20.11.0-rc1-106-g47035439

Related to this: are you testing this to explore, or are you using the 20.11 builds (not released yet) in production? It feels like you are concerned about time, so let me know if this is urgent, because the Ristretto work in Dgraph is still in progress. Your help testing this is appreciated. I'm sure the folks following this thread will chime in later.

Thanks for replying. Yes, I used a 20.11 build.

Not truly for production, just some large test data. Since Dgraph's memory optimization works pretty well as of 20.11 (for this test data, 6 billion predicates only cost about 10G of memory on average), I think it is good enough for production, as long as those small bugs get fixed.

Once those import tests pass successfully, and if some specific data queries work fine, we will be very willing to consider purchasing the enterprise version for long-term usage.

Hence, yes, I am in a bit of a hurry to solve the above-mentioned issues.


Thanks for the clarification. I am going to check on it internally.

Cheers.


@jokk33 The Alpha is deleting the predicate because it is trying to rebalance the cluster. The log says is_mobile does not belong to the group.

Dgraph does automatic shard rebalancing, and that's why the Alpha is dropping data that no longer belongs to one of the groups.

You can run a query for the is_mobile predicate and it should work, since another group also serves this predicate.
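Something like this minimal DQL sketch (the query alias and the result fields other than is_mobile are just placeholders):

```
{
  # Fetch a few nodes that have the is_mobile predicate.
  check(func: has(is_mobile), first: 5) {
    uid
    is_mobile
  }
}
```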
