You are deleting the “E-mail” node directly. In that case you would also need to delete via the “@reverse” edge, and I do not remember whether there is a way to do that through JSON. But there is a better way. The sure thing is to build the complete JSON object and then use “null”. Like this:
# To delete all emails linked to 0x01:
{
  "uid": "0x01",
  "emails": null
}

# To delete only that link. This one is the right one,
# and it will remove the edges between 0x01 and 0x02:
{
  "uid": "0x01",
  "emails": {
    "uid": "0x02"
  }
}

# To delete all predicates of 0x02. This will not delete the link if you
# have @reverse; in that case there will still be a UID left in the parent:
{
  "uid": "0x02"
}
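Just to make it concrete, here is a minimal sketch of how the first snippet above would be wrapped if you post raw JSON to the /mutate endpoint (this assumes a Dgraph version that accepts JSON mutations over HTTP; client libraries usually take the object directly, so the wrapping may look different on your side):

# Delete payload for the first example above, as sent to /mutate:
{
  "delete": [
    {
      "uid": "0x01",
      "emails": null
    }
  ]
}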
Yes, there will be empty nodes. But Dgraph will not return them, since they have no links to other nodes.
If you prefer, you can “recycle” these nodes, but you would have to do it manually from your application: add a predicate such as “_deletedNode” and you can then search for it with “has(_deletedNode)”. But that is only if you find it really necessary.
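A rough sketch of that “recycle” idea, in case it helps (the _deletedNode name comes from above; the string value and everything else here is just an assumption for illustration):

# Mark the detached node from your application (a "set" mutation payload):
{
  "set": [
    {
      "uid": "0x02",
      "_deletedNode": "true"
    }
  ]
}

# Later, list the recyclable nodes so your application can reuse their UIDs:
{
  recycled(func: has(_deletedNode)) {
    uid
  }
}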
Do not worry about UIDs, there will be trillions of them to use.
Let’s say you have an application facing the financial world. Using this approach, you can identify that user X had already used this data in an old account, and then raise a flag to analyze that case.
You could even trace documents that way: if someone reuses the same official document ID, or “CPF” (a Brazilian ID document), your application will “know” it, and you can then build the logic to “follow the money”.
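Something along these lines, as a sketch (the cpf, name and account predicate names are hypothetical, and eq() at the root requires cpf to have an index such as @index(hash) or @index(exact) in your schema):

# Which users ever registered this CPF? More than one result raises a flag.
{
  same_cpf(func: eq(cpf, "123.456.789-00")) {
    uid
    name
    account {
      uid
    }
  }
}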
Thanks Michel, you gave me another interesting point to think about: why delete it at all?!? Nice.
No, I was not worried about spending all of the UIDs in the world and damaging the global wildlife; I was more on the ancient worry about disk space. I know, I know… but I was there when you had to carefully choose between VARCHAR(5) and CHAR(5)…
Stopping to rethink it, it doesn’t make sense anyway that the amount of “deleted” data would grow to such a huge size that, besides disk consumption, it would hurt performance. But THEN AGAIN, it’s a graph database, silly me, size doesn’t matter =] =] =D
Shall we rename it from CRUD to CRUL? Create, Read, Update and Leave it, God damn!!
Actually, the performance would not be affected. If these nodes are not reached through any specific indexing, Dgraph will not even see them; queries pass straight over them.
Dgraph can handle billions of nodes smoothly; it mostly comes down to your resource limits. If you think you need to keep the data to do a “triangulation”, just do it lol
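To illustrate that point with a sketch: a root function only visits nodes that still hold the predicate (or index entry) it asks for, so a fully wiped node like 0x02 above is simply never touched by a query like this:

# 0x02 had all its predicates deleted, so has(emails) never visits it.
{
  q(func: has(emails)) {
    uid
    emails {
      uid
    }
  }
}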