Apologies if my comment confused you, but it is actually quite straightforward. :)
Of course, `S * *` will only delete reverse edges outgoing from the specified node, which is the subject S, not the object.
Obviously, we do not want edges incoming from other nodes to be deleted when those other nodes are not the subject.
I believe the desired effect is that it DOES delete all inbound edges as well; otherwise you will have phantom edges pointing to nowhere. (Well, they would actually point somewhere: the uid of the deleted node. But that node would no longer have any predicates, or even a type.)
There is no way in Dgraph to delete ALL references both from and TO a given node, because there is no `* * O` delete method.
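To make the asymmetry above concrete, here is a minimal sketch in DQL (the uid `0x123` is made up):

```dql
# Delete ALL outgoing edges and values of node 0x123 — the "S * *" pattern:
{
  delete {
    <0x123> * * .
  }
}

# There is no mirror-image "* * O" form, i.e. the following is NOT valid DQL:
#   * * <0x123> .
# Inbound edges from other nodes to 0x123 must be deleted explicitly,
# one subject/predicate pair at a time.
```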
@droslean - You normally search for your node by uid, then get the uids of the reverse nodes in order to delete the inverse relationships. If you model your data correctly, you should not have to delete individual nodes like this at all. Make sure to use `@hasInverse` and delete your nodes through GraphQL, or add both ends through DQL and delete both ends.
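The "find the reverse uids, then delete both directions" step can be done in a single DQL upsert block. A rough sketch, assuming a predicate `testProducts` and a target node uid `0x123` (both hypothetical):

```dql
upsert {
  query {
    # The node we want to remove.
    node as var(func: uid(0x123))
    # Every node that points at it via testProducts.
    parents as var(func: has(testProducts)) @filter(uid_in(testProducts, 0x123))
  }
  mutation {
    delete {
      # Inbound edges from the parents...
      uid(parents) <testProducts> uid(node) .
      # ...and all outbound edges and values of the node itself.
      uid(node) * * .
    }
  }
}
```

The query and the delete run as one transaction, so the node and its inbound references disappear together.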
If this problem comes up, you are probably not modeling your data as efficiently as possible.
Look into `@reverse`, as that is one way to solve the problem. Also, here are a couple of threads that you may find useful.
You shouldn’t need to randomly search for the inverse edges.
But we actually do. We just don’t want to refactor all blocks of code whenever we add an edge to the schema. And yes, we would have to implement a new GraphQL client just for the deletions instead.
I was wondering if there is a way to do it with DQL, but based on your replies and my investigation of the docs, it seems there is no way, which, to my ignorant eyes, looks like an epic fail in DQL.
But either way, thanks a lot for all the information! My conclusion is that the best practice is to use a combination of GraphQL and DQL.
What you are suggesting is correct, but we can’t use this practice in production, which is what I was trying to explain in my previous posts. Maybe the solution for my schema is `@reverse`, but to be honest I am not sure how that works. Let me try to give an example.
If I understand correctly, you are saying that with DQL, in order to delete a TestProduct I will have to delete all the other nodes as well. So far so good. The problem with this approach is that if we add a new field to TestProduct (e.g. `newField: [NewField!] @hasInverse(field: testProducts)`), we will have to change ALL the DQL mutations everywhere in the code. This is impossible to maintain. I would appreciate it if you could elaborate on why using `@reverse` would resolve the issue.
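For context, this is roughly what the GraphQL schema in my example would look like (type and field names are made up except for the ones mentioned above):

```graphql
type TestProduct {
  id: ID!
  name: String
  newField: [NewField!] @hasInverse(field: testProducts)
}

type NewField {
  id: ID!
  testProducts: [TestProduct!]
}
```

With `@hasInverse`, a GraphQL mutation that links a `NewField` to a `TestProduct` keeps `testProducts` on the other side in sync automatically.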
So the DQL schema is generated automatically from the GraphQL schema, and you can only use `@reverse` on the DQL schema. If you’re using GraphQL, you should just use `@hasInverse` instead, and you will get what you need.
You can, even with DQL. All `@reverse` does (and all you have to do otherwise) is ensure that every mutation creates an inverse triple pointing back to the original node. This is how other graph databases work as well.
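For reference, this is roughly what `@reverse` looks like in a DQL schema, and how the inverse direction is then queried (the predicate name and uid are hypothetical):

```dql
# DQL schema entry: ask Dgraph to maintain an automatic reverse index for the edge.
testProducts: [uid] @reverse .

# Query the inverse direction with the ~ prefix — no second edge needs to be stored:
{
  parents(func: uid(0x123)) {
    ~testProducts { uid }
  }
}
```

Since the reverse direction is maintained by Dgraph rather than stored as a separate triple you write yourself, deleting the forward edge keeps both directions consistent.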
If you only ever create one direction, you will only have to delete that one direction.
I don’t think I understand this. If you add a new field, that field will not have any data yet. Again, when you add data to a field, make sure to add it in both directions in DQL, or use GraphQL with `@hasInverse`, which does this automatically.
If you’re referring to modifying an existing field from one-way to bi-directional… don’t do that. Dgraph does not handle type changes well. You should export the data and re-import it. Better yet, deprecate the field and create a new one.
So in sum:

- Always add data in both directions for nested fields if you need to query in both directions.
- If you have one direction, you can get the other direction (e.g. via `@reverse` in DQL).
- Always delete both directions.
- GraphQL mutations do all of this for you. Use GraphQL in all cases where possible… even in lambdas. Otherwise, add and delete in both directions.
- Don’t modify a field in the schema when it has existing data…
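A minimal DQL sketch of "add and delete in both directions" (uids and predicate names are hypothetical):

```dql
# Add in both directions:
{
  set {
    <0x10> <testProducts> <0x20> .   # forward edge: category -> product
    <0x20> <inCategory>   <0x10> .   # explicit inverse edge: product -> category
  }
}

# ...and later, delete both directions together:
{
  delete {
    <0x10> <testProducts> <0x20> .
    <0x20> <inCategory>   <0x10> .
  }
}
```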