Feature Request: Cascade Delete & Deep Mutations By Reference Directive

Experience Report for Feature Request

Update: 7/1/21 - to comply with Feature Request Template:

What you wanted to do

1.) Update nested nodes from the parent update mutation.
2.) Delete nested nodes from the parent delete mutation.
3.) Choose which nodes allow this Cascade Delete and Cascade Update, as not all parent/child relationships should have this ability.

What you actually did

1.) This can only be done by creating a new update mutation for EVERY SINGLE child individually. If I am updating 10 child nodes, I need 11 mutations, including the parent.

2.) This is currently impossible, even with multiple mutations, unless you query every single ID.

3.) Obviously I don’t want to cascade-delete some nested nodes like country, language, etc. Currently, all nested updates just update the connection, not the data.

Why that wasn’t great, with examples

1.) How to Update a Book’s Chapters

mutation {
    updateChapter0: updateChapter(input: { 
        filter: { id: "0xfffd8d6aa985abef" }, set: { ... changed info here } }
    ) {
        numUids
    }
    updateChapter1: updateChapter(input: { 
        filter: { id: "0xfffd8d6aa985abf1" }, set: { ... changed info here } }
    ) {
        numUids
    }
    updateBook(input: { 
        filter: { id: "0xfffd8d6aa985abee" }, set: { ... changed info here } }
    ) {
        book {
            chapters {
                id
                name
                slug
                description
            }
        }
        numUids
    }
}

For every single chapter you’re updating, you have to manually query the id and create a separate mutation with a separate alias. This should be done in one mutation, like on add.
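As a sketch, a deep update with hypothetical syntax could look like the example below. To be clear, none of this exists in Dgraph today; the nested `chapters` objects inside `set` are the proposed part:

```graphql
# Hypothetical deep-update syntax (does not exist in Dgraph today):
# update a book and its chapters in one mutation.
mutation {
    updateBook(input: {
        filter: { id: "0xfffd8d6aa985abee" }
        set: {
            name: "New Book Name"
            chapters: [
                { id: "0xfffd8d6aa985abef", name: "New Chapter Name" }
                { id: "0xfffd8d6aa985abf1", name: "Another New Name" }
            ]
        }
    }) {
        numUids
    }
}
```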

2.) How to Delete a Book (with its chapters):

  • First Get all Chapter Ids:
query {
    queryBook(filter: { id: "0xfffd8d6aa985abdf" }) {
        chapters {
            id
            ...
        }
    }
}
  • Use the returned ids to delete the chapters, then delete the book (there is no other way without querying first):
mutation {
    deleteChapter(filter: { id: ["0xfffd8d6aa985abde", "0xfffd8d6aa985abe0"] }) {
        chapter {
            id
            ...
        }
        numUids
        msg
    }
    deleteBook(filter: { id: "0xfffd8d6aa985abdf" }) {
        book {
            id
            ...
        }
        numUids
        msg
    }
}

This isn’t one step; it’s several steps and several mutations. Nested mutations would collapse this into one step, though it would still be several mutations under the hood.
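For contrast, under the proposed directive (with `Book.chapters` marked something like `@reference(onDelete: cascade)`), the same operation would shrink to a single mutation, since the chapters would be cleaned up automatically. Again, this directive is the proposal, not current behavior:

```graphql
# Hypothetical: with @reference(onDelete: cascade) on Book.chapters,
# deleting the book would also delete its chapters.
mutation {
    deleteBook(filter: { id: "0xfffd8d6aa985abdf" }) {
        numUids
        msg
    }
}
```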

Any external references to support your case


Original Post

There needs to be a way to update deep fields, and to delete deep fields.

While it is clear why this is not default behavior, not having it as an option is also a grave problem.

I realize there are many many posts on this, but in summary:

Deleting

Right now it is 100% impossible to delete nested fields without using DQL. I could theoretically query to get every single ID and do this manually, but that is not advisable even when possible, since there could be thousands. I also can’t flip the relationship around, since it would be a nested field as well.

So, I end up with ghost nodes. Again, it is currently IMPOSSIBLE to avoid this in GraphQL. I understand nested filters are eventually on the way.

But the best we can hope for, if you are a Cloud user like myself, is November at the earliest. And even then it does not guarantee (or almost guarantee, since Dgraph GraphQL is not perfect) a lack of ghost nodes.

So we need to start thinking about this now.

Updating

If I want to update one record with nested fields (say an array of data), I currently have to create a mutation for that node, plus a mutation for every single node in the array I need to update. If that array (nested field) has 10 items, I need to create 11 different mutations. Part of the problem is the lack of multiple sets in update mutations, but I will save that for a different post.

Solving the Problem

The best way to solve this problem is what @amaster507 said:

While this is way too complicated IMHO:

So we do something like this:

type Student {
  id: ID!
  name: String
  classes: [Class] @hasInverse(field: students)
  ...
}
type Book {
  id: ID!
  name: String
  ...
}
type Class {
  id: ID!
  ...
  students: [Student] @reference(onUpdate: null, onDelete: restrict)
  books: [Book] @reference(onUpdate: cascade, onDelete: cascade)
}

And just like MySQL, there are four options and two parameters (onUpdate and onDelete):

options: [restrict, cascade, null, nothing]
  • nothing - short for doNothing or noAction; would be the default behavior for onUpdate to be backwards compatible
  • cascade - cascades the delete or update to the referenced nodes
  • restrict - throws an error if you try to delete or update
  • null - removes the connection but does not delete; would be the default behavior for onDelete to be backwards compatible

(Note: Default should be a fifth option when Dgraph implements Default values)

We should be able to do this, it is simple, makes sense, and keeps Dgraph consistent.

J


Yes, please!

I have updated this to be in an official Feature Request format.

J


This is a foreign key problem. I just wanted to add a list of competitors’ solutions to this GraphQL problem. Obviously, SQL also handles this internally, much like the proposed @reference.

Pay particular attention to this Prisma Solution in the schema.

Look familiar to my idea?

Delete

Update

J


We definitely need this feature. I selected Dgraph as my backend, but now that I have spent some time with it, I see drawbacks and am thinking of choosing another product such as Hasura or Neo4j. But I believe it will be implemented here too :)

I was thinking about this recently. While I think this problem is VERY important, I honestly would not put this at the top of the list.

You could solve both problems with Lambda Webhooks.

cascade delete

@lambdaOnMutate(delete: true)

deep mutation

@lambdaOnMutate(add: true, update: true)

I think you would need both the add and update hooks, since nested data could already exist.

Perhaps someone can eventually post reusable boilerplate code for those cases. We could literally write the @reference-style functions once and reuse them in the types where we want deep mutations or cascade deletes.
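As a rough sketch of what such boilerplate could look like: the helper name and the event shape below are my assumptions, not a documented Dgraph API; only the @lambdaOnMutate directive and DQL upsert blocks themselves are real. It assumes Chapter keeps a `Chapter.book` edge (via @hasInverse) back to the deleted book:

```javascript
// Hypothetical sketch of a cascade-delete webhook helper for dgraph-lambda.
// Assumes Book has @lambdaOnMutate(delete: true) and Chapter has a
// Chapter.book edge. buildCascadeDeleteUpsert is a made-up name; the
// DQL upsert it produces is standard syntax.
function buildCascadeDeleteUpsert(deletedBookUid) {
  // Find chapters whose Book edge points at the deleted uid, then
  // delete those chapter nodes entirely (all predicates).
  return `upsert {
  query {
    c as var(func: type(Chapter)) @filter(uid_in(Chapter.book, ${deletedBookUid}))
  }
  mutation {
    delete { uid(c) * * . }
  }
}`;
}

// Registration would look roughly like this inside the lambda script
// (event payload shape is an assumption):
// self.addWebHookResolvers({
//   "Book.delete": async ({ event, dql }) => {
//     for (const uid of event.delete.rootUIDs) {
//       await dql.mutate(buildCascadeDeleteUpsert(uid));
//     }
//   },
// });
```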

I want to be clear that I think this feature is VERY IMPORTANT. However, features that we cannot accomplish at all should take higher priority, given the shortage of programmers and updates to Dgraph.

J


@jdgamble555 thank you for the reply. I was trying to do something like this:

enum ActionType {
  Restrict
  Cascade
  Nothing
  SetNull
}
directive @onAction(delete: ActionType, update: ActionType) on FIELD_DEFINITION

type Complex @lambdaOnMutate(delete: true, update: true) {
  id: ID!
  child1: Sub! @onAction(delete: Cascade, update: Cascade)
}

type Sub {
  id: ID!
  info: String
}

My question is: can I attach metadata to fields (like the @onAction directive) and access that metadata from a lambda hook? Because Dgraph removes all the directives I set: once I push my GraphQL schema to the server and pull it back, I no longer see my @onAction directive. That way I could define the logic in my schema instead of in the lambda function.
Otherwise I need to create a lambda hook for every different type and field, which makes no sense. But if I could assign some information (metadata) to a field in the schema and access that info from the lambda, it would be great, and I could write just one function to solve this problem.

No, you cannot create your own directives; you have to use the directives already in Dgraph. You can, however, access the data in your lambda hook.
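One possible workaround, since custom directives are stripped from the pushed schema: keep the per-field action table inside the lambda script itself and have a single generic webhook consult it. This is only a sketch; the `ACTIONS` map and `actionFor` helper are names I made up, not a Dgraph API:

```javascript
// Hypothetical workaround: the onAction metadata lives in the lambda
// script instead of the schema, mirroring the proposed directive.
const ACTIONS = {
  Complex: { child1: { delete: "Cascade", update: "Cascade" } },
};

// Look up the configured action for a type/field/operation, defaulting
// to "Nothing" (no cascade behavior) when nothing is configured.
function actionFor(type, field, op) {
  const byField = ACTIONS[type] || {};
  const byOp = byField[field] || {};
  return byOp[op] || "Nothing";
}
```

A single webhook registered for each mutating type could then call `actionFor` to decide whether to cascade, restrict, or null out each edge, instead of writing one hook per type and field.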

J


For anyone who finds this post, I wrote the hooks for you:

That being said, I emphatically believe this should be built into GraphQL, or better yet, into DQL as constraints like MySQL has.

We need to have data integrity built in!

J
