Does dgraph support cascading delete?
Lol…
I think if we supported DQL mutations in the custom directive (aiming for the Upsert Block here), we could have a custom delete that works like a cascade. A recursive delete, for example, would be very easy with a DQL delete operation.
But that wouldn’t respect any auth rules.
Auth doesn't work in the custom directive? It should.
@gja isn't it possible to combine @auth with @custom(dql: ...)? To me it feels natural for both to work, since one is a parameter of a type and the other is a parameter of a query.
BTW, if it is not possible, does that mean any custom directive can't be used for private usage, just open APIs? If I can't use auth with the custom directive, even the remote GraphQL feature is useless unless I use open APIs. (Take the GitHub GraphQL API as an example: you need a token to use it, but that token is a different thing from the auth directive in this context.)
The custom directive can fire off DQL directly without going through the GraphQL resolve function that applies the auth rules. So it is useful. I have an auth rule that blocks users from querying a list of users, but I also needed to allow access to a user under certain circumstances. Without really dirtying up my auth for users, I added a custom query that bypasses the auth and retrieves the data I allow directly. It can be a dangerous practice, as it reveals an otherwise forbidden data aspect, but there are use cases for having both.
I also use remote and custom mutations to bring authentication into my GraphQL for a single endpoint. This keeps things simple for front ends, which don't need to authenticate in one place and then go to a second place to fetch data with that authorization.
Yeah, that’s a really different use case.
How? If I want to do a complex query in DQL and its results can be accessed by unauthenticated users, it's useless in that case. I can use it only for open(-source) things or "passthrough" features like the third-party auth you mentioned.
Also, this seems to be a concern of yours here.
I get what you're saying, but I'm not sure what the use-case example means.
The point here is that you mentioned @auth doesn't work with @custom. That's the question. For me, it really should. We should be able to control who can run DQL queries, and DQL mutations if we support them, not just expose them.
I have never tested auth, just custom. It seemed obvious to me that it would be supported, but you're saying it isn't.
You said "if we support". I thought DQL was already supported in the custom directive? Never tried it, though.
Are you talking about this example here for doing a recursive cascade delete with DQL? If so, can we also specify an infinite depth? Also, what is the return type of this DQL upsert operation? Is it "Group"?
It does support it, but only queries, not mutations or Upsert Blocks.
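For anyone following along, a query-only `@custom(dql: ...)` looks roughly like this. This is a sketch modeled on the docs; the `User` type, its predicates, and the `@remote` result type are all made up for illustration:

```graphql
# A @remote type describes the shape of the DQL result without
# generating CRUD operations for it.
type UserTweetCount @remote {
  screen_name: String
  tweetCount: Int
}

type Query {
  # The aliases in the DQL body (screen_name, tweetCount) must
  # match the fields of the @remote type above.
  queryUserTweetCounts: [UserTweetCount] @custom(dql: """
    query {
      queryUserTweetCounts(func: type(User)) {
        screen_name: User.screen_name
        tweetCount: count(User.tweets)
      }
    }
  """)
}
```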
If you use an Upsert Block, you can write several query blocks to reach every single node related to the object you want to remove. Then you collect the UIDs and use a delete operation block in the upsert to clean things up. You can also combine it with the cascade directive and so on; you are free to do anything in DQL.
A recursive delete would be a single query from the target covering all possible outgoing relations. That one is a bit destructive if you don't know your schema very well.
Yep, but that one isn’t safe to run. Only if you know your stuff.
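To make the upsert idea concrete, here's a sketch of what such a cascading delete could look like in DQL. The uid and the `Group.members` predicate are hypothetical, and this only walks one level; each extra level of relations needs its own variable block:

```dql
upsert {
  query {
    # the node we want to remove (hypothetical uid)
    group as var(func: uid(0x123))

    # collect the related nodes one hop away (hypothetical predicate)
    var(func: uid(0x123)) {
      members as Group.members
    }
  }

  mutation {
    delete {
      # wipe every predicate of the collected nodes
      uid(group) * * .
      uid(members) * * .
    }
  }
}
```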
Okay, thank you. Do we have a ticket for this?
I think there is one; I need to check, though. Maybe it's in Jira. I'll check. There is an internal conversation about supporting this, but it feels like it's going cold since there's no demand for it.
Ultimately I'm not sure about that, since I only started exploring Dgraph two weeks ago, but I feel like many of the more complex recent feature requests (cascade delete, copy mutations, soft delete, …) could be solved (at least temporarily) with DQL upserts and mutations until dedicated GraphQL directives are implemented.
What I could do, of course, is execute my complex DQL operation from my second server and write a custom mutation that calls this second server, which in turn calls the DQL endpoint on Dgraph.
Could you elaborate a bit more on this one here?
You are doing a recursive query, so you can end up deleting unwanted data if you don't control the depth or the model.
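On the depth point: in DQL you can bound a recursive traversal with the `@recurse` directive, so the delete only collects what you intend. A sketch with a hypothetical uid and predicates, where `depth: 3` caps the traversal:

```dql
{
  # collect everything reachable from the target, at most 3 hops away,
  # following only the listed predicates
  toDelete(func: uid(0x123)) @recurse(depth: 3, loop: false) {
    uid
    Group.members
    Group.subgroups
  }
}
```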
So let's break down my understanding and see if I am off base anywhere (sorry @tss for hijacking your OP):
@auth - can only be applied on types in the schema, and only accepts four properties: query, add, update, and delete. The rules defined on these four properties are applied to the type's generated queries and mutations in the resolving function, limiting the data returned to rule-matching data only.
delete[Type] - is a generated mutation, as long as the type has either a field with the ID type or a field with the @id directive. This mutation deletes nodes based on a required filter parameter, and it respects the auth delete rule(s), e.g. @auth(delete: { ... }).
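For concreteness, this is roughly what calling a generated delete mutation looks like (the type and uid are hypothetical). With @auth(delete: ...) on MyType, only the nodes matching the rule should actually be deleted:

```graphql
mutation {
  deleteMyType(filter: { id: ["0x123"] }) {
    # the generated payload reports a message and how many nodes matched
    msg
    numUids
  }
}
```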
@custom - can be applied either on a predicate in a type, or on a custom query or mutation. A custom query or mutation requires this directive, e.g.
# a generated type field
type MyType @auth(delete: { ... }) {
  id: ID
  customField: returnedType @custom(...)
}

# custom query
type Query {
  myCustomQuery: returnedType @custom(...)
}

# or custom mutation
type Mutation {
  myCustomMutation: returnedType @custom(...)
}
None of the examples above accept the auth directive in the same place that accepts the custom directive; the auth directive is only valid on the generated type. When processing a delete of the example MyType with the generated deleteMyType mutation, the custom directive on customField would only be called if that field was selected, and that call would happen pre-delete. I am also pretty sure that the custom directive cannot distinguish whether it is called from a getMyType, queryMyType, addMyType, updateMyType, or deleteMyType operation. All it knows is to run its function when requested, without any data about the parent other than the parent's predicates, which can be passed as props (for instance, the id can be passed and used inside the custom directive).
This is my understanding, based on the docs and also confirmed in:
So, as was stated in that related topic, you can pass custom headers and even pass the entire header used for the auth directive. I see no way to do a custom DQL operation that respects the auth directive, aside from a single case:
One workaround: with a custom directive on a predicate, it cannot be called on a node that the user does not have some form of access to (query, add, update, or delete). So you could, very dangerously I will say, have a custom directive that runs a GraphQL delete operation, passing through the JWT header. BUT!! this delete mutation could then be run by any query, add, update, or delete operation. Let's demo this:
Read the surrounding context and warnings above before trying this code:
type Child @auth(
  query: { rule: "{$canQueryChild: { eq: \"true\" } }" }
  add: { rule: "{$canAddChild: { eq: \"true\" } }" }
  update: { rule: "{$canUpdateChild: { eq: \"true\" } }" }
  delete: { rule: "{$canDeleteChild: { eq: \"true\" } }" }
) {
  id: ID
  name: String!
  parents: [Parent]
}

type Parent @auth(
  query: { rule: "{$canQueryParent: { eq: \"true\" } }" }
  add: { rule: "{$canAddParent: { eq: \"true\" } }" }
  update: { rule: "{$canUpdateParent: { eq: \"true\" } }" }
  delete: { rule: "{$canDeleteParent: { eq: \"true\" } }" }
) {
  id: ID
  name: String!
  children: [Child] @hasInverse(field: parents)
  dangerouslyDeleteChildren: DeleteChildPayload @custom(
    graphql: ...
  )
}

# Dgraph.Authorization {"VerificationKey":"","Header":"","Namespace":"","Algo":"","Audience":[]}
But there are problems with this.
- This could accidentally be run by a get*, query*, add*, update*, or delete* operation.
- The GraphQL mutation for deleteChild is not currently able to filter by the parent id to delete children.
Okay, it just doesn't work. So we should support it. Delete and other sensitive operations should be able to use Dgraph auth.
We have some feature requests here, and I'm not sure we have tickets for all of them. Maybe @hardik could help here.
- Support Auth + Custom combo as mentioned in How to handle Authorization with @custom queries - #8 by abhimanyusinghgaur
- Support Mutations in Custom directive.
- Support some type of cascading delete natively via GraphQL (The hardest one)
The upcoming @lambda support will help solve a lot of this "custom business logic", as it could be termed. This is where the lines get blurry: what to force users to do in external scripts (whether through the custom or the lambda directive), and what to work on as feature requests into the core. I believe this is where some negative sentiments come from (GitHub roadmap comments):
@MichelDiz your analysis would work if GraphQL were a query language, which it is not.
@styk-tv - There is also still the very important part of web interaction called user input sanitization, which is just as much of a security aspect as RBACs or ACLs, if not more. I mean, we don't want to run into little Bobby Tables, right?
Yes, GraphQL injections are real. So the question is: will Dgraph as a community work out authorization, DoS, and injections, or just say "hey, here is /graphql, we have worked out this issue"? Or follow in the footsteps of bare GraphQL and let Apollo grow into a superpower?
Re: the /graphql endpoint, once we have that in place, we'll see what functionality is provided by existing frameworks like Apollo, and what we need to build as a database. Of course, we want to prioritize the security of data, but at the same time, we don't want to "become" Apollo.
That is smart, and for that reason I think it would be a wasted effort to offer a pure GraphQL connection to Dgraph with the intention of using Dgraph as a "backend" for client-side apps. There must be a layer of business logic in front of Dgraph and behind the GraphQL endpoint for any sized application to be safe and work smartly.
The current GraphQL API should be spec-compliant to begin with (and even then, GraphQL clients other than other servers shouldn't connect to it directly). I understand trying to keep backwards compatibility, but they shouldn't have broken away from the spec in the first place. That's what specs are for.
That makes me curious; please consider this: if /graphql is not to be used directly by clients (no JWT auth mechanism, not recommended to be exposed externally), then what is the point of the built-in /graphql endpoint?
Since a GraphQL endpoint is usually consumed directly by a public client, I vote that GraphQL support become a new Dgraph client API usable only by a Dgraph client, or a separate GraphQL-to-Dgraph conversion package/library, rather than an open public /graphql endpoint.
Anyway, the conversation continues as to whether the GraphQL endpoint offered by Dgraph should serve clients directly or only other GraphQL layers. It appears the decision was to leave it open enough for developers to decide which way to implement it themselves: developer A can allow clients to connect directly, while developer B only allows another GraphQL server to connect. So developer A keeps asking for all of these extra features in the core, while developer B adds those features in his own server layer. (I am developer A.)

But developer A doesn't realize that maybe he should become more like developer B because of hard-to-solve business logic, until he learns that the new lambda feature will put JavaScript resolvers back into the core (somewhat, via an external script). So he just needs to create a bunch of custom queries and directives and back them with scripts. Which is possible, but might be easier if features X, Y, and Z were added to the core as well. After adding X, Y, and Z, developer C comes along and wants to adjust feature X to his own business logic, or add feature W, and the cycle continues, all while developer B has implemented his server layer and is happily using the core without X, Y, or Z.
Hmmm… A lot to think about here, and to work out the best direction to solve the never-ending feature requests for business logic. Where does the madness stop? lol. Do features keep getting added so Dgraph can handle the functionality users are used to having with SQL-like services, which bloats the service for developer B?
Does developer D come along and say "heck with it, I will just write a complete GraphQL layer with DQL, since my business logic is going to have to be custom anyway"? How much work does developer D then do that Dgraph has already done? How much time is wasted re-implementing already included features versus adding custom business logic? Does developer D jump outside of his comfort zone of writing Node GraphQL servers and instead fork Dgraph to make it generate his business logic?
Oh, what a tangled web we weave*
*I understand you are not seeking to deceive though it just happens with most development cycles.
I tried it out and must say that I don’t find it useful for the following reasons:
- It's JavaScript. That adds tons of overhead, dependencies, and deployment issues to the very small (~40 MB) Dgraph executable. In our case, a deal-breaker for shipping a cross-platform solution.
- For custom behavior, I have to mirror all auto-generated queries, mutations, and subscriptions into the JavaScript source file, where I don't even have auto-completion and have to rely on making no typos. I can already see the number of bugs emerging from having two different code bases that don't sync automatically.
In the end, maybe this is just my personal opinion and everyone else will love @lambda.
For my part, I think that native Golang pre- and post-hooks would be the go-to solution.
I can understand that. Which puts you in between developer C and D in my story above, lol.
For a moment I thought that “I” said that, but I realize that I would never say such a thing.
No need. You can stay full GraphQL with some DQL candies, no problem. But GraphQL can't do everything, just the basic stuff and a lot of other things by spec. GraphQL isn't "aware" of Dgraph; it can't bend towards us. So either we implement something elegant and hardcoded, or we use what we already have. You might think the lambda is the holy grail. I love JS, but I'm not so sure about it, whether people will be willing to use it or whether it's better than pure DQL. Only time will answer.
There's no shame in cutting paths if the actual end of the curve is 100 miles away. Why run 100 miles if you can turn left, and it's not an illegal move? Also, we're already doing it in one way.
We can query with custom, but we can't mutate. Why? You put your hand in the honey pot, but you're not eating? hehehe
Right, sorry. Not quoted from you, but directed towards you in that conversation. I loved your factual responses, and I agree with the direction that was decided upon and where it is today.
Only time will tell how much of the candy I have to unwrap and how many licks I have to take to reach the center of that tootsie pop. If DQL does not honor the auth rules, then, as my team has considered before, maybe we should just rewrite all of our rules in DQL and do more direct DQL responses through a third-party script that can do upserts and some logic between multiple queries and mutations.
Sounds good to me! I like shortcuts!
But how many feature requests do I make for “business logic” as some would call it.