Find all inbound/outbound edges

So, the “GQL” acronym isn’t quite clear - people frequently mean DQL or GraphQL when they type GQL.

@verneleem has answered the question for GraphQL. In DQL you’d do this:

{
  q(func: INSERT_ROOT_FUNCTION_OF_CHOICE) {
    y @filter(uid(0x13)) { # 0x13 is the UID
      expand(_all_)
    }
  }
}


But there you are assuming the edge label is y. Also, INSERT_ROOT_FUNCTION_OF_CHOICE would have to be has(dgraph.type), which looks expensive since it would traverse all nodes.
So the issue is that DQL can’t do generic queries like Gremlin/Cypher (as @verneleem confirmed); you have to be explicit about the types, edge labels, and so on.
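To make the cost concrete, here is a rough sketch of what that query becomes with has(dgraph.type) as the root function (the edge label y is still just the assumption from above):

```dql
{
  # Root function matches every node that has a dgraph.type value,
  # i.e. potentially the whole graph.
  q(func: has(dgraph.type)) {
    y @filter(uid(0x13)) { # keep only results whose y edge points at 0x13
      expand(_all_)
    }
  }
}
```

That is why it reads as a full scan: every typed node is visited just to test one edge.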

Yep, that’s accurate. I don’t think there is a need for generic patterns, but if there is, I don’t see why not support it. It just needs a feature request and more people wanting it. Also, I’m not sure Dgraph is designed for this. I think it works best with edges, uids, and indexing. A generic pattern might drain too many of its resources. But who knows.

@Matias_Morant would you mind translating into English what the Gremlin query means? In my mind it’s kind of like “wtf?? why so abstract?”

Cypher I can read. “Return to me all objects that have any incoming link/edge to id 13”. Meaning “Find all parents of that child dude”.

Not sure, but I would expect something like (x)-[*]->(y). I’m no Cypher expert, though.

@MichelDiz the Gremlin query means exactly the same:

g.V(13).in()
// g      : from the graph
// .V(13) : get the Vertex with id=13
// .in()  : then get all INcoming vertices (parents)

If you use reverse indexing (@reverse) with recurse, it might work as you wish. Actually, it will work. But unfortunately, it is highly typed.
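A minimal sketch of that approach, assuming a hypothetical parent predicate declared with @reverse in the schema (the predicate name is an assumption, not from the thread):

```dql
# Assumed schema line:  parent: uid @reverse .
{
  q(func: uid(0x13)) @recurse(depth: 2) {
    uid
    ~parent  # reverse edge: all nodes pointing at this one via parent
  }
}
```

This is what “highly typed” means in practice: you must name each reversible predicate explicitly; there is no way to say “follow any incoming edge.”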

Wall of text with my thoughts on the matter:

I think one of the strongest design principles of dgraph is the storage layer: store by predicate, not by “node.” I have come to appreciate this as the enabling factor for the scalability of the database. Using the type system to discover edges in theory is perfect for completed applications with strict typing everywhere. In practice, I have found the Type system in dgraph ineffective for dynamic use cases and prefer to ignore it completely. (Type with a capital T - not the predicate types, obviously those are necessary.)

I would say it is a huge hindrance to debugging within the database with YOLO queries and trying to figure out answers to questions like this. It is like the select * from … analogy for RDBMSs. Yeah, that is a thing you can do, and probably the first thing you do when exploring a table you don’t understand: select * from tbl limit 10. But what kind of application actually executes select *? (Spoiler: bad ones.)

Maybe dgraph will need something to make people feel better about the debugging experience in the database. But, as with our friends in the RDBMS world… I do not think making select * the feature everyone wants to use to build their application is a wise decision. Especially since the storage layer makes it incredibly inefficient to do so (search every edge predicate for this uid).

Spitballing: what I do when I want to debug in the database console is copy all predicates in the bulk editor and quickly sed them into a single Type called ALL. Then I can do:

query {
  q(func: eq(myrootfilter,"filter")){
    uid
    expand(ALL) {
      uid
      expand(ALL) {
        uid
      }
    }
  }
}

I think that’s all people want to be able to do? It’s ugly, hacky, inefficient… but it lets me select * in dgraph really easily. (Note this is not what expand(_all_) does; that checks the dgraph.type predicate on each node to know what to look for next.) Maybe that’s all users really want? For ALL to be a virtual type that exists already?
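For reference, the hack amounts to a schema alteration along these lines, with every predicate in the database listed under one Type (the predicate names here are made-up placeholders, which is what the sed step generates from your real schema):

```dql
# Hypothetical: list every predicate your schema actually has
type ALL {
  name
  friend
  owns
  created_at
}
```

With that in place, expand(ALL) fans out over every listed predicate regardless of what dgraph.type says on each node.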


I have used this a lot. To find incoming edges, the idea would be either expand(myType) with a Type that holds all the reverse edges, or a recurse query with the reverse edges.

Also, the Type hack is a good way to debug some unTyped datasets out there. Let’s say you have noted all the possible predicates the dataset has. Just create a huge Type with all of them, start clearing a path through the dataset, and begin to see the structure so you can reflect it in the Type System. So, there you go, you just built a Type Schema.

It is “hacky” indeed, but really useful. I think the old _predicate_ feature was really powerful, and you didn’t have to worry about those details. But it competed with the Type System and was therefore removed.

BTW folks, if we’re going to continue with this, or if you still need help, let’s open a new topic. Next time, please open a new topic right away and reference the one you found, so the old participants don’t receive a lot of emails about somebody else’s discussion.

We’ll read all of your questions and answer them based on your references too. Don’t mind opening new topics, even if you are asking again about the same thing that wasn’t clear to you.

I’ll close this, and again: don’t hesitate to ask.

Cheers!
