Efficiently querying a large database

Let’s say I have a zillion nodes, so a zillion uids, but only 1000 of them have the type User.
What would theoretically be the fastest query to find User 0x01?


A) { q(func: uid(0x01)) { uid } }


B) { q(func: uid(0x01)) @filter(type(User)) { uid } }


C) { q(func: type(User)) @filter(uid(0x01)) { uid } }


D) { q(func: has(name)) @filter(uid(0x01)) { uid } }

(provided name is only used on User nodes)
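As a rough mental model for the four options, you can count how many nodes each root function pulls in before the `@filter` runs. This is only an illustration, not a measurement of Dgraph's actual query planner, and the graph here is made up (100k nodes instead of a zillion):

```python
# Toy cost model of the four root functions. This illustrates how many
# nodes each strategy's root touches before @filter narrows the result;
# it is NOT Dgraph's real planner, and the data layout is invented.
nodes = {
    f"0x{i:02x}": {"dgraph.type": "User", "name": f"user{i}"}
    if i <= 1000 else {"dgraph.type": "Post"}
    for i in range(1, 100_001)
}

# A and B: func: uid(0x01) — the root resolves a single uid directly.
root_ab = [uid for uid in ["0x01"] if uid in nodes]

# C: func: type(User) — the root walks the type index (1000 nodes),
# and only then does @filter(uid(0x01)) narrow it down to one.
root_c = [uid for uid, n in nodes.items() if n.get("dgraph.type") == "User"]

# D: func: has(name) — the root walks every node carrying "name".
root_d = [uid for uid, n in nodes.items() if "name" in n]

print(len(root_ab), len(root_c), len(root_d))  # 1 1000 1000
```

In this model A and B start from one node while C and D start from 1000, which is the intuition behind the ranking in the answer below being about root-set size.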


A is the fastest if you already know that the uid is of type User.

B is the next best if the uid comes from a variable and you need to ensure you only get a node of type User.

D may be faster than C; it really depends on the data, i.e. whether all 1000 User nodes have the name predicate set. But even a small data change one way or the other may give inconsistent performance results.

D does not check the type, but if you know that every node with a name predicate will be of type User, then that might be acceptable.

B is better than C because you are checking the type of one node versus checking the uid of 1000 nodes. Starting with the smallest set at the root will always lead to better performance in Dgraph.

And a side note: depending on your exact configuration and the data ingestions/mutations you allow, it is possible to have nodes with the name predicate that are not of type User. Most configurations are set up to allow nodes to follow a type model but not to force that model upon them. The type model then merely aids queries that use expanding edges and S * * delete mutations.
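To illustrate that side note: since the type model is not enforced, a mutation can attach name to a node without also setting dgraph.type, and then strategy D (has(name)) would pick it up while C (type(User)) would not. A toy sketch with invented node data:

```python
# A node can carry the "name" predicate without carrying "dgraph.type",
# because Dgraph does not force nodes to conform to a type model.
untyped = {"uid": "0x2a", "name": "orphan"}  # name set, no dgraph.type
typed = {"uid": "0x01", "name": "alice", "dgraph.type": "User"}

graph = [untyped, typed]

# What D's root (has(name)) would match vs C's root (type(User)):
matches_has_name = [n["uid"] for n in graph if "name" in n]
matches_type_user = [n["uid"] for n in graph
                     if n.get("dgraph.type") == "User"]

print(matches_has_name)   # ['0x2a', '0x01'] — D would see both nodes
print(matches_type_user)  # ['0x01']         — C sees only the typed node
```

So D's shortcut is only safe as long as your ingestion path guarantees that name is never set outside of User nodes.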

Thank you Anthony!