I was trying to work out a query to get the total number of orders involving the same customer and rider.
For example:
C1 does order O1 assigned to rider R1 for delivery
Same customer does order O2 assigned to R1 again
The count for orders involving C1 and R1 is 2
If the customer and rider identifiers are given, the count of orders for that particular customer-rider pair can be found with a filter and an aggregate (even a paths query could be used).
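For the known-pair case, a minimal DQL sketch could look like the following. The predicate names `customer.order` (customer → order) and `order.assigned_to` (order → rider), and the uids `0x1`/`0x2`, are placeholder assumptions, not an agreed schema:

```
{
  # Collect this customer's orders that were delivered by this rider
  var(func: uid(0x1)) {                                        # 0x1 — placeholder uid for customer C1
    customer.order @filter(uid_in(order.assigned_to, 0x2)) {   # 0x2 — placeholder uid for rider R1
      O as uid
    }
  }

  # Aggregate: count the matched orders for this C-R pair
  pairCount(func: uid(O)) {
    count(uid)
  }
}
```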
What if we want to find the count of orders for each pair of customer and rider in the dataset (consider it a large dataset), and then apply a condition based on that count?
Ex: from all the orders in the past week, we want to analyse the data where the count of orders involving the same customer-rider pair is more than 3.
Can you please help with the above query? Also, is there any plan to introduce a foreach function or loops? That seems like it would be useful in this use case.
Welcome to the Dgraph community. Right now, what you want to do is not quite possible in Dgraph with your current schema. You would need to change your schema a bit to do that.
Thanks for replying. Can you please suggest a possible schema change? As @BlankRain suggested, is it better to have an edge between customer and rider with an edge attribute that I can filter and aggregate on later?
Hi,
Using an upsert block, each time a new order comes in, do a mutation on both the C-R Pair and the DateCount object.
If you want to query the count, you can start from the C-R Pair, add a filter on the DateCount, then sum it.
Or just filter the DateCount objects directly, then sum.
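One possible shape for that upsert, sketched in DQL. Everything here is an illustrative assumption (the `pair.key` unique string, the `pair.count`, `pair.customer`, and `pair.rider` predicates, and the placeholder uids/date), not a recommended schema:

```
upsert {
  query {
    # Look up the daily counter node for this customer-rider pair
    # (pair.key is an assumed unique key like "<customerUid>|<riderUid>|<date>")
    q(func: eq(pair.key, "0x1|0x2|2021-03-15")) {
      p as uid
      c as pair.count
      newCount as math(c + 1)
    }
  }

  mutation @if(eq(len(p), 0)) {
    # First order for this pair today: create the counter at 1
    set {
      _:new <pair.key>      "0x1|0x2|2021-03-15" .
      _:new <pair.count>    "1" .
      _:new <pair.customer> <0x1> .
      _:new <pair.rider>    <0x2> .
    }
  }

  mutation @if(gt(len(p), 0)) {
    # Pair already seen today: increment the counter
    set {
      uid(p) <pair.count> val(newCount) .
    }
  }
}
```

With counters maintained this way, the weekly analysis becomes a filter on the counter nodes plus a sum, instead of a group-by over all orders.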
Hi @chewxy / @MichelDiz ,
Can you please suggest the possible schema change?
It can be done by using group by on order.assigned_to and ~customer.order. In that case, I have to filter the results in application code and then query again with the filtered customer-rider pairs. But if it is possible with a schema change, I will try that out.
Also, if I group by multiple uid predicates (e.g. groupby(order.assigned_to, ~customer.order)), I cannot assign the result to a variable. Is there a workaround for that?
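For reference, the group-by approach I am describing looks roughly like this (the `Order` type, the `order.created_at` predicate, and the date literal are assumptions for illustration):

```
{
  # Group last week's orders by (rider, customer) and count each group.
  # The "more than 3" condition then has to be applied client-side,
  # since the grouped count cannot be captured in a variable here.
  ordersPerPair(func: type(Order))
      @filter(ge(order.created_at, "2021-03-08T00:00:00Z"))
      @groupby(order.assigned_to, ~customer.order) {
    count(uid)
  }
}
```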