Get all shared nodes (type A) between nodes (type B)

What I want to do

I’m starting with Dgraph and I am stuck on one query.
I have the following case:
A person (node P) can have n things (node T).
A thing can belong to just one person, but it can also be shared.
I found a way to get the shared things between 2 people:
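For context, here is a minimal schema sketch that matches the predicates used in the query below; the predicate names come from the query, the rest (in particular the @reverse directive, which is what makes the ~Person.things traversal possible) is assumed:

# Assumed schema, reconstructed from the predicate names in the query below.
Person.name:   string .
Person.things: [uid] @reverse .
Thing.name:    string .

type Person {
  Person.name
  Person.things
}

type Thing {
  Thing.name
}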

shared_2(func: uid(0x3bd9a)) @cascade {
  Person.name
  Person.things {
    Thing.name
    ~Person.things @filter(uid(0x2db09)) {
      Person.name
    }
  }
}

But I cannot make it work with 3 or more people.
I tried uid() AND uid(), but it does not work (because the filter is applied to each edge one by one, I guess).

I also tried to duplicate ~Person.things @filter(uid(…)) and let @cascade do the work, but Dgraph does not let me duplicate the edge either.
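For reference, the three-people attempt looked roughly like this (0x33a9e stands in here for the third person's uid); the filter is evaluated against each person node behind the reverse edge individually, and a single node can never match two different uids at once, so @cascade prunes everything:

# Does not work: each node reached via ~Person.things is a single person,
# so it can never satisfy uid(0x2db09) AND uid(0x33a9e) at the same time.
shared_3(func: uid(0x3bd9a)) @cascade {
  Person.name
  Person.things {
    Thing.name
    ~Person.things @filter(uid(0x2db09) AND uid(0x33a9e)) {
      Person.name
    }
  }
}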

This looks like something that should be simple… but I can’t get it to work :frowning:

Thank you for your help, and sorry if my English is not good enough.


Hi, I think value variables will solve it. Query the nodes for each person first, like you did, and then intersect them: Value Variables - Query language

Also check out Multiple Query Blocks with DQL - Query language

Btw, also check out the Discord: Community-run unofficial Discord community
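A rough, untested sketch of the shape for three people, reusing the two uids from your query plus a placeholder third one (0x33a9e): collect each person's things into a variable with a var block, then keep only the uids that appear in every set:

# Sketch only: t1, t2, t3 hold each person's things; the final block
# intersects them by filtering one set against the other two.
var(func: uid(0x3bd9a)) {
  Person.things {
    t1 as uid
  }
}
var(func: uid(0x2db09)) {
  Person.things {
    t2 as uid
  }
}
var(func: uid(0x33a9e)) {
  Person.things {
    t3 as uid
  }
}

shared_3(func: uid(t1)) @filter(uid(t2) AND uid(t3)) {
  Thing.name
}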


Thanks for the help. I found a solution over the weekend, but I also tried your idea for benchmarking.
The real use case is not about people and things, it is about people and genes, but the idea is the same.
The metrics for the common genes between 4 people are:

Yours:

var(func: uid(0x3bd9a)) {
  Organism.encodes_transcript {
    tr_1 as uid
  }
}
var(func: uid(0x2db0a)) {
  Organism.encodes_transcript {
    tr_2 as uid
  }
}
var(func: uid(0x33a9e)) {
  Organism.encodes_transcript {
    tr_3 as uid
  }
}
var(func: uid(0x2db09)) {
  Organism.encodes_transcript {
    tr_4 as uid
  }
}

query_2(func: uid(tr_1, tr_2, tr_3, tr_4)) @filter(uid(tr_1) AND uid(tr_2) AND uid(tr_3) AND uid(tr_4)) {
  Transcript.code
}
"extensions": {
    "server_latency": {
      "parsing_ns": 78300,
      "processing_ns": 184284000,
      "encoding_ns": 205161000,
      "assign_timestamp_ns": 707000,
      "total_ns": 393479700                      <- Time about 3,9s
    },
    "txn": {
      "start_ts": 1180074
    },
    "metrics": {
      "num_uids": {
        "": 1167023,
        "Organism.encodes_transcript": 4,
        "Transcript.code": 61234,            <- Requested elements 
        "_total": 1681440,
        "uid": 453179
      }
    }

Mine:

var(func: uid(0x3bd9a)) {
  Organism.encodes_transcript {
    tr_1 as uid
  }
}
var(func: uid(tr_1)) @filter(uid_in(~Organism.encodes_transcript, 0x2db0a)) {
  tr_1n2 as uid
}
var(func: uid(tr_1n2)) @filter(uid_in(~Organism.encodes_transcript, 0x33a9e)) {
  tr_1n2n3 as uid
}
query_1(func: uid(tr_1n2n3)) @filter(uid_in(~Organism.encodes_transcript, 0x2db09)) {
  Transcript.code
}
"extensions": {
    "server_latency": {
      "parsing_ns": 108400,
      "processing_ns": 1585298600,
      "encoding_ns": 167919600,
      "assign_timestamp_ns": 653700,
      "total_ns": 1756457000             <-Time about 1.7s
    },
    "txn": {
      "start_ts": 1180089
    },
    "metrics": {
      "num_uids": {
        "": 1,
        "Organism.encodes_transcript": 1,
        "Transcript.code": 61234,          <- Requested nodes
        "_total": 604554,
        "uid": 271659,
        "~Organism.encodes_transcript": 271659
      }
    }

I think the reason my approach is slightly faster is that it filters the uids pair by pair, reducing the number of uids to handle at each step. (Not my intention at all, just a lucky coincidence :rofl:)
On the other hand, mine feels a bit less readable.
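For what it's worth, extending the chained version to a fifth organism is just one more var block that filters the already-reduced set (0xaaaaa below is a made-up uid, not one from the dataset):

# Hypothetical fifth organism: each step only inspects the uids
# that survived the previous intersection.
var(func: uid(tr_1n2n3)) @filter(uid_in(~Organism.encodes_transcript, 0x2db09)) {
  tr_1n2n3n4 as uid
}
query_1(func: uid(tr_1n2n3n4)) @filter(uid_in(~Organism.encodes_transcript, 0xaaaaa)) {
  Transcript.code
}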

Thanks for the help :bowing_man:
