Duplicating an edge with a different facet value

Hi,

I’m deploying a database that records, among other things, is_a (parent/child) relationships between nodes. Since the instance integrates multiple resources, there may be multiple is_a edges between two nodes. To let the user identify the resource a relationship came from, a facet recording the origin of the information was added to each edge. However, it seems that when multiple edges are loaded with different facet values, only one edge is retained after bulk loading into the Dgraph instance.

I provide a minimal example below with the schema and example data, plus two queries that showcase the issue. The first two is_a edges are duplicates that differ only in their facet value. The query below returns only the first edge; the second does not seem to be recorded in the database.

Is this a limitation of Dgraph, i.e. is it not possible to duplicate an edge and record different facet information on each copy, or is there an additional parameter to take into account when bulk loading the data or specifying the schema?

Many thanks,
Liesbeth


Schema

is_a: uid @reverse @count .
name: string @index(exact, trigram) .

Example

{
  set {
    _:MONDO_0005027 <name> "MONDO:0005027" .
    _:MONDO_0005579 <name> "MONDO:0005579" .
    _:MONDO_0010561 <name> "MONDO:0010561" .
    _:MONDO_0006748 <name> "MONDO:0006748" .

    _:MONDO_0005579 <is_a> _:MONDO_0005027 (origin="MONDO") .
    _:MONDO_0005579 <is_a> _:MONDO_0005027 (origin="EFO") .
    _:MONDO_0010561 <is_a> _:MONDO_0005027 (origin="MONDO") .
    _:MONDO_0006748 <is_a> _:MONDO_0005027 (origin="EFO") .
  }
}

Query

Return all child nodes (~is_a) with facets. It should return two edges for node “MONDO:0005579”, with facets “EFO” and “MONDO” respectively.

{
  q(func: eq(name, "MONDO:0005027")){
    name
    ~is_a @facets {
      name
    }
  }
}

Return the parent node (is_a); it should return two edges, with facets “EFO” and “MONDO” respectively.

{
  q(func: eq(name, "MONDO:0005579")){
    name
    is_a @facets {
      name
    }
  }
}

It isn’t two edges. Edges are unique per predicate: you can’t create the same edge twice between the same pair of nodes. So, in your case, the facet was overwritten by the second record.
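To illustrate, here is a sketch of what the second query above would return after the bulk load, given that the second record (origin="EFO") overwrites the first; the exact output may vary:

{
  "data": {
    "q": [
      {
        "name": "MONDO:0005579",
        "is_a": [
          {
            "name": "MONDO:0005027",
            "is_a|origin": "EFO"
          }
        ]
      }
    ]
  }
}

Only one edge survives, carrying whichever facet value was written last.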

Thanks for your reply.

Is it possible to encode this information another way, as a facet array/list for example? I’d prefer not to concatenate the origins into a single string and filter with regular expressions.

Many thanks
Liesbeth

For now, there’s nothing you can do on the DB side.

If you want to store multiple pieces of information between two nodes, I recommend creating intermediate “queue” nodes between them and attaching the (outsourced) relationships to those. It’s one way of doing it.

e.g.

{
  set {
    _:MONDO_0005027 <name> "MONDO:0005027" .
    _:MONDO_0005579 <name> "MONDO:0005579" .  
    _:MONDO_0010561 <name> "MONDO:0010561" .
    _:MONDO_0006748 <name> "MONDO:0006748" .

    _:queue_0005027_0 <to> _:MONDO_0005027 .
    _:queue_0005027_1 <to> _:MONDO_0005027 .
    _:queue_0005027_2 <to> _:MONDO_0005027 .
    _:queue_0005027_3 <to> _:MONDO_0005027 .

    _:MONDO_0005579 <is_a> _:queue_0005027_0 (origin="MONDO") .
    _:MONDO_0005579 <is_a> _:queue_0005027_1 (origin="EFO") .
    _:MONDO_0010561 <is_a> _:queue_0005027_2 (origin="MONDO") .
    _:MONDO_0006748 <is_a> _:queue_0005027_3 (origin="EFO") .  
  }
}
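Note that for the `~to` reverse traversal in the query below to work, the `to` predicate must be declared with `@reverse` in the schema, something like:

to: uid @reverse .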

Query

{
  q(func: eq(name, "MONDO:0005027")){
    name
    is_a : ~to @normalize {
      ~is_a @facets {
       name: name
      }
    }
  }
}

Result

{
  "data": {
    "q": [
      {
        "name": "MONDO:0005027",
        "is_a": [
          {
            "name": "MONDO:0005579",
            "~is_a|origin": "MONDO"
          },
          {
            "name": "MONDO:0005579",
            "~is_a|origin": "EFO"
          },
          {
            "name": "MONDO:0010561",
            "~is_a|origin": "MONDO"
          },
          {
            "name": "MONDO:0006748",
            "~is_a|origin": "EFO"
          }
        ]
      }
    ]
  }
}

Facets, for now, still have a lot of room for improvement; there are many open issues in the backlog. BTW, I think the issue below would fit your case.