Upsert with multiple UIDs

Hi,

I want to create an upsert that updates the data for three UIDs. When I run this query:

{
 var(func: has(products)){
  productsUid as uid
  productNode as products @filter(eq(productId, 10888870)){
    optionNode as options @filter(eq(optionId, 16818278)){
      uid
    }
  }
 }
 getVals(func: uid(productsUid)) @normalize {
  productsUid : uid
  products @filter(uid(productNode)){
    productUid : uid
    options @filter(uid(optionNode)){
      optionUid : uid
    }
  }
 }
}

I get this result:

{
  "extensions": {
    "server_latency": {
      "parsing_ns": 18220,
      "processing_ns": 13088415,
      "encoding_ns": 474410
    },
    "txn": {
      "start_ts": 124254
    }
  },
  "data": {
    "getVals": [
      {
        "optionUid": "0x937",
        "productUid": "0x934",
        "productsUid": "0x930"
      }
    ]
  }
}

What I want to do next is create an upsert for all three of these UIDs in a way that will create the product node and option node (so the product is a child of products, and the option is a child of the product) if missing. For example, I want to do something like this:

upsert {
  query {
    [insert query from above]
  }

  mutation {
    set {
      {
        "uid": "**productsUid**",
        "products": [
          {
            "productId": 10888870,
            "uid": "**productUid**",
            "options": [
              {
                "optionId": 16818278,
                "uid": "**optionUid**",
                "color": "red"
              }
            ]
          }
        ]
      }
    }
  }
}

Is there a way to do this?
I’m using the Java Client, but any upsert solution will suffice.
A requirement for the upsert is that if a product doesn’t exist, it must create a new one under the existing products collection. Similarly, if an option doesn’t exist, it must create a new one under the existing product. If neither the product nor the option exist, then it must create them both.

I am not sure I understand all the details, but your upsert above [1] will probably look as follows in GraphQL±. There could be minor differences depending on your data model, so double-check. You can also add conditions using a Conditional Upsert [2].

upsert {
  query {
    [insert query from above]
  }

  mutation {
    set {
        uid(productsUid) <products> uid(productUid) .
        uid(productUid) <productId> "10888870" .
        uid(productUid) <options> uid(optionUid) .
        uid(optionUid) <color> "red" .
        uid(optionUid) <optionId> "16818278" .
    }
  }
}
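If you need the product or option created only when it is missing, a conditional upsert can gate the mutation on the query result. As a rough sketch (reusing the variables from the query above; predicate names and the exact condition are assumptions to adapt to your data model), creating a new option only when none was found could look like:

```
upsert {
  query {
    [insert query from above]
  }

  # Runs only when the query matched no existing option node.
  mutation @if(eq(len(optionNode), 0)) {
    set {
      uid(productUid) <options> _:newOption .
      _:newOption <optionId> "16818278" .
      _:newOption <color> "red" .
    }
  }
}
```

The blank node `_:newOption` is what actually creates the missing child; `uid(var)` terms are no-ops when the variable is empty, so they cannot create nodes on their own.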

[1] Get started with Dgraph
[2] Get started with Dgraph

Quick update

I (@micheldiz) marked Aman’s answer as the correct one because it is exactly what the question asked for. If you (future reader) continue reading below, note that the rest of the thread covers other topics and is more about troubleshooting than about the title; the answer above is the right one.


This is great! Thanks for the help!

When I attempt to structure the upsert with the Java client (as per this link: GitHub - dgraph-io/dgraph4j: Official Dgraph Java client) like this:

String query = "query {\n" +
  "user as var(func: eq(email, \"wrong_email@dgraph.io\"))\n" +
  "}\n";
Mutation mu =
  Mutation.newBuilder()
  .setSetNquads(ByteString.copyFromUtf8("uid(user) <email> \"correct_email@dgraph.io\" ."))
  .build();
Request request = Request.newBuilder()
  .setQuery(query)
  .addMutations(mu)
  .setCommitNow(true)
  .build();
txn.doRequest(request);

I get the message that addMutations cannot be found. I checked the imports from the sample here: dgraph4j/samples/DgraphJavaSample/src/main/java/App.java at master · dgraph-io/dgraph4j · GitHub
and my imports match. Am I using the wrong builder for the upsert?


What version of Dgraph4j are you using?

We’re using 1.7.3.

        <dependency>
            <groupId>io.dgraph</groupId>
            <artifactId>dgraph4j</artifactId>
            <version>1.7.3</version>
        </dependency>

These changes are only available from version 2.0.2 onwards.
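For reference, the dependency bump (assuming Maven, matching the snippet earlier in the thread) would be:

```
<dependency>
    <groupId>io.dgraph</groupId>
    <artifactId>dgraph4j</artifactId>
    <version>2.0.2</version>
</dependency>
```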


Awesome. Thanks! Is the 2.0.2 client backward compatible with Dgraph (Zero, Alpha, and Ratel) or is it only compatible with Dgraph after a particular version?

dgraph4j 2.* is only compatible with Dgraph 1.1.* (Alpha and Zero).


We updated the client version, and we’re attempting to execute a mutation to set up our local Dgraph database for our integration tests, but we’re getting StatusRuntimeException: UNKNOWN: Empty query.

Here’s our mutation code:

Transaction txn = dgraphClient.newTransaction();
DgraphProto.Mutation mu = DgraphProto.Mutation.newBuilder()
                    .setSetJson(ByteString.copyFromUtf8(message)).build();

DgraphProto.Request request = DgraphProto.Request.newBuilder()
                    .addMutations(mu)
                    .build();

Map uids = txn.doRequest(request).getUidsMap();

Here’s the exception detail:

[main] ERROR updates.ImInStockStatusReaderWriterTest - Showing stack trace: java.lang.RuntimeException: java.util.concurrent.CompletionException: java.lang.RuntimeException: The doRequest encountered an execution exception:
	at io.dgraph.AsyncTransaction.lambda$doRequest$2(AsyncTransaction.java:173)
	at java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
	at java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1595)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java)
	at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
	at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
	at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
	at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
	at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.util.concurrent.CompletionException: java.lang.RuntimeException: The doRequest encountered an execution exception:
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1592)
	... 6 more
Caused by: java.lang.RuntimeException: The doRequest encountered an execution exception:
	at io.dgraph.DgraphAsyncClient.lambda$runWithRetries$2(DgraphAsyncClient.java:212)
	at java.util.concurrent.CompletableFuture$AsyncSupply.run$$$capture(CompletableFuture.java:1590)
	... 6 more
Caused by: java.util.concurrent.ExecutionException: io.grpc.StatusRuntimeException: UNKNOWN: Empty query
	at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
	at io.dgraph.DgraphAsyncClient.lambda$runWithRetries$2(DgraphAsyncClient.java:180)
	... 7 more
Caused by: io.grpc.StatusRuntimeException: UNKNOWN: Empty query
	at io.grpc.Status.asRuntimeException(Status.java:533)
	at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:442)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1$1.onClose(CensusStatsModule.java:700)
	at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
	at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
	at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
	at io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1$1.onClose(CensusTracingModule.java:399)
	at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:507)
	at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:66)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.close(ClientCallImpl.java:627)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl.access$700(ClientCallImpl.java:515)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:686)
	at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:675)
	at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

We get the same exception if we use:

DgraphProto.Response response = txn.mutate(mu);
uids = response.getUidsMap();

instead of:

Map uids = txn.doRequest(request).getUidsMap();

Before we updated the version of our Dgraph client, our working mutation code was this:

Assigned response;

Transaction txn = dgraphClient.newTransaction();
try {
     Mutation mu = Mutation.newBuilder().setSetJson(ByteString.copyFromUtf8(message)).build();
     response = txn.mutate(mu);
     txn.commit();
} finally {
     txn.discard();
}
//get uids from mutate
Map uids = response.getUidsMap();

What changes do we need to make to run a mutation with the new version of the Dgraph client?

As I said, you have to be running Dgraph v1.1.0. What version are you running now?

I hadn’t updated the Docker container; updating it to v1.1.0 got me farther.
However, when I run the mutation now, it doesn’t seem to reach Dgraph. I know my hostname and port are set correctly because I’m able to run alter to set the schema. I get UIDs back from the mutation, but there’s no evidence that it actually ran in Dgraph: the Alpha and Zero logs show nothing about the mutation, and the data doesn’t show up when I query it in Ratel. I’ve triple-checked that the hostname and port are configured correctly in my test method and in Ratel.
Any ideas?

For example, we’re running:

DgraphProto.Mutation mu = DgraphProto.Mutation.newBuilder().setSetJson(ByteString.copyFromUtf8(fileContents)).build();
response = txn.mutate(mu);
txn.commit();

with this JSON data for the fileContents variable:

{
  "uid": "_:products",
  "collectionId": 1,
  "products": [
    {
      "uid": "_:product",
      "productId": 19610626,
      "options": [
        {
          "uid": "_:option",
          "optionId": 32661491,
          "color": "red"
        }
      ]
    }
  ]
}

What’s the query you are running to verify whether data exists or not?

I’ve tried several.
Here are some of the queries we’ve tried:

{
  data(func: has(productId)) {
    _predicate_
  }
}
{
  data(func: has(products)) {
    _predicate_
  }
}
{
  data(func: has(product)) {
    _predicate_
  }
}

Interestingly, when I ran:

{
  data(func: has(productId)) {
    productId
  }
}

I actually got a productId value back:

{
  "data": {
    "data": [
      {
        "productId": 19610626
      }
    ]
  },
  "extensions": {
    "server_latency": {
      "parsing_ns": 39200,
      "processing_ns": 2303100,
      "encoding_ns": 14300,
      "assign_timestamp_ns": 1451600
    },
    "txn": {
      "start_ts": 76
    }
  }
}

Does the _predicate_ statement still work?

I was able to get more data to load by trying this:

Transaction txn = dgraphClient.newTransaction();
Map uids;
DgraphProto.Mutation mu = DgraphProto.Mutation.newBuilder()
        .setSetJson(ByteString.copyFromUtf8(message)).build();

DgraphProto.Request request = DgraphProto.Request.newBuilder()
        .addMutations(mu)
        .build();
uids = txn.doRequest(request).getUidsMap();
DgraphProto.Response response = txn.mutate(mu);
uids = response.getUidsMap();
txn.commit();

I’m able to get it to show up here:

However, when I query it, I still don’t get anything useful. For example:

{
  data(func: has(products)) {
    _predicate_
  }
}

gives me:

{
  "data": {
    "data": []
  },
  "extensions": {
    "server_latency": {
      "parsing_ns": 53700,
      "processing_ns": 3191600,
      "encoding_ns": 17700,
      "assign_timestamp_ns": 2149000
    },
    "txn": {
      "start_ts": 176
    }
  }
}

(Edited after Daniel’s comment below) Dgraph v1.1 doesn’t support _predicate_.


This is not true. The has() function is not related to the type system. You’d need to specify the predicates you want returned in the response. Or, if you use the type system, you can use expand(_all_) to fetch the predicates defined in the type.


Thanks for the help. I was able to get data back by specifying the predicates in the response, like this:

{
  data(func: has(products)) {
    products{
      productId
      options{
        optionId
      }
    }
  }
}

However, we relied a lot on using _predicate_, and I’m not sure that I understand how to use expand(_all_). I tried:

{
  data(func: has(products)) {
    expand(_all_)
  }
}

but I get no data back. Am I missing something?
For what it’s worth, I also tried querying based on the type, like this:

{
  data(func: type(string)) {
    expand(_all_)
  }
}

but I got no data back that time either.

Regarding the type system, what exactly does that mean? Am I required to provide type information as well when I create the schema? If so, how do we do that? I’m not finding any examples on the website or in the docs for the Go or Java clients.

This is something we miss too. We are considering bringing it back; see the thread “IMPORTANT: new behavior for expand, deprecation of _forward_ and _reverse_”.

Consider adding the following to the schema and then try expand(_all_):

type Product {
  productId: string
  options: [Option]
}

type Option {
  optionId: string
  color: string
}

You will also have to attach these types to the corresponding nodes. For example, your upsert should now look like this:

        uid(productsUid) <products> uid(productUid) .
        uid(productUid) <dgraph.type> "Product" .
        uid(productUid) <productId> "10888870" .
        uid(productUid) <options> uid(optionUid) .
        uid(optionUid) <dgraph.type> "Option" .
        uid(optionUid) <color> "red" .
        uid(optionUid) <optionId> "16818278" .

Once you add these types and attach them to their nodes, expand(_all_) should work. You can read more about the type system here: Get started with Dgraph.
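To illustrate, once nodes carry a dgraph.type, a query along these lines should return the predicates defined in the type (a sketch assuming the Product/Option types above; the nested expand reaches the options as well):

```
{
  data(func: type(Product)) {
    expand(_all_) {
      expand(_all_)
    }
  }
}
```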

I added the type information to the schema, and you can see it here: