Error when inserting a node into Dgraph with a big UID


(hisense2019) #1

Hi, I want to insert a node like this:
<11011935246> “Star War II” .
The UID is generated by ourselves. However, it fails with the following error:
Error Name: t
Uid: [11011935246] cannot be greater than lease: [3040000]
The JSON response is:

{
  "name": "t",
  "url": "http://spark-3:8080/mutate",
  "errors": [
    {
      "code": "ErrorInvalidRequest",
      "message": "Uid: [11011935246] cannot be greater than lease: [3040000]"
    }
  ]
}

(Michel Conrado) #2

This means the given UID is outside the range that Zero has leased. You can use:

/assign?what=uids&num=100 — this allocates num UIDs and returns a JSON map containing startId and endId, both inclusive. This ID range can be safely assigned externally to new nodes during data ingestion.

to extend the range of available UIDs.

https://docs.dgraph.io/deploy/#more-about-dgraph-zero
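For example (a sketch using the Zero address that appears later in this thread; adjust host and port for your cluster):

```shell
# Ask Zero to lease 100 more UIDs. Quote the URL: an unquoted '&' makes the
# shell treat everything after it as a separate background command, so the
# 'num' parameter never reaches Zero and it answers "num not passed".
curl -s "http://spark-1:6080/assign?what=uids&num=100"
# The response is a JSON map with "startId" and "endId", both inclusive.
```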

BTW, Dgraph only accepts UIDs in Dgraph’s format and leased by Dgraph itself.


(hisense2019) #3

Thanks. When I use the command like this:
curl -X GET -H "Content-Type: application/json" spark-1:6080/assign?what=uids&num=100000000000
It responds with an error as follows:
{"errors":[{"code":"ErrorInvalidRequest","message":"num not passed"}]}
Then I found a parameter of Zero, “maxLeaseId”: “3040000”. How can I modify this parameter?
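For reference, this is how the value can be inspected (a sketch, assuming the Zero HTTP address from the command above):

```shell
# Zero's /state endpoint reports cluster membership and the current lease,
# including maxLeaseId. (Assumes the spark-1:6080 Zero address used above.)
curl -s "http://spark-1:6080/state"
```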

If we use the UIDs generated by Dgraph itself, it may be confusing for us to find the correct node, since these nodes share many of the same attributes. That is to say, we may get a wrong result even if we look up the UIDs with many conditions.
So we generate the RDF data with our own predefined UIDs like 0x1, 0x9a, etc. Is there an easier method for us to do this?


(Michel Conrado) #4

Check the last response in => Error while importing data: Error while mutating Uid: [4600006] cannot be greater than lease: [0]

As far as I know, there’s no limit on leased UIDs.

That’s the only way. You should use an upsert transaction, e.g.: https://docs.dgraph.io/mutations/#external-ids-and-upsert-block
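The external-ID pattern from that link can be sketched roughly like this (a minimal sketch, not tested against your cluster; the xid predicate name is an assumption, and it would need a schema entry such as xid: string @index(exact) . for eq() to work):

```
upsert {
  query {
    # Look up the node by our external ID.
    v as var(func: eq(xid, "11011935246"))
  }
  mutation {
    set {
      # If no node matched, uid(v) creates a new node; Dgraph itself
      # assigns the internal UID.
      uid(v) <xid>  "11011935246" .
      uid(v) <name> "Star War II" .
    }
  }
}
```

Re-running the same upsert finds the existing node instead of creating a duplicate, so your external ID stays unique without you managing Dgraph’s UID space.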

No, as far as I can tell.


(hisense2019) #5

Thank you for your help and patience. I will check your response and find a more suitable method for our applications.