To import the data into a fresh cluster I'm currently unzipping this file and loading the data with a `set { ...exported RDF data in here... }` mutation. This isn't a good solution, however: it throws `Uid: [UID being set] cannot be greater than lease: [10000]`, and I have to manually bump the UID lease before importing the data. If the RDF data were exported without explicit UIDs (something like `_:uid1 <predicate> _:uid2 .`, to my understanding) I could skip this step, making the process a lot simpler. But I haven't found a way to do this.
How is the exported RDF data intended to be imported into a fresh Dgraph cluster (with a new zw directory and everything), where the UIDs aren't persisted? Is it using the Live Loader?
And separately, is there a way to export the data without persisting the UIDs in the first place? Thanks!
You can import the data using the Bulk Loader as well as the Live Loader. But for your use case, since you are importing data into a fresh cluster, the Bulk Loader fits your needs as it is faster. See https://dgraph.io/docs/deploy/#bulk-loader
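A minimal invocation might look like this (the file names are assumptions; point them at your actual export files):

```shell
# Run against a fresh cluster: Zero must be up, Alphas not started yet.
# File names are assumptions -- substitute your exported data and schema.
dgraph bulk \
  -f g01.rdf.gz \
  -s g01.schema.gz \
  --zero localhost:5080

# The bulk loader writes its output to out/0/p; copy that p directory
# into the Alpha's data directory before starting the Alpha.
```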
In addition to Naman's comment: you have two options for your case. The error "cannot be greater than lease" is due to the fact that you are loading a dataset with pre-assigned UIDs into a fresh cluster. Dgraph exports your dataset preserving the UIDs, but Dgraph won't advance the UID lease based on the dataset itself. So you have to (options):

1. Manually bump Zero's UID lease past the highest UID in the dataset (what you are doing now).
2. Use the flag --new_uids and Dgraph will automatically lease new UIDs for you.
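For example (port and count values are assumptions based on a default Zero setup; adjust to your deployment):

```shell
# Option 1: bump Zero's UID lease via its HTTP endpoint.
# 6080 is Zero's default HTTP port; num=1000000 is an assumption --
# use any value larger than the biggest UID in your dataset.
curl "localhost:6080/assign?what=uids&num=1000000"

# Option 2: let the loader lease fresh UIDs instead of reusing the
# exported ones (works with dgraph live and dgraph bulk).
dgraph live -f g01.rdf.gz -s g01.schema.gz --new_uids
```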
In my opinion, you should go with --new_uids. But if you're not going to use the Bulk Loader or Live Loader, you have to use option 1. And BTW, there's no way to export with blank nodes. Feel free to open an issue requesting this.