hi guys,
we are running a large ETL process in Dask to pre-process tons of files.
This is an hourly process and is incremental (so we append to the graph already built last hour).
How does one achieve this? It seems that the bulk loaders are all command-line based. Is there any other way to bulk-load data into Dgraph from another system (in Python)?
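In case it helps frame the question: one approach I've been considering is using the official `pydgraph` gRPC client and sending each Dask partition as a JSON set-mutation in its own transaction. This is only a sketch, assuming a Dgraph Alpha reachable at `localhost:9080`; the helper names (`with_blank_uids`, `load_partition`) are my own, not part of any library.

```python
import json


def with_blank_uids(records, prefix="row"):
    """Attach blank-node uids ("_:row0", "_:row1", ...) to a list of dicts
    so Dgraph assigns real uids on commit. For true incremental appends
    keyed on an external id, an upsert would be needed instead."""
    out = []
    for i, rec in enumerate(records):
        node = dict(rec)  # copy so the caller's records are untouched
        node["uid"] = "_:{}{}".format(prefix, i)
        out.append(node)
    return out


def load_partition(records, addr="localhost:9080"):
    """Send one batch (e.g. one Dask partition) as a single transaction.
    Assumes a running Dgraph Alpha at `addr` (hypothetical address)."""
    import pydgraph  # pip install pydgraph

    stub = pydgraph.DgraphClientStub(addr)
    client = pydgraph.DgraphClient(stub)
    try:
        txn = client.txn()
        # set_obj is JSON-serialized by the client into a set mutation;
        # commit_now commits in the same round trip.
        txn.mutate(set_obj=with_blank_uids(records), commit_now=True)
    finally:
        stub.close()
```

On a Dask DataFrame this could then be driven with something like `df.map_partitions(lambda p: load_partition(p.to_dict("records")))`, though I'm not sure how well per-partition transactions scale compared to the official loaders, which (as far as I can tell) are command-line only.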