How do I import my RDF data into the Dgraph database? It is currently on Windows 10.
You can use https://dgraph.io/docs/deploy/#bulk-loader or Live loader.
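As a rough sketch of how each loader is invoked (the file names data.rdf.gz and schema.txt are assumptions, and flag names can vary between Dgraph versions, so check dgraph bulk --help and dgraph live --help):

```shell
# Bulk Loader: offline import, run BEFORE any Alpha is started.
# It only needs a running Zero, and writes its output to ./out.
dgraph bulk -f data.rdf.gz -s schema.txt --zero localhost:5080

# Live Loader: online import into an already-running cluster
# (by default it talks to an Alpha on localhost:9080).
dgraph live -f data.rdf.gz -s schema.txt --zero localhost:5080
```

Bulk Loader is much faster for an initial load; Live Loader is for adding data to a cluster that is already serving.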
Do you mean Windows?
Yes, it is currently on Windows 10.
I did get the out directory, which contains the subdirectories 0 and 1.
But the docs say: 'Once the output is created, they can be copied to all the servers that will run Dgraph Alphas. Each Dgraph Alpha must have its own copy of the group's p directory output. Each replica of the first group should have its own copy of ./out/0/p, each replica of the second group should have its own copy of ./out/1/p, and so on.'
What does that mean?
Since I'm running Dgraph on Windows 10, it looks like this:
start .\dgraph zero
start .\dgraph alpha --lru_mb 2048 --zero localhost:5080
You have to move the ./out/0/p directory to the path where you are running Dgraph.
Or do it the explicit way:
dgraph.exe zero
dgraph.exe alpha -p="C:\Users\han\dgraph\out\0\p" --zero localhost:5080
dgraph-ratel.exe
I'm sorry, but I have a lot of questions.
Why does using the out\0\p directory not load the RDF data and schema into the database?
The out\1\p directory does have the data and schema when I use it, except for the schema predicates.
The command went like this:
dgraph.exe alpha -p="C:\Users\han\dgraph\out\1\p" --lru_mb 1024 --zero localhost:5080
And I have a question: why can't Dgraph import data as easily as Neo4j or ArangoDB? For example, ArangoDB can import data directly with a single command. Neo4j needs you to put files in its import directory, but it is also very convenient.
They also support various data formats such as CSV.
Well, it's because there's no SST file in out\0\p, but why?
Okay, I get it
When I add the -p path when starting Alpha, it points to the other database that was generated by the dgraph bulk command.
You don't need to use it. You can just move the Bulk Loader's "p" directory to wherever you are working.
If you provided a schema, it will be there. In fact, Bulk Loader requires you to provide a schema.
It didn't work?
You can do that; use Live Loader instead of Bulk Loader.
CSV isn't a graph format. Also, you can easily transform CSV into JSON, and that works fine in Dgraph. I'm not sure why Neo4j treats CSV as their main input format; it is not graph compliant. There are no relations in CSV.
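To illustrate that CSV-to-JSON transformation, here is a minimal sketch. The CSV content and its column names (from, relation, to) are assumptions for the example; it produces Dgraph-style JSON mutation objects that link nodes through blank-node uids, so the relations survive the transform:

```python
import csv
import io
import json

# Hypothetical CSV edge list (column names and values are assumptions).
csv_text = """from,relation,to
alice,follows,bob
bob,follows,carol
"""

def csv_edges_to_dgraph_json(text):
    """Turn each CSV row into Dgraph JSON objects: one object per node,
    with the relation stored as a list of uid references."""
    nodes = {}
    for row in csv.DictReader(io.StringIO(text)):
        # Create a node object (with a blank-node uid) the first time
        # either endpoint of the edge is seen.
        for name in (row["from"], row["to"]):
            nodes.setdefault(name, {"uid": f"_:{name}", "name": name})
        # Attach the edge to the source node as a uid reference.
        src = nodes[row["from"]]
        src.setdefault(row["relation"], []).append({"uid": f"_:{row['to']}"})
    return list(nodes.values())

print(json.dumps(csv_edges_to_dgraph_json(csv_text), indent=2))
```

The resulting list can be sent as the body of a Dgraph JSON mutation; blank-node uids (`_:name`) let Dgraph assign real uids while keeping the edges intact.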
What are the commands you used to bulkload?
You are right, but CSV can represent relationships. I don't think there's any difference. For example, in ArangoDB: I have two real document tables, and I need to establish a relational mapping between these two tables, so I need to create an edge table. An edge table can be in CSV format, right?
Here it is:
The column names: _key, _from, _to
Example row: 0, document1_name/document1_entity_id (a _key), document2_name/document2_entity_id (a _key)
Then you use the arangoimp command directly.
In Neo4j, the relational table is like this: entity1, base, entity2
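As an illustration of how such an edge table could feed Dgraph anyway, here is a sketch that turns rows in the _key, _from, _to style into RDF N-Quad triples for the loaders. The collection names, keys, and the linked_to predicate are all assumptions for the example:

```python
import csv
import io

# Edge table in the _key,_from,_to style described above (values assumed).
edge_csv = """_key,_from,_to
0,person/alice,person/bob
1,person/bob,person/carol
"""

def edge_table_to_nquads(text, predicate="linked_to"):
    """Emit one RDF triple per edge row, deriving blank-node labels from
    the collection/key identifiers (slashes are not valid in labels,
    so they are replaced with underscores)."""
    lines = []
    for row in csv.DictReader(io.StringIO(text)):
        src = row["_from"].replace("/", "_")
        dst = row["_to"].replace("/", "_")
        lines.append(f"_:{src} <{predicate}> _:{dst} .")
    return "\n".join(lines)

print(edge_table_to_nquads(edge_csv))
```

The output is plain N-Quads text, which is exactly the RDF format that Bulk Loader and Live Loader consume.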
Can you provide a reference for this? is this a standardized approach?
By nature, people use CSV as an export format for spreadsheets, not for graph systems. If there are DBs doing it, it is some kind of personal choice. Maybe to be friendly with people who aren't used to graphs. Maybe it is the fact that there is a LOT of data out there in CSV sources. And these are good points.
CSV is fine, but supporting it and forcing it to work in a graph context is hard. We would need to teach users about its small details. It can work, though. But it doesn't depend on me, and the tools out there that can transform that data into JSON are a stronger option than maintaining one more parser.
I think if this request https://github.com/dgraph-io/dgraph/issues/4920 gets popular, the core devs would work on it. Or maybe, someone could make a PR from the community.
Does the uid have to be hexadecimal?
Okay, that’s all I can do. 0x0, 0x1, 0x2, 0x3…0x99999
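For what it's worth, Dgraph uids are unsigned 64-bit integers that are conventionally written in hexadecimal, so generating a sequence like the one above is trivial; a small sketch (starting at 1, since uid 0 is generally not assignable, which is an assumption to verify against your Dgraph version):

```python
def make_uids(n, start=1):
    """Format n sequential integers as hexadecimal uid strings,
    the 0x... form Dgraph uses when printing uids."""
    return [hex(i) for i in range(start, start + n)]

print(make_uids(4))  # ['0x1', '0x2', '0x3', '0x4']
```

When loading external data, it is usually easier to use blank nodes (`_:name`) and let Dgraph assign the hex uids itself.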