Is there a time or size limit when uploading data via dgraph live?

I am uploading a dataset with 500,000 lines, which looks like the following:

<p16438855> <job_status> "0" .
<p16438855> <audit_status> "3" .
<p16438855> <deleted_status> "0" .
<p16438855> <assort> <pc101005> .
<p16438855> <exist_prov> <430000> .
<p16438855> <exist_city> <430100> .
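
For reference, the load is invoked with something like the command below (the file name and addresses are placeholders, and the flag names are as I recall them from the v1.0.x dgraph live help, so please check dgraph live --help for your version):

# Assumes Dgraph Zero is serving gRPC on 5080 and the Dgraph server on 9080.
# -r/--rdfs takes the N-Quads file, -d the server address, -z the Zero address.
dgraph live -r 500k.rdf -d 127.0.0.1:9080 -z 127.0.0.1:5080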

Everything is fine, but when it reaches the 460,000th line, I always hit this error:

Total Txns done:       44 RDFs per second:     701 Time Elapsed: 10m28s, Aborts: 0
Total Txns done:       44 RDFs per second:     698 Time Elapsed: 10m30s, Aborts: 0
Total Txns done:       44 RDFs per second:     696 Time Elapsed: 10m32s, Aborts: 0
Total Txns done:       44 RDFs per second:     694 Time Elapsed: 10m34s, Aborts: 0
Total Txns done:       44 RDFs per second:     692 Time Elapsed: 10m36s, Aborts: 0
Total Txns done:       44 RDFs per second:     690 Time Elapsed: 10m38s, Aborts: 0
Total Txns done:       45 RDFs per second:     703 Time Elapsed: 10m40s, Aborts: 0
Total Txns done:       45 RDFs per second:     701 Time Elapsed: 10m42s, Aborts: 0
Total Txns done:       45 RDFs per second:     699 Time Elapsed: 10m44s, Aborts: 0
Total Txns done:       45 RDFs per second:     697 Time Elapsed: 10m46s, Aborts: 0
Total Txns done:       45 RDFs per second:     694 Time Elapsed: 10m48s, Aborts: 0
Total Txns done:       45 RDFs per second:     692 Time Elapsed: 10m50s, Aborts: 0
Total Txns done:       45 RDFs per second:     690 Time Elapsed: 10m52s, Aborts: 0
Total Txns done:       46 RDFs per second:     703 Time Elapsed: 10m54s, Aborts: 39
Total Txns done:       46 RDFs per second:     701 Time Elapsed: 10m56s, Aborts: 113
Total Txns done:       46 RDFs per second:     699 Time Elapsed: 10m58s, Aborts: 233
Total Txns done:       46 RDFs per second:     697 Time Elapsed: 11m0s, Aborts: 341
Total Txns done:       46 RDFs per second:     695 Time Elapsed: 11m2s, Aborts: 415
Total Txns done:       46 RDFs per second:     693 Time Elapsed: 11m4s, Aborts: 545
Total Txns done:       46 RDFs per second:     691 Time Elapsed: 11m6s, Aborts: 683

And Dgraph Zero logs this:

2018/01/22 20:38:45 oracle.go:84: purging below ts:8363, len(o.commits):2, len(o.aborts):0
2018/01/22 20:38:46 wal.go:118: Writing snapshot to WAL, metadata: {ConfState:{Nodes:[1] XXX_unrecognized:[]} Index:17506 Term:2 XXX_unrecognized:[]}, len(data): 1862

I have compared the lines around the 460,000th line; they are all in the same format, but the lines after the 460,000th line fail to upload.

<p16438855> <assort> <pc101005> .
<p16438855> <exist_prov> <430000> .  # 460,000th line
<p16438855> <exist_city> <430100> .

I get this error every time, except for one occasion when the upload succeeded. And this dataset is smaller than 1million.rdf, which I can upload successfully.

So what can I do to solve it? Thanks.

What version of Dgraph are you using, @pldawn? Dgraph live counted all errors as aborts prior to v1.0.2 and didn't print them, so there could be some other error here that is not shown. We have improved that in v1.0.2.

The output from Zero that you shared is completely normal.

Since the default batch size is 1,000 lines, the error could be in any of those 1,000 RDFs and not necessarily on the 460,000th line.
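
If it helps to narrow things down, the batch size can be lowered so that an aborted batch pins the problem to a smaller range of lines. The -b/--batch flag name below is the one from the v1.0.x help output, and the file name and addresses are placeholders; please confirm with dgraph live --help on your version:

# Send 100 N-Quads per mutation instead of the default 1000,
# so a failing batch narrows the suspect range from 1000 lines to 100.
dgraph live -r 500k.rdf -d 127.0.0.1:9080 -z 127.0.0.1:5080 -b 100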

The version of Dgraph I'm using is v1.0.1.
I will update Dgraph and try this again. I really want to know what is going wrong.
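
To confirm which binary is in use after the upgrade, the version subcommand prints the build details:

# Prints the Dgraph version and build information of the installed binary.
dgraph version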
