Dgraph bulk load out of memory

I am trying to load a 97G JSON file into Dgraph. After starting dgraph bulk, memory consumption climbs all the way up to 100G, and then the process crashes with an out-of-memory error.

Bulk command: dgraph bulk --format json --zero localhost:5082 --http localhost:9900 --reduce_shard 1 --schema s.txt --files d.json.gz

Screenshot1: bulk output https://ibb.co/PDPV2gG
Screenshot2: pprof heap output https://ibb.co/T1Tjz2k

Can you share what version you are using?

Dgraph version: v20.03.2
Commit SHA-1: 7553f0dea
Branch: HEAD

Can you give the actual heap profile (gzipped version), so we can do a bit more analysis?

FWIW, it looks like it is stuck at slurpQuoted, and 65GB of memory is being used just for that function. That suggests the data has an issue: a double quote that is neither escaped nor closed by a matching double quote. In that case, the parser just keeps on “slurping” text into a bytes.Buffer until it finds the closing quote – causing your server to OOM.

You should look for an unmatched double quote in your data and fix it up.
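One rough way to hunt for that unmatched quote: since the input is line-delimited JSON, a line whose unescaped double quotes don't pair up is suspect. This is just a hypothetical helper script (not part of Dgraph), and the regex is a heuristic – it will miscount a quote that follows an escaped backslash (`\\"`), so treat its hits as candidates to inspect, not proof:

```python
# Heuristic scanner for JSON-lines data: report lines with an odd number
# of unescaped double quotes, the kind of input that makes slurpQuoted
# read forever. Handles both plain and .gz files.
import gzip
import re

# A double quote not immediately preceded by a backslash.
# Caveat: this misclassifies the sequence \\" (escaped backslash, then quote).
UNESCAPED_QUOTE = re.compile(r'(?<!\\)"')

def find_unbalanced_lines(path):
    """Yield (line_number, line) for lines whose unescaped quotes don't pair up."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if len(UNESCAPED_QUOTE.findall(line)) % 2 != 0:
                yield lineno, line.rstrip("\n")
```

For a 97G file this streams line by line, so it won't blow up memory the way the loader did.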

Hi, I ran into the same problem. The bulk loader does not work well on large files; I hit the OOM too, and it requires a lot of memory for large inputs. Could the bulk loader support a distributed mode?

It seems that this problem has existed for a long time, because the reduce phase loads all the map results into memory at once. Is this not a bottleneck, or am I misunderstanding something?

You are right. I solved an OOM problem in v1.1.0, PR: https://github.com/dgraph-io/dgraph/pull/4529. Since v20.03.1 the bulk loader has changed a lot and consumes more memory than previous versions. Before v20.03.1, the reduce phase loaded into memory all the map entries that produce one badger KV; since v20.03.1, it loads even more map entries.
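To make the bottleneck concrete, here is an illustrative sketch – not Dgraph's actual code – of the difference between buffering every sorted map entry before reducing (peak memory grows with the total entry count) and doing a streaming k-way merge of the already-sorted runs (memory grows only with the number of runs):

```python
# Illustrative only: contrasts buffer-everything reduce with a streaming
# k-way merge over sorted runs of map entries.
import heapq

def reduce_buffered(runs):
    """Load every entry from every run, then sort.
    Peak memory is proportional to the total number of entries."""
    entries = [entry for run in runs for entry in run]
    entries.sort()
    return entries

def reduce_streaming(runs):
    """Merge already-sorted runs with a heap, holding roughly one entry
    per run at a time. A real reducer would write each merged entry out
    (e.g. to badger) immediately instead of collecting them in a list."""
    return list(heapq.merge(*runs))
```

The output is identical; only the peak memory profile differs, which is why the map phase sorts its output into runs in the first place.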


Hi, just to report back: my OOM issue was indeed due to an unescaped double quote. For example, my data contains a malformed string like "123","some_user_name","some_other_field"
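For anyone else debugging this, a quick sanity check is to run a suspect line through a strict JSON parser, which reports the exact position where the unescaped quote breaks the document (the field name below is made up for illustration):

```python
# Feed a suspect line to json.loads; a JSONDecodeError pinpoints where
# the malformed quoting derails the parser.
import json

suspect = '{"fields": ""123","some_user_name","some_other_field""}'
try:
    json.loads(suspect)
except json.JSONDecodeError as err:
    print(f"invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}")
```

Running each line of the input through a check like this is much cheaper than finding out via an OOM an hour into a bulk load.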