What is the expected behavior of 'dgraph server --memory_mb'?

When I tried to load crawled data into Dgraph using raw HTTP, I ran into an OOM problem.
Even after I increased my Docker instance memory to 8GB and ran dgraph server with "--memory_mb 2048", I got the same problem.

Is there any way to limit the Dgraph server's memory usage?

The image below is a screenshot taken just before the Dgraph server was killed by the OOM killer.

There aren’t many details here about what’s going on, but I’d guess that you’re not committing these transactions.

We don’t recommend using raw HTTP directly. You should use a language client unless we don’t have one for the language you’re using.
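For anyone finding this thread later, setting up the recommended Java client looked roughly like this in the dgraph4j versions of that era. This is a hedged sketch: the host and port are illustrative (9080 is Dgraph's default gRPC port), and the constructor taking a list of stubs is the older dgraph4j API, so adjust to your client version.

```java
import io.dgraph.DgraphClient;
import io.dgraph.DgraphGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.Collections;

public class ClientSetup {
  public static void main(String[] args) {
    // Host/port are illustrative; 9080 is Dgraph's default gRPC port.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("localhost", 9080)
        .usePlaintext(true)
        .build();
    DgraphGrpc.DgraphBlockingStub stub = DgraphGrpc.newBlockingStub(channel);
    // dgraph4j clients of this era were constructed from a list of stubs.
    DgraphClient client = new DgraphClient(Collections.singletonList(stub));
  }
}
```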

Thank you for your answer.

Since I used a raw HTTP request like the one below, I’d guess that there was another reason for the OOM.

    // I used the OkHttp Java library; the Dgraph version is v0.9.0.
    // The URL and request body are illustrative; 8080 is Dgraph's default HTTP port.
    Request request = new Request.Builder()
        .url("http://localhost:8080/mutate")
        .header("X-Dgraph-CommitNow", "true")
        .post(RequestBody.create(MediaType.parse("text/plain"), dataBody))
        .build();
    Response response = client.newCall(request).execute();

After visiting the dgraph4j GitHub repository, I found that the Java client is officially available on Maven.
I’ll try it. Thank you.


Hi @mrjn.

Even after switching to the dgraph4j client, I found that the Dgraph server takes more memory than I specified.
I ran dgraph server with "--memory_mb 2048" and it currently uses 8GB of memory.

Could you give me a hint about this?
My test Java code looks like this:

  public DgraphEntity loadData(DgraphEntity dgraphEntity, String dataBody) throws IOException {
    Transaction transaction = client.newTransaction();
    try {
      Mutation mutation = Mutation.newBuilder()
          .setSetNquads(ByteString.copyFromUtf8(dataBody))
          .setCommitNow(true) // commit immediately, like the X-Dgraph-CommitNow header
          .build();
      Assigned assigned = transaction.mutate(mutation);
      if (dgraphEntity != DgraphUtils.CREATE_ENTITY) {
        return dgraphEntity;
      }
      Map<String, String> uidsMap = assigned.getUidsMap();
      if (uidsMap == null || uidsMap.size() != 1) {
        throw new IOException(String.format(
            "Unknown result.\nrequest: %s\nresult: %s", dataBody, uidsMap));
      }
      String uid = uidsMap.values().iterator().next();
      return DgraphUtils.buildDgraphEntity(uid);
    } catch (StatusRuntimeException e) {
      throw new IOException(String.format("Request failed.\nrequest: %s", dataBody), e);
    } finally {
      transaction.discard(); // abort if not committed, so the server can clean up
    }
  }
@hyunseok: Can you please share a heap profile and the output of /debug/vars?

When you get an error, are you aborting the transactions?
How many mutations did you do?

Hi, @janardhan.

I’m sorry to ask, but how can I get a heap profile and the output of /debug/vars?
Could you give me a guide (for example, a command on the Mac or inside the Docker instance)?
I could not find a /debug/vars path in my Docker instance.

I run Dgraph using Docker on my MacBook.

My mutation task is still running without errors, but memory usage is increasing steadily.
I’m not sure of the exact mutation count; I guess it may be around 150,000 * 30 = 4,500,000.

Thank you.

Hey @janardhan,

Can we add instructions to the How To page about retrieving debug information when asked for?

@hyunseok: Please follow these instructions to get the information.
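For readers finding this thread later, collecting this information typically boils down to hitting the debug endpoints Dgraph serves on its HTTP port (8080 by default). This is a hedged sketch assuming that port is published from the Docker container; paths and filenames are illustrative.

```shell
# Expvar counters (memory stats, etc.) served on Dgraph's HTTP port:
curl http://localhost:8080/debug/vars > debug.vars.txt

# Heap profile in pprof format:
curl http://localhost:8080/debug/pprof/heap > pprof.heap.pb.gz

# Inspect it locally (requires a Go toolchain; older pprof versions
# also want the dgraph binary as the first argument):
go tool pprof --text pprof.heap.pb.gz
```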



Hi, @janardhan.

Thank you for your instructions!!
Since I am currently on vacation, I will share the profile results in a week.

Thank you.

Hi, @janardhan.

I’m back from vacation.
Here are the heap profile and the output of /debug/vars.

I appended a “.pdf” extension to each file because of the upload file-type restriction.

debug.vars.txt.pdf (14.4 KB)
pprof.dgraph.alloc_objects.alloc_space.inuse_objects.inuse_space.001.pb.gz.pdf (93.6 KB)

Since I’m not familiar with ‘go tool pprof’, I can’t be sure the resulting file is correct.
If you find anything wrong with the files above, please let me know and I’ll try again. Thank you.

@hyunseok: Thanks for providing the info. Can you please try the same on the latest nightly build?
There was a bug in LRU eviction which has been fixed on master.
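If you're running via Docker, pulling the nightly could look something like the following. Note the `master` tag name is an assumption from that era's release process, so check Docker Hub for the actual nightly tag before relying on it.

```shell
# Tag name is an assumption; verify on Docker Hub.
docker pull dgraph/dgraph:master
```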

Hi, @janardhan.

With the latest nightly build, I found that Dgraph used nearly 4GB of memory.
I think that is reasonable memory usage when running Dgraph with "--memory_mb 2048".

Thank you~!!!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.