Very slow schema mutation on empty database (still an issue)

What version of Dgraph are you using?

v20.07.0

Have you tried reproducing the issue with the latest release?

Yes, the above.

What is the hardware spec (RAM, OS)?

Fedora 30 / Docker
2 Alphas limited to 8GB each (30GB total available)

Steps to reproduce the issue (command/config used to run Dgraph).

Shell

# 1 zero, 2 alphas (--lru_mb 2048), 1 ratel
$ docker stack deploy ...

Python

client.alter(pydgraph.Operation(schema=schema, run_in_background=True))
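
For context, the full reproduction on the client side looks roughly like this (a minimal sketch; the endpoint and the schema string are placeholders, my real schema has 29 types and 171 predicates):

Python

import pydgraph

# connect to one of the alphas from the docker stack (port is an assumption)
stub = pydgraph.DgraphClientStub('localhost:9080')
client = pydgraph.DgraphClient(stub)

# placeholder schema; the real one has 29 types, 171 predicates, 50 of them indexed
schema = '''
name: string @index(exact) .
type Person {
  name
}
'''

client.alter(pydgraph.Operation(schema=schema, run_in_background=True))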

Expected behaviour and actual result.

On a completely fresh execution of dgraph (zero+alphas), the alter schema command to load my schema sometimes takes much, much longer to complete than normal. While the command is running, one of the alphas will max out the CPU to the point where my machine will sometimes freeze for several minutes.

This issue has been reported before with no resolution: Very slow schema mutation on empty database

I have come across what I believe to be the same issue many times on several versions of Dgraph (IIRC as early as v1.1.0). I also have a fairly large schema: 29 types and 171 predicates. As in the original report, if I restart my machine the problem goes away, but only for a while. At first, an alter schema will complete in seconds, but the problem eventually returns.

However, I would prefer not to have to restart my machine, and I think it is worthwhile to determine the actual cause of the problem.

My schema shows up in Ratel within seconds. The problem appears to be related to indexing. I have 50 predicates indexed. If I remove the indexes before the alter schema command, it will complete within seconds. run_in_background does not seem to make any difference.
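
To illustrate what I mean by removing the indexes before the alter (the predicate name here is made up, not from my real schema):

Python

# slow: the @index directive makes the alter build the index
indexed_schema = 'email: string @index(exact) .'

# fast: the same predicate without the index completes in seconds
plain_schema = 'email: string .'

client.alter(pydgraph.Operation(schema=plain_schema))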

I have also tried:

  • Pruning all containers and removing the volumes.
  • Restarting the Docker daemon.

Neither one of these seems to make any difference.

Do you do anything else with Go? The weirdest part I found when investigating this was that everything else on my system was fine (while the Alpha node was having its issues) except building other Go applications. Building very small Go applications that usually take a second to complete would hang until I killed the Dgraph processes.

Once it got that weird, I decided to stop looking for an answer.

We are both running Linux with reasonably new kernel versions, but other than that our setups seem different. For example, I am not running Dgraph in a Docker container.

I hope you make some progress!

No, I’m not currently doing anything else with Go, but I would be even more surprised to see an effect like that with Dgraph running inside Docker. That said, I can’t understand why the problem persists after wiping the Docker containers.

Hi @freb

Thanks for reporting the issue. Did you notice how much time the alter command took to complete? I am trying to see how it differs from a normal run.
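
If it helps, a quick way to measure it on the client side, assuming the pydgraph call from the first post, is something like:

Python

import time

# client and schema as in the snippet from the first post
start = time.monotonic()
client.alter(pydgraph.Operation(schema=schema, run_in_background=True))
print(f"alter completed in {time.monotonic() - start:.1f}s")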

Also, I am marking this as accepted so that the Dgraph team will start looking into this.

:rofl: I am working with 66 types (and growing), and in the neighborhood of 1,000 predicates.

@ashishgoswami: @freb mentioned in the original issue waiting as much as 60 minutes. In my case, right now it is taking about 30 seconds, during which my entire machine will occasionally freeze. Yesterday it was taking 3 minutes, and it has taken much longer at other times in the past.

Hi @freb, @dhagrow, do you have the alpha logs for the run when the schema mutation was slow? This would streamline our efforts in figuring out the cause.
Thanks.

Hi @dhagrow, I tried reproducing this error using the configuration you mentioned: docker-compose.yml (1.6 KB). I limited the Docker resources to combinations of memory (8GB, 12GB, 32GB) and CPUs (2, 4, 6).

.....
alpha1:
    image: dgraph/dgraph:v20.07.0
    container_name: alpha1
    working_dir: /data/alpha1
    labels:
      cluster: test
    ports:
    - 8080:8080
    - 9080:9080
    volumes:
    - type: bind
      source: $GOPATH/bin
      target: /gobin
      read_only: true
    deploy:
      resources:
        limits:
          memory: 12G
          cpus: "4"
    command: /gobin/dgraph alpha -o 0 --my=alpha1:7080 --lru_mb=1024 --zero=zero1:5080
      --logtostderr -v=2 --idx=1 --whitelist=10.0.0.0/8,172.16.0.0/12,192.168.0.0/16
....

I couldn’t reproduce the slow schema mutation when running 10 iterations of a very simple script, test.py (465 Bytes), that repeatedly alters the schema and later drops it. The schema I am using is from @freb (180+ predicates, of which 60+ are indexed); I can share the schema here with @freb’s permission.
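
For reference, the script does essentially the following (a simplified sketch, not the exact attachment; the host and schema path are placeholders):

Python

import pydgraph

stub = pydgraph.DgraphClientStub('localhost:9080')
client = pydgraph.DgraphClient(stub)

schema = open('schema.txt').read()  # the shared schema: 180+ predicates, 60+ indexed

for i in range(10):
    client.alter(pydgraph.Operation(schema=schema))   # apply the full schema
    client.alter(pydgraph.Operation(drop_all=True))   # drop everything so the next run starts empty
    print('iteration', i, 'done')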

Observations:

  1. I can see CPU spikes, but they didn’t lead to freezing of the system.
  2. The time taken for the schema mutation was around 8-12s, with an outlier of 24s.
  3. Alpha crashes with a memory limit of 8GB.

The reason for the third one is that we write the index to a temporary Badger instance first and then merge entries before writing them to the p directory. This happens for each index in parallel, which leads to a crash of Alpha during the schema mutation phase only. This should also be fixed by throttling.
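
Conceptually, the throttling just means bounding how many of those per-index builds run at once instead of launching them all in parallel. A generic illustration of the idea (not Dgraph's actual code, which is in Go):

Python

from concurrent.futures import ThreadPoolExecutor

# Generic illustration: limit concurrent index builds to max_parallel
# instead of building every index at the same time.
def build_all_indexes(indexed_predicates, build_one_index, max_parallel=4):
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        list(pool.map(build_one_index, indexed_predicates))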

What can you do?
@freb @dhagrow It would be great if you can help us reproduce the slow schema mutation, so that we can fix the problem you are facing.

cc: @ashishgoswami @ibrahim


I would prefer to not have my schema made publicly available in this thread, though I don’t mind it being shared with anyone that is looking into this issue, including other users.

I’m not running dgraph in a container. Here are the steps that I just performed:

Dgraph version   : v20.07.0
Dgraph codename  : shuri
Dgraph SHA-256   : 10a0c3bfc7b8be5bdbdf2d0b95d8823b7d26304874a451b1cc6f12efc411583e
Commit SHA-1     : d65e20530
Commit timestamp : 2020-07-28 15:31:37 -0700
Branch           : HEAD
Go version       : go1.14.4
  1. cd to a new directory
  2. in one terminal, run dgraph alpha --lru_mb 2048 --zero localhost:5080
  3. in a second terminal, run dgraph zero
  4. in a third terminal, run my migration, which is performed via my application written in Go, using the dgo client.

The first time, it ran for 12 minutes before it completed. While I was trying to copy the logs out of my terminal, the terminal crashed and I only got a partial alpha log, which I discarded.

I then ran it a few more times (maybe 5), and each time it finished in a few seconds.

Finally, I ran it again and it took about 30 minutes to complete. I have the full logs from alpha and zero for this run. Just let me know how you would like me to provide them.

Note that I have 12 cores/24 threads and 64GB of RAM, though obviously the memory use is limited by the lru_mb flag. Since my last time through this, I’ve switched to the Arch Linux LTS kernel, which is 5.4.61-1-lts, instead of the most recent 5.8 kernel.

I have full alpha/zero logs from a recent run that I can share privately. These were collected on a new docker swarm deploy (same single host machine). The first schema alter took about 15 minutes. The next took less than a minute.

The freezing of my system may be tangentially related, since I do have that issue in other contexts. However, in those contexts it will only be for a second or two, whereas dgraph will freeze my system for minutes at a time.

Thanks @freb for responding.

Sure.

You can mail the logs to me at naman@dgraph.io

What do you mean by “eventually returns”? Do you mean that after cleaning up and restarting Dgraph, altering the schema on the empty db takes a long time again?

That’s great. Please share logs as well as schema on naman@dgraph.io

Thanks @dhagrow

To clarify that same question for the logs: each time I ran the migration, I started by killing alpha and zero and deleting all files from the current data directory. I then restarted them both and ran the schema alteration.


Thanks a lot, @freb, for the logs you provided. We have made some progress, and this looks like some Raft issue. To be sure and pinpoint it, we would need some more detailed logs and, if possible, Jaeger traces too.
I am listing the steps below and would appreciate your help with this issue.

  1. Setting up Jaeger:
    This is documented here, but for your convenience I am including the exact command to run.
docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 14250:14250 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.19

This configures Jaeger to collect traces. You can see the traces in http://localhost:16686/search (by default)

  2. Configure Jaeger in Dgraph:
    This is done by passing the --jaeger.collector flag, i.e., your commands would look like:
dgraph zero --jaeger.collector http://localhost:14268 --v=3
dgraph alpha --lru_mb 2048 --zero localhost:5080 --jaeger.collector http://localhost:14268 --v 3

And then perform your alter operation.

  3. Collect traces:
    Once you are done with a few runs of the schema mutation, collect the traces. I am attaching a script here, assuming the above settings (ports etc.) remain the same.
    collect.py (1.1 KB)
    The following commands get the traces from the dgraph.zero and dgraph.alpha services and dump them into ztrace.json and atrace.json respectively (a rough sketch of what the script does follows the commands).
python collect.py --s=dgraph.zero --t=3 --u=http://localhost:16686/api/traces --f ztrace.json
python collect.py --s=dgraph.alpha --t=3 --u=http://localhost:16686/api/traces --f atrace.json
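
For reference, the script essentially just queries the Jaeger HTTP query API and writes the resulting JSON to a file, roughly along these lines (a simplified sketch, not the exact attachment; treating --t as a lookback in hours is an assumption here):

Python

import argparse, json
import requests

parser = argparse.ArgumentParser()
parser.add_argument('--s', help='Jaeger service name, e.g. dgraph.alpha')
parser.add_argument('--t', type=int, default=3, help='lookback window in hours (assumed)')
parser.add_argument('--u', help='Jaeger query endpoint, e.g. http://localhost:16686/api/traces')
parser.add_argument('--f', help='output file for the trace JSON')
args = parser.parse_args()

# ask Jaeger for recent traces of the given service and dump them as-is
resp = requests.get(args.u, params={'service': args.s, 'lookback': f'{args.t}h', 'limit': 2000})
with open(args.f, 'w') as out:
    json.dump(resp.json(), out)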

Thanks again

I emailed the logs. I’m curious if you found anything interesting in them.

Thanks, @freb, for the logs. It looks like the zero became unhealthy and was not able to stream the updates to alpha. The logs are kind of weird too.

I0917 20:50:15.865071 2557844 log.go:30] Stopping garbage collection of values.
I0917 20:58:08.757863 2557844 node.go:163] RaftComm: [0x1] Heartbeats out: 0, in: 0
I0917 20:50:15.864667 2557844 log.go:30] Closing database
I0917 20:58:08.757869 2557844 log.go:30] Stopping garbage collection of values.
I0917 20:58:08.757892 2557844 groups.go:637] No healthy Zero leader found. Trying to find a Zero leader...
W0917 20:58:08.757902 2557844 groups.go:852] No membership update for 10s. Closing connection to Zero.
I0917 20:58:08.757922 2557844 groups.go:637] No healthy Zero leader found. Trying to find a Zero leader..

Note the timestamps of the various log lines. We discussed the possibility and may need to look into it further. I got pulled into another issue in the meantime, but will get back to this one soon.

@freb, we also noticed that the zero is having difficulties communicating with the alpha. Zero was not able to stream any membership updates to the alpha, and alpha keeps waiting for them. If the nodes cannot communicate, there will be delays in the operation.

I don’t see this issue locally but somehow you’re able to always reproduce it. We’ll keep investigating this and get back.

@freb would you be able to run dgraph on a different computer and confirm if you’re seeing the same issue there as well? I want to rule out the fact that this might be a machine specific bug.

Hello everyone. I apologize for my lack of participation despite reopening this issue, but I have had many things on my plate recently. As soon as I get a chance I will contribute my logs and traces, if possible.

I do have a potential origin for the problem that may be worth investigating, though it doesn’t seem that @freb shares a similar configuration. Perhaps we are experiencing similar symptoms with a different origin.

In my case, I have VMware Workstation installed on my host machine alongside Docker. I use Docker to reproduce my production environment locally, and Workstation to host my project’s client endpoints. I don’t know specifically how these technologies interact with the host’s virtualization support, but I do understand that there is some potential for conflict. What I have actually observed is that when I shut down my Workstation VMs, I no longer have any problems with Dgraph; that is, I haven’t been able to reproduce the issue while Workstation is not running.

I don’t know if there may be a virtualization conflict, a system resource conflict, or something else entirely, but I thought it might be worth investigating.


Thanks @dhagrow, I got the logs. Alpha is losing the connection with zero, and hence there is a delay in the schema update.

W0915 19:11:54.304720       1 groups.go:852] No membership update for 10s. Closing connection to Zero.

Will you be able to run this on a different machine and check whether it happens there too? We want to rule out the possibility of a system-specific issue. Does alpha lose the connection after inserting data too, i.e., apart from schema alteration?

@freb were you able to check this? Thanks.

Could you try your examples on [Slash](https://dgraph.io/slash-graphql)? It is a managed cloud deployment from Dgraph that would quickly allow us to understand whether this is specific to your choice of Dgraph setup and not a general problem in supporting your dataset (choice of schema and indexes).
