No leader at term 3; dropping index reading msg

Hello, I have no idea how to reproduce this, so I’ll just say what happened:

First I received "ResourceExhausted desc = grpc: received message larger than max (XXXXXXX vs 4194304)" out of nowhere (4194304 bytes is gRPC’s default 4 MB maximum message size).

My project loads data from an online API into Dgraph whenever I run it. I saw a GitHub issue suggesting this should be fixed in versions later than Jul 3, 2017. Since I kept getting the error no matter how many times I restarted the program (I had previously restarted it many times without issues), I decided to stop the docker-compose containers.

After restarting the containers, I first got these errors repeated over and over:

server_1 | 2018/12/13 09:17:45 Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader… Will retry…
zero_1 | 2018/12/13 09:17:45 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 09:17:48 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 09:17:51 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 09:17:54 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg

And after restarting again (multiple times), I only get the ‘no leader at term 3’ error:

zero_1 | 2018/12/13 11:43:38 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:41 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:44 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:47 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:50 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:53 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:56 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:43:59 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:44:00 oracle.go:92: Purged below ts:43063, len(o.commits):1, len(o.rowCommit):0
zero_1 | 2018/12/13 11:44:02 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:44:05 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:44:08 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
zero_1 | 2018/12/13 11:44:11 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg

These are my zero logs:

docker logs 57c99f8ed498

Dgraph version : v1.0.7
Commit SHA-1 : f1803442
Commit timestamp : 2018-08-10 13:00:21 -0700
Branch : HEAD

For Dgraph official documentation, visit https://docs.dgraph.io.
For discussions about Dgraph , visit http://discuss.dgraph.io.
To say hi to the community , visit https://dgraph.slack.com.

Licensed under Apache 2.0 + Commons Clause. Copyright 2015-2018 Dgraph Labs, Inc.

Setting up grpc listener at: 0.0.0.0:5080
Setting up http listener at: 0.0.0.0:6080
2018/12/13 09:24:07 node.go:218: Found Snapshot, Metadata: {ConfState:{Nodes: XXX_unrecognized:} Index:4131 Term:3 XXX_unrecognized:}
2018/12/13 09:24:07 node.go:229: Found hardstate: {Term:3 Vote:1 Commit:4396 XXX_unrecognized:}
2018/12/13 09:24:07 node.go:238: Group 0 found 266 entries
2018/12/13 09:24:07 raft.go:355: Restarting node for dgraphzero
2018/12/13 09:24:08 pool.go:162: Echo error from server:7080. Err: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.20.0.3:7080: connect: connection refused"
2018/12/13 09:24:08 pool.go:108: == CONNECTED ==> Setting server:7080
2018/12/13 09:24:08 raft.go:567: INFO: 1 became follower at term 3
2018/12/13 09:24:08 raft.go:315: INFO: newRaft 1 [peers: , term: 3, commit: 4396, applied: 4131, lastindex: 4396, lastterm: 3]
Running Dgraph zero…
2018/12/13 09:24:08 zero.go:333: Got connection request: id:1 addr:"server:7080"
2018/12/13 09:24:08 zero.go:442: Connected: id:1 addr:"server:7080"
2018/12/13 09:24:08 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
2018/12/13 09:24:11 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
2018/12/13 09:24:12 zero.go:507: Error while applying proposal in update stream While proposing error: context deadline exceeded
2018/12/13 09:24:14 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
2018/12/13 09:24:17 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
2018/12/13 09:24:20 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
2018/12/13 09:24:23 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg
… this no leader error repeat for as long as I run it …

These are my server logs:

docker logs 9d61b6d7e93b

Dgraph version : v1.0.7
Commit SHA-1 : f1803442
Commit timestamp : 2018-08-10 13:00:21 -0700
Branch : HEAD

For Dgraph official documentation, visit https://docs.dgraph.io.
For discussions about Dgraph , visit http://discuss.dgraph.io.
To say hi to the community , visit https://dgraph.slack.com.

Licensed under Apache 2.0 + Commons Clause. Copyright 2015-2018 Dgraph Labs, Inc.

2018/12/13 09:24:08 server.go:117: Setting Badger option: ssd
2018/12/13 09:24:08 server.go:133: Setting Badger table load option: mmap
2018/12/13 09:24:08 server.go:146: Setting Badger value log load option: none
2018/12/13 09:24:08 server.go:157: Opening postings Badger DB with options: {Dir:p ValueDir:p SyncWrites:true TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:2147483647 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:32 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741824 ValueLogMaxEntries:1000000 NumCompactors:3 managedTxns:false DoNotCompact:false maxBatchCount:0 maxBatchSize:0 ReadOnly:false Truncate:true}
2018/12/13 09:24:08 groups.go:78: Current Raft Id: 1
2018/12/13 09:24:08 gRPC server started. Listening on port 9080
2018/12/13 09:24:08 HTTP server started. Listening on port 8080
2018/12/13 09:24:08 worker.go:89: Worker listening at address: [::]:7080
2018/12/13 09:24:08 pool.go:108: == CONNECTED ==> Setting zero:5080
2018/12/13 09:24:08 groups.go:105: Connected to group zero. Assigned group: 0
2018/12/13 09:24:08 draft.go:74: Node ID: 1 with GroupID: 1
2018/12/13 09:24:08 node.go:218: Found Snapshot, Metadata: {ConfState:{Nodes: XXX_unrecognized:} Index:36857 Term:3 XXX_unrecognized:}
2018/12/13 09:24:08 node.go:229: Found hardstate: {Term:3 Vote:1 Commit:37748 XXX_unrecognized:}
2018/12/13 09:24:08 node.go:238: Group 1 found 892 entries
2018/12/13 09:24:08 draft.go:964: Restarting node for group: 1
2018/12/13 09:24:08 raft.go:567: INFO: 1 became follower at term 3
2018/12/13 09:24:08 raft.go:315: INFO: newRaft 1 [peers: , term: 3, commit: 37748, applied: 36857, lastindex: 37748, lastterm: 3]
2018/12/13 09:24:08 draft.go:411: Creating snapshot at index: 36857. ReadTs: 42498.
2018/12/13 09:24:08 groups.go:496: Got address of a Zero server: zero:5080
2018/12/13 09:24:08 mutation.go:174: Done schema update predicate:"nodeType" value_type:STRING directive:INDEX tokenizer:"exact"
2018/12/13 09:24:08 mutation.go:174: Done schema update predicate:"date" value_type:DATETIME directive:INDEX tokenizer:"day"
… more schema updates …

This is my docker-compose file:

version: "3.2"
services:
  zero:
    image: dgraph/dgraph:latest
    volumes:
      - ./_dgraphdata:/dgraph
    ports:
      - 5080:5080
      - 6080:6080
    restart: on-failure
    command: dgraph zero --my=zero:5080
  server:
    image: dgraph/dgraph:latest
    volumes:
      - ./_dgraphdata:/dgraph
    ports:
      - 8080:8080
      - 9080:9080
    restart: on-failure
    command: dgraph server --my=server:7080 --lru_mb=2048 --zero=zero:5080
  ratel:
    image: dgraph/dgraph:latest
    ports:
      - 8000:8000
    command: dgraph-ratel

Not sure if it’s relevant, but the _dgraphdata folder (total size 350 MB) is owned by root while the project is run by a regular user (but that has always been the case without issues).

Are you able to export and upgrade to v1.0.10 or v1.0.11-rc4?
I believe this has been addressed recently (within the last four months).

PS. Clean out all Docker volumes and so on after the export. Or rename the docker-compose project.

Cheers.

Oh wow… I foolishly thought I was using the latest version… I didn’t realize I had to run docker-compose pull.

curl localhost:8080/export results in "Dgraph browser is available for running separately using the dgraph-ratel binary". I guess that means it’s impossible to export it?

I actually thought I already had a backup thanks to:

volumes:
 - ./_dgraphdata:/dgraph

But after updating to the latest version, I get a new error (so I guess it is necessary to export the data using v1.0.7, assuming it’s possible to export it some other way):

Creating xxxxxx_server_1 … done
Creating xxxxxx_zero_1 … done
Creating xxxxxx_ratel_1 … done
Attaching to xxxxxx_zero_1, xxxxxx_ratel_1, xxxxxx_server_1
zero_1 | I1213 16:20:39.770441 1 init.go:80]
zero_1 |
zero_1 | Dgraph version : v1.0.10
zero_1 | Commit SHA-1 : 8b801bd7
zero_1 | Commit timestamp : 2018-11-05 17:52:33 -0800
zero_1 | Branch : HEAD
zero_1 |
zero_1 | For Dgraph official documentation, visit https://docs.dgraph.io.
zero_1 | For discussions about Dgraph , visit http://discuss.dgraph.io.
zero_1 | To say hi to the community , visit https://dgraph.slack.com.
zero_1 |
zero_1 | Licensed under Apache 2.0. Copyright 2015-2018 Dgraph Labs, Inc.
zero_1 |
zero_1 |
ratel_1 | 2018/12/13 16:20:40 Listening on port 8000…
zero_1 | I1213 16:20:39.770809 1 run.go:90] Setting up grpc listener at: 0.0.0.0:5080
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
zero_1 | I1213 16:20:39.770881 1 run.go:90] Setting up http listener at: 0.0.0.0:6080
zero_1 | badger2018/12/13 16:20:39 INFO: Replaying file id: 0 at offset: 1016743
zero_1 | badger2018/12/13 16:20:39 INFO: Replay took: 32.151µs
zero_1 | I1213 16:20:39.787598 1 node.go:152] Setting raft.Config to: &{ID:1 peers: ElectionTick:100 HeartbeatTick:1 Storage:0xc0000b0f70 Applied:4131 MaxSizePerMsg:1048576 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x1d112c0}
zero_1 | I1213 16:20:39.788887 1 node.go:271] Found Snapshot.Metadata: {ConfState:{Nodes: XXX_unrecognized:} Index:4131 Term:3 XXX_unrecognized:}
zero_1 | I1213 16:20:39.789200 1 node.go:282] Found hardstate: {Term:3 Vote:1 Commit:4396 XXX_unrecognized:}
zero_1 | I1213 16:20:39.791487 1 node.go:291] Group 0 found 266 entries
zero_1 | I1213 16:20:39.791537 1 raft.go:371] Restarting node for dgraphzero
zero_1 | I1213 16:20:39.791595 1 node.go:174] Setting conf state to
zero_1 | E1213 16:20:41.823332 1 pool.go:178] Echo error from server:7080. Err: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.20.0.4:7080: connect: connection refused"
zero_1 | I1213 16:20:41.824310 1 pool.go:118] CONNECTED to server:7080
zero_1 | I1213 16:20:41.825624 1 node.go:84] 1 became follower at term 3
zero_1 | I1213 16:20:41.825874 1 node.go:84] newRaft 1 [peers: , term: 3, commit: 4396, applied: 4131, lastindex: 4396, lastterm: 3]
zero_1 | I1213 16:20:41.826087 1 run.go:229] Running Dgraph Zero…
xxxxxx_server_1 exited with code 1
server_1 | unknown command "server" for "dgraph"
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
… the same "unknown command" errors repeat as the server container restart-loops …
^CGracefully stopping… (press Ctrl+C again to force)
Stopping xxxxxx_ratel_1 …
Stopping xxxxxx_server_1 …
Stopping xxxxxx_zero_1 …
Killing xxxxxx_ratel_1 … error
Killing xxxxxx_server_1 … error
Killing xxxxxx_zero_1 … error

That warning does not come from this endpoint. Are you sure you exported correctly?

What’s inside the volume is not necessarily a "backup". It’s usually good to clean the volume for each upgrade so you do not risk upgrading across incompatible versions. So you should export, and then bulk-load the exported RDF.
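Roughly like this (a rough sketch; the actual export file names will differ and the flags can change between versions, so check dgraph bulk --help):

dgraph bulk -r <exported>.rdf.gz -s <exported>.schema.gz --zero=localhost:5080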

This happens because we changed the way the Alpha server starts. You should use "dgraph alpha" instead of "dgraph server". You must update your docker-compose accordingly, though.
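For example, your server service’s command line would become something like:

command: dgraph alpha --my=server:7080 --lru_mb=2048 --zero=zero:5080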

I’m an idiot… I had used localhost:8080/export instead of localhost:8080/admin/export… It also didn’t really look like an error message to me, but you replied before I had the chance to update the post (:

After using the correct endpoint, I got the following error:

{"errors":[{"code":"ErrorUnauthorized","message":"Request from IP: 172.20.0.1"}]}

So I updated the docker-compose file, perhaps incorrectly, to this (just pinned the image versions and added --whitelist to dgraph server):

version: "3.2"
services:
  zero:
    image: dgraph/dgraph:v1.0.7
    volumes:
      - ./_dgraphdata:/dgraph
    ports:
      - 5080:5080
      - 6080:6080
    restart: on-failure
    command: dgraph zero --my=zero:5080
  server:
    image: dgraph/dgraph:v1.0.7
    volumes:
      - ./_dgraphdata:/dgraph
    ports:
      - 8080:8080
      - 9080:9080
    restart: on-failure
    command: dgraph server --my=server:7080 --lru_mb=2048 --zero=zero:5080 --whitelist=172.20.0.1:172.20.0.1
  ratel:
    image: dgraph/dgraph:v1.0.7
    ports:
      - 8000:8000
    command: dgraph-ratel

Which results in the following error:

{"errors":[{"code":"rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.","message":"Export failed."}]}

You have to access the container and run curl from there. Or see "Get started with Dgraph".

PS. You will also need to download the RDF generated in the process.
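For example (a sketch; it assumes curl is available inside the image and that the export is written under /dgraph/export in the container):

docker exec -it xxxxxx_server_1 curl localhost:8080/admin/export
docker cp xxxxxx_server_1:/dgraph/export ./export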

Does that mean it won’t work with

command: dgraph server --my=server:7080 --lru_mb=2048 --zero=zero:5080 --whitelist=172.20.0.1:172.20.0.1

? After adding that to the docker-compose file the unauthorized error message stopped, and I instead got this error message:

{"errors":[{"code":"rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.","message":"Export failed."}]}

Edit: I’ve also tried entering the docker container (dgraph server) and running the curl localhost:8080/admin/export command, which resulted in the same "Assigning IDs is only allowed on leader" message.

I’m not sure about that range. But you got an "Export failed." response, so that part is done.

This would allow admin operations from hosts with an IP between 172.17.0.0 and 172.20.0.0, along with the server which has the IP address 192.168.1.1.
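In other words, the flag takes a comma-separated list of single IPs and start:end ranges, e.g.:

--whitelist=172.17.0.0:172.20.0.0,192.168.1.1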

I’m not sure, so I’ll just speculate…

I think you’ve lost your leader, and with that some settings may have been truncated.
Related: dgraph/dgraph/cmd/zero/assign.go at b07420786e9d3e980ca429800a00e3da592d5f1f · dgraph-io/dgraph · GitHub
This may have happened if you previously ran multiple Alphas and didn’t make the necessary adjustments when scaling down (like removing nodes) before switching to a single Alpha. Another possibility is upgrading directly to an unsupported version.

Try to recreate the state in which you first started that cluster.
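One quick way to check whether Zero currently has a leader (assuming the /state endpoint on Zero’s HTTP port, which your compose file exposes on 6080):

curl localhost:6080/state

and see whether any member in the output is marked as the leader.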

I guess the {"errors":[{"code":"rpc error: code = Unknown desc = Assigning IDs is only allowed on leader.","message":"Export failed."}]} error is related to the

zero_1 | 2018/12/14 09:31:51 zero.go:540: Error while applying proposal in update stream While proposing error: context deadline exceeded
zero_1 | 2018/12/14 09:31:53 raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg

errors, which sound like they need to be fixed through a version update.

I’m curious what steps you would recommend taking in the future to prevent or reduce the chance of data loss. I thought

volumes:
  - ./_dgraphdata:/dgraph

where the folder could then be synced to other servers was the perfect solution, but it seems I misunderstood/overestimated the value of this folder. How would you do it (assuming you just want to run on one server)? A cron job that runs curl admin/export every X hours? And if so, what method would you recommend for backing up the exported data afterwards?
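For example, something like this rough sketch (the script name and schedule are hypothetical, and it assumes the export ends up in ./_dgraphdata/export on the host through the bind mount):

#!/bin/sh
# hypothetical backup script, run from cron, e.g. every 6 hours:
#   0 */6 * * * /path/to/export-dgraph.sh
set -e
# trigger an export via the admin endpoint (the calling IP must be whitelisted)
curl -s localhost:8080/admin/export
# archive the export directory that the bind mount exposes on the host
tar czf "dgraph-export-$(date +%Y%m%d%H%M).tar.gz" -C ./_dgraphdata export
# then copy the tarball off-host (rsync, scp, object storage, …)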


Edit: by the way, I can confirm that the raft.go:1070: INFO: 1 no leader at term 3; dropping index reading msg error persists up through v1.0.9.

In v1.0.10 the error is replaced with Error: unknown command "server" for "dgraph". There don’t appear to be any errors from ratel, but the server shows the following errors repeated in different colors.

server_1 exited with code 1
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
zero_1 | E1214 09:25:31.431993 1 oracle.go:425] No healthy connection found to leader of group 1

In v1.0.11-rc4 both the old and new errors seem to appear:

server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
server_1 | Error: unknown command "server" for "dgraph"
server_1 | Run 'dgraph --help' for usage.
server_1 | unknown command "server" for "dgraph"
zero_1 | W1214 09:22:39.602250 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:39.602324 1 node.go:83] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 09:22:42.602629 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:42.602716 1 node.go:83] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 09:22:45.603044 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:45.603139 1 node.go:83] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 09:22:48.603306 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:48.603395 1 node.go:83] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 09:22:51.603600 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:51.603695 1 node.go:83] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 09:22:54.603858 1 node.go:550] [1] Read index context timed out
zero_1 | I1214 09:22:54.603968 1 node.go:83] 1 no leader at term 3; dropping index reading msg

It’s not about data loss, but about data settings being overwritten, in my assumption.

This has more to do with your Docker skills. In my opinion using a cron task would be a good approach, though you would need to work out a Docker-based solution. I have scripts for K8s, using the K8s API itself, but without cron.

check

Ahh, after changing server to alpha, running it first loads all the data:

Attaching to xxxxxxx_zero_1, xxxxxxx_server_1, xxxxxxx_ratel_1
zero_1 | I1214 14:52:14.524222 1 init.go:80]
zero_1 |
zero_1 | Dgraph version : v1.0.10
zero_1 | Commit SHA-1 : 8b801bd7
zero_1 | Commit timestamp : 2018-11-05 17:52:33 -0800
zero_1 | Branch : HEAD
zero_1 |
zero_1 | For Dgraph official documentation, visit https://docs.dgraph.io.
zero_1 | For discussions about Dgraph , visit http://discuss.dgraph.io.
zero_1 | To say hi to the community , visit https://dgraph.slack.com.
zero_1 |
zero_1 | Licensed under Apache 2.0. Copyright 2015-2018 Dgraph Labs, Inc.
zero_1 |
zero_1 |
zero_1 | I1214 14:52:14.524964 1 run.go:90] Setting up grpc listener at: 0.0.0.0:5080
zero_1 | I1214 14:52:14.525044 1 run.go:90] Setting up http listener at: 0.0.0.0:6080
zero_1 | badger2018/12/14 14:52:14 INFO: Replaying file id: 0 at offset: 1019703
zero_1 | badger2018/12/14 14:52:14 INFO: Replay took: 11.761µs
zero_1 | I1214 14:52:14.541635 1 node.go:152] Setting raft.Config to: &{ID:1 peers: ElectionTick:100 HeartbeatTick:1 Storage:0xc0000aaa90 Applied:4131 MaxSizePerMsg:1048576 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x1d112c0}
server_1 | I1214 14:52:15.508225 1 init.go:80]
server_1 |
server_1 | Dgraph version : v1.0.10
server_1 | Commit SHA-1 : 8b801bd7
server_1 | Commit timestamp : 2018-11-05 17:52:33 -0800
server_1 | Branch : HEAD
server_1 |
server_1 | For Dgraph official documentation, visit https://docs.dgraph.io.
server_1 | For discussions about Dgraph , visit http://discuss.dgraph.io.
server_1 | To say hi to the community , visit https://dgraph.slack.com.
server_1 |
server_1 | Licensed under Apache 2.0. Copyright 2015-2018 Dgraph Labs, Inc.
server_1 |
server_1 |
zero_1 | I1214 14:52:14.542135 1 node.go:271] Found Snapshot.Metadata: {ConfState:{Nodes: XXX_unrecognized:} Index:4131 Term:3 XXX_unrecognized:}
server_1 | I1214 14:52:15.508472 1 server.go:115] Setting Badger table load option: mmap
server_1 | I1214 14:52:15.508481 1 server.go:127] Setting Badger value log load option: mmap
ratel_1 | 2018/12/14 14:52:16 Listening on port 8000…
zero_1 | I1214 14:52:14.542212 1 node.go:282] Found hardstate: {Term:3 Vote:1 Commit:4396 XXX_unrecognized:}
server_1 | I1214 14:52:15.508485 1 server.go:155] Opening write-ahead log BadgerDB with options: {Dir:w ValueDir:w SyncWrites:true TableLoadingMode:1 ValueLogLoadingMode:2 NumVersionsToKeep:1 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:65500 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:10000 NumCompactors:3 managedTxns:false DoNotCompact:false maxBatchCount:0 maxBatchSize:0 ReadOnly:false Truncate:true}
server_1 | badger2018/12/14 14:52:15 INFO: Replaying file id: 3 at offset: 1504
server_1 | badger2018/12/14 14:52:15 INFO: Replay took: 26.334µs
server_1 | I1214 14:52:15.517213 1 server.go:115] Setting Badger table load option: mmap
server_1 | I1214 14:52:15.517246 1 server.go:127] Setting Badger value log load option: mmap
server_1 | I1214 14:52:15.517259 1 server.go:169] Opening postings BadgerDB with options: {Dir:p ValueDir:p SyncWrites:true TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:2147483647 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1024 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:1000000 NumCompactors:3 managedTxns:false DoNotCompact:false maxBatchCount:0 maxBatchSize:0 ReadOnly:false Truncate:true}
zero_1 | I1214 14:52:14.543501 1 node.go:291] Group 0 found 266 entries
zero_1 | I1214 14:52:14.543521 1 raft.go:371] Restarting node for dgraphzero
server_1 | badger2018/12/14 14:52:15 INFO: Replaying file id: 1 at offset: 98425724
server_1 | badger2018/12/14 14:52:15 INFO: Replay took: 9.699µs
server_1 | I1214 14:52:15.567809 1 run.go:338] gRPC server started. Listening on port 9080
server_1 | I1214 14:52:15.567829 1 run.go:339] HTTP server started. Listening on port 8080
server_1 | I1214 14:52:15.567805 1 groups.go:92] Current Raft Id: 1
server_1 | I1214 14:52:15.567811 1 worker.go:80] Worker listening at address: [::]:7080
zero_1 | I1214 14:52:14.543548 1 node.go:174] Setting conf state to
server_1 | I1214 14:52:15.569387 1 pool.go:118] CONNECTED to zero:5080
zero_1 | E1214 14:52:15.557917 1 pool.go:178] Echo error from server:7080. Err: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.20.0.3:7080: connect: connection refused"
zero_1 | I1214 14:52:15.558361 1 pool.go:118] CONNECTED to server:7080
server_1 | I1214 14:52:15.569895 1 groups.go:119] Connected to group zero. Assigned group: 0
zero_1 | I1214 14:52:15.558860 1 node.go:84] 1 became follower at term 3
server_1 | I1214 14:52:15.575788 1 draft.go:74] Node ID: 1 with GroupID: 1
server_1 | I1214 14:52:15.575848 1 node.go:152] Setting raft.Config to: &{ID:1 peers: ElectionTick:100 HeartbeatTick:1 Storage:0xc000249110 Applied:36857 MaxSizePerMsg:1048576 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x1d112c0}
server_1 | I1214 14:52:15.594979 1 node.go:271] Found Snapshot.Metadata: {ConfState:{Nodes: XXX_unrecognized:} Index:36857 Term:3 XXX_unrecognized:}
server_1 | I1214 14:52:15.595004 1 node.go:282] Found hardstate: {Term:3 Vote:1 Commit:37748 XXX_unrecognized:}
server_1 | I1214 14:52:15.596512 1 node.go:291] Group 1 found 892 entries
server_1 | I1214 14:52:15.596519 1 draft.go:1047] Restarting node for group: 1
server_1 | I1214 14:52:15.596531 1 node.go:174] Setting conf state to
zero_1 | I1214 14:52:15.558939 1 node.go:84] newRaft 1 [peers: , term: 3, commit: 4396, applied: 4131, lastindex: 4396, lastterm: 3]
server_1 | I1214 14:52:15.597566 1 node.go:84] 1 became follower at term 3
server_1 | I1214 14:52:15.597649 1 node.go:84] newRaft 1 [peers: , term: 3, commit: 37748, applied: 36857, lastindex: 37748, lastterm: 3]
zero_1 | I1214 14:52:15.559037 1 run.go:229] Running Dgraph Zero…
server_1 | I1214 14:52:15.767871 1 draft.go:313] Skipping snapshot at 36857, because found one at 36857
zero_1 | I1214 14:52:15.569493 1 zero.go:375] Got connection request: id:1 addr:"server:7080"
server_1 | E1214 14:52:15.767994 1 draft.go:407] Applying proposal. Error: proto: wrong wireType = 1 for field BlockSize. Proposal: "mutations:<group_id:1 start_ts:42500 edges:<entity:728 attr:\

[…a lot of data loaded here, seems my entire database…]

e_type:STRING > edges:<entity:1310 attr:"predicate" value:"nodeType" value_type:STRING > edges:<entity:1310 attr:"date" value:"2018-12-07T19:45:00Z" value_type:STRING > edges:<entity:1310 attr:"predicate" value:"date" value_type:STRING > > key:"01-12437238023092424395" ".
server_1 | I1214 14:44:25.124114 1 mutation.go:181] Done schema update predicate:"nodeType" value_type:STRING directive:INDEX tokenizer:"exact"
zero_1 | W1214 14:44:25.215747 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:44:25.215807 1 node.go:84] 1 no leader at term 3; dropping index reading msg
server_1 | I1214 14:44:26.220543 1 mutation.go:181] Done schema update predicate:"date" value_type:DATETIME directive:INDEX tokenizer:"day"

then updates all the schema before hitting the dropping index error:

server_1 | I1214 14:49:40.007281 1 mutation.go:181] Done schema update predicate:"date" value_type:DATETIME directive:INDEX tokenizer:"day"
zero_1 | W1214 14:49:42.852696 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:42.852757 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:49:45.852843 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:45.852871 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | I1214 14:49:48.420797 1 oracle.go:100] Purged below ts:43083, len(o.commits):0, len(o.rowCommit):0
zero_1 | W1214 14:49:48.853019 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:48.853059 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:49:51.853114 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:51.853148 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:49:54.853290 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:54.853340 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:49:57.853691 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:49:57.853787 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:50:00.854039 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:50:00.854141 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:50:03.854430 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:50:03.854535 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 14:50:06.854679 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 14:50:06.854765 1 node.go:84] 1 no leader at term 3; dropping index reading msg

What’s the issue exactly?

Not sure what’s causing it, but I’m getting these errors on repeat:

server_1 | W1214 15:50:58.446481 1 server.go:230] Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader. with delay: 10s. Will retry…
zero_1 | W1214 15:51:00.906721 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:00.906815 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 15:51:03.907158 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:03.907259 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 15:51:06.907399 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:06.907546 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | E1214 15:51:08.447363 1 oracle.go:479] Got error: Assigning IDs is only allowed on leader. while leasing timestamps: val:1
server_1 | W1214 15:51:08.447821 1 server.go:230] Error while retrieving timestamps: rpc error: code = Unknown desc = Assigning IDs is only allowed on leader. with delay: 10s. Will retry…
zero_1 | W1214 15:51:09.907622 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:09.907699 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 15:51:12.907861 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:12.907962 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | W1214 15:51:15.908142 1 node.go:551] [1] Read index context timed out
zero_1 | I1214 15:51:15.908258 1 node.go:84] 1 no leader at term 3; dropping index reading msg
zero_1 | E1214 15:51:18.448849 1 oracle.go:479] Got error: Assigning IDs is only allowed on leader. while leasing timestamps: val:1

These errors showed up after restarting the docker containers (which I had to do because of another error, "ResourceExhausted desc = grpc: received message larger than max (XXXXXXX vs 4194304)", that occurred while querying and loading a small amount of new data into Dgraph).