Node.go:400: WARN: A tick missed to fire. Node blocks too long!


(Nikita Zaletov) #1

What does this error mean?

Should I pay attention to it? I know there were problems with this in 1.0.7/1.0.8, but I still get it on 1.0.9 while just inserting data in batches of 1000.
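
For context, my load is roughly shaped like this (a minimal sketch using the dgo client; the Item struct, predicate, and endpoint here are illustrative, not my real code):

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"google.golang.org/grpc"
)

type Item struct {
	Name string `json:"name"`
}

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Build one batch of 1000 new nodes and commit it in a single txn.
	batch := make([]Item, 0, 1000)
	for i := 0; i < 1000; i++ {
		batch = append(batch, Item{Name: fmt.Sprintf("item-%d", i)})
	}
	pb, err := json.Marshal(batch)
	if err != nil {
		log.Fatal(err)
	}
	_, err = dg.NewTxn().Mutate(context.Background(), &api.Mutation{
		SetJson:   pb,
		CommitNow: true,
	})
	if err != nil {
		log.Fatal(err)
	}
}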


(Michel Conrado (Support Engineer)) #2

This error has been worked on since version v1.0.8. Do the following: create a new test instance (from scratch) with the latest RC version of Dgraph (v1.0.10-rc1) and try to reproduce this same error on that instance. If you see that it persists, tell us.
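
If it helps, a throwaway local instance only needs the standard two processes (ports are the defaults; adjust --lru_mb to your machine):

dgraph zero
dgraph server --lru_mb 2048 --zero localhost:5080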

More details about your settings would also be helpful to know.

Check this commit:


(Nikita Zaletov) #3

Was anything related to this error changed in 1.0.10-rc1 compared to 1.0.9?
I used a fresh install of Dgraph 1.0.9.


(Nikita Zaletov) #4

By the way, what does this error actually mean?


(Michel Conrado (Support Engineer)) #5

Not sure, but it’s related to etcd’s Raft implementation, not Dgraph directly.

And in the commit we have this comment from Manish:

– A Tick missed to fire, caused due to the repeated calling of raft Storage.FirstIndex(). This was causing Badger to create an iterator every time, which was expensive. Now we cache the first index, to avoid repeatedly looking it up.
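
That isn’t Dgraph’s actual code, but the shape of that fix is roughly this (a sketch; cachedStorage and lookup are made-up names):

package raftwal

import "sync"

// cachedStorage remembers the raft log's first index so that
// FirstIndex() does not open a Badger iterator on every call.
type cachedStorage struct {
	mu         sync.Mutex
	firstIndex uint64                 // 0 means "not cached yet"
	lookup     func() (uint64, error) // the expensive Badger scan
}

func (s *cachedStorage) FirstIndex() (uint64, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.firstIndex > 0 {
		return s.firstIndex, nil
	}
	idx, err := s.lookup()
	if err != nil {
		return 0, err
	}
	// Cache the result; it would be reset to 0 whenever log
	// entries are deleted or compacted.
	s.firstIndex = idx
	return idx, nil
}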


(Michel Conrado (Support Engineer)) #6

Check this issue:


(Arthur Wiebe) #7

I was having this issue as well on 1.0.9. Most transactions were failing as a result (I was trying to do about 40/sec).
To resolve it, I changed my code to run transactions sequentially instead of simultaneously on up to 10 goroutines, roughly as sketched below.
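
Not my exact code, but the change looked something like this (dg is a *dgo.Dgraph client; runSequentially is a made-up name):

package ingest

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
)

// Before, mutations were fanned out over a pool of up to 10 goroutines.
// Now each transaction commits before the next one starts.
func runSequentially(ctx context.Context, dg *dgo.Dgraph, muts []*api.Mutation) {
	for _, m := range muts {
		m.CommitNow = true
		if _, err := dg.NewTxn().Mutate(ctx, m); err != nil {
			log.Printf("mutation failed: %v", err)
		}
	}
}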
Hope this bug can get fixed soon, though.


(Manish R Jain) #8

Can you try with v1.0.10?


(Arthur Wiebe) #9

I actually upgraded to 1.0.11-rc1 yesterday and had the issue again, except the line number is different. It’s now:

node.go:86] A tick missed to fire. Node blocks too long!

(Manish R Jain) #10

How bad is it? Can you share your full log? (Use the -v=2 flag, and either --logtostderr, or tar-gzip the log directory where glog is writing the logs.)
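
For example, start the server with something like this (the --lru_mb and --zero values are placeholders for your own setup):

dgraph server --lru_mb 2048 --zero localhost:5080 -v=2 --logtostderr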


(Arthur Wiebe) #11

It generally ends with:

groups.go:807] No longer the leader of group 1. Exiting
groups.go:758] Error in oracle delta stream. Error: rpc error: code = Canceled desc = context canceled
E1114 20:59:02.848681       1 mutation.go:56] MutateOverNetwork Error: context canceled....

And after a few seconds transactions continue at a normal pace, until the same thing happens again a couple of minutes later.

If the load isn’t heavy, there are no issues.

I’ll see if I can capture a full log for you at -v=2 as well.