Raft.Ready took too long to process: Timer Total: 122ms. Breakdown: [{proposals 118ms} {disk 2ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true


#1

I’m running a small (one Zero, one Alpha) Dgraph instance in Docker on my MacBook Pro. I get a lot of messages like these when ingesting large amounts of data:

zero_1   | W0726 14:44:48.091589       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 122ms. Breakdown: [{proposals 118ms} {disk 2ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1   | W0726 14:44:49.914108       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 158ms. Breakdown: [{proposals 155ms} {disk 3ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1   | W0726 14:44:51.245969       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 145ms. Breakdown: [{proposals 141ms} {disk 3ms} {advance 2ms} {sync 0s}]. Num entries: 1. MustSync: true
zero_1   | W0726 14:44:51.870343       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 343ms. Breakdown: [{proposals 333ms} {disk 10ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1   | W0726 14:44:52.371070       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 500ms. Breakdown: [{proposals 497ms} {disk 3ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1   | W0726 14:44:54.440440       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 115ms. Breakdown: [{proposals 109ms} {disk 6ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true

It looks like the proposals are the issue, not the disk. Any advice?

System Specs:

  • Docker version 18.09.2, build 6247962
  • MacBook Pro (15-inch, 2017)
  • dgraph v1.0.16 from Docker Hub

Thanks!


(Manish R Jain) #2

There’s a built-in pushback mechanism: if there are already unapplied proposals pending in the Alpha, new proposals are not pushed and the Raft.Ready loop is blocked until they are applied. These warnings are informational.
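
To make the idea concrete, here is a rough sketch of what such a pushback gate looks like. This is only an illustration of the concept, not Dgraph’s actual code; the type, limit, and timings are made up:

package main

import (
	"fmt"
	"sync"
	"time"
)

// proposalGate blocks new proposals while too many earlier ones are
// still waiting to be applied to the state machine.
type proposalGate struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending int
	limit   int
}

func newProposalGate(limit int) *proposalGate {
	g := &proposalGate{limit: limit}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// Begin waits until the backlog of unapplied proposals drops below the
// limit. Time spent waiting here is what shows up in the "proposals"
// bucket of the Raft.Ready breakdown.
func (g *proposalGate) Begin() {
	g.mu.Lock()
	defer g.mu.Unlock()
	for g.pending >= g.limit {
		g.cond.Wait()
	}
	g.pending++
}

// Done marks a proposal as applied, freeing a slot for the next one.
func (g *proposalGate) Done() {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.pending--
	g.cond.Signal()
}

func main() {
	g := newProposalGate(2)
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		g.Begin() // blocks once two proposals are outstanding
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			time.Sleep(10 * time.Millisecond) // pretend to apply the proposal
			fmt.Println("applied proposal", n)
			g.Done()
		}(i)
	}
	wg.Wait()
}

In short, a large proposals number means the loop was waiting for earlier proposals to be applied before pushing new ones.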


#3

Thanks, @mrjn! What’s the advice, then? Should I decrease my batch size (i.e., use smaller transactions)? That seems to help…
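
Concretely, by smaller transactions I mean splitting the ingest into many small Mutate + Commit batches rather than one big one. A rough sketch with the dgo Go client (the endpoint, batch size, and sample triples below are placeholders):

package main

import (
	"context"
	"log"
	"strings"

	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"google.golang.org/grpc"
)

func main() {
	// Placeholder endpoint; point this at Alpha's exposed gRPC port.
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// Placeholder data; the real ingest reads far more triples from a file.
	triples := []string{
		`_:a <name> "Alice" .`,
		`_:b <name> "Bob" .`,
	}
	const batchSize = 1000 // smaller batches produce smaller proposals

	ctx := context.Background()
	for start := 0; start < len(triples); start += batchSize {
		end := start + batchSize
		if end > len(triples) {
			end = len(triples)
		}
		mu := &api.Mutation{
			SetNquads: []byte(strings.Join(triples[start:end], "\n")),
			CommitNow: true, // commit each small batch on its own
		}
		if _, err := dg.NewTxn().Mutate(ctx, mu); err != nil {
			log.Fatal(err)
		}
	}
}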


(Manish R Jain) #4

No need to change anything. Just keep a watch.


(Sailorfeng) #5

I’m running into a similar situation.

zero_1_2d1c24b1e703 | W0730 06:11:24.437509       1 raft.go:707] Raft.Ready took too long to process: Timer Total: 622ms. Breakdown: [{sync 622ms} {disk 0s} {proposals 0s} {advance 0s}]. Num entries: 1. MustSync: true
alpha_1_1_6092b86da97f | W0730 06:11:25.282047       1 draft.go:914] Raft.Ready took too long to process: Timer Total: 842ms. Breakdown: [{sync 842ms} {disk 0s} {proposals 0s} {advance 0s}] Num entries: 1. MustSync: true
alpha_1_1_6092b86da97f | W0730 06:11:26.044922       1 draft.go:914] Raft.Ready took too long to process: Timer Total: 301ms. Breakdown: [{sync 301ms} {disk 0s} {proposals 0s} {advance 0s}] Num entries: 1. MustSync: true
alpha_1_1_6092b86da97f | W0730 06:11:27.150708       1 draft.go:914] Raft.Ready took too long to process: Timer Total: 453ms. Breakdown: [{sync 452ms} {disk 0s} {proposals 0s} {advance 0s}] Num entries: 1. MustSync: true

It seems the sync phase is taking a long time. Any advice for this?

System:

  • dgraph v1.0.16 from Docker Hub
  • Debian GNU/Linux 9.2 (stretch)
  • VPS with quite low IOPS

Thanks


(Manish R Jain) #6

If it is happening only occasionally, it’s OK. There could be other disk activity causing slowdowns for sync. However, if it is happening frequently, then you should consider switching to a higher-IOPS disk, or even a local NVMe drive.
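
If you want a quick sanity check of the disk itself, timing fsync on the same volume that holds the Dgraph data directories gives a rough idea of the sync cost. A minimal sketch (the /data path is an assumption; point it at your actual data volume):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"
)

func main() {
	dir := "/data" // assumption: the directory backing Dgraph's data folders
	f, err := os.Create(filepath.Join(dir, "fsync_probe.tmp"))
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	buf := make([]byte, 4096)
	for i := 0; i < 10; i++ {
		if _, err := f.Write(buf); err != nil {
			log.Fatal(err)
		}
		start := time.Now()
		if err := f.Sync(); err != nil { // fsync; roughly what the sync bucket measures
			log.Fatal(err)
		}
		fmt.Printf("fsync %d: %v\n", i, time.Since(start))
	}
}

If those timings are regularly in the hundreds of milliseconds, the disk is likely the bottleneck.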


#7

I don’t know about you, @sailorfeng, but I’m on regular spinning-rust HDDs, so I’m not surprised the IOPS are low enough for Dgraph to log these warnings.


(system) closed #8

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.