I’m running a small Dgraph instance (one Zero, one Alpha) in Docker on my MacBook Pro. I get a lot of messages like these when ingesting large amounts of data:
zero_1 | W0726 14:44:48.091589 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 122ms. Breakdown: [{proposals 118ms} {disk 2ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:49.914108 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 158ms. Breakdown: [{proposals 155ms} {disk 3ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:51.245969 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 145ms. Breakdown: [{proposals 141ms} {disk 3ms} {advance 2ms} {sync 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:51.870343 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 343ms. Breakdown: [{proposals 333ms} {disk 10ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:52.371070 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 500ms. Breakdown: [{proposals 497ms} {disk 3ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:54.440440 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 115ms. Breakdown: [{proposals 109ms} {disk 6ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
It looks like the proposals are the issue, not the disk. Any advice?
There’s a built-in pushback mechanism: if unapplied proposals are already pending in the Alpha, new proposals are not pushed and the Raft.Ready loop blocks. These messages are informational.
If it is happening only occasionally, it’s OK. There could be other disk activity causing slowdowns for sync. However, if it is happening frequently, you should consider switching to a higher-IOPS disk, or even a local NVMe drive.
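To judge whether it’s occasional or frequent, it can help to tally the warnings instead of eyeballing them. Here’s a rough sketch in plain Python that parses the breakdown fields out of lines like the ones above and sums each component, so you can confirm whether proposals (vs. disk or sync) dominate. The log format is assumed from the pasted lines, not from any official spec, so adjust the regex if your output differs.

```python
import re
from collections import Counter

# Two sample lines copied from the warnings above; in practice you would
# feed in the output of `docker logs` for the zero/alpha container.
LOG = """\
zero_1 | W0726 14:44:48.091589 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 122ms. Breakdown: [{proposals 118ms} {disk 2ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
zero_1 | W0726 14:44:52.371070 1 raft.go:707] Raft.Ready took too long to process: Timer Total: 500ms. Breakdown: [{proposals 497ms} {disk 3ms} {sync 0s} {advance 0s}]. Num entries: 1. MustSync: true
"""

# Matches entries like {proposals 118ms} or {sync 0s}
PART = re.compile(r"\{(\w+) ([\d.]+)(ms|s)\}")

def breakdown_totals(lines):
    """Sum each breakdown component (in ms) across all warning lines."""
    totals = Counter()
    for line in lines:
        if "Raft.Ready took too long" not in line:
            continue
        for name, value, unit in PART.findall(line):
            ms = float(value) * (1000 if unit == "s" else 1)
            totals[name] += ms
    return totals

totals = breakdown_totals(LOG.splitlines())
worst = max(totals, key=totals.get)  # the dominant component
print(totals, worst)
```

Counting how many such warnings appear per minute (e.g. by bucketing on the timestamp) would similarly tell you whether this is occasional noise or a sustained pattern worth a disk upgrade.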