Recover database after "Error while creating badger KV posting store error: Unable to find log file. Please retry"

Dgraph Alpha is failing on startup with the error “Unable to find log file”.

This is with Dgraph 1.0.17. We plan to upgrade soon, but in the meantime, is there any way to retrieve the data after this happens? Every time I try to start it up, it fails with this message.

I’ve already deleted all LOCK files but it hasn’t helped. Is there a tool to fix the data?
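In case it helps to narrow things down, the posting directory can be opened directly with Badger to check whether the error comes from the data itself rather than from Dgraph. A rough sketch, assuming the Badger v1.x API that Dgraph 1.0.17 vendors (where `DefaultOptions` is a struct value, not a constructor):

```go
package main

import (
	"log"

	"github.com/dgraph-io/badger"
)

func main() {
	// Point Badger at the alpha's posting directory ("p").
	// In Badger v1.x DefaultOptions is a struct value; in v2+ it is a
	// function that takes the directory, so adjust for newer versions.
	opts := badger.DefaultOptions
	opts.Dir = "p"
	opts.ValueDir = "p"
	opts.Truncate = true // Dgraph also opens the store with Truncate enabled

	// Dgraph itself uses badger.OpenManaged here; a plain Open is enough to
	// see whether the "Unable to find log file" error comes from the value
	// log in this directory.
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatalf("open posting store: %v", err)
	}
	defer db.Close()
	log.Println("posting store opened fine")
}
```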

– EDIT –

I’m also getting this error when restoring from an AWS backup that was taken while the database was still working. This never used to happen. Any idea why?

This is the whole output of dgraph alpha, if that helps:

 I0511 10:00:14.684280       1 init.go:88] 
 
 Dgraph version   : v1.0.17
 Commit SHA-1     : 42deb299
 Commit timestamp : 2019-08-30 12:46:31 -0700
 Branch           : HEAD
 Go version       : go1.12.7
 
 For Dgraph official documentation, visit https://docs.dgraph.io.
 For discussions about Dgraph     , visit http://discuss.dgraph.io.
 To say hi to the community       , visit https://dgraph.slack.com.
 
 Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
 Copyright 2015-2018 Dgraph Labs, Inc.
 
 
 I0511 10:00:14.684539       1 run.go:461] x.Config: {DebugMode:false PortOffset:0 QueryEdgeLimit:1000000 NormalizeNodeLimit:10000}
 I0511 10:00:14.684556       1 run.go:462] worker.Config: {ExportPath:/backup NumPendingProposals:256 Tracing:1 MyAddr:dgraph-master:7080 ZeroAddr:zero-master:5080 RaftId:0 ExpandEdge:true WhiteListedIPRanges:[{Lower:172.18.0.1 Upper:172.19.0.10}] MaxRetries:-1 StrictMutations:false SnapshotAfter:10000 AbortOlderThan:5m0s}
 I0511 10:00:14.684586       1 run.go:463] edgraph.Config: {PostingDir:p BadgerTables:mmap BadgerVlog:mmap WALDir:w MutationsMode:0 AuthToken: AllottedMemory:2048}
 I0511 10:00:14.684612       1 server.go:115] Setting Badger table load option: mmap
 I0511 10:00:14.684618       1 server.go:127] Setting Badger value log load option: mmap
 I0511 10:00:14.684623       1 server.go:155] Opening write-ahead log BadgerDB with options: {Dir:w ValueDir:w SyncWrites:false TableLoadingMode:1 ValueLogLoadingMode:2 NumVersionsToKeep:1 ReadOnly:false Truncate:true Logger:0x1f8b490 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:65500 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:10000 NumCompactors:2 CompactL0OnClose:true LogRotatesToFlush:2 managedTxns:false maxBatchCount:0 maxBatchSize:0}
 I0511 10:00:14.983352       1 node.go:88] All 4 tables opened in 288ms
 I0511 10:00:14.985007       1 node.go:88] Replaying file id: 14593 at offset: 1454725
 I0511 10:00:14.985060       1 node.go:88] Replay took: 42.96µs
 I0511 10:00:14.985076       1 node.go:88] Replaying file id: 14594 at offset: 0
 I0511 10:00:14.992031       1 node.go:88] Replay took: 6.948376ms
 I0511 10:00:14.992158       1 server.go:115] Setting Badger table load option: mmap
 I0511 10:00:14.992169       1 server.go:127] Setting Badger value log load option: mmap
 I0511 10:00:14.992175       1 server.go:169] Opening postings BadgerDB with options: {Dir:p ValueDir:p SyncWrites:false TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:2147483647 ReadOnly:false Truncate:true Logger:0x1f8b490 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1024 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:1000000 NumCompactors:2 CompactL0OnClose:true LogRotatesToFlush:2 managedTxns:false maxBatchCount:0 maxBatchSize:0}
 I0511 10:00:23.839732       1 node.go:88] 1 tables out of 32 opened in 8.845s
 I0511 10:00:24.254582       1 node.go:88] 2 tables out of 32 opened in 9.26s
 I0511 10:00:31.412106       1 node.go:88] 4 tables out of 32 opened in 16.417s
 I0511 10:00:35.616013       1 node.go:88] 6 tables out of 32 opened in 20.621s
 I0511 10:00:39.150667       1 node.go:88] 7 tables out of 32 opened in 24.156s
 I0511 10:00:46.223222       1 node.go:88] 9 tables out of 32 opened in 31.228s
 I0511 10:00:49.623099       1 node.go:88] 10 tables out of 32 opened in 34.628s
 I0511 10:00:53.985441       1 node.go:88] 12 tables out of 32 opened in 38.991s
 I0511 10:00:54.005778       1 node.go:88] 13 tables out of 32 opened in 39.011s
 I0511 10:00:57.605740       1 node.go:88] 14 tables out of 32 opened in 42.611s
 I0511 10:01:00.179329       1 node.go:88] 15 tables out of 32 opened in 45.184s
 I0511 10:01:05.686521       1 node.go:88] 17 tables out of 32 opened in 50.692s
 I0511 10:01:10.829332       1 node.go:88] 18 tables out of 32 opened in 55.834s
 I0511 10:01:12.096028       1 node.go:88] 19 tables out of 32 opened in 57.101s
 I0511 10:01:16.687006       1 node.go:88] 21 tables out of 32 opened in 1m1.692s
 I0511 10:01:18.556060       1 node.go:88] 22 tables out of 32 opened in 1m3.561s
 I0511 10:01:21.674435       1 node.go:88] 23 tables out of 32 opened in 1m6.68s
 I0511 10:01:29.281771       1 node.go:88] 25 tables out of 32 opened in 1m14.287s
 I0511 10:01:30.085827       1 node.go:88] 26 tables out of 32 opened in 1m15.091s
 I0511 10:01:37.918882       1 node.go:88] 28 tables out of 32 opened in 1m22.924s
 I0511 10:01:49.721809       1 node.go:88] All 32 tables opened in 1m34.727s
 I0511 10:01:49.724439       1 node.go:88] Replaying file id: 136 at offset: 1070928676
 I0511 10:01:50.277628       1 node.go:88] Replay took: 553.171076ms
 I0511 10:01:50.277685       1 node.go:88] Replaying file id: 137 at offset: 0
 I0511 10:04:00.608466       1 node.go:88] Replay took: 2m10.330770677s
 2020/05/11 10:04:00 Error while creating badger KV posting store error: Unable to find log file. Please retry
 github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.init.ializers
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/errors.go:66
 runtime.main
 	/usr/local/go/src/runtime/proc.go:188
 runtime.goexit
 	/usr/local/go/src/runtime/asm_amd64.s:1337
 failed to read value pointer from vlog file: {Fid:135 Len:1521 Offset:357387235}
 github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.(*valueLog).populateDiscardStats
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/value.go:1441
 github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.(*valueLog).open
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/value.go:855
 github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.Open
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/db.go:317
 github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.OpenManaged
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/managed_db.go:26
 github.com/dgraph-io/dgraph/edgraph.(*ServerState).initStorage
 	/tmp/go/src/github.com/dgraph-io/dgraph/edgraph/server.go:170
 github.com/dgraph-io/dgraph/edgraph.InitServerState
 	/tmp/go/src/github.com/dgraph-io/dgraph/edgraph/server.go:77
 github.com/dgraph-io/dgraph/dgraph/cmd/alpha.run
 	/tmp/go/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:465
 github.com/dgraph-io/dgraph/dgraph/cmd/alpha.init.1.func1
 	/tmp/go/src/github.com/dgraph-io/dgraph/dgraph/cmd/alpha/run.go:81
 github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).execute
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:702
 github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).ExecuteC
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:783
 github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra.(*Command).Execute
 	/tmp/go/src/github.com/dgraph-io/dgraph/vendor/github.com/spf13/cobra/command.go:736
 github.com/dgraph-io/dgraph/dgraph/cmd.Execute
 	/tmp/go/src/github.com/dgraph-io/dgraph/dgraph/cmd/root.go:60
 main.main
 	/tmp/go/src/github.com/dgraph-io/dgraph/dgraph/main.go:33
 runtime.main
 	/usr/local/go/src/runtime/proc.go:200
 runtime.goexit
 	/usr/local/go/src/runtime/asm_amd64.s:1337

Hey @Awoogamuffin, I see you’re using Dgraph v1.0.17, which is an old version. The issue you’re seeing is an old bug that was fixed in Badger and Dgraph via https://github.com/dgraph-io/badger/pull/929 (issue https://github.com/dgraph-io/badger/issues/1023).

Yes, the backup would contain the moved key (which is what produces the retry error).

This can be fixed via a backup and restore in Badger, but that won’t be necessary if you use the latest version of Dgraph.
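For anyone who does want the Badger-level workaround before upgrading, the idea is to open the old posting directory with a Badger build that already contains the fix, stream a full backup out of it, and load that backup into a fresh directory. A rough sketch (the Backup/Load signatures differ a little between Badger versions, so match whatever version you build against):

```go
package main

import (
	"log"
	"os"

	"github.com/dgraph-io/badger"
)

// openDir is a small helper (not part of Badger) that opens a DB rooted at
// dir, using the v1.x-style DefaultOptions struct.
func openDir(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions
	opts.Dir = dir
	opts.ValueDir = dir
	return badger.Open(opts)
}

func main() {
	// Old, problematic posting directory.
	src, err := openDir("p")
	if err != nil {
		log.Fatalf("open source: %v", err)
	}
	defer src.Close()

	f, err := os.Create("p.backup")
	if err != nil {
		log.Fatalf("create backup file: %v", err)
	}
	defer f.Close()

	// Full backup: since=0 means "everything".
	if _, err := src.Backup(f, 0); err != nil {
		log.Fatalf("backup: %v", err)
	}

	// Fresh directory to restore into; point the alpha's -p flag here afterwards.
	dst, err := openDir("p-restored")
	if err != nil {
		log.Fatalf("open destination: %v", err)
	}
	defer dst.Close()

	r, err := os.Open("p.backup")
	if err != nil {
		log.Fatalf("open backup file: %v", err)
	}
	defer r.Close()

	// Newer Badger versions take an extra maxPendingWrites argument here,
	// e.g. dst.Load(r, 16); adjust to the API you have.
	if err := dst.Load(r); err != nil {
		log.Fatalf("restore: %v", err)
	}
	log.Println("restored into p-restored")
}
```

The badger command-line tool also has backup and restore subcommands that do essentially the same thing, if you’d rather not write code.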

Can you please update your dgraph binary to the latest version and try again? Please let me know how it goes.

Please make a copy of the existing data before updating the binary. :slight_smile:

My understanding is that migrating from a pre-1.1.0 version to 1.1.0 or later requires some refactoring. Is that not the case?

Yes, the schema has to be updated if you want to use v1.1.0 onwards; see https://dgraph.io/docs/howto/#migrate-to-dgraph-v1-1.
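To give a flavour of what that migration looks like, here’s a rough sketch using the dgo v2 client (the dgo module path and the alpha address below are assumptions for illustration; adjust them for your setup). It shows one of the v1.1 schema changes: `uid` now means a single uid, so a predicate that holds many uids has to be declared as a `[uid]` list:

```go
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo/v2"            // module path differs between dgo
	"github.com/dgraph-io/dgo/v2/protos/api" // releases; v2 targets Dgraph v1.1
	"google.golang.org/grpc"
)

func main() {
	// Connect to the new v1.1 alpha (default gRPC port assumed).
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial alpha: %v", err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	// In v1.0 this predicate could be written as `friend: uid @count .` and
	// still hold many uids; in v1.1 the list has to be explicit.
	schema := `
		name:   string @index(term) .
		friend: [uid] @count .
	`
	if err := dg.Alter(context.Background(), &api.Operation{Schema: schema}); err != nil {
		log.Fatalf("alter schema: %v", err)
	}
	log.Println("schema updated for v1.1")
}
```

After the updated schema is in place, you would export from the old cluster and re-import the data into the new one, as described in the guide.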

But we have also released Dgraph v1.0.18 (https://github.com/dgraph-io/dgraph/releases/tag/v1.0.18), which has the fix you need. Take a look at the release notes.


That’s fantastic! Thank you Ibrahim.

Though I’m looking forward to updating the system to the latest version of Dgraph (it has so many features I would have killed for a year ago!), it’s very reassuring to know that a pre-1.1.0 version has the stability we need. That particular error has been a thorn in my side!

I hadn’t realised that 1.0.18 had been released (it came out after 1.1.0, right?).

Thanks again

Michael

