Report a Dgraph Bug
Dgraph is deployed on Kubernetes. I wanted to verify that the data on an alpha node remains intact after another machine restarts. Since the alpha node stores its data in a PVC, I manually deleted that PVC, but the corresponding pod has failed to restart ever since the deletion. Please take a look, thank you!
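Roughly how I observed the failure (a sketch only; the pod name and namespace are taken from the log below, the exact invocations may have differed slightly):

# The recreated alpha pod keeps crash-looping after the PVC was deleted
$ kubectl get pods -n crm-test

# Startup log of the failing pod (pasted below)
$ kubectl logs dgraph-alpha-0 -n crm-test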
++ hostname -f
+ dgraph alpha --my=dgraph-alpha-0.dgraph-alpha.crm-test.svc.cluster.local:7080 --zero dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080,dgraph-zero-1.dgraph-zero.crm-test.svc.cluster.local:5080,dgraph-zero-2.dgraph-zero.crm-test.svc.cluster.local:5080
[Decoder]: Using assembly version of decoder
Page Size: 4096
[Sentry] 2021/03/16 14:45:26 Integration installed: ContextifyFrames
[Sentry] 2021/03/16 14:45:26 Integration installed: Environment
[Sentry] 2021/03/16 14:45:26 Integration installed: Modules
[Sentry] 2021/03/16 14:45:26 Integration installed: IgnoreErrors
[Decoder]: Using assembly version of decoder
Page Size: 4096
[Sentry] 2021/03/16 14:45:26 Integration installed: ContextifyFrames
[Sentry] 2021/03/16 14:45:26 Integration installed: Environment
[Sentry] 2021/03/16 14:45:26 Integration installed: Modules
[Sentry] 2021/03/16 14:45:26 Integration installed: IgnoreErrors
I0316 14:45:26.815633 18 sentry_integration.go:48] This instance of Dgraph will send anonymous reports of panics back to Dgraph Labs via Sentry. No confidential information is sent. These reports help improve Dgraph. To opt-out, restart your instance with the --enable_sentry=false flag. For more info, see https://dgraph.io/docs/howto/#data-handling.
I0316 14:45:26.816333 18 util_ee.go:126] KeyReader instantiated of type <nil>
I0316 14:45:27.021901 18 init.go:107]
Dgraph version : v20.11.0
Dgraph codename : tchalla
Dgraph SHA-256 : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1 : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch : HEAD
Go version : go1.15.5
jemalloc enabled : true
For Dgraph official documentation, visit https://dgraph.io/docs/.
For discussions about Dgraph , visit http://discuss.dgraph.io.
Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2020 Dgraph Labs, Inc.
I0316 14:45:27.021924 18 run.go:696] x.Config: {PortOffset:0 QueryEdgeLimit:1000000 NormalizeNodeLimit:10000 MutationsNQuadLimit:1000000 PollInterval:1s GraphqlExtension:true GraphqlDebug:false GraphqlLambdaUrl:}
I0316 14:45:27.021966 18 run.go:697] x.WorkerConfig: {TmpDir:t ExportPath:export NumPendingProposals:256 Tracing:0.01 MyAddr:dgraph-alpha-0.dgraph-alpha.crm-test.svc.cluster.local:7080 ZeroAddr:[dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080 dgraph-zero-1.dgraph-zero.crm-test.svc.cluster.local:5080 dgraph-zero-2.dgraph-zero.crm-test.svc.cluster.local:5080] TLSClientConfig:<nil> TLSServerConfig:<nil> RaftId:0 WhiteListedIPRanges:[] MaxRetries:-1 StrictMutations:false AclEnabled:false AbortOlderThan:5m0s SnapshotAfter:10000 ProposedGroupId:0 StartTime:2021-03-16 14:45:26.398492004 +0000 UTC m=+0.019960014 LudicrousMode:false LudicrousConcurrency:2000 EncryptionKey:**** LogRequest:0 HardSync:false}
I0316 14:45:27.022019 18 run.go:698] worker.Config: {PostingDir:p PostingDirCompression:1 PostingDirCompressionLevel:0 WALDir:w MutationsMode:0 AuthToken: PBlockCacheSize:697932185 PIndexCacheSize:375809638 WalCache:0 HmacSecret:**** AccessJwtTtl:0s RefreshJwtTtl:0s CachePercentage:0,65,35,0 CacheMb:0}
I0316 14:45:27.023020 18 log.go:295] Found file: 12 First Index: 0
I0316 14:45:27.024088 18 storage.go:132] Init Raft Storage with snap: 0, first: 1, last: 0
I0316 14:45:27.024113 18 server_state.go:76] Setting Posting Dir Compression Level: 0
I0316 14:45:27.024126 18 server_state.go:120] Opening postings BadgerDB with options: {Dir:p ValueDir:p SyncWrites:false NumVersionsToKeep:2147483647 ReadOnly:false Logger:0x2dffcb8 Compression:1 InMemory:false MemTableSize:67108864 BaseTableSize:2097152 BaseLevelSize:10485760 LevelSizeMultiplier:10 TableSizeMultiplier:2 MaxLevels:7 ValueThreshold:1024 NumMemtables:5 BlockSize:4096 BloomFalsePositive:0.01 BlockCacheSize:697932185 IndexCacheSize:375809638 NumLevelZeroTables:5 NumLevelZeroTablesStall:15 ValueLogFileSize:1073741823 ValueLogMaxEntries:1000000 NumCompactors:4 CompactL0OnClose:false ZSTDCompressionLevel:0 VerifyValueChecksum:false EncryptionKey:[] EncryptionKeyRotationDuration:240h0m0s BypassLockGuard:false ChecksumVerificationMode:0 DetectConflicts:false managedTxns:false maxBatchCount:0 maxBatchSize:0}
I0316 14:45:27.030413 18 log.go:34] All 0 tables opened in 0s
I0316 14:45:27.030569 18 log.go:34] Discard stats nextEmptySlot: 0
I0316 14:45:27.030593 18 log.go:34] Set nextTxnTs to 0
I0316 14:45:27.030765 18 log.go:34] Deleting empty file: p/000011.vlog
E0316 14:45:27.031424 18 groups.go:1143] Error during SubscribeForUpdates for prefix "\x00\x00\vdgraph.cors\x00": Unable to find any servers for group: 1. closer err: <nil>
I0316 14:45:27.031473 18 groups.go:99] Current Raft Id: 0x1
I0316 14:45:27.031654 18 worker.go:104] Worker listening at address: [::]:7080
I0316 14:45:27.032811 18 run.go:519] Bringing up GraphQL HTTP API at 0.0.0.0:8080/graphql
I0316 14:45:27.032827 18 run.go:520] Bringing up GraphQL HTTP admin API at 0.0.0.0:8080/admin
E0316 14:45:27.032801 18 groups.go:1143] Error during SubscribeForUpdates for prefix "\x00\x00\x15dgraph.graphql.schema\x00": Unable to find any servers for group: 1. closer err: <nil>
I0316 14:45:27.032851 18 run.go:552] gRPC server started. Listening on port 9080
I0316 14:45:27.032863 18 run.go:553] HTTP server started. Listening on port 8080
I0316 14:45:27.131924 18 pool.go:162] CONNECTING to dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080
I0316 14:45:27.136471 18 groups.go:127] Connected to group zero. Assigned group: 0
I0316 14:45:27.136485 18 groups.go:129] Raft Id after connection to Zero: 0x1
I0316 14:45:27.136532 18 pool.go:162] CONNECTING to dgraph-alpha-1.dgraph-alpha.crm-test.svc.cluster.local:7080
I0316 14:45:27.136553 18 pool.go:162] CONNECTING to dgraph-alpha-2.dgraph-alpha.crm-test.svc.cluster.local:7080
I0316 14:45:27.136606 18 pool.go:162] CONNECTING to dgraph-zero-1.dgraph-zero.crm-test.svc.cluster.local:5080
I0316 14:45:27.136632 18 pool.go:162] CONNECTING to dgraph-zero-2.dgraph-zero.crm-test.svc.cluster.local:5080
I0316 14:45:27.136649 18 draft.go:230] Node ID: 0x1 with GroupID: 1
I0316 14:45:27.136708 18 node.go:152] Setting raft.Config to: &{ID:1 peers:[] learners:[] ElectionTick:20 HeartbeatTick:1 Storage:0xc00003e500 Applied:0 MaxSizePerMsg:262144 MaxCommittedSizePerReady:67108864 MaxUncommittedEntriesSize:0 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x2dffcb8 DisableProposalForwarding:false}
I0316 14:45:27.136778 18 node.go:326] Group 1 found 0 entries
I0316 14:45:27.136787 18 draft.go:1650] Calling IsPeer
I0316 14:45:27.138466 18 draft.go:1655] Done with IsPeer call
I0316 14:45:27.138497 18 draft.go:1689] Restarting node for group: 1
I0316 14:45:27.138544 18 log.go:34] 1 became follower at term 0
I0316 14:45:27.138556 18 log.go:34] newRaft 1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
I0316 14:45:27.138577 18 draft.go:180] Operation started with id: opRollup
I0316 14:45:27.138628 18 groups.go:807] Got address of a Zero leader: dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080
I0316 14:45:27.138634 18 draft.go:1084] Found Raft progress: 0
I0316 14:45:27.138845 18 groups.go:821] Starting a new membership stream receive from dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080.
I0316 14:45:27.140563 18 groups.go:838] Received first state update from Zero: counter:75876 groups:<key:1 value:<members:<key:1 value:<id:1 group_id:1 addr:"dgraph-alpha-0.dgraph-alpha.crm-test.svc.cluster.local:7080" last_update:1615898699 > > members:<key:2 value:<id:2 group_id:1 addr:"dgraph-alpha-1.dgraph-alpha.crm-test.svc.cluster.local:7080" last_update:1615898476 > > members:<key:3 value:<id:3 group_id:1 addr:"dgraph-alpha-2.dgraph-alpha.crm-test.svc.cluster.local:7080" leader:true last_update:1615898821 > > tablets:<key:"account_relation" value:<group_id:1 predicate:"account_relation" > > tablets:<key:"create_time" value:<group_id:1 predicate:"create_time" on_disk_bytes:209158072 uncompressed_bytes:909846316 > > tablets:<key:"dgraph.cors" value:<group_id:1 predicate:"dgraph.cors" on_disk_bytes:199 uncompressed_bytes:75 > > tablets:<key:"dgraph.drop.op" value:<group_id:1 predicate:"dgraph.drop.op" > > tablets:<key:"dgraph.graphql.p_query" value:<group_id:1 predicate:"dgraph.graphql.p_query" > > tablets:<key:"dgraph.graphql.p_sha256hash" value:<group_id:1 predicate:"dgraph.graphql.p_sha256hash" > > tablets:<key:"dgraph.graphql.schema" value:<group_id:1 predicate:"dgraph.graphql.schema" > > tablets:<key:"dgraph.graphql.schema_created_at" value:<group_id:1 predicate:"dgraph.graphql.schema_created_at" > > tablets:<key:"dgraph.graphql.schema_history" value:<group_id:1 predicate:"dgraph.graphql.schema_history" > > tablets:<key:"dgraph.graphql.xid" value:<group_id:1 predicate:"dgraph.graphql.xid" > > tablets:<key:"dgraph.type" value:<group_id:1 predicate:"dgraph.type" on_disk_bytes:181031655 uncompressed_bytes:652629421 > > tablets:<key:"identity_id" value:<group_id:1 predicate:"identity_id" on_disk_bytes:668112128 uncompressed_bytes:1123320948 > > tablets:<key:"model_type" value:<group_id:1 predicate:"model_type" > > tablets:<key:"namespace" value:<group_id:1 predicate:"namespace" on_disk_bytes:118698642 uncompressed_bytes:441263005 > > tablets:<key:"primary" value:<group_id:1 predicate:"primary" on_disk_bytes:51624206 uncompressed_bytes:189042002 > > tablets:<key:"relation" value:<group_id:1 predicate:"relation" on_disk_bytes:141790681 uncompressed_bytes:303153262 > > tablets:<key:"source" value:<group_id:1 predicate:"source" on_disk_bytes:150636664 uncompressed_bytes:321591343 > > tablets:<key:"source_id" value:<group_id:1 predicate:"source_id" on_disk_bytes:326471204 uncompressed_bytes:564324005 > > tablets:<key:"source_type" value:<group_id:1 predicate:"source_type" on_disk_bytes:41291249 uncompressed_bytes:165319164 > > tablets:<key:"tenant_id" value:<group_id:1 predicate:"tenant_id" on_disk_bytes:478459823 uncompressed_bytes:870357015 > > tablets:<key:"type" value:<group_id:1 predicate:"type" on_disk_bytes:110518468 uncompressed_bytes:384008394 > > snapshot_ts:278852 checksum:12249075648478441628 > > zeros:<key:1 value:<id:1 addr:"dgraph-zero-0.dgraph-zero.crm-test.svc.cluster.local:5080" leader:true > > zeros:<key:2 value:<id:2 addr:"dgraph-zero-1.dgraph-zero.crm-test.svc.cluster.local:5080" > > zeros:<key:3 value:<id:3 addr:"dgraph-zero-2.dgraph-zero.crm-test.svc.cluster.local:5080" > > maxLeaseId:15190000 maxTxnTs:290000 maxRaftId:3 cid:"fec6a030-0c07-4fe9-a775-219482e41177" license:<maxNodes:18446744073709551615 expiryTs:1617290026 enabled:true >
I0316 14:45:29.533598 18 log.go:34] 1 [term: 0] received a MsgHeartbeat message with higher term from 3 [term: 18]
I0316 14:45:29.533633 18 log.go:34] 1 became follower at term 18
2021/03/16 14:45:29 tocommit(266103) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
panic: tocommit(266103) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
goroutine 248 [running]:
log.Panicf(0x1dd447a, 0x5d, 0xc00043d860, 0x2, 0x2)
/usr/local/go/src/log/log.go:358 +0xc5
github.com/dgraph-io/dgraph/x.(*ToGlog).Panicf(0x2dffcb8, 0x1dd447a, 0x5d, 0xc00043d860, 0x2, 0x2)
/ext-go/1/src/github.com/dgraph-io/dgraph/x/log.go:40 +0x53
go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc00021e380, 0x40f77)
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/log.go:203 +0x135
go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc00b870000, 0x8, 0x1, 0x3, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/raft.go:1324 +0x54
go.etcd.io/etcd/raft.stepFollower(0xc00b870000, 0x8, 0x1, 0x3, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/raft.go:1269 +0x439
go.etcd.io/etcd/raft.(*raft).Step(0xc00b870000, 0x8, 0x1, 0x3, 0x12, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/raft.go:971 +0x1218
go.etcd.io/etcd/raft.(*node).run(0xc0006d6e40, 0xc00b870000)
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/node.go:357 +0x1178
created by go.etcd.io/etcd/raft.RestartNode
/go/pkg/mod/go.etcd.io/etcd@v0.0.0-20190228193606-a943ad0ee4c9/raft/node.go:246 +0x346
[Sentry] 2021/03/16 14:45:29 Sending fatal event [1487316add634967ad1d228b5b49d3cb] to o318308.ingest.sentry.io project: 1805390
[Sentry] 2021/03/16 14:45:31 Buffer flushing reached the timeout.
What version of Dgraph are you using?
Dgraph Version
$ dgraph version
Dgraph version : v20.11.0
Dgraph codename : tchalla
Dgraph SHA-256 : 8acb886b24556691d7d74929817a4ac7d9db76bb8b77de00f44650931a16b6ac
Commit SHA-1 : c4245ad55
Commit timestamp : 2020-12-16 15:55:40 +0530
Branch : HEAD
Go version : go1.15.5
jemalloc enabled : true
Have you tried reproducing the issue with the latest release?
What is the hardware spec (RAM, OS)?
Volumes:
  datadir:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   datadir-dgraph-alpha-0
    ReadOnly:    false
  default-token-w2qj9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-w2qj9
    Optional:    false
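For reference, the volume listing above is taken from the pod description; assuming the pod name dgraph-alpha-0 in namespace crm-test (as in the log), it came from something like:

$ kubectl describe pod dgraph-alpha-0 -n crm-test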
Steps to reproduce the issue (command/config used to run Dgraph).
Manually delete the PVC bound to the alpha pod (datadir-dgraph-alpha-0); the pod then fails to come back up and panics with the log shown above. A sketch of the command sequence follows.
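(A sketch only, assuming the pod also had to be deleted so that the protected claim could actually be removed and recreated by the StatefulSet; names are taken from the pod spec above.)

# Delete the PVC backing one alpha; pvc-protection keeps the deletion pending while the pod still mounts it
$ kubectl delete pvc datadir-dgraph-alpha-0 -n crm-test

# Delete the pod so the claim is released; the StatefulSet recreates the pod with a fresh, empty PVC
$ kubectl delete pod dgraph-alpha-0 -n crm-test

# The recreated pod panics on startup with "tocommit(...) is out of range [lastIndex(0)]" and crash-loops
$ kubectl logs -f dgraph-alpha-0 -n crm-test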