Alpha cannot see zero port when loading data through tutorial

While going through the dgraph tutorial, I got stuck while trying to load data here:

https://tour.dgraph.io/moredata/1/

docker exec -it dgraph_server_1 dgraph live -r 1million.rdf.gz --zero localhost:5080 -c 1

2019/01/24 20:05:45 Unable to connect to zero, Is it running at localhost:5080? error: context deadline exceeded

I have reproduced this problem on both macOS and Ubuntu. I followed the instructions for installing Dgraph on a single server using Docker Compose, from here:

https://docs.dgraph.io/get-started/#docker-compose

The dgraph_zero container is reachable from the host (and even from my desktop machine), but not from inside the dgraph_server container:

Zero HTTP port (6080):

~/dgraph$ curl localhost:6080/state

{"counter":"105","groups":{"1":{"members":{"1":{"id":"1","groupId":1,"addr":"server:7080","leader":true,"lastUpdate":"1548272290"}},"tablets":{"_predicate_":{"groupId":1,"predicate":"_predicate_","space":"1443"},"dgraph.group.acl":{"groupId":1,"predicate":"dgraph.group.acl","space":"39"},"dgraph.password":{"groupId":1,"predicate":"dgraph.password","space":"37"},"dgraph.user.group":{"groupId":1,"predicate":"dgraph.user.group","space":"43"},"dgraph.xid":{"groupId":1,"predicate":"dgraph.xid","space":"36"},"friend":{"groupId":1,"predicate":"friend","space":"846"},"name":{"groupId":1,"predicate":"name","space":"790"},"relative":{"groupId":1,"predicate":"relative","space":"59"},"testp1":{"groupId":1,"predicate":"testp1","space":"19"},"type":{"groupId":1,"predicate":"type","space":"423"}},"snapshotTs":"14201"}},"zeros":{"1":{"id":"1","addr":"zero:5080","leader":true}},"maxLeaseId":"10000","maxTxnTs":"20000","maxRaftId":"1","cid":"8089284e-0527-491e-a60b-89dd941cdf97"}~/dgraph$

~/dgraph$ docker exec -it dgraph_server_1 curl localhost:6080/state

curl: (7) Failed to connect to localhost port 6080: Connection refused

Zero gRPC port (5080):

~/dgraph$ curl localhost:5080

�~/dgraph$

~/dgraph$ docker exec -it dgraph_server_1 curl localhost:5080

curl: (7) Failed to connect to localhost port 5080: Connection refused

Note that the zero server is aware of my schema/predicates. Running Ratel and issuing queries to dgraph also works fine.

Before I dive into why the server container cannot see this port (docker network?), has someone else encountered and solved this issue?
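For anyone wanting to narrow this down, a few checks can tell you whether the two containers share a Docker network and whether the Zero service name resolves from inside the server container. This is a sketch; the container names match my setup above, but the network name Compose creates depends on your project directory, so inspect it rather than assume it:

```shell
# Show which Docker networks each container is attached to
# (they must share one for service-name resolution to work):
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' dgraph_server_1
docker inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' dgraph_zero_1

# From inside the server container, try the Zero *service name*
# instead of localhost (localhost there is the server container itself):
docker exec -it dgraph_server_1 curl zero:6080/state
```

If the last command returns the /state JSON, the network is fine and the problem is only the use of localhost inside the container.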

I have attached the Docker logs as well.


Attaching to dgraph_server_1, dgraph_zero_1, dgraph_ratel_1
server_1 | I0123 19:38:06.848413 1 init.go:88]
server_1 |
server_1 | Dgraph version : v1.0.11
server_1 | Commit SHA-1 : b2a09c5b
server_1 | Commit timestamp : 2018-12-17 09:50:56 -0800
server_1 | Branch : HEAD
server_1 | Go version : go1.11.1
server_1 |
server_1 | For Dgraph official documentation, visit https://docs.dgraph.io.
server_1 | For discussions about Dgraph , visit http://discuss.dgraph.io.
server_1 | To say hi to the community , visit https://dgraph.slack.com.
server_1 |
server_1 | Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
server_1 | Copyright 2015-2018 Dgraph Labs, Inc.
server_1 |
server_1 |
server_1 | I0123 19:38:06.850956 1 server.go:113] Setting Badger table load option: mmap
server_1 | I0123 19:38:06.850975 1 server.go:125] Setting Badger value log load option: mmap
server_1 | I0123 19:38:06.859879 1 server.go:153] Opening write-ahead log BadgerDB with options: {Dir:w ValueDir:w SyncWrites:true TableLoadingMode:1 ValueLogLoadingMode:2 NumVersionsToKeep:1 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:65500 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:10000 NumCompactors:3 managedTxns:false DoNotCompact:false maxBatchCount:0 maxBatchSize:0 ReadOnly:false Truncate:true}
server_1 | I0123 19:38:06.882450 1 server.go:113] Setting Badger table load option: mmap
server_1 | I0123 19:38:06.882470 1 server.go:125] Setting Badger value log load option: mmap
server_1 | I0123 19:38:06.882477 1 server.go:167] Opening postings BadgerDB with options: {Dir:p ValueDir:p SyncWrites:true TableLoadingMode:2 ValueLogLoadingMode:2 NumVersionsToKeep:2147483647 MaxTableSize:67108864 LevelSizeMultiplier:10 MaxLevels:7 ValueThreshold:1024 NumMemtables:5 NumLevelZeroTables:5 NumLevelZeroTablesStall:10 LevelOneSize:268435456 ValueLogFileSize:1073741823 ValueLogMaxEntries:1000000 NumCompactors:3 managedTxns:false DoNotCompact:false maxBatchCount:0 maxBatchSize:0 ReadOnly:false Truncate:true}
server_1 | I0123 19:38:06.916422 1 run.go:385] gRPC server started. Listening on port 9080
server_1 | I0123 19:38:06.916440 1 run.go:386] HTTP server started. Listening on port 8080
server_1 | I0123 19:38:06.916535 1 worker.go:79] Worker listening at address: [::]:7080
server_1 | I0123 19:38:06.916614 1 groups.go:89] Current Raft Id: 0
server_1 | I0123 19:38:07.019355 1 pool.go:140] CONNECTED to zero:5080
server_1 | I0123 19:38:10.025692 1 groups.go:112] Connected to group zero. Assigned group: 1
server_1 | I0123 19:38:10.029837 1 draft.go:72] Node ID: 1 with GroupID: 1
server_1 | I0123 19:38:10.030003 1 node.go:151] Setting raft.Config to: &{ID:1 peers: learners: ElectionTick:100 HeartbeatTick:1 Storage:0xc0003d0ff0 Applied:0 MaxSizePerMsg:1048576 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x1f94370 DisableProposalForwarding:false}
server_1 | I0123 19:38:10.030200 1 node.go:290] Group 1 found 1 entries
server_1 | I0123 19:38:10.030297 1 draft.go:1117] New Node for group: 1
server_1 | I0123 19:38:10.030388 1 node.go:83] 1 became follower at term 0
server_1 | I0123 19:38:10.030508 1 node.go:83] newRaft 1 [peers: , term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
server_1 | I0123 19:38:10.030573 1 node.go:83] 1 became follower at term 1
server_1 | I0123 19:38:10.030902 1 groups.go:695] Got address of a Zero leader: zero:5080
server_1 | I0123 19:38:10.031066 1 groups.go:708] Starting a new membership stream receive from zero:5080.
server_1 | I0123 19:38:10.032836 1 node.go:173] Setting conf state to nodes:1
server_1 | I0123 19:38:10.033150 1 node.go:83] 1 is starting a new election at term 1
server_1 | I0123 19:38:10.033323 1 node.go:83] 1 became pre-candidate at term 1
server_1 | I0123 19:38:10.033403 1 node.go:83] 1 received MsgPreVoteResp from 1 at term 1
server_1 | I0123 19:38:10.033503 1 node.go:83] 1 became candidate at term 2
server_1 | I0123 19:38:10.033593 1 node.go:83] 1 received MsgVoteResp from 1 at term 2
server_1 | I0123 19:38:10.033714 1 node.go:83] 1 became leader at term 2
server_1 | I0123 19:38:10.033839 1 node.go:83] raft.node: 1 elected leader 1 at term 2
server_1 | I0123 19:38:10.035784 1 groups.go:725] Received first state update from Zero: counter:4 groups:<key:1 value:<members:<key:1 value:<id:1 group_id:1 addr:"server:7080" > > tablets:<key:"_predicate_" value:<group_id:1 predicate:"_predicate_" > > > > zeros:<key:1 value:<id:1 addr:"zero:5080" leader:true > > maxRaftId:1
server_1 | I0123 19:38:10.036049 1 groups.go:388] Serving tablet for: _predicate_
server_1 | I0123 19:38:10.037850 1 mutation.go:158] Done schema update predicate:"_predicate_" value_type:STRING list:true
server_1 | I0123 19:38:10.041617 1 groups.go:388] Serving tablet for: dgraph.xid
server_1 | I0123 19:38:10.043340 1 index.go:33] Deleting index for dgraph.xid
server_1 | I0123 19:38:10.043482 1 index.go:38] Rebuilding index for dgraph.xid
server_1 | I0123 19:38:10.043570 1 mutation.go:158] Done schema update predicate:"dgraph.xid" value_type:STRING directive:INDEX tokenizer:"exact"
server_1 | I0123 19:38:10.047372 1 groups.go:388] Serving tablet for: dgraph.password
server_1 | I0123 19:38:10.049219 1 mutation.go:158] Done schema update predicate:"dgraph.password" value_type:PASSWORD
server_1 | I0123 19:38:10.053103 1 groups.go:388] Serving tablet for: dgraph.user.group
server_1 | I0123 19:38:10.055003 1 index.go:48] Deleting reverse index for dgraph.user.group
server_1 | I0123 19:38:10.055143 1 index.go:54] Rebuilding reverse index for dgraph.user.group
server_1 | I0123 19:38:10.055241 1 mutation.go:158] Done schema update predicate:"dgraph.user.group" value_type:UID directive:REVERSE
server_1 | I0123 19:38:10.061079 1 groups.go:388] Serving tablet for: dgraph.group.acl
server_1 | I0123 19:38:10.063507 1 mutation.go:158] Done schema update predicate:"dgraph.group.acl" value_type:STRING
server_1 | I0123 19:38:11.031274 1 groups.go:850] Leader idx=1 of group=1 is connecting to Zero for txn updates
server_1 | I0123 19:38:11.031428 1 groups.go:859] Got Zero leader: zero:5080
server_1 | I0123 19:41:18.389306 1 groups.go:388] Serving tablet for: friend
server_1 | I0123 19:43:40.033418 1 draft.go:323] Creating snapshot at index: 218. ReadTs: 118.
server_1 | I0123 19:45:02.019805 1 groups.go:388] Serving tablet for: name
server_1 | I0123 19:46:10.033380 1 draft.go:323] Creating snapshot at index: 390. ReadTs: 355.
server_1 | I0123 19:48:10.031475 1 stream.go:255] Rolling up Sent 17 keys
server_1 | I0123 19:48:10.036204 1 draft.go:836] Rollup on disk done. Rolling up 17 keys in LRU cache now...
server_1 | I0123 19:48:10.036470 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 19:48:10.036547 1 draft.go:364] List rollup at Ts 355: OK.
zero_1 | I0123 19:38:06.869576 1 init.go:88]
zero_1 |
zero_1 | Dgraph version : v1.0.11
zero_1 | Commit SHA-1 : b2a09c5b
zero_1 | Commit timestamp : 2018-12-17 09:50:56 -0800
zero_1 | Branch : HEAD
zero_1 | Go version : go1.11.1
zero_1 |
zero_1 | For Dgraph official documentation, visit https://docs.dgraph.io.
zero_1 | For discussions about Dgraph , visit http://discuss.dgraph.io.
zero_1 | To say hi to the community , visit https://dgraph.slack.com.
zero_1 |
zero_1 | Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
zero_1 | Copyright 2015-2018 Dgraph Labs, Inc.
zero_1 |
zero_1 |
zero_1 | I0123 19:38:06.886050 1 run.go:98] Setting up grpc listener at: 0.0.0.0:5080
zero_1 | I0123 19:38:06.886139 1 run.go:98] Setting up http listener at: 0.0.0.0:6080
zero_1 | I0123 19:38:06.918080 1 node.go:151] Setting raft.Config to: &{ID:1 peers: learners: ElectionTick:100 HeartbeatTick:1 Storage:0xc0003d0960 Applied:0 MaxSizePerMsg:1048576 MaxInflightMsgs:256 CheckQuorum:false PreVote:true ReadOnlyOption:0 Logger:0x1f94370 DisableProposalForwarding:false}
zero_1 | I0123 19:38:06.918435 1 node.go:290] Group 0 found 1 entries
zero_1 | I0123 19:38:06.918493 1 node.go:83] 1 became follower at term 0
zero_1 | I0123 19:38:06.918540 1 node.go:83] newRaft 1 [peers: , term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
zero_1 | I0123 19:38:06.918548 1 node.go:83] 1 became follower at term 1
zero_1 | E0123 19:38:06.920226 1 raft.go:464] While proposing CID: Not Zero leader. Aborting proposal: cid:"8082768e-d9c0-4dab-bf1e-f6995625eab8" . Retrying...
zero_1 | I0123 19:38:06.922726 1 run.go:265] Running Dgraph Zero...
zero_1 | I0123 19:38:06.924731 1 node.go:173] Setting conf state to nodes:1
zero_1 | I0123 19:38:06.924771 1 raft.go:596] Done applying conf change at 1
zero_1 | I0123 19:38:07.019740 1 zero.go:368] Got connection request: cluster_info_only:true
zero_1 | I0123 19:38:07.019975 1 node.go:83] 1 no leader at term 1; dropping index reading msg
zero_1 | I0123 19:38:09.500767 1 node.go:83] 1 is starting a new election at term 1
zero_1 | I0123 19:38:09.500988 1 node.go:83] 1 became pre-candidate at term 1
zero_1 | I0123 19:38:09.501104 1 node.go:83] 1 received MsgPreVoteResp from 1 at term 1
zero_1 | I0123 19:38:09.501234 1 node.go:83] 1 became candidate at term 2
zero_1 | I0123 19:38:09.501309 1 node.go:83] 1 received MsgVoteResp from 1 at term 2
zero_1 | I0123 19:38:09.501429 1 node.go:83] 1 became leader at term 2
zero_1 | I0123 19:38:09.501557 1 node.go:83] raft.node: 1 elected leader 1 at term 2
zero_1 | I0123 19:38:09.504094 1 raft.go:613] I've become the leader, updating leases.
zero_1 | I0123 19:38:09.504241 1 assign.go:44] Updated Lease id: 1. Txn Ts: 1
zero_1 | E0123 19:38:09.920645 1 raft.go:464] While proposing CID: Not Zero leader. Aborting proposal: cid:"4f8478a2-44b8-4330-b005-dd6bf603b3a6" . Retrying...
zero_1 | W0123 19:38:10.020204 1 node.go:550] [1] Read index context timed out
zero_1 | I0123 19:38:10.020484 1 zero.go:386] Connected: cluster_info_only:true
zero_1 | I0123 19:38:10.020966 1 zero.go:368] Got connection request: addr:"server:7080"
zero_1 | I0123 19:38:10.022930 1 pool.go:140] CONNECTED to server:7080
zero_1 | I0123 19:38:10.025346 1 zero.go:495] Connected: id:1 group_id:1 addr:"server:7080"
zero_1 | I0123 19:38:12.922807 1 raft.go:458] CID set for cluster: 8089284e-0527-491e-a60b-89dd941cdf97
zero_1 | I0123 19:43:43.033006 1 oracle.go:103] Purged below ts:118, len(o.commits):0, len(o.rowCommit):3
zero_1 | I0123 19:46:17.033033 1 oracle.go:103] Purged below ts:355, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 19:53:43.033343 1 oracle.go:103] Purged below ts:808, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 20:01:17.033155 1 oracle.go:103] Purged below ts:1015, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 20:03:43.033096 1 oracle.go:103] Purged below ts:1097, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 20:42:12.033133 1 oracle.go:103] Purged below ts:3580, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 20:43:43.033042 1 oracle.go:103] Purged below ts:3630, len(o.commits):0, len(o.rowCommit):1
zero_1 | I0123 20:46:17.033614 1 oracle.go:103] Purged below ts:3768, len(o.commits):0, len(o.rowCommit):0
zero_1 | I0123 22:02:45.033394 1 oracle.go:103] Purged below ts:13346, len(o.commits):0, len(o.rowCommit):2
zero_1 | I0123 22:06:17.033277 1 oracle.go:103] Purged below ts:13404, len(o.commits):0, len(o.rowCommit):2
zero_1 | I0123 22:08:43.033125 1 oracle.go:103] Purged below ts:13419, len(o.commits):0, len(o.rowCommit):3
zero_1 | I0124 19:01:17.033271 1 oracle.go:103] Purged below ts:13564, len(o.commits):0, len(o.rowCommit):2
zero_1 | I0124 19:03:43.033182 1 oracle.go:103] Purged below ts:13590, len(o.commits):0, len(o.rowCommit):0
zero_1 | I0124 19:06:17.033061 1 oracle.go:103] Purged below ts:13604, len(o.commits):0, len(o.rowCommit):0
zero_1 | I0124 19:16:17.032983 1 oracle.go:103] Purged below ts:14201, len(o.commits):0, len(o.rowCommit):0
server_1 | I0123 19:53:40.033995 1 draft.go:323] Creating snapshot at index: 822. ReadTs: 808.
server_1 | I0123 19:58:10.031450 1 stream.go:255] Rolling up Sent 2 keys
server_1 | I0123 19:58:10.034205 1 draft.go:836] Rollup on disk done. Rolling up 2 keys in LRU cache now...
server_1 | I0123 19:58:10.034329 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 19:58:10.034386 1 draft.go:364] List rollup at Ts 808: OK.
server_1 | I0123 20:01:10.042435 1 draft.go:323] Creating snapshot at index: 1062. ReadTs: 1015.
ratel_1 | 2019/01/23 19:38:06 Listening on port 8000...
server_1 | I0123 20:03:10.031517 1 stream.go:255] Rolling up Sent 14 keys
server_1 | I0123 20:03:10.035577 1 draft.go:836] Rollup on disk done. Rolling up 14 keys in LRU cache now...
server_1 | I0123 20:03:10.035725 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 20:03:10.036219 1 draft.go:364] List rollup at Ts 1015: OK.
server_1 | I0123 20:03:13.314657 1 http.go:406] Got alter request via HTTP from 76.14.116.59:50529
server_1 | I0123 20:03:13.315007 1 server.go:271] Received ALTER op: schema:"<friend>: uid @count @reverse ."
server_1 | I0123 20:03:13.315812 1 server.go:321] Got schema: [predicate:"friend" value_type:UID directive:REVERSE count:true ]
server_1 | I0123 20:03:13.319498 1 index.go:48] Deleting reverse index for friend
server_1 | I0123 20:03:13.319691 1 index.go:54] Rebuilding reverse index for friend
server_1 | I0123 20:03:13.323291 1 index.go:61] Deleting count index for friend
server_1 | I0123 20:03:13.325432 1 index.go:66] Rebuilding count index for friend
server_1 | I0123 20:03:13.332459 1 mutation.go:191] Done schema update predicate:"friend" value_type:UID directive:REVERSE count:true
server_1 | I0123 20:03:13.334413 1 server.go:325] ALTER op: schema:"<friend>: uid @count @reverse ." done
server_1 | I0123 20:03:40.033819 1 draft.go:323] Creating snapshot at index: 1254. ReadTs: 1097.
server_1 | I0123 20:08:10.031549 1 stream.go:255] Rolling up Sent 16 keys
server_1 | I0123 20:08:10.035729 1 draft.go:836] Rollup on disk done. Rolling up 16 keys in LRU cache now...
server_1 | I0123 20:08:10.036241 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 20:08:10.036344 1 draft.go:364] List rollup at Ts 1097: OK.
server_1 | I0123 20:42:10.036903 1 draft.go:323] Creating snapshot at index: 3605. ReadTs: 3580.
server_1 | I0123 20:42:47.465379 1 groups.go:388] Serving tablet for: relative
server_1 | I0123 20:43:10.031620 1 stream.go:255] Rolling up Sent 19 keys
server_1 | I0123 20:43:10.036190 1 draft.go:836] Rollup on disk done. Rolling up 19 keys in LRU cache now...
server_1 | I0123 20:43:10.036365 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 20:43:10.036444 1 draft.go:364] List rollup at Ts 3580: OK.
server_1 | I0123 20:43:40.033876 1 draft.go:323] Creating snapshot at index: 3707. ReadTs: 3630.
server_1 | I0123 20:46:10.033999 1 draft.go:323] Creating snapshot at index: 3873. ReadTs: 3768.
server_1 | I0123 20:48:10.031585 1 stream.go:255] Rolling up Sent 16 keys
server_1 | I0123 20:48:10.036070 1 draft.go:836] Rollup on disk done. Rolling up 16 keys in LRU cache now...
server_1 | I0123 20:48:10.036338 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 20:48:10.036478 1 draft.go:364] List rollup at Ts 3768: OK.
server_1 | I0123 22:00:57.335667 1 http.go:406] Got alter request via HTTP from 76.14.116.59:61682
server_1 | I0123 22:00:57.335893 1 server.go:271] Received ALTER op: schema:"<type>: uid @count @reverse ."
server_1 | I0123 22:00:57.336628 1 server.go:321] Got schema: [predicate:"type" value_type:UID directive:REVERSE count:true ]
server_1 | I0123 22:00:57.338845 1 groups.go:388] Serving tablet for: type
server_1 | I0123 22:00:57.340969 1 index.go:48] Deleting reverse index for type
server_1 | I0123 22:00:57.341127 1 index.go:54] Rebuilding reverse index for type
server_1 | I0123 22:00:57.341391 1 index.go:61] Deleting count index for type
server_1 | I0123 22:00:57.341522 1 index.go:66] Rebuilding count index for type
server_1 | I0123 22:00:57.341871 1 mutation.go:158] Done schema update predicate:"type" value_type:UID directive:REVERSE count:true
server_1 | I0123 22:00:57.343875 1 server.go:325] ALTER op: schema:"<type>: uid @count @reverse ." done
server_1 | I0123 22:01:42.747359 1 http.go:406] Got alter request via HTTP from 76.14.116.59:61778
server_1 | I0123 22:01:42.747627 1 server.go:271] Received ALTER op: schema:"<name>: [string] @index(term) ."
server_1 | I0123 22:01:42.748413 1 server.go:321] Got schema: [predicate:"name" value_type:STRING directive:INDEX tokenizer:"term" list:true ]
server_1 | I0123 22:01:42.755187 1 index.go:33] Deleting index for name
server_1 | I0123 22:01:42.755341 1 index.go:38] Rebuilding index for name
server_1 | I0123 22:01:42.757408 1 mutation.go:191] Done schema update predicate:"name" value_type:STRING directive:INDEX tokenizer:"term" list:true
server_1 | I0123 22:01:42.759141 1 server.go:325] ALTER op: schema:"<name>: [string] @index(term) ." done
server_1 | I0123 22:02:40.051886 1 draft.go:323] Creating snapshot at index: 13370. ReadTs: 13346.
server_1 | I0123 22:03:10.031745 1 stream.go:255] Rolling up Sent 23 keys
server_1 | I0123 22:03:10.036191 1 draft.go:836] Rollup on disk done. Rolling up 23 keys in LRU cache now...
server_1 | I0123 22:03:10.036533 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 22:03:10.036642 1 draft.go:364] List rollup at Ts 13346: OK.
server_1 | I0123 22:06:10.035277 1 draft.go:323] Creating snapshot at index: 13423. ReadTs: 13404.
server_1 | I0123 22:08:10.031793 1 stream.go:255] Rolling up Sent 24 keys
server_1 | I0123 22:08:10.035229 1 draft.go:836] Rollup on disk done. Rolling up 24 keys in LRU cache now...
server_1 | I0123 22:08:10.035407 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 22:08:10.035502 1 draft.go:364] List rollup at Ts 13404: OK.
server_1 | I0123 22:08:40.035233 1 draft.go:323] Creating snapshot at index: 13441. ReadTs: 13419.
server_1 | I0123 22:13:10.031811 1 stream.go:255] Rolling up Sent 32 keys
server_1 | I0123 22:13:10.036487 1 draft.go:836] Rollup on disk done. Rolling up 32 keys in LRU cache now...
server_1 | I0123 22:13:10.036693 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0123 22:13:10.036787 1 draft.go:364] List rollup at Ts 13419: OK.
server_1 | I0124 18:24:37.692593 1 http.go:406] Got alter request via HTTP from 76.14.116.59:51456
server_1 | I0124 18:24:37.692853 1 server.go:271] Received ALTER op: schema:"<testp1>: int ."
server_1 | I0124 18:24:37.693800 1 server.go:321] Got schema: [predicate:"testp1" value_type:INT ]
server_1 | I0124 18:24:37.695613 1 groups.go:388] Serving tablet for: testp1
server_1 | I0124 18:24:37.697634 1 mutation.go:158] Done schema update predicate:"testp1" value_type:INT
server_1 | I0124 18:24:37.699366 1 server.go:325] ALTER op: schema:"<testp1>: int ." done
server_1 | I0124 19:01:10.035618 1 draft.go:323] Creating snapshot at index: 13589. ReadTs: 13564.
server_1 | I0124 19:03:10.031608 1 stream.go:255] Rolling up Sent 24 keys
server_1 | I0124 19:03:10.036022 1 draft.go:836] Rollup on disk done. Rolling up 24 keys in LRU cache now...
server_1 | I0124 19:03:10.036266 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0124 19:03:10.036530 1 draft.go:364] List rollup at Ts 13564: OK.
server_1 | I0124 19:03:40.035114 1 draft.go:323] Creating snapshot at index: 13630. ReadTs: 13590.
server_1 | I0124 19:06:10.035432 1 draft.go:323] Creating snapshot at index: 13755. ReadTs: 13604.
server_1 | I0124 19:08:10.031697 1 stream.go:255] Rolling up Sent 34 keys
server_1 | I0124 19:08:10.036776 1 draft.go:836] Rollup on disk done. Rolling up 34 keys in LRU cache now...
server_1 | I0124 19:08:10.036987 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0124 19:08:10.037068 1 draft.go:364] List rollup at Ts 13604: OK.
server_1 | I0124 19:16:10.036520 1 draft.go:323] Creating snapshot at index: 14340. ReadTs: 14201.
server_1 | I0124 19:18:10.031655 1 stream.go:255] Rolling up Sent 25 keys
server_1 | I0124 19:18:10.039672 1 draft.go:836] Rollup on disk done. Rolling up 25 keys in LRU cache now...
server_1 | I0124 19:18:10.039887 1 draft.go:846] Rollup in LRU cache done.
server_1 | I0124 19:18:10.039973 1 draft.go:364] List rollup at Ts 14201: OK.


Hmm, maybe this is some issue with the Docker network.

Try doing this using the binaries from the releases page: Releases · dgraph-io/dgraph · GitHub

Download them and run the live or bulk loader locally, pointing at your Docker IP or localhost.
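A sketch of that suggestion, assuming the ports published by the Compose file (5080 for Zero's gRPC, 9080 for the server's gRPC) and the v1.0.x live-loader flags:

```shell
# Run the live loader from the host with a downloaded dgraph binary;
# -r is the RDF file, --zero the Zero gRPC address, -d the server's gRPC address.
dgraph live -r 1million.rdf.gz --zero localhost:5080 -d localhost:9080
```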

The Tour setup instructions do not use this Docker Compose config from the documentation; in the Tour's single-container setup, everything is accessible via localhost within the container.

In the Compose file, the Zero and the Alpha run in different containers, so you should use the Zero's host name from the Alpha container, i.e., --zero zero:5080 to refer to the zero container.
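Applied to the command from the original post, only the Zero address changes:

```shell
# Same live-load command, but addressing Zero by its Compose service name
# instead of localhost (which, inside the container, is the container itself):
docker exec -it dgraph_server_1 dgraph live -r 1million.rdf.gz --zero zero:5080 -c 1
```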


Thank you Daniel, that was it.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.