What gRPC address should I use to access the Dgraph cluster?

I set up a cluster consisting of 3 dgraphzero instances + 3 dgraph nodes.
The dgraph nodes are listening on (8080, 9080), (8082, 9082), and (8084, 9084).
I noticed that if I ingest via the web UI at localhost:8080, all 3 dgraph nodes ask whether they can serve a particular predicate, but if I ingest via the gRPC endpoint (i.e. localhost:9080), I don't see any dgraph node other than the one serving port 9080 ask to serve the new predicate.

Setup

```
dgraphzero -w wz --bindall=true --my "localhost:8888"
dgraphzero -w wz1 --peer "localhost:8888" --port 8890 --bindall=true -idx 2
dgraphzero -w wz1 --peer "localhost:8888" --port 8892 --bindall=true -idx 3

dgraph --idx 1 --peer "localhost:8888" --my "localhost:12345" --bindall=true --memory_mb=2048
dgraph --idx 2 --peer "localhost:8888" --my "localhost:12346" --port 8082 --grpc_port 9082 --workerport 12346 --bindall=true --memory_mb=2048
dgraph --idx 3 --peer "localhost:8888" --my "localhost:12347" --port 8084 --grpc_port 9084 --workerport 12347 --bindall=true --memory_mb=2048
```

Hi @jzhu077, it doesn’t matter which gRPC port you use for queries.

Hi @peter, it turns out all the data is being written to the same node even though there are multiple predicates. I thought there would be some kind of parallel ingest and data sharding. It feels like I’m running a single instance of Dgraph instead of a cluster.

There should definitely be data sharding occurring. The data for each predicate should be assigned to a different dgraph instance. Are you able to share the logs from the dgraph and dgraphzero instances?

Sorry about that, I made a mistake in the config after I reset the cluster. It’s working. Thanks for your help.

All good, I’m glad you sorted it out.

By the way, since I only have 3 dgraphzero instances, is there any point in adding more than 3 dgraph nodes? Does it help improve speed? I noticed that when I create a dgraph node it logs:

2017/10/30 13:53:35 draft.go:693: Restarting node for group: 3
2017/10/30 13:53:35 raft.go:567: INFO: 3 became follower at term 2
2017/10/30 13:53:35 raft.go:316: INFO: newRaft 3 [peers: [3], term: 2, commit: 17, applied: 17, lastindex: 17, lastterm: 2]
2017/10/30 13:53:39 raft.go:749: INFO: 3 is starting a new election at term 2
2017/10/30 13:53:39 raft.go:580: INFO: 3 became candidate at term 3
2017/10/30 13:53:39 raft.go:664: INFO: 3 received MsgVoteResp from 3 at term 3
2017/10/30 13:53:39 raft.go:621: INFO: 3 became leader at term 3
2017/10/30 13:53:39 node.go:301: INFO: raft.node: 3 elected leader 3 at term 3

Is it better to have multiple nodes serving the same group?

In general, the number of dgraphzero instances can be different from the number of dgraph instances. We recommend having at least 3 dgraphzero instances.

Having more dgraph instances should spread the load so that each instance does less work, which helps increase throughput. You can also add replication, so that two (or more) dgraph instances serve the same data.
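For example, replication is configured on the zero side rather than per dgraph instance. A hedged sketch, assuming your dgraphzero build supports a `--replicas` flag (later releases expose it as `dgraph zero --replicas`; check `dgraphzero --help` for the exact name in your version):

```shell
# Hypothetical: ask zero to place every predicate group on 3 replicas,
# so each group of predicates is served by 3 dgraph instances.
dgraphzero -w wz --bindall=true --my "localhost:8888" --replicas 3
```

With 3 replicas per group and 3 dgraph instances, all three instances would serve the same group, trading sharding for fault tolerance.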

There’s some documentation here in case you haven’t seen it: https://docs.dgraph.io/deploy/#multiple-instances

I’m still seeing a single node do all of the ingesting when I use the cluster. I thought I had fixed it, but I’m still getting single-node ingesting… It looks like the first dgraph node is taking all of the predicates. Did I miss anything in the setup?
Here are the dgraphzero logs:

2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:177: 		SENDING: MsgApp 1-->2
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgReadIndex 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:177: 		SENDING: MsgReadIndexResp 1-->2
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgAppResp 2-->1
2017/10/30 16:19:38 node.go:554: RECEIVED: MsgReadIndex 2-->1
2017/10/30 16:19:38 node.go:177: 		SENDING: MsgReadIndexResp 1-->2
2017/10/30 16:20:05 node.go:554: RECEIVED: MsgReadIndex 3-->1
2017/10/30 16:20:05 node.go:177: 		SENDING: MsgReadIndexResp 1-->3
2017/10/30 16:20:08 node.go:554: RECEIVED: MsgReadIndex 2-->1
2017/10/30 16:20:08 node.go:177: 		SENDING: MsgReadIndexResp 1-->2
2017/10/30 16:21:05 node.go:554: RECEIVED: MsgReadIndex 3-->1
2017/10/30 16:21:05 node.go:177: 		SENDING: MsgReadIndexResp 1-->3
2017/10/30 16:21:08 node.go:554: RECEIVED: MsgReadIndex 2-->1
2017/10/30 16:21:08 node.go:177: 		SENDING: MsgReadIndexResp 1-->2
2017/10/30 16:22:05 node.go:554: RECEIVED: MsgReadIndex 3-->1
2017/10/30 16:22:05 node.go:177: 		SENDING: MsgReadIndexResp 1-->3
2017/10/30 16:22:08 node.go:554: RECEIVED: MsgReadIndex 2-->1
2017/10/30 16:22:08 node.go:177: 		SENDING: MsgReadIndexResp 1-->2
2017/10/30 16:23:05 node.go:554: RECEIVED: MsgReadIndex 3-->1
2017/10/30 16:23:05 node.go:177: 		SENDING: MsgReadIndexResp 1-->3
2017/10/30 16:24:07 raft.go:253: Applied proposal: {Id:3895753382 Member:<nil> Tablet:group_id:1 predicate:"___commonkind" space:64762586  MaxLeaseId:0}
2017/10/30 16:24:07 node.go:177: 		SENDING: MsgAppResp 2-->1
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2289509746 Member:<nil> Tablet:group_id:1 predicate:"string_prop" space:57000000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:4155995118 Member:<nil> Tablet:group_id:1 predicate:"___orderable" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:4271119824 Member:<nil> Tablet:group_id:1 predicate:"___child" space:43249800  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:360075037 Member:<nil> Tablet:group_id:1 predicate:"int_prop" space:54500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:113360965 Member:<nil> Tablet:group_id:1 predicate:"float_prop" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2763534684 Member:<nil> Tablet:group_id:1 predicate:"bool_prop" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:3935270302 Member:<nil> Tablet:group_id:1 predicate:"_predicate_" space:173870115  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2931079240 Member:<nil> Tablet:group_id:1 predicate:"___kind" space:60762759  MaxLeaseId:0}
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 2-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 2-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 2-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 2-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgReadIndex 2-->1
2017/10/30 16:24:08 node.go:554: RECEIVED: MsgReadIndexResp 1-->2
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgReadIndex 2-->1
2017/10/30 16:24:08 node.go:554: RECEIVED: MsgReadIndexResp 1-->2
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgReadIndex 2-->1
2017/10/30 16:24:08 node.go:554: RECEIVED: MsgReadIndexResp 1-->2
2017/10/30 16:24:07 raft.go:253: Applied proposal: {Id:3895753382 Member:<nil> Tablet:group_id:1 predicate:"___commonkind" space:64762586  MaxLeaseId:0}
2017/10/30 16:24:07 node.go:177: 		SENDING: MsgAppResp 3-->1
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2289509746 Member:<nil> Tablet:group_id:1 predicate:"string_prop" space:57000000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:4155995118 Member:<nil> Tablet:group_id:1 predicate:"___orderable" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:4271119824 Member:<nil> Tablet:group_id:1 predicate:"___child" space:43249800  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:360075037 Member:<nil> Tablet:group_id:1 predicate:"int_prop" space:54500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:113360965 Member:<nil> Tablet:group_id:1 predicate:"float_prop" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2763534684 Member:<nil> Tablet:group_id:1 predicate:"bool_prop" space:55500000  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:3935270302 Member:<nil> Tablet:group_id:1 predicate:"_predicate_" space:173870115  MaxLeaseId:0}
2017/10/30 16:24:08 raft.go:253: Applied proposal: {Id:2931079240 Member:<nil> Tablet:group_id:1 predicate:"___kind" space:60762759  MaxLeaseId:0}
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 3-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 3-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 3-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgAppResp 3-->1
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgReadIndex 3-->1
2017/10/30 16:24:08 node.go:554: RECEIVED: MsgReadIndexResp 1-->3
2017/10/30 16:24:08 node.go:177: 		SENDING: MsgReadIndex 3-->1
2017/10/30 16:24:08 node.go:554: RECEIVED: MsgReadIndexResp 1-->3
2017/10/30 16:25:05 node.go:177: 		SENDING: MsgReadIndex 3-->1
2017/10/30 16:25:05 node.go:554: RECEIVED: MsgReadIndexResp 1-->3

dgraph:

Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: int_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: int_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: int_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: int_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: bool_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: string_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: float_prop
Asking if I serve tablet: ___orderable
Asking if I serve tablet: int_prop
Asking if I serve tablet: bool_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: string_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
Asking if I serve tablet: float_prop
2017/10/30 16:19:37 schema.go:90: Setting schema for attr string_prop: string, tokenizer: [], directive: NONE, count: false
2017/10/30 16:19:37 schema.go:90: Setting schema for attr ___orderable: bool, tokenizer: [], directive: NONE, count: false
2017/10/30 16:19:37 schema.go:90: Setting schema for attr float_prop: string, tokenizer: [], directive: NONE, count: false
2017/10/30 16:19:37 schema.go:90: Setting schema for attr int_prop: string, tokenizer: [], directive: NONE, count: false
2017/10/30 16:19:37 schema.go:90: Setting schema for attr bool_prop: string, tokenizer: [], directive: NONE, count: false
2017/10/30 16:21:05 wal.go:84: Writing snapshot to WAL: {Data:[9 1 0 0 0 0 0 0 0 16 1 26 15 108 111 99 97 108 104 111 115 116 58 49 50 51 52 53] Metadata:{ConfState:{Nodes:[1] XXX_unrecognized:[]} Index:25297 Term:2 XXX_unrecognized:[]} XXX_unrecognized:[]}
2017/10/30 16:22:05 wal.go:84: Writing snapshot to WAL: {Data:[9 1 0 0 0 0 0 0 0 16 1 26 15 108 111 99 97 108 104 111 115 116 58 49 50 51 52 53] Metadata:{ConfState:{Nodes:[1] XXX_unrecognized:[]} Index:36646 Term:2 XXX_unrecognized:[]} XXX_unrecognized:[]}
2017/10/30 16:23:05 wal.go:84: Writing snapshot to WAL: {Data:[9 1 0 0 0 0 0 0 0 16 1 26 15 108 111 99 97 108 104 111 115 116 58 49 50 51 52 53] Metadata:{ConfState:{Nodes:[1] XXX_unrecognized:[]} Index:39033 Term:2 XXX_unrecognized:[]} XXX_unrecognized:[]}
2017/10/30 16:25:05 wal.go:84: Writing snapshot to WAL: {Data:[9 1 0 0 0 0 0 0 0 16 1 26 15 108 111 99 97 108 104 111 115 116 58 49 50 51 52 53] Metadata:{ConfState:{Nodes:[1] XXX_unrecognized:[]} Index:39035 Term:2 XXX_unrecognized:[]} XXX_unrecognized:[]}
2017/10/30 16:19:07 HTTP server started.  Listening on port 8082
2017/10/30 16:19:07 pool.go:104: == CONNECT ==> Setting localhost:8888
2017/10/30 16:19:08 groups.go:114: Connected to group zero. State: counter:19 groups:<key:1 value:<members:<key:1 value:<id:1 group_id:1 addr:"localhost:12345" leader:true last_update:1509333545 > > > > groups:<key:2 value:<members:<key:2 value:<id:2 group_id:2 addr:"localhost:12346" > > > > zeros:<key:1 value:<id:1 addr:"localhost:8888" leader:true > > zeros:<key:2 value:<id:2 addr:"localhost:8890" > > zeros:<key:3 value:<id:3 addr:"localhost:8892" > > 
2017/10/30 16:19:08 draft.go:143: Node ID: 2 with GroupID: 2
2017/10/30 16:19:08 pool.go:104: == CONNECT ==> Setting localhost:8890
2017/10/30 16:19:08 pool.go:104: == CONNECT ==> Setting localhost:8892
2017/10/30 16:19:08 pool.go:104: == CONNECT ==> Setting localhost:12345
2017/10/30 16:19:08 node.go:246: Group 2 found 0 entries
2017/10/30 16:19:08 draft.go:702: New Node for group: 2
2017/10/30 16:19:08 raft.go:567: INFO: 2 became follower at term 0
2017/10/30 16:19:08 raft.go:316: INFO: newRaft 2 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017/10/30 16:19:08 raft.go:567: INFO: 2 became follower at term 1
2017/10/30 16:19:08 groups.go:408: Unable to sync memberships. Error: rpc error: code = Unknown desc = Unknown cluster member
2017/10/30 16:19:08 raft.go:749: INFO: 2 is starting a new election at term 1
2017/10/30 16:19:08 raft.go:580: INFO: 2 became candidate at term 2
2017/10/30 16:19:08 raft.go:664: INFO: 2 received MsgVoteResp from 2 at term 2
2017/10/30 16:19:08 raft.go:621: INFO: 2 became leader at term 2
2017/10/30 16:19:08 node.go:301: INFO: raft.node: 2 elected leader 2 at term 2
2017/10/30 16:19:10 pool.go:104: == CONNECT ==> Setting localhost:12347
2017/10/30 16:19:09 worker.go:105: Worker listening at address: [::]:12347
2017/10/30 16:19:09 gRPC server started.  Listening on port 9084
2017/10/30 16:19:09 pool.go:104: == CONNECT ==> Setting localhost:8888
2017/10/30 16:19:09 HTTP server started.  Listening on port 8084
2017/10/30 16:19:10 groups.go:114: Connected to group zero. State: counter:21 groups:<key:1 value:<members:<key:1 value:<id:1 group_id:1 addr:"localhost:12345" leader:true last_update:1509333545 > > > > groups:<key:2 value:<members:<key:2 value:<id:2 group_id:2 addr:"localhost:12346" leader:true last_update:1509333548 > > > > groups:<key:3 value:<members:<key:3 value:<id:3 group_id:3 addr:"localhost:12347" > > > > zeros:<key:1 value:<id:1 addr:"localhost:8888" leader:true > > zeros:<key:2 value:<id:2 addr:"localhost:8890" > > zeros:<key:3 value:<id:3 addr:"localhost:8892" > > 
2017/10/30 16:19:10 draft.go:143: Node ID: 3 with GroupID: 3
2017/10/30 16:19:10 pool.go:104: == CONNECT ==> Setting localhost:8892
2017/10/30 16:19:10 pool.go:104: == CONNECT ==> Setting localhost:12345
2017/10/30 16:19:10 pool.go:104: == CONNECT ==> Setting localhost:12346
2017/10/30 16:19:10 pool.go:104: == CONNECT ==> Setting localhost:8890
2017/10/30 16:19:10 node.go:246: Group 3 found 0 entries
2017/10/30 16:19:10 draft.go:702: New Node for group: 3
2017/10/30 16:19:10 raft.go:567: INFO: 3 became follower at term 0
2017/10/30 16:19:10 raft.go:316: INFO: newRaft 3 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017/10/30 16:19:10 raft.go:567: INFO: 3 became follower at term 1
2017/10/30 16:19:10 groups.go:408: Unable to sync memberships. Error: rpc error: code = Unknown desc = Unknown cluster member
2017/10/30 16:19:10 raft.go:749: INFO: 3 is starting a new election at term 1
2017/10/30 16:19:10 raft.go:580: INFO: 3 became candidate at term 2
2017/10/30 16:19:10 raft.go:664: INFO: 3 received MsgVoteResp from 3 at term 2
2017/10/30 16:19:10 raft.go:621: INFO: 3 became leader at term 2
2017/10/30 16:19:10 node.go:301: INFO: raft.node: 3 elected leader 3 at term 2

Disk usage:

735M    ./d1/p
28K     ./d1/wz
278M    ./d1/w
1012M   ./d1
12K     ./d2/p
28K     ./d2/wz1
24K     ./d2/w
68K     ./d2
12K     ./d3/p
28K     ./d3/wz1
24K     ./d3/w
68K     ./d3
1013M   ./

What was the timeline for bringing up the second and third dgraph instances? It can take some time for predicates to be transferred between instances: predicate balancing/transfers are only considered roughly once every 10 minutes.

How did you load the initial dataset? Via the live loader, the bulk loader, or some other solution?

I started the 3 dgraph instances one after another, then used a script to ingest the data via gRPC. Does “predicate balance/transfer is only considered around once per 10 min” mean I should wait 10 minutes after I set up the schema?

Yeah, currently it’s just one move per 10 minutes. If you want to spread predicates out evenly during ingestion, you could give the addresses of all the Dgraph servers to the client. It would then randomly pick a Dgraph server to send each batch to. That way, all servers would see new predicates and get a chance to serve them.

For various reasons, if Dgraph server X is the first one to see a new predicate, it will be the one to serve it. That’s why doing this at the client level helps.
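The client-side idea above can be sketched in a few lines of Go: keep a list of all the Dgraph gRPC endpoints and pick one at random for each batch of mutations. The endpoint addresses and the `pickEndpoint` helper are hypothetical illustrations; the real Dgraph clients (e.g. dgo) accept multiple connections and do this selection for you.

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickEndpoint returns a randomly chosen Dgraph gRPC endpoint.
// Spreading batches across endpoints means every server sees new
// predicates, so each gets a chance to serve some of them.
func pickEndpoint(endpoints []string) string {
	return endpoints[rand.Intn(len(endpoints))]
}

func main() {
	// Hypothetical addresses matching the setup in this thread.
	endpoints := []string{
		"localhost:9080",
		"localhost:9082",
		"localhost:9084",
	}
	for i := 0; i < 5; i++ {
		// In a real loader you would dial this address and send
		// the next mutation batch over gRPC.
		fmt.Println("sending batch to", pickEndpoint(endpoints))
	}
}
```

The same effect can be had without code changes by pointing different loader processes at different `--grpc_port` endpoints.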

Thanks a lot for the suggestion. It would be nice if the 10-minute predicate balancing and this client-side suggestion were clearly stated in the documentation :slight_smile:

Please file a GitHub issue for the docs and we’ll fix the documentation.

