Insert becomes slow until it can't insert

Posted by lxwithgod:

dgraph zero --port_offset -2000
dgraph server --memory_mb 4096 --zero localhost:5080

LinkedList<HashMap<String, String>> lj = new LinkedList<>();

for (int i = 1; i < 100000000L; i++) {
    HashMap<String, String> json = new HashMap<>();
    json.put("name", "Alice");
    lj.add(json);
    if (i % 10000 == 0) {
        Mutation mu =
                Mutation.newBuilder()
                        .setSetJson(ByteString.copyFromUtf8(Common.objectAsString(lj)))
                        .build();
        dgraphClient.newTransaction().mutate(mu);

        System.out.println(i);
        lj.clear();
    }
}

manishrjain commented :

The transaction isn’t being committed.
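The pattern the fix needs can be sketched with a stand-in `Transaction` interface. This is an illustrative sketch only, not the real dgraph4j client type, though it mirrors the `mutate`/`commit`/`discard` shape that the code later in this thread uses:

```java
// Sketch of the commit/discard pattern. The Transaction interface here is a
// stand-in with the same mutate/commit/discard shape as dgraph4j's
// DgraphClient.Transaction; it is not the real client type.
public class CommitSketch {
    interface Transaction {
        void mutate(String mu);
        void commit();
        void discard(); // no-op if the transaction was already committed
    }

    // Mutate and commit, discarding in a finally block so an uncommitted
    // transaction is always cleaned up on error.
    static void runMutation(Transaction txn, String mu) {
        try {
            txn.mutate(mu);
            txn.commit(); // without this call, the mutation is never applied
        } finally {
            txn.discard();
        }
    }
}
```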

lxwithgod commented :

OK, I'll try it.

lxwithgod commented :

The server is dead…
The client is on a different machine from the server…

2017/11/20 14:52:05 mutation.go:148: Done schema update predicate:"predicate" value_type:STRING list:true
2017/11/20 14:52:09 groups.go:291: Asking if I can serve tablet for: name
2017/11/20 14:56:17 node.go:400: WARN: A tick missed to fire. Node blocks too long!
Killed

deepakjois commented :

Like @manishrjain pointed out, you are not committing your transaction. Did you try my suggestion to modify your code in #25? Please post your final code here just to confirm.

lxwithgod commented :

ManagedChannel channel = ManagedChannelBuilder.forAddress(TEST_HOSTNAME, TEST_PORT).usePlaintext(true).build();
DgraphBlockingStub blockingStub = DgraphGrpc.newBlockingStub(channel);
DgraphClient dgraphClient = new DgraphClient(Collections.singletonList(blockingStub));

// Set schema
// Operation op = Operation.newBuilder().setSchema("name: string @index(exact) .").build();
// dgraphClient.alter(op);

// Add data
// System.out.println(json.toString());
LinkedList<HashMap<String, String>> lj = new LinkedList<>();

for (int i = 1; i < 100000000L; i++) {
    HashMap<String, String> json = new HashMap<>();
    json.put("name", "Alice");
    lj.add(json);
    if (i % 20000 == 0) {
        Mutation mu =
                Mutation.newBuilder()
                        .setSetJson(ByteString.copyFromUtf8(Common.objectAsString(lj)))
                        .build();
        DgraphClient.Transaction transaction = dgraphClient.newTransaction();
        transaction.mutate(mu);
        transaction.commit();
        System.out.println(i);
        lj.clear();
    }
}

deepakjois commented :

Your code looks alright. It looks like there is a problem in the server, but it may be triggered specifically by the Java client. It is hard to tell from the information you have given us so far.

I am writing some integration tests for the client right now, to ensure the behavior is exactly like the Go client's. I will do some load testing on EC2 and try to replicate your scenario.

lxwithgod commented :

thanks

lxwithgod commented :

I tried many times. I find that when the data reaches 4 million nodes, the service crashes.

deepakjois commented :

Sorry to hear that you are still having trouble. I will be trying to replicate this scenario soon. Could you please provide me the following info:

  • What is the amount of RAM on your machine
  • How are you running Dgraph server? Could you provide me with the arguments you are running it with?

lxwithgod commented :

20 GB RAM
Ubuntu 16.04
i7-3620M
dgraph zero --port_offset -2000
dgraph server --memory_mb 10240 --zero localhost:5080

lxwithgod commented :

Can you remote in via TeamViewer?
My ID:
693 760 479
jje826

lxwithgod commented :

deepakjois commented :

I have set up two EC2 instances, one as client and another as server, and I am running the following code. It is still running, at about 100000 mutations so far. I will wait for it to complete and see whether I can replicate the problem.

for(int i=1;i<100000000L;i++) {
    Transaction txn = dgraphClient.newTransaction();
    try {
      // Create data
      Person p = new Person();
      p.name = "Alice"+i;

      // Serialize it
      String json = gson.toJson(p);

      // Run mutation
      Mutation mu =
          Mutation.newBuilder().setSetJson(ByteString.copyFromUtf8(json.toString())).build();
      txn.mutate(mu);
      txn.commit();
      if ((i % 10000) == 0) {
         System.out.printf("%s: %d", LocalTime.now(),i);
      }

    } finally {
      txn.discard();
    }
}

deepakjois commented :

I have run the code for about 40 minutes, and 550000 mutations have completed successfully. I have yet to see any error.

I just want to point out that for bulk loading, there are better ways to insert the data than inserting it one by one in a single thread, like the code above.

See documentation: https://docs.dgraph.io/deploy/#fast-data-loading

You can use the live loader (dgraph live --help) or the bulk loader (dgraph bulk --help). You could also make sure you make mutations concurrently in multiple threads.

You have to be a bit careful if you have a schema defined. You might encounter aborts if you try to update the same entities simultaneously, and you will need to handle them by retrying.
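The retry-on-abort handling mentioned above can be sketched generically. `withRetry`, the `Tx` interface, and the plain `Exception` here are illustrative stand-ins (the real dgraph4j client signals aborts with its own exception type), not part of any Dgraph API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Generic retry helper: an illustrative sketch of the "retry on abort"
// pattern, not part of dgraph4j. In real code the Tx body would open a
// transaction, mutate, and commit, and the caught exception would be the
// client's transaction-conflict exception.
public class RetrySketch {
    @FunctionalInterface
    interface Tx { void run() throws Exception; }

    static void withRetry(Tx tx, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                tx.run();
                return; // committed successfully
            } catch (Exception e) { // stand-in for an aborted transaction
                if (attempt >= maxAttempts) throw e;
                // simple linear backoff before retrying
                Thread.sleep(10L * attempt);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger calls = new AtomicInteger();
        // Fails twice (simulated aborts), succeeds on the third attempt.
        withRetry(() -> {
            if (calls.incrementAndGet() < 3) throw new Exception("aborted");
        }, 5);
        System.out.println("attempts=" + calls.get()); // prints attempts=3
    }
}
```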

lxwithgod commented :

Maybe you can try:

for (int i = 1; i < 100000000L; i++) {
    HashMap<String, String> json = new HashMap<>();
    json.put("name", "Alice");
    lj.add(json);
    if (i % 20000 == 0) {
        Mutation mu =
                Mutation.newBuilder()
                        .setSetJson(ByteString.copyFromUtf8(Common.objectAsString(lj)))
                        .build();
        DgraphClient.Transaction transaction = dgraphClient.newTransaction();
        transaction.mutate(mu);
        transaction.commit();
        System.out.println(i);
        lj.clear();
    }
}

deepakjois commented :

I will change this line in my code:

p.name = "Alice"+i;

to

p.name = "Alice"

and try again.

This code is a modified version of the DgraphJavaSample project that can be found in this repo. It effectively does the same thing as your code. I don't see any difference between running the code as above and what you have. If the problem cannot be replicated that way, then the cause lies elsewhere.

lxwithgod commented :

OK, maybe there is a software conflict in my OS.

lxwithgod commented :

dgraph server --memory_mb 2048 --zero localhost:5080
2017/11/27 17:01:27 groups.go:93: Current Raft Id: 0
2017/11/27 17:01:27 worker.go:99: Worker listening at address: [::]:7080
2017/11/27 17:01:27 pool.go:117: == CONNECT ==> Setting localhost:5080
2017/11/27 17:01:27 gRPC server started. Listening on port 9080
2017/11/27 17:01:27 HTTP server started. Listening on port 8080
2017/11/27 17:01:27 groups.go:113: Connected to group zero. Connection state: member:<id:1 group_id:1 addr:"localhost:7080" > state:<counter:5 groups:<key:1 value:<members:<key:1 value:<id:1 group_id:1 addr:"localhost:7080" > > > > zeros:<key:1 value:<id:1 addr:"localhost:5080" leader:true > > maxRaftId:1 >
2017/11/27 17:01:27 draft.go:139: Node ID: 1 with GroupID: 1
2017/11/27 17:01:27 node.go:231: Group 1 found 0 entries
2017/11/27 17:01:27 draft.go:683: New Node for group: 1
2017/11/27 17:01:27 raft.go:567: INFO: 1 became follower at term 0
2017/11/27 17:01:27 raft.go:315: INFO: newRaft 1 [peers: , term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017/11/27 17:01:27 raft.go:567: INFO: 1 became follower at term 1
2017/11/27 17:01:27 groups.go:291: Asking if I can serve tablet for: predicate
2017/11/27 17:01:27 node.go:123: Setting conf state to nodes:1
2017/11/27 17:01:27 raft.go:749: INFO: 1 is starting a new election at term 1
2017/11/27 17:01:27 raft.go:580: INFO: 1 became candidate at term 2
2017/11/27 17:01:27 raft.go:664: INFO: 1 received MsgVoteResp from 1 at term 2
2017/11/27 17:01:27 raft.go:621: INFO: 1 became leader at term 2
2017/11/27 17:01:27 node.go:301: INFO: raft.node: 1 elected leader 1 at term 2
2017/11/27 17:01:27 mutation.go:147: Done schema update predicate:"predicate" value_type:STRING list:true
Got schema: [predicate:"phone" value_type:STRING directive:INDEX tokenizer:"exact" explicit:true ]
2017/11/27 17:01:50 groups.go:291: Asking if I can serve tablet for: phone
2017/11/27 17:01:50 mutation.go:147: Done schema update predicate:"phone" value_type:STRING directive:INDEX tokenizer:"exact" explicit:true
Got schema: [predicate:"call" value_type:UID directive:REVERSE count:true explicit:true ]
2017/11/27 17:01:50 groups.go:291: Asking if I can serve tablet for: call
2017/11/27 17:01:50 mutation.go:147: Done schema update predicate:"call" value_type:UID directive:REVERSE count:true explicit:true
2017/11/27 17:07:45 groups.go:291: Asking if I can serve tablet for: dummy
2017/11/27 17:17:27 wal.go:118: Writing snapshot to WAL, metadata: {ConfState:{Nodes:[1] XXX_unrecognized:} Index:13 Term:2 XXX_unrecognized:}, len(data): 27
2017/11/27 17:18:29 wal.go:118: Writing snapshot to WAL, metadata: {ConfState:{Nodes:[1] XXX_unrecognized:} Index:165 Term:2 XXX_unrecognized:}, len(data): 27
2017/11/27 17:19:27 wal.go:118: Writing snapshot to WAL, metadata: {ConfState:{Nodes:[1] XXX_unrecognized:} Index:168 Term:2 XXX_unrecognized:}, len(data): 27
Killed

lxwithgod commented :

After restarting, I continued inserting.
I have run the code for about 20 minutes.
There are a lot of errors:
goroutine 566921 [chan receive, 6 minutes]:
github.com/dgraph-io/dgraph/worker.(*node).ProposeAndWait(0xc42015bab0, 0x19120c0, 0xc4200140c8, 0xc57309afa0, 0x0, 0x0)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/draft.go:255 +0x591
created by github.com/dgraph-io/dgraph/worker.(*groupi).proposeDelta
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/groups.go:595 +0x19a

goroutine 570023 [select]:
github.com/dgraph-io/dgraph/x.(*WaterMark).WaitForMark(0xc453994208, 0x19120c0, 0xc4200140c8, 0xac1, 0xfe6238, 0xc42015bae8)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/x/watermark.go:109 +0x17f
github.com/dgraph-io/dgraph/worker.commitOrAbort(0x19120c0, 0xc4200140c8, 0xc58cbfe480, 0xc4200140c8, 0x0, 0x1d)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/mutation.go:460 +0x97
github.com/dgraph-io/dgraph/worker.(*node).commitOrAbort(0xc42015bab0, 0xdbe, 0x553cc881, 0xc58cbfe480)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/draft.go:377 +0x85
created by github.com/dgraph-io/dgraph/worker.(*node).processApplyCh
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/draft.go:368 +0x403

goroutine 569478 [select]:
github.com/dgraph-io/dgraph/x.(*WaterMark).WaitForMark(0xc453994208, 0x19120c0, 0xc4200140c8, 0x91a, 0xfe6238, 0xc42015bae8)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/x/watermark.go:109 +0x17f
github.com/dgraph-io/dgraph/worker.commitOrAbort(0x19120c0, 0xc4200140c8, 0xc5d0c0c1c0, 0xc4200140c8, 0x0, 0x1f)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/mutation.go:460 +0x97
github.com/dgraph-io/dgraph/worker.(*node).commitOrAbort(0xc42015bab0, 0xd1c, 0x231eecd9, 0xc5d0c0c1c0)
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/draft.go:377 +0x85
created by github.com/dgraph-io/dgraph/worker.(*node).processApplyCh
/home/travis/gopath/src/github.com/dgraph-io/dgraph/worker/draft.go:368 +0x403