How many edges can each node have at most?

When I insert 10,000 edges, the insert takes 46 seconds, and later inserts get even slower. Is this a performance bottleneck of the database?
Here is the result of the last insert:

  "data": {
    "code": "Success",
    "message": "Done",
    "uids": {}
  "extensions": {
    "server_latency": {
      "parsing_ns": 4867594,
      "processing_ns": 78759405205
    "txn": {
      "start_ts": 5050055,
      "commit_ts": 5050063,
      "preds": [

Here is an example of the queries I run:

	  "uid": "0x231861",
	  "keyword_uids": [{"uid": "0x7dd"},{"uid": "0x1ee3"}]

Each node may have more than 100,000 edges.

If there are 2 million nodes and each node has tens of thousands of edges, is there a good scheme or any suggestions? Thank you!

By the way, this is the schema I created
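The schema itself did not make it into the post. A hypothetical Dgraph schema consistent with the rest of the thread (the predicate name `keyword_uids` is taken from the query example above, and the `@count` and `@reverse` directives come up later in the discussion) might look like:

```
keyword_uids: [uid] @count @reverse .
```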


I have also tested other insertion methods, including inserting list data into predicates. It turns out that as the data volume increases, each insertion takes longer than the previous one. If a list holds hundreds of thousands of items, the cost of inserting data becomes huge. I think this is totally unacceptable for users with larger lists. What is the solution?
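The slowdown pattern described above, where each insert takes longer than the previous one, is exactly what a cost model predicts when every write has to touch an entire growing list. This is only a toy sketch of that model in plain Python, not Dgraph's actual internals:

```python
def total_cost_rewrite(n):
    """Total work when each insert rewrites the whole list.

    The k-th insert copies k elements, so total work is
    1 + 2 + ... + n = n(n+1)/2, i.e. O(n^2) overall.
    """
    cost = 0
    lst = []
    for i in range(n):
        lst = lst + [i]   # full copy: O(len(lst)) work per insert
        cost += len(lst)
    return cost


def total_cost_append(n):
    """Total work when each insert is an in-place append.

    Each insert is amortized O(1), so total work is O(n).
    """
    cost = 0
    lst = []
    for i in range(n):
        lst.append(i)     # amortized O(1) work per insert
        cost += 1
    return cost
```

Under this model, 100 rewriting inserts cost 5,050 units of work versus 100 units for appends, and the gap widens quadratically, which matches the observation that each batch is slower than the last.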

Could you provide details such as the configuration of the machine you are using? I expect insertion to get a little slower over time given that you have a count index. If it is getting too slow, then this is something we should look into and figure out. Could you also provide an example dataset so we can run the live loader on it?

The machine has 48 cores and 32 GB of memory, and plenty of resources are still free. If it is an index problem, then theoretically inserts should not slow down without the count and reverse indexes. I will test this and report back after a while.

After my actual test: if I don't add count and reverse, the speed does not slow down, and inserting 10,000 edges is also very fast, about 1 s. It seems this is the only way to do it. Are there any better suggestions?

Yeah, that makes sense. We are looking into how we can improve the performance of updating the count index.

I think it would be faster to create the count index once the initial ingestion is complete, though understand that the cluster will not be available for writes while indexes are being built. I am working on improving the speed of index building, as well as on making the cluster available for writes while indexes are being built.
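The suggestion above can be sketched as a two-phase schema (the predicate name is hypothetical, carried over from the example earlier in the thread):

```
# Phase 1: initial schema for bulk ingestion — no @count or @reverse,
# so each insert stays cheap.
keyword_uids: [uid] .

# Phase 2: after ingestion, alter the schema to add the directives;
# the count index is then built in one pass (writes are blocked
# while it builds).
keyword_uids: [uid] @count @reverse .
```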

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.