How to improve query performance and avoid out-of-memory errors

What I want to do

I have a batch search-and-mutation workload over 200,000 records.
The schema has an index on the predicate I filter by.
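
For reference, the indexed predicate looks something like this in the schema (illustrative only; the real index type isn't shown here):

ip_address.id: string @index(exact) .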

What I did

I run a search first:

query1(func: type(ip_address)) @filter(eq(ip_address.id, "xxx")) {
  uid
  dgraph.type
  # other…
}
… and so on, up to:
query200000(func: type(xxxx)) @filter(eq(xxx, "xxx")) {
  xxxx
}

My data is about 400 MB on disk,
but memory usage exceeds 10 GB and the process runs out of memory.

So what can I do?
Purpose:
I want to find the uid first, put the uid into each data JSON, and then update them.

Change the root function to something like this:

query1(func: eq(ip_address.id, "xxx")) @filter(type(ip_address)) {
  uid
  dgraph.type
  # other…
}
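
Also, since you want to find the uid first and then update the node, you could do both in a single Dgraph upsert block instead of sending 200,000 separate queries. A rough sketch using your predicate names (the value being set is just a placeholder):

upsert {
  query {
    q(func: eq(ip_address.id, "xxx")) @filter(type(ip_address)) {
      v as uid
    }
  }
  mutation {
    set {
      uid(v) <ip_address.msg> "usermsg" .
    }
  }
}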

Thank you, it is solved!

And I have some other questions:

1. Is it possible to change max32 to max64 to increase the gRPC call body size by modifying GrpcMaxSize in the code?

2. I mutate a very large JSON, but the mutation is very slow. What could be wrong? (8 CPUs, 16 GB memory)

{
  "subnet.id": 1,
  "subnet.name": "xxx",
  "ip_address_asso": [
    {
      "ip_address.id": 1,
      "ip_address.ip": "192.168.0.2",
      "ip_address.msg": "usermsg"
    },
    … (10,000 ~ 50,000 entries here)
    {
      "ip_address.id": 50000,
      "ip_address.ip": "1x.xx.xx.x",
      "ip_address.msg": "usermsg"
    }
  ]
}

  1. No idea; out of my scope of expertise.

  2. Use smaller data sets? :man_shrugging: (a batching sketch follows below)

See the hardware recommendations:

Production checklist - Deploy.
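
On question 2: one approach that usually helps is splitting the one big JSON into smaller batches and committing each batch as its own transaction, so no single mutation carries 50,000 entries. A minimal sketch with the Go client (dgo); the chunk size, server address, and the subnetUID variable are assumptions for illustration:

package main

import (
	"context"
	"encoding/json"
	"log"

	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc"
)

type IPAddress struct {
	ID  int    `json:"ip_address.id"`
	IP  string `json:"ip_address.ip"`
	Msg string `json:"ip_address.msg"`
}

// Subnet references an existing subnet node by uid and attaches
// a slice of ip_address children via the ip_address_asso edge.
type Subnet struct {
	UID  string      `json:"uid"`
	Asso []IPAddress `json:"ip_address_asso"`
}

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	subnetUID := "0x1"    // assumed: uid of the subnet node, found earlier
	var addrs []IPAddress // assumed: the 10,000~50,000 entries, loaded elsewhere
	const chunk = 1000    // batch size to tune for your hardware

	for start := 0; start < len(addrs); start += chunk {
		end := start + chunk
		if end > len(addrs) {
			end = len(addrs)
		}
		b, err := json.Marshal(Subnet{UID: subnetUID, Asso: addrs[start:end]})
		if err != nil {
			log.Fatal(err)
		}
		// Each batch is a separate transaction committed immediately,
		// keeping every gRPC payload and transaction small.
		if _, err := dg.NewTxn().Mutate(context.Background(),
			&api.Mutation{SetJson: b, CommitNow: true}); err != nil {
			log.Fatal(err)
		}
	}
}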

I use a reverse uid predicate between ip_address and subnet, so could something in my one big JSON be causing the low performance?
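
For reference, the reverse edge is declared roughly like this (a sketch; my actual schema may differ):

ip_address_asso: [uid] @reverse .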