Out of memory when running a large query

A large query consisting of ~100 blocks, whose results exceed the memory limit of the VM (node), can get a Dgraph server instance killed due to OOM.

Other than optimising the order of the query blocks (i.e. running the blocks with filters first to reduce the total number of results found), another suggestion is to use pagination. However, in our testing, when paginating a result set of 1000 into 10 items per page, all 1000 results are still found and stored in memory upfront.
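For concreteness, the pagination we tested uses DQL's `first` and `offset` arguments; the predicate names below are illustrative, not from our actual schema:

```
{
  # Return page 3 of the matches: skip 20 results, then take 10.
  people(func: has(name), first: 10, offset: 20) {
    name
    age
  }
}
```

Even with `first: 10`, the full set of matching uids appears to be materialised before the page is sliced out.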

Could you provide more detail about the pagination suggestion, please?
Other than these approaches, is there anything else we can try to reduce memory usage and avoid the OOMKilled issue?

Any suggestion would be much appreciated.

Could you expand a bit more on this? We store the uids in memory in compressed form. Pagination should ensure that only the number of results asked for are used in the response.

If you can share a memory profile taken while the query is executing, or a way to reproduce the issue with the actual data, we can look at what is causing the spike and see if we can optimize it.
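A heap profile can be pulled from a running Alpha's pprof endpoint; this sketch assumes the Alpha's HTTP endpoint is on `localhost:8080` (the default), so adjust the host and port for your deployment:

```sh
# Fetch a heap profile from the Dgraph Alpha while the large query is running.
curl -o heap.pprof "http://localhost:8080/debug/pprof/heap"

# Inspect the top allocation sites locally with Go's pprof tool.
go tool pprof -top heap.pprof
```

Capturing the profile during the query (rather than before or after) is what makes the allocation spike visible.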

Initially, we were planning to use pagination to perform streaming. We thought pagination would reduce memory usage for a large query, but we found that it still finds all results before paginating them. We were thinking of paginating and sending batches of results to the client while the query is still running, i.e. sending back the first x results found while the search continues through the rest of the graph.

Streaming is not supported for now. If you can share a memory profile, that would help us find the cause of the OOM.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.