What happened in Dgraph?

It has been a really difficult day today :disappointed_relieved:

This is my Dgraph cluster's memory usage. Do you know what happened during the data insert?
As far as I know, an LSM tree should not take all the memory. Is it the Bleve index, or Badger's value log garbage collection?
Can you give me an idea?
I have read the Badger paper; I don't think Badger should take this much memory.

What happened during the memory spike?

Hard to say. I've found that while you're inserting data it consumes a lot of memory, but once the insert finishes, memory usage drops again. E.g.
Server1: (memory usage chart)

Server2: (memory usage chart)

Server3: (memory usage chart)

https://docs.dgraph.io/howto/#retrieving-debug-information
I hope this will help me :slightly_frowning_face:

I’m pushing a change to Dgraph which would allow you to set the Value Log to stay on disk. That might help. This could be because of the value log being mmapped.


Will this change be in the next Dgraph version?

wow…
interesting …

I've been waiting for this for a long time, sir!
I think Badger compacts while data is being inserted quickly, so it allocates more and more memory for compaction until it gets killed. As you can see, shanghai-Jerry's node used about 27 GB. Why doesn't Badger block (apply backpressure) instead of allocating more memory? A node being killed can damage the whole cluster; that's a major production incident.


Nothing like that. It's hard to say what is going on in your charts without memory profiling information.

FWIW, you can decrease the number of concurrent compactors running in Badger via options (not exposed by Dgraph, but you can modify the code). However, I doubt that's the cause here. Most likely, mmap is just using whatever memory is available in the system; the kernel decides how much is used. So, you could avoid mmapping the value log entirely to decrease memory usage. The code is in master. I'll cut a release tomorrow.
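For anyone who wants to try this before the release, here is a rough sketch of the two Badger knobs mentioned above, as they appear in the v1.x options API (names and defaults may differ in your version, so treat this as an illustration, not the exact Dgraph patch):

```go
package main

import (
	"log"

	"github.com/dgraph-io/badger"
	"github.com/dgraph-io/badger/options"
)

func main() {
	// Sketch only: option names below are from Badger v1.x.
	opts := badger.DefaultOptions
	opts.Dir = "/data/badger"      // hypothetical paths for illustration
	opts.ValueDir = "/data/badger"

	// Fewer concurrent compactors -> lower peak memory during heavy writes
	// (the default is higher; reducing it trades write throughput for RAM).
	opts.NumCompactors = 1

	// Read the value log with plain file I/O instead of mmap, so the value
	// log stays on disk and the process's mapped memory stays small.
	opts.ValueLogLoadingMode = options.FileIO

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```

Note that `FileIO` will make value-log reads slower than `MemoryMap`; it's a memory-vs-latency trade-off, which is why mmap is the default.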

OK, looking forward to tomorrow's release of Dgraph, because I really need to cut memory usage.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.