Is there an OOM issue in v1.0.11?

Hi, before v1.0.11 my queries returned results; now I always get an OOM and the alpha restarts.

Here are the heap profiles.

After a restart, the server consumes all memory (30 GB) and crashes with an OOM again.

Finally, after several restarts, I see these logs:

fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x1477fea, 0x16)
	/usr/local/go/src/runtime/panic.go:608 +0x72
runtime.sysMap(0xc678000000, 0x4000000, 0x1fb1678)
	/usr/local/go/src/runtime/mem_linux.go:156 +0xc7
runtime.(*mheap).sysAlloc(0x1f979e0, 0x4000000, 0x1f979f8, 0x7f8a8e075860)
	/usr/local/go/src/runtime/malloc.go:619 +0x1c7
runtime.(*mheap).grow(0x1f979e0, 0x1, 0x0)
	/usr/local/go/src/runtime/mheap.go:920 +0x42
runtime.(*mheap).allocSpanLocked(0x1f979e0, 0x1, 0x1fb1688, 0x400)
	/usr/local/go/src/runtime/mheap.go:848 +0x337
runtime.(*mheap).alloc_m(0x1f979e0, 0x1, 0x4b, 0x7f8a0281ffff)
	/usr/local/go/src/runtime/mheap.go:692 +0x119
	/usr/local/go/src/runtime/mheap.go:759 +0x4c
runtime.(*mheap).alloc(0x1f979e0, 0x1, 0x7f8a0201004b, 0x7f8a8e0757c8)
	/usr/local/go/src/runtime/mheap.go:758 +0x8a
runtime.(*mcentral).grow(0x1f99fd8, 0x0)
	/usr/local/go/src/runtime/mcentral.go:232 +0x94
runtime.(*mcentral).cacheSpan(0x1f99fd8, 0x7f8a8e0757c8)
	/usr/local/go/src/runtime/mcentral.go:106 +0x2f8
runtime.(*mcache).refill(0x7f923979b7b0, 0xc00ba7614b)
	/usr/local/go/src/runtime/mcache.go:122 +0x95
	/usr/local/go/src/runtime/malloc.go:749 +0x32
	/usr/local/go/src/runtime/asm_amd64.s:351 +0x66

goroutine 14347 [running]:
	/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc0f7e5ecc8 sp=0xc0f7e5ecc0 pc=0x8de370
runtime.(*mcache).nextFree(0x7f923979b7b0, 0xc0f7e5ed4b, 0xf7afc2, 0xc00d782140, 0x3)
	/usr/local/go/src/runtime/malloc.go:748 +0xb6 fp=0xc0f7e5ed20 sp=0xc0f7e5ecc8 pc=0x88e9e6
runtime.mallocgc(0x800, 0x12d83c0, 0x14ba901, 0xc67349fb90)
	/usr/local/go/src/runtime/malloc.go:903 +0x793 fp=0xc0f7e5edc0 sp=0xc0f7e5ed20 pc=0x88f333
runtime.makeslice(0x12d83c0, 0x0, 0x100, 0xc67349fb90, 0x0, 0x0)
	/usr/local/go/src/runtime/slice.go:70 +0x77 fp=0xc0f7e5edf0 sp=0xc0f7e5edc0 pc=0x8c70b7, 0x17b3d4, 0xc01ccfa7b0, 0x8ec814)
	/go/src/ +0xd57 fp=0xc0f7e5ef80 sp=0xc0f7e5edf0 pc=0x10e0be7, 0xc28ac8e090, 0x174960, 0x17b3d4)
	/go/src/ +0x3a fp=0xc0f7e5efc0 sp=0xc0f7e5ef80 pc=0x10e12da
	/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc0f7e5efc8 sp=0xc0f7e5efc0 pc=0x8e0451
created by
	/go/src/ +0x3c2

goroutine 1 [semacquire, 1 minutes]:
	/usr/local/go/src/runtime/sema.go:56 +0x39
	/usr/local/go/src/sync/waitgroup.go:130 +0x64
	/go/src/ +0x688

And then it stops restarting.

What Dgraph version are you running? The heap profiles don’t line up with the code for v1.0.11.

I have to build a custom Docker image, because that's the only way I was able to build a custom tokenizer that works with Dgraph. I built the image with these instructions:

FROM golang

RUN mkdir -p /dgraph/plugins
COPY tokenizer/nfd.go /dgraph/plugins
# build dgraph
RUN go get -v && \
    cd /go/src/ && \
    git fetch --all --tags --prune && \
    git checkout tags/v1.0.11 -b plugin && \
    cd dgraph/ && \
    go build && \
    chmod a+x dgraph && \
    mv dgraph /usr/local/bin/dgraph && \
    cd /dgraph/plugins && \
    go build -buildmode=plugin -o nfd.so nfd.go && \
    rm ./nfd.go && \
    rm -r /go/src/*


VOLUME /dgraph

WORKDIR /dgraph

What’s the output of dgraph version here?

You can build a custom tokenizer with the same Go version used to build Dgraph. It’s not necessary to build another Docker image with Dgraph.

Here is the output:

dgraph version 

Dgraph version   : 
Commit SHA-1     : 
Commit timestamp : 
Branch           : 
Go version       : go1.11.4

For Dgraph official documentation, visit
For discussions about Dgraph     , visit
To say hi to the community       , visit

Licensed variously under the Apache Public License 2.0 and Dgraph Community License.
Copyright 2015-2018 Dgraph Labs, Inc.

I have tried to build the tokenizer with the same version, but there was an issue with the libraries used for text parsing (I can try again).

Are you available for a chat on the community Slack channel? You can DM me there and we can look into this together.