Fatal error: runtime: out of memory

Hi,

I have deployed Dgraph on EC2 using docker-compose, as per the documentation.

However, I'm getting the following error on Dgraph Alpha:

I0311 17:22:34.638192       1 node.go:83] Rolling up Created batch of size: 25 kB in 2.178793ms.
I0311 17:22:34.638391       1 node.go:83] Rolling up Sent 298 keys
I0311 17:22:34.642176       1 draft.go:832] Rolled up 298 keys. Done
I0311 17:22:34.642261       1 draft.go:353] List rollup at Ts 605: OK.
I0311 17:23:04.634368       1 draft.go:312] Creating snapshot at index: 879. ReadTs: 833.
I0311 17:25:34.640092       1 draft.go:312] Creating snapshot at index: 1106. ReadTs: 1050.
I0311 17:27:34.644976       1 node.go:83] Rolling up Created batch of size: 54 kB in 6.615183ms.
I0311 17:27:34.645215       1 node.go:83] Rolling up Sent 586 keys
I0311 17:27:34.648110       1 draft.go:832] Rolled up 586 keys. Done
I0311 17:27:34.648203       1 draft.go:353] List rollup at Ts 1050: OK.
fatal error: runtime: out of memory

runtime stack:
runtime.throw(0x14817a9, 0x16)
	/usr/local/go/src/runtime/panic.go:608 +0x72
runtime.sysMap(0xc010000000, 0x4000000, 0x1fc3378)
	/usr/local/go/src/runtime/mem_linux.go:156 +0xc7
runtime.(*mheap).sysAlloc(0x1fa96e0, 0x4000000, 0x1fa96f8, 0x7ff229da4288)
	/usr/local/go/src/runtime/malloc.go:619 +0x1c7
runtime.(*mheap).grow(0x1fa96e0, 0x1, 0x0)
	/usr/local/go/src/runtime/mheap.go:920 +0x42
runtime.(*mheap).allocSpanLocked(0x1fa96e0, 0x1, 0x1fc3388, 0x400)
	/usr/local/go/src/runtime/mheap.go:848 +0x337
runtime.(*mheap).alloc_m(0x1fa96e0, 0x1, 0x12, 0x7ff20c438fff)
	/usr/local/go/src/runtime/mheap.go:692 +0x119
runtime.(*mheap).alloc.func1()
	/usr/local/go/src/runtime/mheap.go:759 +0x4c
runtime.(*mheap).alloc(0x1fa96e0, 0x1, 0x7ff20c010012, 0x7ff229da4288)
	/usr/local/go/src/runtime/mheap.go:758 +0x8a
runtime.(*mcentral).grow(0x1faae98, 0x0)
	/usr/local/go/src/runtime/mcentral.go:232 +0x94
runtime.(*mcentral).cacheSpan(0x1faae98, 0x7ff229da4288)
	/usr/local/go/src/runtime/mcentral.go:106 +0x2f8
runtime.(*mcache).refill(0x7ff229eb1440, 0x7ff229debc12)
	/usr/local/go/src/runtime/mcache.go:122 +0x95
runtime.(*mcache).nextFree.func1()
	/usr/local/go/src/runtime/malloc.go:749 +0x32
runtime.systemstack(0x0)
	/usr/local/go/src/runtime/asm_amd64.s:351 +0x66
runtime.mstart()
	/usr/local/go/src/runtime/proc.go:1229

goroutine 25590 [running]:
runtime.systemstack_switch()
	/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc00fe10b58 sp=0xc00fe10b50 pc=0x8dfc60
runtime.(*mcache).nextFree(0x7ff229eb1440, 0x12, 0x0, 0x0, 0x0)
	/usr/local/go/src/runtime/malloc.go:748 +0xb6 fp=0xc00fe10bb0 sp=0xc00fe10b58 pc=0x8902d6
runtime.mallocgc(0x80, 0x1425fe0, 0xc00fe10d01, 0xf8d15a)
	/usr/local/go/src/runtime/malloc.go:903 +0x793 fp=0xc00fe10c50 sp=0xc00fe10bb0 pc=0x890c23
runtime.newobject(0x1425fe0, 0x0)
	/usr/local/go/src/runtime/malloc.go:1032 +0x38 fp=0xc00fe10c80 sp=0xc00fe10c50 pc=0x891008
github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.(*DB).newTransaction(0xc0001da300, 0x890100, 0x0)
	/ext-go/1/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/txn.go:691 +0x5a fp=0xc00fe10cb0 sp=0xc00fe10c80 pc=0xe3e1fa
github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger.(*DB).NewTransactionAt(0xc0001da300, 0xffffffffffffffff, 0xfd42c600, 0x6879023a5daac7d7)
	/ext-go/1/src/github.com/dgraph-io/dgraph/vendor/github.com/dgraph-io/badger/managed_db.go:38 +0x47 fp=0xc00fe10cd8 sp=0xc00fe10cb0 pc=0xe35627
github.com/dgraph-io/dgraph/posting.getNew(0xc00f9edd60, 0x19, 0x19, 0xc0001da300, 0x0, 0x0, 0x0)
	/ext-go/1/src/github.com/dgraph-io/dgraph/posting/mvcc.go:234 +0x65 fp=0xc00fe10d90 sp=0xc00fe10cd8 pc=0xf90f85
github.com/dgraph-io/dgraph/posting.(*LocalCache).Get(0xc00f9f8ec0, 0xc00f9edd60, 0x19, 0x19, 0x19, 0x19, 0x0)
	/ext-go/1/src/github.com/dgraph-io/dgraph/posting/lists.go:212 +0xc2 fp=0xc00fe10de8 sp=0xc00fe10d90 pc=0xf8fba2
github.com/dgraph-io/dgraph/worker.(*queryState).handleUidPostings.func1(0x0, 0x16, 0xc00c6cc8a0, 0x0)
	/ext-go/1/src/github.com/dgraph-io/dgraph/worker/task.go:570 +0x265 fp=0xc00fe10f80 sp=0xc00fe10de8 pc=0x10f32d5
github.com/dgraph-io/dgraph/worker.(*queryState).handleUidPostings.func2(0xc00df8de00, 0xc00fa01c20, 0x0, 0x16)
	/ext-go/1/src/github.com/dgraph-io/dgraph/worker/task.go:678 +0x3a fp=0xc00fe10fc0 sp=0xc00fe10f80 pc=0x10f44da
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1333 +0x1 fp=0xc00fe10fc8 sp=0xc00fe10fc0 pc=0x8e1d41
created by github.com/dgraph-io/dgraph/worker.(*queryState).handleUidPostings
	/ext-go/1/src/github.com/dgraph-io/dgraph/worker/task.go:677 +0x3d1

[... many more goroutine stacks omitted ...]

goroutine 25388 [running]:
	goroutine running on other thread; stack unavailable
created by github.com/dgraph-io/dgraph/worker.(*queryState).handleUidPostings
	/ext-go/1/src/github.com/dgraph-io/dgraph/worker/task.go:677 +0x3d1

There is minimal load on the graph itself, yet something is consuming memory to the point that the whole instance is affected.

I'm running docker:latest, and the alpha service in my docker-compose file runs the following command: `dgraph alpha --my=server:7080 --lru_mb=8096 --zero=zero:5080`
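For context, a roughly equivalent standalone docker run for that alpha service would look like the sketch below; the image tag, published ports and volume path are approximations on my side, not copied verbatim from my compose file:

```
# Approximate standalone equivalent of the alpha compose service.
# Image tag, ports and volume are assumptions, not the exact compose values.
docker run -d \
  --name alpha \
  -p 8080:8080 -p 9080:9080 \
  -v ~/dgraph:/dgraph \
  dgraph/dgraph:latest \
  dgraph alpha --my=server:7080 --lru_mb=8096 --zero=zero:5080
```

There is no memory flag in the command itself; if the compose file also sets no mem_limit, the container can use whatever the host has.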

Thanks in advance for your help! :pray:
Ranich.

Please share your Dgraph version and your machine stats: is Docker allowed to use the whole machine's resources, or is it running in a default VM with limited memory? Also, what is the size of your load, and what tasks are you running?

You can try `docker inspect` and `docker stats` (see the Docker documentation).
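For example (replace <alpha-container> with the name or ID of your alpha container):

```
# Dgraph version running inside the alpha container
docker exec <alpha-container> dgraph version

# Memory limit applied to the container, in bytes (0 means no limit, i.e. the whole host)
docker inspect --format '{{.HostConfig.Memory}}' <alpha-container>

# One-shot snapshot of CPU/memory usage for all running containers
docker stats --no-stream
```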

> docker stats

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
bbc85987c291        eager_pare          318.03%             278.4MiB / 2.659GiB   10.22%              2.57MB / 3.19MB     0B / 9.82MB         37
