Panic: Slice bounds out of range when indexing

What I want to do

Build a hash index on a string predicate.

What I did

Started the indexing process; shortly afterward, Dgraph crashed with an out-of-bounds error (see logs below). I can restart Dgraph to a workable state, but the same error occurs as soon as indexing is restarted.

I0111 13:18:25.356675       1 log.go:34] Rebuilding index for predicate 0-some_predicate (1/2): [19m01s] Scan (8): ~37.4 GiB/143 GiB at 30 MiB/sec. Sent: 42.5 GiB at 36 MiB/sec. jemalloc: 8.8 GiB
I0111 13:18:30.927858       1 log.go:34] Rebuilding index for predicate 0-some_predicate (1/2): [19m06s] Scan (8): ~37.6 GiB/143 GiB at 31 MiB/sec. Sent: 42.8 GiB at 36 MiB/sec. jemalloc: 8.3 GiB
panic: runtime error: slice bounds out of range [-3093826186:] [recovered]
        panic: runtime error: slice bounds out of range [-3093826186:]
        panic: 
== Recovering from initIndex crash ==
File Info: [ID: 1904, Size: 330278778, Zeros: 1]
isEnrypted: false checksumLen: 3424104960 
== Recovered ==


goroutine 37855 [running]:
github.com/dgraph-io/badger/v3/table.(*Table).initBiggestAndSmallest.func1.1(0xc0dbd66540)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:351 +0x125
panic(0x20283a0, 0xc0db8cd2d8)
        /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/dgraph-io/ristretto/z.(*MmapFile).Bytes(...)
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.4-0.20210504190834-0bf2acd73aa3/z/file.go:116
github.com/dgraph-io/badger/v3/table.(*Table).read(...)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:418
github.com/dgraph-io/badger/v3/table.(*Table).readNoFail(0xc009513e00, 0xffffffff4797f576, 0xcc17b200, 0x10, 0xc0009a1790, 0x1)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:422 +0xf4
github.com/dgraph-io/badger/v3/table.(*Table).initBiggestAndSmallest.func1(0xc009513e00)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:381 +0x3a5
panic(0x20283a0, 0xc0db8cd2c0)
        /usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/dgraph-io/ristretto/z.(*MmapFile).Bytes(...)
        /go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.4-0.20210504190834-0bf2acd73aa3/z/file.go:116
github.com/dgraph-io/badger/v3/table.(*Table).read(...)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:418
github.com/dgraph-io/badger/v3/table.(*Table).readNoFail(0xc009513e00, 0xffffffff4797f576, 0xcc17b200, 0x7dca3cc9c776, 0x4, 0x4)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:422 +0xf4
github.com/dgraph-io/badger/v3/table.(*Table).initIndex(0xc009513e00, 0x0, 0x8, 0x203004)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:443 +0x168
github.com/dgraph-io/badger/v3/table.(*Table).initBiggestAndSmallest(0xc009513e00, 0x0, 0x0)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:401 +0x85
github.com/dgraph-io/badger/v3/table.OpenTable(0xc013b27aa0, 0x0, 0x1000000, 0xf33333, 0x0, 0x3f847ae147ae147b, 0x1000, 0x0, 0x0, 0xc009082180, ...)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:308 +0x227
github.com/dgraph-io/badger/v3/table.CreateTable(0xc028af7ec0, 0x23, 0xc08b393b90, 0xc028af7ec0, 0x23, 0xc00aeb9708)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/table/table.go:273 +0x4a5
github.com/dgraph-io/badger/v3.(*levelsController).subcompact.func4.1(0x770, 0xaba6d0, 0xc0571bf4a0, 0x4)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/levels.go:844 +0x7a
github.com/dgraph-io/badger/v3.(*levelsController).subcompact.func4(0xc0dc193a80, 0xc013e51650, 0x770, 0xc08b3966c0, 0xc0c3b37e00, 0xc08b393b90)
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/levels.go:851 +0x216
created by github.com/dgraph-io/badger/v3.(*levelsController).subcompact
        /go/pkg/mod/github.com/dgraph-io/badger/v3@v3.0.0-20210405181011-d918b9904b2a/levels.go:837 +0x614

Dgraph metadata

dgraph version Dgraph version : v21.03.2

Dgraph codename : rocket-2

Dgraph SHA-256 : 00a53ef6d874e376d5a53740341be9b822ef1721a4980e6e2fcb60986b3abfbf

Commit SHA-1 : b17395d33

Commit timestamp : 2021-08-26 01:11:38 -0700

Branch : HEAD

Go version : go1.16.2

jemalloc enabled : true

Run via Docker


I guess they planned for this to fail: the whole function is wrapped in a defer with a recover() so it can safely print the diagnostics above. However, it appears to panic again inside the recover while trying to read the checksum of the data. That makes sense given our output, where the checksum length is read as 3424104960 bytes (~3.4 GB), larger than the 330 MB file itself.
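The pattern described above can be reduced to a small sketch. `initIndex` here is a hypothetical stand-in, not Badger's actual code: a deferred recover prints diagnostics and then trips over the same corrupt length it is trying to report, which produces the nested `panic ... [recovered] / panic ...` trace seen in the logs.

```go
package main

import "fmt"

// initIndex is a hypothetical reduction of the pattern: the deferred
// recover prints diagnostics, but its own read uses the same corrupt
// checksum length, so it panics again inside the handler.
func initIndex(data []byte, checksumLen int) {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("== Recovering from initIndex crash ==")
			// Second read with the bad length: panics inside recover.
			_ = data[len(data)-checksumLen:]
		}
	}()
	_ = data[len(data)-checksumLen:] // first panic: negative slice index
}

func main() {
	defer func() {
		// The second panic escapes initIndex and is caught here.
		fmt.Println("outer recovered:", recover())
	}()
	initIndex(make([]byte, 8), 100) // length larger than the data, as in the logs
}
```

Running this prints the recovery banner and then the re-raised slice-bounds error, mirroring the double panic in the stack trace.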

Does this happen with other index types?

I indexed an integer predicate shortly before, which was successful.
Also, I think the panic in the recover block is intentional, to print out debug info.

Yea totally, but I am saying it panicked again within the recover, which was certainly not intended.

It seems the size of the table (330278778 bytes) is smaller than the number of bytes being read (3424104960), resulting in the negative slice index that then triggers the error.
So either the table itself or the stored checksum length is somehow malformed.
Or there may be an integer wrap-around at this line: badger/table.go at v3.2103.0 · dgraph-io/badger · GitHub

Yea, that cast to int is suspect, but I assume you are running on a 64-bit machine? If so, int is 64 bits, which comfortably holds any uint32 read from the byte slice.

Yes, I am running Dgraph on a 64-bit machine. But as you said, that cast does not look very safe. Anyway, it seems on-the-fly indexing is not working for me as of now :frowning: