I ran benchmarks that marshal and then unmarshal a list of uint64 uids, once via Gogo Protocol Buffers and once via Flatbuffers. Here are the results.
$ go test --bench . --benchmem ~/go/src/github.com/dgraph-io/experiments/flats
testing: warning: no tests to run
BenchmarkToAndFrom/Proto-10-4               3000000          556 ns/op          440 B/op     7 allocs/op
BenchmarkToAndFrom/Flatb-10-4               2000000          717 ns/op          440 B/op    12 allocs/op
BenchmarkToAndFrom/Proto-100-4               500000         4255 ns/op         3960 B/op    10 allocs/op
BenchmarkToAndFrom/Flatb-100-4               500000         3075 ns/op         3128 B/op    18 allocs/op
BenchmarkToAndFrom/Proto-1000-4               50000        35672 ns/op        35064 B/op    13 allocs/op
BenchmarkToAndFrom/Flatb-1000-4              100000        29088 ns/op        26680 B/op    24 allocs/op
BenchmarkToAndFrom/Proto-10000-4               2000       533612 ns/op       574200 B/op    22 allocs/op
BenchmarkToAndFrom/Flatb-10000-4              10000       287535 ns/op       452920 B/op    32 allocs/op
BenchmarkToAndFrom/Proto-100000-4               300      4005594 ns/op      6456064 B/op    32 allocs/op
BenchmarkToAndFrom/Flatb-100000-4              1000      3359384 ns/op      3615039 B/op    38 allocs/op
BenchmarkToAndFrom/Proto-1000000-4               50     35891282 ns/op     63185667 B/op    42 allocs/op
BenchmarkToAndFrom/Flatb-1000000-4              100     21533788 ns/op     27765056 B/op    44 allocs/op
BenchmarkToAndFrom/Proto-10000000-4               3    336125866 ns/op    603431680 B/op    52 allocs/op
BenchmarkToAndFrom/Flatb-10000000-4               5    216030227 ns/op    416909635 B/op    52 allocs/op
PASS
ok github.com/dgraph-io/experiments/flats 31.371s
The naming is as follows:
BenchmarkToAndFrom/<protocol>-<number of uids>-<number of cores>
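For context, a driver producing this naming could look roughly like the sketch below; it’s an illustration, not the actual benchmark code. The trailing -4 is GOMAXPROCS, which the testing package appends on its own. UidList and its Marshal/Unmarshal methods stand in for what gogo protobuf would generate from a message like message UidList { repeated uint64 uids = 1; }, and toAndFromFlatb is sketched a bit further down.

// Sketch of a driver that yields sub-benchmark names like
// BenchmarkToAndFrom/Proto-10-4. The trailing -4 is GOMAXPROCS,
// appended automatically by the testing package.
package flats

import (
    "fmt"
    "testing"
)

var sizes = []int{10, 100, 1000, 10000, 100000, 1000000, 10000000}

func newUids(n int) []uint64 {
    uids := make([]uint64, n)
    for i := range uids {
        uids[i] = uint64(i) + 1
    }
    return uids
}

func BenchmarkToAndFrom(b *testing.B) {
    for _, n := range sizes {
        uids := newUids(n)
        b.Run(fmt.Sprintf("Proto-%d", n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                toAndFromProto(b, uids)
            }
        })
        b.Run(fmt.Sprintf("Flatb-%d", n), func(b *testing.B) {
            for i := 0; i < b.N; i++ {
                toAndFromFlatb(uids)
            }
        })
    }
}

// toAndFromProto round-trips the uids through a gogo-generated message.
// UidList stands in for a message like:
//   message UidList { repeated uint64 uids = 1; }
// generated with a gogo plugin that emits Marshal/Unmarshal methods.
func toAndFromProto(b *testing.B, uids []uint64) {
    in := &UidList{Uids: uids}
    data, err := in.Marshal()
    if err != nil {
        b.Fatal(err)
    }
    var out UidList
    if err := out.Unmarshal(data); err != nil {
        b.Fatal(err)
    }
}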
Flatbuffers is the clear winner in every case beyond the first result, where we only had 10 uids. I don’t like the ugliness that Flatbuffers brings to our code base, but it clearly has a significant impact on our performance and memory allocations, so we should stick with it.
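To show the kind of ugliness I mean, here is a rough sketch of the Flatbuffers side of the round trip. It assumes a hypothetical schema along the lines of table UidList { uids:[ulong]; } compiled with flatc --go, so the UidList* helpers and accessors below follow flatc’s usual Go naming conventions rather than our actual code.

package flats

import flatbuffers "github.com/google/flatbuffers/go"

// toAndFromFlatb builds a Flatbuffer holding the uids and reads it back.
// The UidList* helpers are what flatc --go would generate for the
// hypothetical schema: table UidList { uids:[ulong]; }
func toAndFromFlatb(uids []uint64) {
    fb := flatbuffers.NewBuilder(0)

    // Vectors are written back to front.
    UidListStartUidsVector(fb, len(uids))
    for i := len(uids) - 1; i >= 0; i-- {
        fb.PrependUint64(uids[i])
    }
    vec := fb.EndVector(len(uids))

    UidListStart(fb)
    UidListAddUids(fb, vec)
    fb.Finish(UidListEnd(fb))
    buf := fb.FinishedBytes()

    // The "unmarshal" half is lazy: accessors index straight into buf.
    ul := GetRootAsUidList(buf, 0)
    for i := 0; i < ul.UidsLength(); i++ {
        _ = ul.Uids(i)
    }
}

All the builder bookkeeping sits on the write path; the read path is just accessors over the finished byte buffer, with no separate decode step.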
Benchmarking code is here: