Badger writes are quite slow

package main

import (
	"log"
	"math/rand"
	"strconv"
	"time"

	"github.com/dgraph-io/badger"
)

func main() {
	opts := badger.DefaultOptions
	opts.Dir = "/tmp/insertdgarph"
	opts.ValueDir = "/tmp/insertdgarph"
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	start := time.Now().Unix()
	total := 1000000

	// Build a 57-byte filler string for the values.
	var sst string
	for i := 0; i < 57; i++ {
		sst += "v"
	}

	addcount := 0
	for i := 0; i < total; i++ {
		// One Update per key: each write is its own transaction and commit.
		err := db.Update(func(txn *badger.Txn) error {
			return txn.Set([]byte(strconv.Itoa(rand.Intn(total))+"333333333"), []byte(strconv.Itoa(rand.Intn(total))+sst))
		})
		if err != nil {
			println(err.Error())
		}

		addcount++
		println(addcount)
	}

	println(addcount)
	stop := time.Now().Unix()
	println(stop - start)
}

This is a simple demo based on the Badger README, and the write speed is really slow, about 100 writes/s. Does anyone have any idea why?
The API is also not very user friendly. I'm still confused about the difference between ManagedDB and DB.
It really needs to improve.
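One knob worth checking first: every db.Update call above is a separate transaction, and, if I recall the v1.x defaults correctly (an assumption), opts.SyncWrites defaults to true, so each commit waits on an fsync. A hedged tweak that trades crash durability for write speed:

opts := badger.DefaultOptions
opts.Dir = "/tmp/insertdgarph"
opts.ValueDir = "/tmp/insertdgarph"
opts.SyncWrites = false // don't fsync on every commit; a crash may lose the most recent writes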

I also hit this error several times when using the ManagedDB API: “Invalid API request. Not allowed to perform this action using ManagedDB”. It is hard to work out from the source code. Can anyone help?
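For what it's worth, here is a minimal sketch of the split, assuming the badger v1.x managed API (OpenManaged, NewTransactionAt, CommitAt): a ManagedDB hands timestamp management to the caller, so the regular Update/NewTransaction/Commit calls are rejected with exactly that error, and the *At variants have to be used instead.

// Hedged sketch, assuming the badger v1.x managed API; with a ManagedDB the
// caller is responsible for supplying read and commit timestamps.
mdb, err := badger.OpenManaged(opts)
if err != nil {
	log.Fatal(err)
}
defer mdb.Close()

txn := mdb.NewTransactionAt(1, true) // readTs supplied by the caller
defer txn.Discard()
if err := txn.Set([]byte("key"), []byte("value")); err != nil {
	log.Fatal(err)
}
if err := txn.CommitAt(2, nil); err != nil { // commitTs supplied by the caller
	log.Fatal(err)
}

If you don't have your own notion of versions or timestamps, the plain badger.Open / db.Update API is the one to use.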

It's okay, the problem is solved now. Here is the code:

package main

import (
	"log"
	"math/rand"
	"strconv"
	"sync"
	"time"

	"github.com/dgraph-io/badger"
)

func main() {
	opts := badger.DefaultOptions
	opts.Dir = "/tmp/insertdgarph"
	opts.ValueDir = "/tmp/insertdgarph"
	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	start := time.Now().Unix()
	total := 10000000

	// Build a 57-byte filler string for the values.
	var sst string
	for i := 0; i < 57; i++ {
		sst += "v"
	}

	addcount := 0
	var wg sync.WaitGroup
	wg.Add(total)
	for i := 0; i < total; i++ {
		// One goroutine per write, so the commits overlap instead of running serially.
		go func() {
			defer wg.Done() // register first, so Done runs even if Update panics
			err := db.Update(func(txn *badger.Txn) error {
				return txn.Set([]byte(strconv.Itoa(rand.Intn(total))+"333333333"), []byte(strconv.Itoa(rand.Intn(total))+sst))
			})
			if err != nil {
				println(err.Error())
			}
		}()

		addcount++
		println(addcount)
	}
	wg.Wait()
	println(addcount)
	stop := time.Now().Unix()
	println(stop - start)
}

I don't know exactly how goroutines work, but this really helped. A new problem has occurred, though:

Badger keeps allocating memory until the operating system kills the process. Why not throttle inserts based on memory usage, just like java -Xmx4g?
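There doesn't seem to be a single -Xmx-style cap, but several Options fields bound how much Badger holds in memory. A rough sketch, assuming the badger v1.x Options fields (names and defaults may differ between versions, and openSmall is just a hypothetical helper):

import (
	"github.com/dgraph-io/badger"
	"github.com/dgraph-io/badger/options"
)

// openSmall is a hypothetical helper: open Badger with a smaller memory
// footprint. Assumes badger v1.x Options fields; exact names and defaults
// may vary between versions.
func openSmall(dir string) (*badger.DB, error) {
	opts := badger.DefaultOptions
	opts.Dir = dir
	opts.ValueDir = dir
	opts.MaxTableSize = 16 << 20           // smaller memtables and SST files
	opts.NumMemtables = 2                  // fewer memtables held in RAM at once
	opts.NumLevelZeroTables = 2            // start compacting level 0 sooner
	opts.TableLoadingMode = options.FileIO // read tables from disk instead of mmap
	return badger.Open(opts)
}

Also note that wg.Add(total) with one goroutine per write queues up ten million in-flight transactions at once, which balloons memory all by itself; a bounded worker pool (say, a few hundred goroutines reading keys from a channel) would cap that.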

Try managing transactions manually; that means batching the writes.
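A minimal sketch of what that looks like, assuming the badger v1.x transaction API, where Set returns badger.ErrTxnTooBig once the open transaction is full (this reuses db, total, and sst from the programs above):

txn := db.NewTransaction(true)
defer func() { txn.Discard() }() // no-op if the last txn was already committed
for i := 0; i < total; i++ {
	k := []byte(strconv.Itoa(rand.Intn(total)) + "333333333")
	v := []byte(strconv.Itoa(rand.Intn(total)) + sst)
	if err := txn.Set(k, v); err == badger.ErrTxnTooBig {
		if err := txn.Commit(nil); err != nil { // flush the full batch
			log.Fatal(err)
		}
		txn = db.NewTransaction(true)
		if err := txn.Set(k, v); err != nil { // retry the entry that didn't fit
			log.Fatal(err)
		}
	} else if err != nil {
		log.Fatal(err)
	}
}
if err := txn.Commit(nil); err != nil { // commit the final partial batch
	log.Fatal(err)
}

Each Commit then covers hundreds of Sets instead of one, which is where most of the speedup comes from.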

I tried your application: it took 112 s to write all 1,000,000 entries, which works out to roughly 9,000/s (1,000,000 / 112).

Using the second program, it reaches about 50,000/s.

Batch up your Sets, if possible. You can add hundreds of keys per batch. Obviously, also use goroutines.

If you’re doing things serially in one goroutine, you could use callbacks, so the Commit doesn’t block:

txn := db.NewTransaction(true) // true: read-write transaction
defer txn.Discard()

err := txn.Set(...) // Handle error.
err = txn.Commit(func(err error) {
	defer wg.Done()
	if err != nil {
		println(err.Error())
	} // Handle error.
})
// Handle error.

// Loop for all the sets.
// Once done looping, call wg.Wait()
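Fleshing out those last two comments, the whole loop might look like this (still a sketch, assuming the v1.x Commit(callback) signature, and reusing db, total, and sst from the programs above):

var wg sync.WaitGroup
for i := 0; i < total; i++ {
	txn := db.NewTransaction(true)
	k := []byte(strconv.Itoa(rand.Intn(total)) + "333333333")
	v := []byte(strconv.Itoa(rand.Intn(total)) + sst)
	if err := txn.Set(k, v); err != nil {
		println(err.Error())
		txn.Discard()
		continue
	}
	wg.Add(1)
	if err := txn.Commit(func(err error) { // returns before the commit is durable
		defer wg.Done()
		if err != nil {
			println(err.Error())
		}
	}); err != nil {
		log.Fatal(err) // synchronous commit failure
	}
}
wg.Wait() // block until every asynchronous commit has called back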