Duplicated Rows?

So, I have some code running that updates a k,v pair. When I retrieve the data it’s properly updated, but if I cat the 000000.vlog file, it shows the historical entries as well as the new entry. Is this expected behavior, or is my code screwing up and adding multiple KVs? An example:


Each write to the value log is an append. Later, value logs can be GCed.
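To make the append-only behavior concrete, here is a toy model (not Badger’s actual code — the `valueLog` type and its methods are illustrative only): every `Set` appends a new entry to the log, and an index points readers at the latest offset, which is why reads return the updated value while old entries remain visible in the file.

```go
package main

// entry is one record in the toy append-only log.
type entry struct {
	key, value string
}

// valueLog models the vlog file as an append-only slice plus an index
// (standing in for the LSM tree's role of locating the latest version).
type valueLog struct {
	entries []entry        // the append-only "vlog", never rewritten in place
	index   map[string]int // key -> offset of the latest entry
}

func newValueLog() *valueLog {
	return &valueLog{index: map[string]int{}}
}

// Set appends a new entry; it never overwrites a previous one in place.
func (v *valueLog) Set(key, value string) {
	v.entries = append(v.entries, entry{key, value})
	v.index[key] = len(v.entries) - 1
}

// Get reads through the index, so callers only ever see the latest value.
func (v *valueLog) Get(key string) (string, bool) {
	i, ok := v.index[key]
	if !ok {
		return "", false
	}
	return v.entries[i].value, true
}
```

Setting the same key three times leaves three entries in the log, but `Get` returns only the newest one — matching what you see when you cat the vlog file.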

Awesome, thanks. I just wanted to make sure it won’t fill my disk before I spend time looking through my code for a bug. Appreciate it… any idea how long before they can get GCed?

Garbage collection is done manually. The recommendation is to do it periodically, ideally during periods of low activity. See the docs on garbage collection: GitHub - dgraph-io/badger: Fast key-value DB in Go.
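The periodic-GC pattern from the Badger README can be sketched like this. `RunValueLogGC` and `badger.ErrNoRewrite` are real Badger APIs; the `gcRunner` interface, `runGCUntilDone` helper, and `fakeDB` stub are illustrative names I’ve introduced so the sketch compiles without Badger installed.

```go
package main

import "errors"

// errNoRewrite stands in for badger.ErrNoRewrite, which RunValueLogGC
// returns when a pass reclaims nothing.
var errNoRewrite = errors.New("value log GC attempt didn't result in any cleanup")

// gcRunner abstracts *badger.DB so the loop below compiles without Badger.
type gcRunner interface {
	RunValueLogGC(discardRatio float64) error
}

// runGCUntilDone follows the README's pattern: call RunValueLogGC in a
// loop, because each successful call rewrites at most one value-log file,
// and stop once a pass reports no cleanup. In a real server you would
// trigger this from a time.Ticker during periods of low activity.
func runGCUntilDone(db gcRunner) int {
	passes := 0
	for db.RunValueLogGC(0.7) == nil {
		passes++
	}
	return passes
}

// fakeDB simulates a DB with a fixed number of reclaimable vlog files.
type fakeDB struct{ reclaimable int }

func (f *fakeDB) RunValueLogGC(discardRatio float64) error {
	if f.reclaimable == 0 {
		return errNoRewrite
	}
	f.reclaimable--
	return nil
}
```

With a real `*badger.DB` you would pass the DB directly, since it already satisfies the interface.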


Awesome. Thank you for your help.

So, I’ve built an HTTP server for API requests that uses BadgerDB… I have an endpoint to call GC:

Which runs this code:

func handleGC(w http.ResponseWriter, r *http.Request) {
        err := db.RunValueLogGC(0.7)
        if err != nil {
                fmt.Fprint(w, err.Error())
                return
        }
        fmt.Fprint(w, "success")
}

It responds with:
Value log GC attempt didn’t result in any cleanup.

I have 2 records in the DB with this info:


The vlog is 224K and contains every “Set” since the start. Why isn’t /gc cleaning up these entries?

GC does not remove the latest value log. If there’s only one vlog, then GC won’t touch it.
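A toy sketch of why that is (an assumption-laden simplification, not Badger’s implementation — the `vlogDB` and `vlogFile` types are mine): the value log is a series of files, and GC only considers older, closed files for rewriting, never the newest active one. With a single small vlog file, every entry lives in the active file, so GC has nothing to clean.

```go
package main

// vlogFile summarizes one value-log file on disk.
type vlogFile struct {
	entries int // total entries in this file
	stale   int // entries superseded by later Sets
}

// vlogDB models a DB whose value log is split across files;
// files[len-1] is the active (latest) file.
type vlogDB struct {
	files []vlogFile
}

// runValueLogGC reclaims closed files whose stale ratio meets
// discardRatio, and always keeps the active file untouched.
// It returns the number of files reclaimed.
func (d *vlogDB) runValueLogGC(discardRatio float64) int {
	reclaimed := 0
	kept := d.files[:0]
	for i, f := range d.files {
		isActive := i == len(d.files)-1
		staleRatio := float64(f.stale) / float64(f.entries)
		if !isActive && staleRatio >= discardRatio {
			reclaimed++ // closed and mostly stale: rewrite/drop it
			continue
		}
		kept = append(kept, f) // the active file is always kept
	}
	d.files = kept
	return reclaimed
}
```

So in your case, once writes roll the log over into a second file, a /gc call can start reclaiming the stale entries in the first one.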