Why is ristretto using so much memory?

Moved from GitHub ristretto/115

Posted by caofengl:

package main

import (
	"fmt"
	"math/rand"
	"strconv"

	"github.com/dgraph-io/ristretto"
)

const (
	maxCnt  = 1e7
	maxLoop = 1e8
)

func main() {
	cachePool, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1 * 1e7,
		MaxCost:     200 * (1 << 20), // allocate 200MB, but running this needs 2GB. Why? (When I run this program, it is killed by the OOM killer.)
		BufferItems: 64,
		Metrics:     true,
	})

	if err != nil {
		panic(err)
	}

	key := "def48b5abb6388d3cbbb18e550f47b4cYB6eRI3cK1VN2qCEHp8kvuMuH20dq10cYDcG2e"
	for i := 0; i < maxLoop; i++ {
		suffix := strconv.Itoa(rand.Intn(maxCnt))
		cachePool.Set(key+suffix, suffix, int64(len(key+suffix)))
		if (i % maxCnt) == 0 {
			fmt.Println(cachePool.Metrics)
		}
	}

	cnt := 0
	for i := 0; i < maxCnt; i++ {
		if _, found := cachePool.Get(key + strconv.Itoa(i)); found {
			cnt++
		}
	}
	fmt.Println("cnt:", cnt, "\n", cachePool.Metrics)
}

caofengl commented :

I used ristretto in our company’s project, and I really need help!

caofengl commented :

Could the author provide a correct code example that uses 200MB of memory? My English is very poor; thank you very much!

caofengl commented :

package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"sync"
	"time"

	"github.com/dgraph-io/ristretto"
)

const (
	maxLoop = 1e8
	maxCnt  = 1e7
)

func main() {
	cost := int64(100 * 1024 * 1024)
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: cost / 8,
		MaxCost:     cost / 88,  // key+value size
		BufferItems: 64,
	})

	if err != nil {
		panic(err)
	}

	key := "def48b5abb6388d3cbbb18e550f47b4cYB6eRI3cK1VN2qCEHp8kvuMuH20dq10cYDcG2e"

	var wg sync.WaitGroup
 
	for i := 0; i < maxLoop/maxCnt; i++ {
		wg.Add(1)
		go func() {
			now := time.Now()
			for i := 0; i < maxCnt; i++ {
				val := strconv.Itoa(rand.Intn(maxCnt))
				cache.Set(key+val, val, 1)
			}
			fmt.Println(time.Since(now).Seconds())
			wg.Done()
		}()
	}
	wg.Wait()

	cnt := 0
	now := time.Now()
	for i := 0; i < maxCnt; i++ {
		_, found := cache.Get(key + strconv.Itoa(i))
		if found {
			cnt++
		}
	}
	fmt.Println(time.Since(now).Seconds())
	fmt.Println(cnt)
	time.Sleep(10 * 60 * time.Second)
}

write
196.145246674
196.594516091
198.114088657
198.93801043
201.966241441
202.236675922
204.914710305
204.955456302
205.115802574
205.308419901
read
24.981067788
count
1191525

caofengl commented :

package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"sync"
	"time"

	"github.com/coocood/freecache"
)

const (
	maxLoop = 1e8
	maxCnt  = 1e7
)

func main() {
	cacheSize := 100 * 1024 * 1024 // 100MB
	cache := freecache.NewCache(cacheSize)
	expire := 60 * 60 // expire in one hour (3600 seconds)

	key := "def48b5abb6388d3cbbb18e550f47b4cYB6eRI3cK1VN2qCEHp8kvuMuH20dq10cYDcG2e"

	var wg sync.WaitGroup
	for i := 0; i < maxLoop/maxCnt; i++ {
		wg.Add(1)
		go func() {
			now := time.Now()
			for i := 0; i < maxCnt; i++ {
				val := strconv.Itoa(rand.Intn(maxCnt))
				cache.Set([]byte(key+val), []byte(val), expire)
			}
			fmt.Println(time.Since(now).Seconds())
			wg.Done()
		}()
	}
	wg.Wait()

	cnt := 0
	now := time.Now()
	for i := 0; i < maxCnt; i++ {
		_, err := cache.Get([]byte(key + strconv.Itoa(i)))
		if err == nil {
			cnt++
		}
	}
	fmt.Println(time.Since(now).Seconds())
	fmt.Println("entry count ", cnt, cache.EntryCount())
	time.Sleep(10 * 60 * time.Second)
}

write
143.327503682
144.094319665
144.148188694
145.014152054
145.317289531
146.429781022
146.803659841
146.842320393
146.899385914
146.907347694
read
9.807717003
count
entry count  868063 868063

caofengl commented :

Memory comparison:
freecache: 300M // stays at 300M
ristretto: 1.7G // peaks at 1.7G; after the time.Sleep(10 * 60 * time.Second) it drops to 660M

martinmr commented :

Hi.

Sorry for not answering your question more promptly. We have been hard at work trying to release Dgraph 2.0.0.

I am not really sure why the differences in memory exist, but I suspect you might have run into the bug described in #107.

The keys in your examples are very big, so if they escape to the heap they will use a lot more memory. For now, I suggest using smaller keys (uint64, for example).
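
A minimal sketch of that suggestion (not code from the thread): hash the long string down to a uint64 before it ever reaches the cache. hashKey and the choice of fnv here are illustrative assumptions; any 64-bit hash works, with the usual small risk of collisions.

package main

import (
	"fmt"
	"hash/fnv"

	"github.com/dgraph-io/ristretto"
)

// hashKey collapses a long string key into a uint64 so the cache only
// ever stores 8 bytes of key material per entry.
func hashKey(s string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(s))
	return h.Sum64()
}

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e7,
		MaxCost:     200 * (1 << 20),
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}

	longKey := "def48b5abb6388d3cbbb18e550f47b4cYB6eRI3cK1VN2qCEHp8kvuMuH20dq10cYDcG2e42"
	val := "42"
	// Charge cost for 8 bytes of key plus the value instead of the
	// full 70+ byte string.
	cache.Set(hashKey(longKey), val, int64(8+len(val)))

	// Sets are buffered and applied asynchronously, so an immediate Get
	// may miss; don't rely on read-your-own-write semantics.
	if v, found := cache.Get(hashKey(longKey)); found {
		fmt.Println(v)
	}
}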

Hello, has this issue been resolved? I would prefer not to refactor my keys.

Hi. The issue is not resolved yet, but it's on our plate right now. I'll try the proposed solution and benchmark it to see if it works.

See also "KeyToHash possibly letting key interfaces escape to heap," which is what we suspect is the cause of the issue.
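
A toy illustration of that suspicion (not ristretto's code): passing a value through an interface{} parameter generally boxes it on the heap, which the compiler's escape analysis reports under go build -gcflags='-m'.

package main

import "fmt"

// viaInterface mimics an API shaped like KeyToHash(key interface{}).
// Converting the argument into an interface boxes it on the heap.
//go:noinline
func viaInterface(key interface{}) int {
	s, _ := key.(string)
	return len(s)
}

// viaConcrete takes the concrete type, so no boxing is needed.
//go:noinline
func viaConcrete(key string) int {
	return len(key)
}

func main() {
	k := "def48b5abb6388d3cbbb18e550f47b4c"
	// `go build -gcflags='-m'` reports that k escapes to the heap at this
	// call: the string header is copied into a heap-allocated interface.
	fmt.Println(viaInterface(k))
	fmt.Println(viaConcrete(k)) // no per-key allocation for the argument
}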

Changing ristretto to use byte arrays instead of interfaces yields a modest improvement.

Before:

Fetching profile over HTTP from http://localhost:7777/debug/pprof/heap
Saved profile in /home/martinmr/pprof/pprof.ristretto_test.alloc_objects.alloc_space.inuse_objects.inuse_space.008.pb.gz
File: ristretto_test
Build ID: 2885a59b6d7682f199199dc06f8fa8a3a036a632
Type: inuse_space
Time: Sep 17, 2020 at 3:57pm (PDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1923.09MB, 99.92% of 1924.59MB total
Dropped 3 nodes (cum <= 9.62MB)
Showing top 10 nodes out of 21
      flat  flat%   sum%        cum   cum%
 1201.58MB 62.43% 62.43%  1201.58MB 62.43%  github.com/dgraph-io/ristretto.(*lockedMap).Set
  342.51MB 17.80% 80.23%   504.01MB 26.19%  main.main.func2
  178.37MB  9.27% 89.50%   178.37MB  9.27%  github.com/dgraph-io/ristretto.(*sampledLFU).add (inline)
     153MB  7.95% 97.45%      153MB  7.95%  github.com/dgraph-io/ristretto.(*expirationMap).update
      24MB  1.25% 98.69%       24MB  1.25%  github.com/dgraph-io/ristretto.newCmRow (inline)
      16MB  0.83% 99.53%       16MB  0.83%  github.com/dgraph-io/ristretto/z.(*Bloom).Size (inline)
       7MB  0.36% 99.89%      160MB  8.31%  github.com/dgraph-io/ristretto.(*Cache).SetWithTTL
    0.64MB 0.033% 99.92%    40.64MB  2.11%  github.com/dgraph-io/ristretto.NewCache
         0     0% 99.92%      160MB  8.31%  github.com/dgraph-io/ristretto.(*Cache).Set (inline)
         0     0% 99.92%  1379.94MB 71.70%  github.com/dgraph-io/ristretto.(*Cache).processItems

After:

➜  ~ go tool pprof -source_path /home/martinmr/go/src/github.com/dgraph-io -trim_path / http://localhost:7777/debug/pprof/heap

Fetching profile over HTTP from http://localhost:7777/debug/pprof/heap
Saved profile in /home/martinmr/pprof/pprof.ristretto_test.alloc_objects.alloc_space.inuse_objects.inuse_space.009.pb.gz
File: ristretto_test
Build ID: 62b363517c4daeebe8c4fd7f5ae9a3c5f6fb0345
Type: inuse_space
Time: Sep 17, 2020 at 3:58pm (PDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) top
Showing nodes accounting for 1654.09MB, 100% of 1654.09MB total
Showing top 10 nodes out of 21
      flat  flat%   sum%        cum   cum%
  856.23MB 51.76% 51.76%   856.23MB 51.76%  github.com/dgraph-io/ristretto.(*lockedMap).Set
  485.87MB 29.37% 81.14%   485.87MB 29.37%  github.com/dgraph-io/ristretto.(*sampledLFU).add (inline)
     153MB  9.25% 90.39%      153MB  9.25%  github.com/dgraph-io/ristretto.(*expirationMap).update
     106MB  6.41% 96.80%      264MB 15.96%  main.main.func2
      32MB  1.93% 98.73%       32MB  1.93%  github.com/dgraph-io/ristretto.newCmRow
      16MB  0.97% 99.70%       16MB  0.97%  github.com/dgraph-io/ristretto/z.(*Bloom).Size
       5MB   0.3%   100%      158MB  9.55%  github.com/dgraph-io/ristretto.(*Cache).SetWithTTL
         0     0%   100%      158MB  9.55%  github.com/dgraph-io/ristretto.(*Cache).Set (inline)
         0     0%   100%  1342.09MB 81.14%  github.com/dgraph-io/ristretto.(*Cache).processItems
         0     0%   100%   485.87MB 29.37%  github.com/dgraph-io/ristretto.(*defaultPolicy).Add

The biggest allocation comes from creating the pointers stored in the map.

ROUTINE ======================== github.com/dgraph-io/ristretto.(*lockedMap).Set in home/martinmr/go/pkg/mod/github.com/dgraph-io/ristretto@v0.0.4-0.20200917224801-55930f9efc36/store.go
  856.23MB   856.23MB (flat, cum) 51.76% of Total
         .          .    179:           m.em.add(i.Key, i.Conflict, i.Expiration)
         .          .    180:   }
         .          .    181:
         .          .    182:   // block := z.Calloc(itemSize)
         .          .    183:   // sItem := (*storeItem)(unsafe.Pointer(&block[0]))
  524.54MB   524.54MB    184:   sItem := &storeItem{}
         .          .    185:   sItem.key = i.Key
         .          .    186:   sItem.conflict = i.Conflict
         .          .    187:   sItem.value = i.Value
         .          .    188:   sItem.expiration = i.Expiration
         .          .    189:   // sItem.block = &block
  331.69MB   331.69MB    190:   m.data[i.Key] = sItem
         .          .    191:}

As usual, the memory reported by the OS is a lot larger: over 3GB for both versions of ristretto, with the byte-array version somewhat lower than the interface one.

I tried to use Calloc to manually allocate and free the pointers, but I get a panic at line 795 of src/runtime/mgcsweep.go.

I am not sure what I am doing wrong. I tried to store the pointer to the block inside the storeItem object itself. Maybe it's that, or calling delete on the map before I call Free on the block.

I looked into the Allocator, but that seems to be for allocating many objects that are freed all at once, like in the Encoder, which is a different use case from freeing memory as soon as each item is deleted from the map.

My changes are in the martinmr/bytes branch in ristretto. The calloc code is commented out.

This is the script I used (modified from the script the original user provided):

package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"strconv"
	"sync"
	"time"

	"github.com/dgraph-io/ristretto"
	"github.com/dgraph-io/ristretto/z"

	_ "net/http/pprof"
)

const (
	maxCnt = 1e7
)

func main() {
	buf := z.CallocNoRef(1)
	jem := len(buf) > 0
	z.Free(buf)
	if jem {
		fmt.Println("jemalloc enabled")
	}

	go func() {
		fmt.Println(http.ListenAndServe("localhost:7777", nil))
	}()

	cost := int64(100 * 1024 * 1024) // 100MB
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: cost / 10,
		MaxCost:     cost,
		BufferItems: 64,
	})

	if err != nil {
		panic(err)
	}

	var wg sync.WaitGroup
	fmt.Println("Setting keys in cache.")
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			for i := 0; i < maxCnt; i++ {
				val := strconv.Itoa(rand.Intn(maxCnt))
				cache.Set([]byte(val), []byte(val), int64(len(val)))
			}
			wg.Done()
		}()
	}
	wg.Wait()
	fmt.Println("Finished setting keys")

	fmt.Println("Getting keys in cache.")
	cnt := 0
	for i := 0; i < maxCnt; i++ {
		_, found := cache.Get([]byte(strconv.Itoa(i)))
		if found {
			cnt++
		}
	}
	fmt.Printf("Finished reading keys, found %d keys\n", cnt)
	time.Sleep(10 * 60 * time.Second)
}

I checked what freecache is doing. There are two main differences.

  1. Freecache is not using a map to store values. They implemented their own data structure based on a ring buffer. It looks like this results in much more compact data storage.
  2. Freecache is accounting for the size of its internal data in size calculations. This results in far fewer keys stored in the cache. Ristretto is not accounting for that, so the number of keys in the cache at the end of the script is much higher.

I added the size of storeItem (72 bytes) to the cost and got a much lower memory footprint, though still higher than freecache's. Maybe rearranging the fields in the struct can reduce its size.
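
The same accounting can also be approximated from the caller's side without patching ristretto. A sketch: perEntryOverhead and setWithOverhead are hypothetical names, and 72 is the storeItem size measured above, which is version- and platform-dependent.

package main

import "github.com/dgraph-io/ristretto"

// perEntryOverhead approximates ristretto's internal bookkeeping per
// entry; treat the exact value as version-dependent.
const perEntryOverhead = 72

// setWithOverhead charges the cache for the key, the value, and the
// internal overhead, so MaxCost tracks real memory more closely.
func setWithOverhead(c *ristretto.Cache, key, val string) bool {
	return c.Set(key, val, int64(len(key)+len(val)+perEntryOverhead))
}

func main() {
	cache, err := ristretto.NewCache(&ristretto.Config{
		NumCounters: 1e6,
		MaxCost:     100 << 20, // the 100MB budget now also covers overhead
		BufferItems: 64,
	})
	if err != nil {
		panic(err)
	}
	setWithOverhead(cache, "some-key", "some-value")
}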

Improve memory performance by martinmr · Pull Request #195 · dgraph-io/ristretto

This reduces memory usage, although further optimization might be possible.

This change is merged now and improves memory usage. We'll try to optimize further, but right now we don't have any definite ideas on how to do so.

Hey @martinmr, I am also facing similar issues. I have set the cache to 80GiB, but on some nodes I see the cache using double that. Here are my config values:

ristretto-max-cost = 80GiB in bytes
ristretto-max-counters = "100000000"
enable-ristretto-metrics = "true"

I am also using SetWithTTL and providing the cache item size as the cost. This is in production and we have a significant amount of traffic; any help is appreciated.
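
For reference, a sketch of how settings like those presumably map onto ristretto.Config (the flag names belong to a wrapper, so the translation is an assumption). Note that MaxCost only bounds what callers charge as cost; the Go runtime and allocator overhead sit on top of it, which is one reason resident memory can run well past the configured limit.

package main

import "github.com/dgraph-io/ristretto"

func main() {
	const gib = int64(1) << 30
	cache, err := ristretto.NewCache(&ristretto.Config{
		MaxCost:     80 * gib,  // ristretto-max-cost (bytes)
		NumCounters: 100000000, // ristretto-max-counters
		BufferItems: 64,
		Metrics:     true, // enable-ristretto-metrics
	})
	if err != nil {
		panic(err)
	}
	_ = cache
}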
