Block cache size usage

When I call table_options.block_cache->GetUsage() as described on this page, I keep finding that block_cache is null (it is a shared pointer).

I then traced the cache constructors and destructors to lru_cache.cc. With some printfs, I find that the object defined around here gets constructed and destroyed many times.

I am not sure whether this is due to gorocksdb and garbage collection, or whether everything is working as desired and the block cache is simply a short-lived object.

I have run “dgraph”, “dgraphloader”, and “dgraphassigner”.

In any case, according to the comments and the code, it seems that the block cache defaults to just 8 MB, which is tiny. I don’t think we should worry too much about it?

Good work with the 8MB finding @jchiu.

gotecbot has an API for setting BlockCache, which has a default value of false. So maybe, because it’s false, the block cache isn’t even being set?
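
For reference, here is a minimal sketch (not dgraph’s actual setup) of attaching a block cache explicitly through the tecbot/gorocksdb bindings and keeping the *Cache handle around so its usage can be read from Go later. The 256 MB capacity and the DB path are arbitrary illustration values.

```go
package main

import (
	"log"

	"github.com/tecbot/gorocksdb"
)

// openWithBlockCache opens a DB with an explicitly attached block cache and
// returns the *Cache handle so its usage can be queried later from Go.
func openWithBlockCache(path string) (*gorocksdb.DB, *gorocksdb.Cache, error) {
	cache := gorocksdb.NewLRUCache(256 << 20) // 256 MB, arbitrary for illustration

	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	bbto.SetBlockCache(cache)
	// A compressed block cache could be attached similarly via SetBlockCacheCompressed.

	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	opts.SetBlockBasedTableFactory(bbto)

	db, err := gorocksdb.OpenDb(opts, path)
	if err != nil {
		return nil, nil, err
	}
	return db, cache, nil
}

func main() {
	db, cache, err := openWithBlockCache("/tmp/blockcache-test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Printf("initial block cache usage: %d bytes", cache.GetUsage())
}
```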

Thanks @pawan. I am experimenting with setting the cache and the compressed cache explicitly, but I am still getting cache usage = 0. However, when I printf inside the Insert calls to the cache (shard), the usage is not zero. I am puzzled; this needs more digging.
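
In case it helps with the digging, here is a follow-up sketch building on openWithBlockCache above (again hypothetical, with placeholder keys). One thing that can keep usage at 0 is that reads answered from the memtable never touch the block cache, so flushing before reading makes the number move; the other thing to check is that GetUsage is called on the very same *Cache handle that was passed to SetBlockCache.

```go
// verifyCacheUsage is meant to live in the same file as the openWithBlockCache
// sketch above; it writes a key, forces a flush, reads the key back, and then
// reports usage on the cache handle that was passed to SetBlockCache.
func verifyCacheUsage(db *gorocksdb.DB, cache *gorocksdb.Cache) error {
	wo := gorocksdb.NewDefaultWriteOptions()
	ro := gorocksdb.NewDefaultReadOptions()
	fo := gorocksdb.NewDefaultFlushOptions()
	defer wo.Destroy()
	defer ro.Destroy()
	defer fo.Destroy()

	if err := db.Put(wo, []byte("some-key"), []byte("some-value")); err != nil {
		return err
	}
	// Reads answered from the memtable bypass the block cache entirely, which
	// would leave usage at 0; flushing forces the next read to hit an SST file.
	if err := db.Flush(fo); err != nil {
		return err
	}

	val, err := db.Get(ro, []byte("some-key"))
	if err != nil {
		return err
	}
	val.Free()

	log.Printf("block cache usage after read: %d bytes", cache.GetUsage())
	return nil
}
```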

Based on https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB:

So decreasing the block cache doesn’t really increase disk I/O; it only means more space for the OS page cache, which holds the raw compressed blocks. For our purposes, the block cache isn’t that useful: when we read a posting list, we keep it in memory for a while, so there is no need to read the uncompressed value from RocksDB again for a while, because we already have it cached in Go space.
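
As a rough, purely hypothetical illustration of the “cached in Go space” point (this is not dgraph’s actual posting-list code): once a decoded posting list sits in a Go-side map for a while, repeated reads never reach RocksDB at all, which is why the uncompressed block cache adds little on top of the OS page cache.

```go
package plcache

import (
	"sync"
	"time"
)

// Cache is a hypothetical Go-side cache of decoded posting lists.
type Cache struct {
	mu    sync.Mutex
	ttl   time.Duration
	items map[string]entry
}

type entry struct {
	plist   []byte // decoded posting list, placeholder representation
	addedAt time.Time
}

func New(ttl time.Duration) *Cache {
	return &Cache{ttl: ttl, items: make(map[string]entry)}
}

// Get returns the cached posting list if it is still fresh; only on a miss or
// expiry does it call fetch(), which is where a RocksDB read would happen.
func (c *Cache) Get(key string, fetch func() []byte) []byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.items[key]; ok && time.Since(e.addedAt) < c.ttl {
		return e.plist // served from Go space, no RocksDB read
	}
	p := fetch()
	c.items[key] = entry{plist: p, addedAt: time.Now()}
	return p
}
```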
