At what level should I consider caching?

Hi,

In a microservice platform I have scenarios where a service provides the same resource over and over.
For example, within a single request the User Repo can serve the same data more than once to many other microservices.

Such a request, for example, means searching for a specific typed node by an indexed UUID:

search(func: has(user)) @filter(eq(id, "33e4ab50-128e-4669-90fe-e957e6410aa5"))

Also assume there are 1 billion nodes, of which 5 million are relevant typed nodes.

Should I consider some kind of cache like etcd or Memcached, or would that be redundant, given that Dgraph uses Badger and an in-memory cache?

Thanks

Dgraph does have an LRU cache, whose size is set using the lru_mb flag (memory_mb in previous versions); it should cache frequently used data.
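
As a minimal sketch, assuming a Dgraph v1.x Alpha where the flag is still named lru_mb (flag names have changed across releases, so check dgraph alpha --help for your version):

dgraph alpha --lru_mb 4096

This reserves roughly 4 GB for the LRU cache; the docs recommend setting it to about a third of the machine's RAM.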

search(func: has(user)) @filter(eq(id, "33e4ab50-128e-4669-90fe-e957e6410aa5"))

Here I suppose the id is an external ID that corresponds to just one Dgraph node? In that case, you don't need has(user); you could use the eq filter directly as the root function, as in the sketch below.
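
A minimal sketch of that rewrite, assuming the id predicate is a string with an @index(hash) (or exact) so eq can be used at root, and that name is one of the user's predicates (both schema details are assumptions here):

{
  search(func: eq(id, "33e4ab50-128e-4669-90fe-e957e6410aa5")) {
    uid
    name
  }
}

With the UUID indexed, this hits the index directly instead of first collecting every node that has the user predicate and then filtering them.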

