I am new to the Graph database world but have experience in traditional relational databases. I am evaluating if Dgraph would be a good choice for one of my projects.
One of my key requirements is the ability to run multiple small, standalone Dgraph instances, each on its own low-resource machine (something like 2 CPUs and 1 GiB of memory). The amount of data stored on a single instance will be quite small (let's say <10k documents, ~20 types, and <100 predicates in total). The frequency of queries and mutations will be very low (probably fewer than 5 simple queries per second). High availability and high performance are not requirements in this context.
When reading the docs, I got the impression that Dgraph was not built for this "hobby size" scenario, and a few questions came up:
- The docs state that 16 CPUs and 32 GiB of memory per machine are a common configuration. Is it possible to run Dgraph (both an Alpha and a Zero node) on one tiny machine with, say, 2 CPUs and 1 GiB of memory?
- I read that the standalone Docker image is not recommended for production. What is the reason for this, and how does it differ from a single-node setup?
- Would you also recommend Dgraph for low-resource environments, or should I stick with a traditional RDBMS?
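For context, this is roughly the single-node setup I had in mind: one Zero and one Alpha as separate containers on the same small machine, with Docker memory limits to keep the pair inside ~1 GiB. The memory split between the two containers is my own guess, not something from the docs:

```shell
# Hypothetical single-node setup on one small machine (sketch, not a
# production recipe). Memory limits below are my assumptions.
docker network create dgraph-net

# Zero node (cluster coordinator) - given the smaller share of memory
docker run -d --name zero --network dgraph-net --memory=256m \
  -p 5080:5080 -p 6080:6080 \
  dgraph/dgraph:latest dgraph zero --my=zero:5080

# Alpha node (stores data and serves queries) - given the larger share
docker run -d --name alpha --network dgraph-net --memory=768m \
  -p 8080:8080 -p 9080:9080 \
  dgraph/dgraph:latest dgraph alpha --my=alpha:7080 --zero=zero:5080
```

Would a setup like this be expected to work at all at the data volumes described above, or would the processes be OOM-killed under such limits?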