Performance Questions

So right now I have a single, fairly high-end server machine (i.e., 8 cores, 64 GB of RAM). When I run my data import, which is just a simple Python program, the import gets slower and slower over time. I suspect that because I'm interleaving queries and mutations, Dgraph is continually rebuilding its indexes. I have on the order of 3 million nodes in the DB, and my dgraph directory is 19 GB at the moment.
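For context, my import loop is roughly this shape (a simplified sketch, not my exact code; the predicate names like `xid` and `name` are placeholders):

```python
import json
import pydgraph

# Connect to the local alpha (default gRPC port for the standalone image).
stub = pydgraph.DgraphClientStub("localhost:9080")
client = pydgraph.DgraphClient(stub)

def import_record(record):
    """One transaction per record: query for an existing node, then mutate."""
    txn = client.txn()
    try:
        # Look up the node by an indexed predicate ("xid" is a placeholder
        # and assumes an eq-capable index on it).
        query = """query find($xid: string) {
            node(func: eq(xid, $xid)) { uid }
        }"""
        resp = txn.query(query, variables={"$xid": record["xid"]})
        nodes = json.loads(resp.json)["node"]

        # Reuse the uid if the node exists, otherwise create a blank node.
        uid = nodes[0]["uid"] if nodes else "_:new"
        txn.mutate(set_obj={"uid": uid,
                            "xid": record["xid"],
                            "name": record["name"]})
        txn.commit()
    finally:
        txn.discard()  # no-op if the commit succeeded
```

So every record is a full round trip: query, mutate, commit. My guess is the per-mutation index maintenance is what's slowing things down as the graph grows.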

I am currently using the standalone, "not for production" Docker image. I assume this container is free to use as many CPUs and as much RAM as it wants, since I haven't set any limits?
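For reference, I'm starting it with essentially the quick-start command from the docs. I know Docker's standard `--cpus` / `--memory` flags could cap it if that's what's recommended, e.g. (the limit values here are just illustrative):

```
docker run -it -p 8080:8080 -p 9080:9080 -p 8000:8000 \
  --cpus="6" --memory="48g" \
  -v ~/dgraph:/dgraph \
  dgraph/standalone:latest
```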

What's the RECOMMENDED way to set something like this up on a single machine? I assume there is a better way! (Below is a screenshot of my indexer taking many seconds per batch before the import can continue.)

Thanks!