Setting up Dgraph in a test environment

I’m getting ready to start integration and E2E tests for my application. As part of that, I’m trying to figure out:

  • The minimum number of ports I need to expose in order to run a small number of queries and mutations
  • How to make sure the ports are set up correctly so Zero and Server can talk to each other

I’ve had a look at the ports table, and my understanding is that if the tests use a grpc client, the only port I need exposed is 9080. Is this correct?

Also, please bear with me as I’m not a docker expert.

When comparing the setup in the Tour with the setup from the docs, I see that the former uses docker exec and the latter docker run.

It is my understanding that using docker run will create a new container to run the server. In that case the server will need to communicate with Zero through a port exposed by the container running Zero, is this correct? Does this mean that if Zero is listening on port 5080 inside its container, this port must be exposed in order for the server to talk to Zero? And the port chosen to publish the internal 5080 port must then be supplied to the server via the --zero flag, correct?
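To check that I’m picturing the docker run route correctly, is something like the following the right shape? (The container names, the user-defined network, and the --my addresses are just my guesses; with a shared Docker network the containers reach each other by name, so 5080 wouldn’t even have to be published to the host.)

docker network create dgraph-net

# Zero in its own container; 5080 only has to be reachable from the Server container
docker run -d --name zero --network dgraph-net dgraph/dgraph \
  dgraph zero --my=zero:5080

# Server in a second container; --zero tells it where Zero lives
docker run -d --name server --network dgraph-net -p 9080:9080 dgraph/dgraph \
  dgraph server --lru_mb 2048 --zero zero:5080 --my=server:7080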

However, if we can get away with keeping things small and we don’t need a separate container for the server, we can boot it up inside the same container as Zero with docker exec (as is done in the tutorial). In this case, do we still need to expose port 5080?

In the following command (taken from the tutorial), does localhost:5080 refer to the machine where Docker is running, or to an address internal to the container?

docker exec -it dgraph dgraph server --lru_mb 2048 --zero localhost:5080

Sorry for the long post, just trying to straighten out several ideas.

What’s on the Tour is just for learning; for more specific recommendations with Docker, follow what’s in Dgraph’s documentation or what the Docker community recommends.

No, Dgraph needs both Server and Zero, so you have to expose 5080, 9080, and 7080, plus the HTTP ports if you’re going to use Ratel or an HTTP client.

Probably not, but that setup isn’t recommended. The Docker standard is one service/binary per container; leaving everything in a “fat” container is discouraged by the Docker community. And it doesn’t really help: you don’t lose anything by following the recommendations.

It binds locally: with docker exec both processes run in the same container, so localhost:5080 there is the container’s own address, not the host machine’s.

Follow the docker-compose.yml example; that one uses the recommended approach.
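The shape of that file is roughly as below. This is only a sketch, not the documented file itself: the service names, the lru_mb value, and which ports get published are placeholders you should adapt from the real docker-compose.yml in the docs. Inside the compose network the services reach each other by name, so only the client-facing ports need to be published on the host.

version: "3.2"
services:
  zero:
    image: dgraph/dgraph
    command: dgraph zero --my=zero:5080
  server:
    image: dgraph/dgraph
    ports:
      - "9080:9080"   # grpc, for external clients
      - "8080:8080"   # HTTP, only needed if you use Ratel or an HTTP client
    command: dgraph server --my=server:7080 --lru_mb=2048 --zero=zero:5080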


Just as an FYI for anyone else looking into running a minimal setup (for testing purposes, for example): If you’re

  • running a single container,
  • and within that container you run both Zero and Server on their default ports,
  • and your client uses grpc to communicate with Server,

then the only port you need to expose/publish is 9080.

The particular setup that works for me:

Dgraph Docker setup for testing

# Zero runs in a container named dgraph-tests; only the grpc port 9080 is published for the client
docker run -p 9080:9080 --name dgraph-tests dgraph/dgraph dgraph zero
# Server starts inside the same container, pointed at Zero on the container's own localhost
docker exec dgraph-tests dgraph server --lru_mb 4096 --zero localhost:5080

^^Forgive the formatting; hljs thinks it’s SQL code.

Client setup, dgraph-js

import * as dgraph from 'dgraph-js';
import grpc from 'grpc';

// The stub talks to Server over the single published port (9080) without TLS,
// which is fine for a local test container.
const clientStub = new dgraph.DgraphClientStub(
  'localhost:9080',
  grpc.credentials.createInsecure()
);

const dgraphClient = new dgraph.DgraphClient(clientStub);
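And, continuing with the client above, this is roughly what a test then does to commit a mutation and read it back. It’s only a sketch: the name predicate, the schema line, and the query are placeholder examples, not anything from a real schema.

// Sketch of a test-style round trip using the dgraphClient defined above.
async function smokeTest(): Promise<void> {
  // Minimal schema so the query below can filter on `name`.
  const op = new dgraph.Operation();
  op.setSchema('name: string @index(exact) .');
  await dgraphClient.alter(op);

  // Commit a small mutation in its own transaction.
  const txn = dgraphClient.newTxn();
  try {
    const mu = new dgraph.Mutation();
    mu.setSetJson({ name: 'Alice' });
    await txn.mutate(mu);
    await txn.commit();
  } finally {
    await txn.discard();
  }

  // Query what was just written (read-only transactions don't need a commit).
  const res = await dgraphClient
    .newTxn()
    .query('{ people(func: eq(name, "Alice")) { uid name } }');
  console.log(res.getJson());
}

smokeTest().catch(console.error);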