I was able to get Dgraph up and loaded w/ Ratel into the Console.
I actually didn’t have to do anything with endpoints/proxies, although we do have them up. I didn’t configure anything additional on my end. We use Nginx
I just removed the AWS deploy constraints from y’all’s YAML file and added in the Ratel YAML you sent
connected via :8000
I couldn’t get in with that groot/password login… and then once I changed the Alpha URL to port 8080 inside the client, it connected me through
@MichelDiz like I said, I’m kinda getting thrown into the fire, idk much about this product. I was just asked to figure out the deployment and get it running
If you need login w/o the Enterprise features, then you’ll need to use a reverse proxy in front of the Dgraph Alpha service endpoint, or as part of that endpoint.
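Since you mentioned you already run Nginx, here’s a minimal sketch of basic-auth in front of Alpha. The upstream address and the htpasswd path are assumptions — adjust for your setup:

```nginx
# Sketch: HTTP basic auth in front of Dgraph Alpha's HTTP endpoint.
# Assumes Alpha is reachable at 127.0.0.1:8080; the htpasswd path is
# hypothetical (create it with e.g. `htpasswd -c /etc/nginx/.htpasswd myuser`).
server {
    listen 80;

    location / {
        auth_basic           "Dgraph";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:8080;
        proxy_set_header     Host $host;
    }
}
```

Note this only covers the HTTP endpoint (8080); the gRPC endpoint (9080) would need separate handling.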
Another complementary solution would be to use mutual TLS, where the client submits a client certificate that allows access. If you do this directly on the Dgraph Alpha nodes, the docs are at: https://dgraph.io/docs/deploy/tls-configuration/. This will be a high level of complexity if you have never set this up before.
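As a rough sketch of what that looks like with Dgraph’s built-in cert tooling — flag names vary between Dgraph versions, so treat this as an outline and check the linked docs for your version:

```shell
# Sketch only -- flag spellings differ across Dgraph releases; see the TLS docs.
# 1. Create a CA plus a node certificate for the Alpha host:
dgraph cert -n localhost        # writes CA + node cert/key into ./tls

# 2. Create a client certificate per client that should be allowed in:
dgraph cert -c some-client      # client cert/key name is an example

# 3. Start Alpha requiring verified client certificates (v20.x-style flags):
dgraph alpha --tls_dir ./tls --tls_client_auth REQUIREANDVERIFY
```

Clients then have to present their cert/key pair on every connection, which is what gives you access control without Enterprise ACLs.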
In the Kubernetes space, there are a few service meshes that can do the mutual TLS automatically, but for Docker Swarm I am not sure — maybe HashiCorp’s Consul (https://www.consul.io/).
Adding some more: another reverse proxy that is popular is Fabio (https://fabiolb.net/), and of course nginx. For OAuth2, there’s OAuth2 Proxy.
I mention the reverse-proxy/load-balancer part because of this: if you have 3 Dgraph Alpha nodes, for example, you’ll need a reverse proxy + load balancer to send traffic to one of the three nodes, so that traffic is distributed and you have high availability.
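In nginx that could look something like the sketch below — the hostnames are assumptions for illustration:

```nginx
# Sketch: spread Alpha HTTP traffic across three nodes.
# alpha1/alpha2/alpha3 are hypothetical hostnames for your three Alphas.
upstream dgraph_alphas {
    server alpha1:8080;
    server alpha2:8080;
    server alpha3:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://dgraph_alphas;
    }
}
```

If clients also use the gRPC endpoint (9080), that needs its own upstream using nginx’s `grpc_pass` rather than `proxy_pass`.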
The caution is that if you use Enterprise features like ACLs for user accounts, or binary backups, those features will expire at the end of the trial period. You won’t be able to use such features after that.
Side question: is your organization open to using Kubernetes? As you can see, distributed systems (clusters) are complex in general, especially stateful (data) distributed systems. Kubernetes adds a layer of complexity in and of itself, but it has a lot of features to make this process smoother, and has a rich set of open source third-party add-ons (reverse proxies called ingresses, cloud integrations, certificates, service meshes, DNS, etc.). Also, Dgraph has a paid service, Dgraph Cloud, that can manage this if it becomes too complex (and the Enterprise features don’t expire)
I believe we are eventually going to be moving from Docker to K8s. I have deployed OpenStack and installed a MicroK8s cluster for testing and breaking; I have ingress and the dashboard set up, and then it will move to the devs to build apps/break things for me to look into
We are looking at something official from Ubuntu/Canonical potentially; it just depends on whether we will be moving to that private cloud hosting for some “political” clients we have
That makes sense. I tried MicroK8s, but personally was not comfortable with it, as they try to handle all the automation for you, but then when things go wrong, I find myself googling snap package issues and/or Docker containerd vs. K8s containerd issues… Some options I have considered or tinkered with are listed below with my opinion on them — disclaimer: I only have hands-on experience with RKE.
Platform9 - looked good, but only supports older versions of Ubuntu; Ubuntu 20.04 is not yet supported.
RKE - doesn’t use kubeadm, but its own set of automation; very easy to set up once the requirements are met
The advantage of any automation that leverages kubeadm is that tools that test the compliance or security of your K8s implementation typically work with kubeadm setups out of the box. For other platforms, there needs to be customization — for example, when using a tool like the Sonobuoy test harness to run the Aqua CIS benchmark tests or the K8s E2E tests.
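For reference, a conformance run with Sonobuoy looks roughly like this (assumes `sonobuoy` is installed and `kubectl` is already pointed at the cluster):

```shell
# Sketch of a K8s E2E conformance run with Sonobuoy.
sonobuoy run --mode=certified-conformance   # launch the conformance test suite
sonobuoy status                             # poll until the run completes
results=$(sonobuoy retrieve)                # download the results tarball
sonobuoy results "$results"                 # summarize pass/fail
```

On a non-kubeadm distro, some checks (e.g. CIS benchmarks that inspect kubeadm file paths) may need their plugin configuration adjusted before the results are meaningful.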