Dgraph Ratel error in EKS

I have installed Dgraph on our development EKS cluster using this manifest as recommended for the setup -

The only change I made was to expose each service as a separate internal ELB via Kustomize.
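
Roughly, the overlay looks something like this (a minimal sketch; the base path and patch file names below are placeholders, not the exact setup):

# kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base                        # upstream Dgraph manifest
patchesStrategicMerge:
  - dgraph-ratel-public.yaml    # adds the LB annotations shown below
  - dgraph-alpha-public.yaml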

For Ratel -

---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-ratel-public
  annotations:
    external-dns.alpha.kubernetes.io/hostname: dev-eks-dgraph-ratel-mt-c01.domain.example.com
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <cert-arn>
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=dev,EKS=true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - <dev-private-subnet>     # dev
  ports:
    - port: 8000
      targetPort: 8000
      name: ratel-default
    - port: 80
      targetPort: 8000
      name: ratel-http
    - port: 443
      targetPort: 8000
      name: ratel-https

The problem is that the Ratel UI fails to connect to the Dgraph cluster.
I have tried using the alpha endpoint as well as the ratel endpoint. Where am I going wrong?

Have you checked the URL? I think you are using an internal URL that only resolves inside the Kubernetes context. You should use localhost or the external address provided by your cloud provider.

Ping @joaquin, do you have any idea?

It can take some time for the ELB to be created, and more time for the public DNS record to propagate on Route53.

Does the ELB come up, and can you access Ratel through the ELB DNS name? What happens if you curl both the ELB address and the Route53 DNS name, over both HTTP and HTTPS? For example*:

# examples:
ELB_NAME=12345678-default-dgphratel-1234-123456789.us-east-2.elb.amazonaws.com
DNS_NAME=dev-eks-dgraph-ratel-mt-c01.domain.example.com

# http
curl -i $ELB_NAME 
curl -i $DNS_NAME 

# https
curl -i https://$ELB_NAME 
curl -i https://$DNS_NAME 

I have never tried using loadBalancerSourceRanges or mapping several ports (e.g. 8000, 80, and 443) to the same backend port.

I would typically use something like this (or alternatively an Ingress)*:

---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-ratel-public
  labels:
    app: dgraph-ratel
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    external-dns.alpha.kubernetes.io/hostname: ratel.dev.mycompany.com
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 8000
  selector:
    app: dgraph-ratel
* fictitious ACM ARN, ELB, and DNS names
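
If you go the Ingress route instead, a minimal sketch (assuming an ingress controller is already installed and that external-dns watches Ingress resources; names and hostnames are made up):

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-ratel
  annotations:
    external-dns.alpha.kubernetes.io/hostname: ratel.dev.mycompany.com
spec:
  # ingressClassName and TLS termination depend on the controller you run
  rules:
    - host: ratel.dev.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dgraph-ratel-public
                port:
                  number: 8000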

Lastly, this assumes that external-dns is using public Route53 DNS with the required permissions, i.e. an IAM role on the EKS worker nodes or a service account with a trust relationship to an IAM role (IRSA).
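
With the service-account route (IRSA), that is just the role annotation on the external-dns service account (sketch; the role ARN and namespace are fictitious, and the role itself needs the usual Route53 permissions such as route53:ChangeResourceRecordSets and route53:ListResourceRecordSets):

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns
  annotations:
    # IRSA: IAM role with Route53 record permissions (fictitious ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/external-dns-route53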

All this time I was thinking that Ratel was not coming up, but it is actually the Alpha. Did you expose your Alpha service resource in a similar way?

Ratel is a pure React client, so it accesses Alpha from your browser through an exposed endpoint.

Assuming the Alpha is also exposed with targetPort 8080 mapped to either port 80 or 443, you would access it using (fictitious URLs, for example):

http://dev-eks-dgraph-alpha-mt-c01.domain.example.com
  or 
https://dev-eks-dgraph-alpha-mt-c01.domain.example.com
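
Exposing Alpha would look much like the Ratel service above, for example (sketch only; the ACM ARN and hostname are fictitious, and the selector labels are an assumption that should match whatever labels your Alpha pods actually carry):

---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-alpha-public
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-2:123456789012:certificate/12345678-1234-1234-1234-123456789012
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    external-dns.alpha.kubernetes.io/hostname: dev-eks-dgraph-alpha-mt-c01.domain.example.com
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 8080        # Alpha's HTTP port
  selector:
    app: dgraph-alpha         # assumption: use your actual Alpha pod labels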

If you access Ratel over an HTTPS URL, then the Alpha endpoint has to be HTTPS as well, because the web browser may block the requests due to mixed-content security.

Also, if the Dgraph cluster holds data that you don't want exposed to the public Internet once Alpha is exposed, consider using an internal LB that is then accessed from a bastion/jump host or VPN, or at least add a security group to restrict access to an IP allow list.
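
A minimal sketch of the internal-LB variant (the CIDR below is a placeholder for your VPC/VPN range, and the selector is again an assumption):

---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-alpha-internal
  annotations:
    # recent in-tree AWS provider versions accept "true"; older docs used 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.0.0.0/8              # placeholder: your VPC/VPN CIDR allow list
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: dgraph-alpha         # assumption: use your actual Alpha pod labels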