I have set up a multi-cluster Dgraph deployment, but I am not able to access gRPC.
The Ratel UI is accessible.
K8s is out of Dgraph's scope, but please share details about it. We can't help you without understanding what you did and what you have.
# This highly available config creates 3 Dgraph Zeros, 3 Dgraph
# Alphas with 3 replicas, and 1 Ratel UI client. The Dgraph cluster
# will still be available to service requests even when one Zero
# and/or one Alpha are down.
#
# There are 3 services that can be used to expose Dgraph outside the cluster as needed:
#   dgraph-zero-public  - To load data using the Live & Bulk Loaders
#   dgraph-alpha-public - To connect clients and for HTTP APIs
#   dgraph-ratel-public - For the Dgraph UI
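#
# A rough sketch of reaching these (assuming kubectl access to the
# namespace; names come from this manifest): a service can be forwarded
# to a workstation with
#
#   kubectl port-forward service/dgraph-alpha-public 9080:9080
#
# or exposed outside the cluster by changing .spec.type from ClusterIP
# to LoadBalancer, or through an Ingress/Route. Note that port 9080
# carries gRPC, which needs HTTP/2 end-to-end; a plain HTTP Ingress or
# Route will not carry it.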
apiVersion: v1
kind: Service
metadata:
  name: dgraph-zero-public
  labels:
    app: dgraph-zero
    monitor: zero-dgraph-io
spec:
  type: ClusterIP
  ports:
    - port: 5080
      targetPort: 5080
      name: grpc-zero
    - port: 6080
      targetPort: 6080
      name: http-zero
  selector:
    app: dgraph-zero
---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-alpha-public
  labels:
    app: dgraph-alpha
    monitor: alpha-dgraph-io
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
      name: http-alpha
    - port: 9080
      targetPort: 9080
      name: grpc-alpha
  selector:
    app: dgraph-alpha
---
apiVersion: v1
kind: Service
metadata:
  name: dgraph-ratel-public
  labels:
    app: dgraph-ratel
spec:
  type: ClusterIP
  ports:
    - port: 8000
      targetPort: 8000
      name: http-ratel
  selector:
    app: dgraph-ratel
---
# This is a headless service which is necessary for discovery for a dgraph-zero StatefulSet.
# https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset
apiVersion: v1
kind: Service
metadata:
  name: dgraph-zero
  labels:
    app: dgraph-zero
spec:
  ports:
    - port: 5080
      targetPort: 5080
      name: grpc-zero
  clusterIP: None
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other Dgraph Zero pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  selector:
    app: dgraph-zero
---
# This is a headless service which is necessary for discovery for a dgraph-alpha StatefulSet.
# https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#creating-a-statefulset
apiVersion: v1
kind: Service
metadata:
  name: dgraph-alpha
  labels:
    app: dgraph-alpha
spec:
  ports:
    - port: 7080
      targetPort: 7080
      name: grpc-alpha-int
  clusterIP: None
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other Dgraph alpha pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  selector:
    app: dgraph-alpha
---
# This StatefulSet runs 3 Dgraph Zeros.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph-zero
spec:
  serviceName: "dgraph-zero"
  replicas: 3
  selector:
    matchLabels:
      app: dgraph-zero
  template:
    metadata:
      labels:
        app: dgraph-zero
    spec:
      securityContext:
        runAsUser: 0
      serviceAccountName: ledger2-sa
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - dgraph-zero
                topologyKey: kubernetes.io/hostname
      containers:
        - name: zero
          image: default-route-openshift-image-registry.apps.okdtnd01n1.india.airtel.itm/ledger2/dgraph:1
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5080
              name: grpc-zero
            - containerPort: 6080
              name: http-zero
          volumeMounts:
            - name: datadir
              mountPath: /data/dgraph
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          command:
            - bash
            - "-c"
            - |
              set -ex
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              idx=$(($ordinal + 1))
              if [[ $ordinal -eq 0 ]]; then
                exec dgraph zero --my=$(hostname -f):5080 --raft="idx=$idx" --replicas 3
              else
                exec dgraph zero --my=$(hostname -f):5080 --peer dgraph-zero-0.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080 --raft="idx=$idx" --replicas 3
              fi
          livenessProbe:
            httpGet:
              path: /health
              port: 6080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: 6080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 6
            successThreshold: 1
      terminationGracePeriodSeconds: 60
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: datadir
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 2Gi
---
# This StatefulSet runs 3 replicas of Dgraph Alpha.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dgraph-alpha
spec:
  serviceName: "dgraph-alpha"
  replicas: 3
  selector:
    matchLabels:
      app: dgraph-alpha
  template:
    metadata:
      labels:
        app: dgraph-alpha
    spec:
      securityContext:
        runAsUser: 0
      serviceAccountName: ledger2-sa
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - dgraph-alpha
                topologyKey: kubernetes.io/hostname
      # Initializing the Alphas:
      #
      # You may want to initialize the Alphas with data before starting, e.g.
      # with data from the Dgraph Bulk Loader: https://dgraph.io/docs/deploy/#bulk-loader.
      # You can accomplish this by uncommenting the initContainers config below.
      # This starts a container with the same /dgraph volume used by Alpha and
      # runs before Alpha starts.
      #
      # You can copy your local p directory to the pod's /dgraph/p directory
      # with this command:
      #
      #    kubectl cp path/to/p dgraph-alpha-0:/dgraph/ -c init-alpha
      #    (repeat for each alpha pod)
      #
      # When you're finished initializing each Alpha data directory, you can signal
      # it to terminate successfully by creating a /dgraph/doneinit file:
      #
      #    kubectl exec dgraph-alpha-0 -c init-alpha touch /dgraph/doneinit
      #
      # Note that pod restarts cause re-execution of Init Containers. Since
      # /dgraph is persisted across pod restarts, the Init Container will exit
      # automatically when /dgraph/doneinit is present and proceed with starting
      # the Alpha process.
      #
      # Tip: StatefulSet pods can start in parallel by configuring
      # .spec.podManagementPolicy to Parallel:
      #
      # https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees
      #
      # initContainers:
      #   - name: init-alpha
      #     image: default-route-openshift-image-registry.apps.okdtnd01n1.india.airtel.itm/ledger2/dgraph:1
      #     command:
      #       - bash
      #       - "-c"
      #       - |
      #         trap "exit" SIGINT SIGTERM
      #         echo "Write to /dgraph/doneinit when ready."
      #         until [ -f /dgraph/doneinit ]; do sleep 2; done
      #     volumeMounts:
      #       - name: datadir
      #         mountPath: /data/dgraph
      containers:
        - name: alpha
          image: default-route-openshift-image-registry.apps.okdtnd01n1.india.airtel.itm/ledger2/dgraph:1
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 7080
              name: grpc-alpha-int
            - containerPort: 8080
              name: http-alpha
            - containerPort: 9080
              name: grpc-alpha
          volumeMounts:
            - name: datadir
              mountPath: /data/dgraph
          env:
            # This should be the same namespace as the dgraph-zero
            # StatefulSet to resolve a Dgraph Zero's DNS name for
            # Alpha's --zero flag.
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # dgraph versions earlier than v1.2.3 and v20.03.0 can only support one zero:
          # `dgraph alpha --zero dgraph-zero-0.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080`
          # dgraph-alpha versions greater than or equal to v1.2.3 or v20.03.1 can support
          # a comma-separated list of zeros. The value below supports 3 zeros
          # (set according to the number of replicas).
          command:
            - bash
            - "-c"
            - |
              set -ex
              dgraph alpha --my=$(hostname -f):7080 --zero dgraph-zero-0.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080,dgraph-zero-1.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080,dgraph-zero-2.dgraph-zero.${POD_NAMESPACE}.svc.cluster.local:5080
          livenessProbe:
            httpGet:
              path: /health?live=1
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 6
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 6
            successThreshold: 1
      terminationGracePeriodSeconds: 600
      volumes:
        - name: datadir
          persistentVolumeClaim:
            claimName: datadir
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dgraph-ratel
  labels:
    app: dgraph-ratel
spec:
  selector:
    matchLabels:
      app: dgraph-ratel
  template:
    metadata:
      labels:
        app: dgraph-ratel
    spec:
      containers:
        - name: ratel
          image: default-route-openshift-image-registry.apps.okdtnd01n1.india.airtel.itm/ledger2/dgraph_ratel:1
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
          ports:
            - containerPort: 8000
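For reference, a minimal sketch of applying a manifest like this and confirming that everything comes up, assuming the file is saved as dgraph-ha.yaml and kubectl points at the right cluster and namespace:

  kubectl apply -f dgraph-ha.yaml
  kubectl rollout status statefulset/dgraph-zero
  kubectl rollout status statefulset/dgraph-alpha
  kubectl get pods,services

All six StatefulSet pods should reach Ready, and the three *-public services should list the expected ports.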
I have added my .yml file above. Can you check it and let me know what can be done?
This seems to be our example. Have you modified it?
What about your stack? Is it Docker's K8s? Rancher Labs' k3s?
Minikube? AWS EKS? GKE?
What are the logs?
What is your experience with k8s? Why did you choose k8s over Docker?
We need a multi-cluster setup, and as per our company policy we are using Kubernetes.
We can access all the services, but we can't access port 9080 in the multi-cluster setup.
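One way to narrow that down (a sketch, assuming kubectl access; the service name comes from the manifest above) is to port-forward the Alpha service and probe the gRPC port locally:

  # Forward the alpha service's gRPC port to this machine.
  kubectl port-forward service/dgraph-alpha-public 9080:9080 &

  # Check that the port accepts TCP connections.
  nc -vz localhost 9080

If 9080 answers through the port-forward but not through your external endpoint, the Alphas themselves are fine and the problem is in whatever exposes the service outside the cluster: gRPC needs HTTP/2 end-to-end, so a plain HTTP Ingress or OpenShift Route can serve 8080 and 8000 while failing for 9080.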
Okay. And what about the other questions? Without context I can't help.