I am currently using a microk8s setup locally to work with Dgraph. Every time I restart the cluster, Dgraph magically forgets the schema that was previously uploaded. Do I have to set some setting for Dgraph to remember the schema between restarts?
You should have a fixed volume for it. Claim storage and use it in your deployment, or bind your path with the host.
That is the Kubernetes config used to deploy the StatefulSet. It has a PV attached. Shouldn’t that be enough?
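One quick way to check that the StatefulSet actually declares persistent storage and that the resulting claim is bound is to inspect it directly. A sketch, assuming the StatefulSet is named dgraph-alpha (substitute your own resource names):

```shell
# List the volume claim templates declared by the StatefulSet
# ("dgraph-alpha" is an assumed name; adjust to your deployment).
kubectl get statefulset dgraph-alpha \
  -o jsonpath='{.spec.volumeClaimTemplates[*].metadata.name}'

# Confirm the resulting PVCs show STATUS "Bound" rather than "Pending".
kubectl get pvc
```

If no volume claim templates are listed, the pod is writing to ephemeral storage and will lose its data on restart.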
It should, let me ping @joaquin .
Hi @challapradyumna,
Sometimes with certain storage classes (like hostpath), if the system is deployed to a different worker node, it may not have access to the same file system.
As a sanity check: does the microk8s cluster have a single worker node or multiple worker nodes? What is the default storage class:
kubectl get storageclass
kubectl describe sc $(kubectl get sc | awk '/default/{print $1}')
One way we can verify is to check the contents before and after the restart. For example, before the pod is restarted, kubectl exec into an Alpha pod and check the disk usage, then run the same command again afterward to compare, e.g. du -h /dgraph.
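The before/after comparison above can be sketched as a short script. The pod name dgraph-alpha-0 and the /dgraph path are assumptions; adjust to your deployment:

```shell
# Snapshot disk usage before the restart
# ("dgraph-alpha-0" and /dgraph are assumed names/paths).
kubectl exec dgraph-alpha-0 -- du -sh /dgraph > du-before.txt

# ... restart the cluster, then snapshot again:
kubectl exec dgraph-alpha-0 -- du -sh /dgraph > du-after.txt

# If the volume persisted, the usage should be roughly the same;
# a near-empty "after" snapshot means the data directory was wiped.
diff du-before.txt du-after.txt
```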
If hostpath is used, you can inspect where the data is being stored with kubectl describe -n kube-system pod/hostpath-provisioner-xxxxxxxxxx-xxxxx (where xxxxxxxxxx-xxxxx is the random hash Kubernetes assigns), and also inspect the content outside of Kubernetes.
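To avoid hunting for the random pod hash, you can describe the provisioner's Deployment instead and then look at the backing directory on the node. The /var/snap/microk8s/common/default-storage path is the usual microk8s default, but that is an assumption; verify it on your system:

```shell
# Describe the hostpath provisioner via its Deployment (no pod hash needed)
# and look for the path it provisions volumes under.
kubectl describe -n kube-system deploy/hostpath-provisioner | grep -i path

# Inspect the volume contents directly on the node, outside Kubernetes.
# (Assumed default microk8s location; confirm with the command above.)
ls -l /var/snap/microk8s/common/default-storage
```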
You can also inspect the pod and PVC before and after the pod restart to make sure the same claim is used, e.g.
NS=default # change if you are not using default namespace
kubectl -n $NS describe pod dgraph-0
kubectl -n $NS get pvc
I’m using microk8s as a local dev environment, hence only a single node. The default storage class is microk8s.io/hostpath.
I did some of the sanity checks mentioned above to see if the volume was being changed in any way, but no dice.
I was checking the startup logs and there were a lot of errors; I’m not sure which ones are important. Here is a pastebin if you can make more sense of it: DGraph Startup Logs - Pastebin.com
The last line definitely says it’s not able to parse the existing GraphQL schema, but when I upload the same schema file it gets accepted without any issue. The cascade effect of this is that even after uploading the schema, Dgraph has to be restarted once again for auth to work, since the accepted headers are not updated until restart (at least that’s my understanding).
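One way to see exactly what schema Dgraph thinks it has (before and after the restart) is to ask the /admin GraphQL endpoint for it. A sketch, assuming an Alpha is reachable on localhost:8080 (e.g. via kubectl port-forward):

```shell
# Fetch the currently stored GraphQL schema from the /admin endpoint.
# (localhost:8080 is an assumption; port-forward or adjust as needed.)
curl -s -X POST localhost:8080/admin \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ getGQLSchema { schema } }"}'
```

Comparing this output before and after the restart would show whether the schema is lost from storage or merely failing to parse on startup.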