While figuring out how to set the access_token and whitelist configuration options in the Helm chart, I noticed that when I deployed the chart with updated values, the alpha ConfigMap resource would be updated, but none of the changes would propagate to the alpha nodes.
To get the alpha nodes to read the updated configuration, I had to blow away the nodes and rebuild the cluster.
Is there a better, recommended way of getting the alpha nodes to re-initialize with an updated config in a cluster?
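For reference, the workflow was roughly the sketch below; the release name `mydgraph`, the values file name, and the ConfigMap name are placeholders based on the chart's usual naming, not taken from the actual setup:

```sh
# Upgrade the Dgraph chart with updated values.
helm repo add dgraph https://charts.dgraph.io
helm upgrade mydgraph dgraph/dgraph -f my-values.yaml

# The alpha ConfigMap reflects the new values...
kubectl get configmap mydgraph-dgraph-alpha -o yaml
# ...but the running alpha pods keep serving with the old configuration.
```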
This may seem awkward, but it follows from the immutable infrastructure pattern of Kubernetes: a pod is not repaired or updated in place, but rather destroyed and recreated. I am not sure how to get a ConfigMap change to cascade so that the StatefulSet updates the pods it manages, but I am aware of a few ways to have the alpha nodes pick up the new configuration (all three are sketched after this list):
1. Force a rolling update of the StatefulSet. One way to do this is to change something in the pod template, e.g. add a new environment variable (or another setting) via kubectl edit sts/<release>-dgraph-alpha. This will delete the pods and recreate them.
2. Delete a single alpha pod, wait for the StatefulSet to restore it, then repeat for each remaining pod.
3. kubectl rollout restart <resource> could also work, but I have not personally tried this option yet.
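As a rough sketch, here is what those three options can look like on the command line. The release name `mydgraph` is a placeholder, the `configmap-revision` annotation is just a hypothetical marker used to force a template change, and `kubectl rollout restart` requires kubectl 1.15 or newer:

```sh
# Option 1: force a rolling update by changing the pod template.
# Patching an annotation is a non-invasive way to do this.
kubectl patch statefulset mydgraph-dgraph-alpha \
  -p '{"spec":{"template":{"metadata":{"annotations":{"configmap-revision":"2"}}}}}'

# Option 2: delete one alpha pod at a time and let the StatefulSet
# recreate it before moving on to the next.
kubectl delete pod mydgraph-dgraph-alpha-0
kubectl rollout status statefulset mydgraph-dgraph-alpha   # wait for recovery
# ...then repeat for mydgraph-dgraph-alpha-1, -2, and so on.

# Option 3: restart the whole StatefulSet in one step (kubectl >= 1.15).
kubectl rollout restart statefulset mydgraph-dgraph-alpha
```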
Besides the ConfigMap, you can also set configuration through environment variables in the container spec of the StatefulSet, e.g. DGRAPH_ALPHA_WHITELIST. Changing these will cause the StatefulSet to roll out the change, deleting the alpha pods and bringing up new pods with the updated env vars.
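A minimal sketch of what that looks like in the StatefulSet's pod template; the container name, image tag, and whitelist value here are assumptions, not taken from the chart:

```yaml
# Excerpt of the alpha StatefulSet pod template (names/values are examples).
spec:
  template:
    spec:
      containers:
        - name: alpha
          image: dgraph/dgraph:latest
          env:
            # Dgraph reads DGRAPH_ALPHA_* env vars as alpha flags, so this
            # maps to --whitelist; editing the value changes the pod template
            # and therefore triggers a rolling update of the alpha pods.
            - name: DGRAPH_ALPHA_WHITELIST
              value: "10.0.0.0/8,172.16.0.0/12"
```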