Error deploying the Helm chart when specifying a values.yaml file

Moved from GitHub charts/11

Posted by encryptblockr:

Error when specifying a values.yaml file:

helm install dgraph -f values.yaml --namespace dgraph dgraph/dgraph

Error: YAML parse error on dgraph/templates/alpha-statefulset.yaml: error converting YAML to JSON: yaml: line 74: did not find expected key

and

Error: YAML parse error on dgraph/templates/alpha-statefulset.yaml: error converting YAML to JSON: yaml: line 81: did not find expected key
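
A first step that may help localize these errors is rendering the chart without installing it; helm template with --debug should print the rendered templates even when they fail YAML validation, so the reported line numbers can be checked against actual output:

helm template dgraph dgraph/dgraph -f values.yaml --namespace dgraph --debug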

encryptblockr commented :

@prashant-shahi @danielmai

Can you provide a sample example of how nodeSelector and tolerations should look in the values.yaml file?

This issue occurs when specifying a values.yaml file.
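
For context, the shape I have been trying mirrors the standard Kubernetes pod spec fields (the label key, taint key, and values below are only placeholders):

nodeSelector:
  disktype: ssd

tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "dgraph"
    effect: "NoSchedule"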

danielmai commented :

Can you share the values.yaml file you used?

encryptblockr commented :

The issue is in this particular file: alpha-statefulset.yaml.

Honestly, you can just run this command with a modified values.yaml file and you will get the same error:

helm install dgraph -f values.yaml --namespace dgraph dgraph/dgraph

It's pretty easy to replicate.

I also cloned the whole repo so that I have all the files locally, and tried this:

helm install dgraph --namespace dgraph .

and got this error:

Error: YAML parse error on dgraph/templates/alpha-statefulset.yaml: error converting YAML to JSON: yaml: line 46: could not find expected ':'
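
As a quicker way to reproduce without attempting an install, helm lint against the local chart directory with the same values file should surface the same class of template error:

helm lint . -f values.yaml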

encryptblockr commented :

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
#   imageRegistry: myRegistryName
#   imagePullSecrets:
#     - myRegistryKeySecretName

image:
  registry: docker.io
  repository: dgraph/dgraph
  tag: v1.2.1
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: Always
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName
  ## Set to true if you would like to see extra information on logs
  ## It turns on BASH and NAMI debugging in minideb
  ## ref:  https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
  ##
  debug: false

zero:
  name: zero
  monitorLabel: zero-dgraph-io
  ## StatefulSet controller supports automated updates. There are two valid update strategies: RollingUpdate and OnDelete
  ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
  ##
  updateStrategy: RollingUpdate

  ## Partition update strategy
  ## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
  ##
  # rollingUpdatePartition:

  ## The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: OrderedReady and Parallel
  ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ##
  podManagementPolicy: OrderedReady

  ## Number of dgraph zero pods
  ##
  replicaCount: 1

  ## Max number of replicas per data shard.
  ## i.e., the max number of Dgraph Alpha instances per group (shard).
  ##
  shardReplicaCount: 1

  ## zero server pod termination grace period
  ##
  terminationGracePeriodSeconds: 60

  ## Hard means that by default pods will only be scheduled if there are enough nodes for them
  ## and that they will never end up on the same node. Setting this to soft will do this "best effort"
  antiAffinity: soft

  # By default this will make sure two pods don't end up on the same node
  # Changing this to a region would allow you to spread pods across regions
  podAntiAffinitytopologyKey: "kubernetes.io/hostname"

  ## This is the node affinity settings as defined in
  ## https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
  nodeAffinity: {}

  ## Kubernetes configuration
  ## For minikube, set this to NodePort, elsewhere use LoadBalancer
  ##
  service:
    type: ClusterIP

  ## dgraph Pod Security Context
  securityContext:
    enabled: false
    fsGroup: 1001
    runAsUser: 1001

  ## dgraph data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  persistence:
    enabled: false
    # storageClass: "-"
    accessModes:
      - ReadWriteOnce
    size: 32Gi

  ## Node labels and tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  ##
  nodeSelector:
    kops.k8s.io/instancegroup: "dgraph"

  tolerations:
    - key: "environment"
      operator: "Equal"
      value: "development"
      effect: "NoSchedule"


  ## Configure resource requests
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 100Mi

  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ##
  livenessProbe:
    enabled: false
    port: 6080
    path: /health
    initialDelaySeconds: 15
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: false
    port: 6080
    path: /state
    initialDelaySeconds: 15
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

alpha:
  name: alpha
  monitorLabel: alpha-dgraph-io
  ## StatefulSet controller supports automated updates. There are two valid update strategies: RollingUpdate and OnDelete
  ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
  ##
  updateStrategy: RollingUpdate

  ## Partition update strategy
  ## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
  ##
  # rollingUpdatePartition:

  ## The StatefulSet controller supports relaxing its ordering guarantees while preserving its uniqueness and identity guarantees. There are two valid pod management policies: OrderedReady and Parallel
  ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ##
  podManagementPolicy: OrderedReady

  ## Number of dgraph nodes
  ##
  replicaCount: 1

  ## alpha server pod termination grace period
  ##
  terminationGracePeriodSeconds: 600

  ## Hard means that by default pods will only be scheduled if there are enough nodes for them
  ## and that they will never end up on the same node. Setting this to soft will do this "best effort"
  antiAffinity: soft

  # By default this will make sure two pods don't end up on the same node
  # Changing this to a region would allow you to spread pods across regions
  podAntiAffinitytopologyKey: "kubernetes.io/hostname"

  ## This is the node affinity settings as defined in
  ## https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
  nodeAffinity: {}

  ## Kubernetes configuration
  ## For minikube, set this to NodePort, elsewhere use LoadBalancer
  ##
  service:
    type: ClusterIP

  ## dgraph Pod Security Context
  securityContext:
    enabled: false
    fsGroup: 1001
    runAsUser: 1001

  ## dgraph data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  persistence:
    enabled: false
    # storageClass: "-"
    accessModes:
      - ReadWriteOnce
    size: 100Gi
    annotations: {}

  ## Node labels and tolerations for pod assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
  ##
  nodeSelector:
    kops.k8s.io/instancegroup: "dgraph"

  tolerations:
    - key: "environment"
      operator: "Equal"
      value: "development"
      effect: "NoSchedule"

  ## Configure resource requests
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    requests:
      memory: 100Mi
  ## Configure value for lru_mb flag
  ## Typically a third of available memory is recommended; the default value is 2048 MB
  lru_mb: 2048

  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ##
  livenessProbe:
    enabled: false
    port: 8080
    path: /health?live=1
    initialDelaySeconds: 15
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: false
    port: 8080
    path: /health
    initialDelaySeconds: 15
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

ratel:
  name: ratel
  ## Number of dgraph nodes
  ##
  replicaCount: 1

  ## Kubernetes configuration
  ## For minikube, set this to NodePort, elsewhere use ClusterIP or LoadBalancer
  ##
  service:
    type: ClusterIP



  ## dgraph Pod Security Context
  securityContext:
    enabled: false
    fsGroup: 1001
    runAsUser: 1001

  ## Configure resource requests
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  ## resources:
  ##   requests:
  ##     memory: 256Mi
  ##     cpu: 250m

  ## Configure extra options for liveness and readiness probes
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
  ##
  livenessProbe:
    enabled: false
    port: 8000
    path: /
    initialDelaySeconds: 30
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

  readinessProbe:
    enabled: false
    port: 8000
    path: /
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 5
    failureThreshold: 6
    successThreshold: 1

All I changed was setting persistence to false, adding nodeSelector and tolerations, and reducing replicaCount.

OK, here is the values.yaml file, but I'm not sure it will work for you as-is, because you may not have the same nodeSelector and tolerations I added. You can simply try your own values.yaml; this issue occurs whenever a values.yaml file is specified, so it is very easy to replicate.

@danielmai

encryptblockr commented :

@prashant-shahi @danielmai

What this tells me is that the Helm chart in the repository does not match the code in the current master branch; you must have built the Helm chart from older code. The current values.yaml file does not work. I spent hours with this thing and tried many things; it's really painful to struggle through something as simple as deploying an app with Helm.

You don't even have to change anything in the values.yaml file; just specifying a values.yaml at all never works.

Simply running with -f values.yaml will not work, meaning the values.yaml in the current master does not work with the dgraph/dgraph Helm chart in the repository https://charts.dgraph.io.
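
One thing that should rule out a chart/values mismatch is pulling the default values from the published chart itself instead of from the master branch:

helm repo update
helm show values dgraph/dgraph > values.yaml
helm install dgraph -f values.yaml --namespace dgraph dgraph/dgraph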

With the supplied values, I am able to get an error that doesn't make sense. I'm doing some analysis to find minimal steps to reproduce.

$ helm install dgraph --namespace dgraph --values encryptblockr.yaml dgraph/dgraph
Error: YAML parse error on dgraph/templates/alpha-statefulset.yaml: error converting YAML to JSON: yaml: line 92: did not find expected key

For minimal steps to reproduce, I have seen that just disabling persistence causes an error:

helm install dgraph --namespace dgraph charts/charts/dgraph/ --set alpha.persistence.enabled=false --debug --dry-run

The error doesn't make sense, as this area is outside of the affinity and node affinity blocks:

Error: YAML parse error on dgraph/templates/alpha-statefulset.yaml: error converting YAML to JSON: yaml: line 85: did not find expected key
helm.go:84: [debug] error converting YAML to JSON: yaml: line 85: did not find expected key
YAML parse error on dgraph/templates/alpha-statefulset.yaml
helm.sh/helm/v3/pkg/releaseutil.(*manifestFile).sort
	/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:146
helm.sh/helm/v3/pkg/releaseutil.SortManifests
	/home/circleci/helm.sh/helm/pkg/releaseutil/manifest_sorter.go:106
helm.sh/helm/v3/pkg/action.(*Configuration).renderResources
	/home/circleci/helm.sh/helm/pkg/action/action.go:159
helm.sh/helm/v3/pkg/action.(*Install).Run
	/home/circleci/helm.sh/helm/pkg/action/install.go:238
main.runInstall
	/home/circleci/helm.sh/helm/cmd/helm/install.go:229
main.newInstallCmd.func1
	/home/circleci/helm.sh/helm/cmd/helm/install.go:117
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main
	/home/circleci/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
	/usr/local/go/src/runtime/proc.go:203
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1357

I suspect this doesn't represent the source template YAML, but some intermediate state. Helm doesn't provide much useful information in this regard.
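
To see exactly what is at the reported line, it may help to dump the rendered template and look at that region directly; helm template with --debug is expected to emit the rendered manifests even when they fail validation, so the range around the error above can be inspected:

helm template dgraph charts/charts/dgraph/ --set alpha.persistence.enabled=false --debug | sed -n '80,90p'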

First: this is an unusual use case, as I have a hard time finding examples of a StatefulSet without persistent volumes. Nevertheless, this logic path is supported in the Helm chart, and it is not working.

In the interim, you can run helm template with values that work to render out a working manifest, and then edit it afterward.
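
For example, assuming a known-good values file saved as working-values.yaml (the file name is illustrative):

helm template dgraph dgraph/dgraph --namespace dgraph -f working-values.yaml > dgraph-manifest.yaml

You can then add the nodeSelector and tolerations blocks to dgraph-manifest.yaml by hand and deploy it with kubectl apply --namespace dgraph -f dgraph-manifest.yaml.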

This is fixed in dgraph-0.0.14: