Wednesday, February 19, 2020

Error: cannot delete Pods with local storage when running kubectl drain in OKE

Symptom

In the Kubernetes world, we often need to upgrade the Kubernetes master nodes and the worker nodes themselves.
In OKE (Oracle Kubernetes Engine), we follow the Master node upgrade guide and the Worker node upgrade guide.

When we run kubectl drain <node name>  --ignore-daemonsets

we get:
error: cannot delete Pods with local storage (use --delete-local-data to override): monitoring/grafana-65b66797b7-d8gzv, monitoring/prometheus-adapter-8bbfdc6db-pqjck
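
For context, a typical worker node replacement follows the flow sketched below (the node name is a placeholder); drain is the step that fails here:

# Mark the node unschedulable so no new pods land on it
kubectl cordon <node name>

# Evict the pods; DaemonSet pods stay in place
kubectl drain <node name> --ignore-daemonsets

# ... terminate the old node and let OKE provision its replacement ...

# If the node is kept rather than replaced, make it schedulable again
kubectl uncordon <node name>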

Solution:

The error occurs because local storage (emptyDir: {}) is attached to the pods, and kubectl drain refuses to evict them since that data would be lost.
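
For example, a pod spec containing a volume like the sketch below (the volume name is hypothetical) triggers this error, because the data lives on the node's local disk and would be lost on eviction:

volumes:
- name: grafana-storage
  emptyDir: {}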

For StatefulSets

Please use volumeClaimTemplates. OKE supports the automatic movement of PVs and PVCs to a new worker node: it detaches the block storage volumes and reattaches them to the new worker node (assuming they are in the same Availability Domain). We don't need to worry about data migration, which is excellent for StatefulSets. An example is:

volumeClaimTemplates:
  - metadata:
      name: prometheus-storage
    spec:
      accessModes:
      - ReadWriteOnce        # an OCI block volume attaches to one node at a time
      resources:
        requests:
          storage: 50Gi
      storageClassName: oci  # dynamic provisioning via the OCI block volume plugin
      volumeMode: Filesystem
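
After the drain, each replica's claim stays Bound and its block volume follows the pod to the new node. A quick check (the monitoring namespace is taken from the error above):

kubectl get pvc -n monitoring   # claims should remain Bound
kubectl get pv                  # same volumes, now attached to the new worker node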

For Deployments

Because the local data of a stateless Deployment is disposable, we can delete it during the drain:

kubectl drain <node name> --ignore-daemonsets --delete-local-data

If the Deployment has a PV and PVC attached then, as with StatefulSets, OKE supports the automatic movement of the PV and PVC to a new worker node: it detaches the block storage and reattaches it to the new worker node (assuming they are in the same Availability Domain). We don't need to worry about data migration.
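
As a rough sketch, such a Deployment mounts a PersistentVolumeClaim instead of an emptyDir (the claim name and size here are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: oci
  resources:
    requests:
      storage: 50Gi

Then, in the Deployment's pod template:

volumes:
- name: grafana-data
  persistentVolumeClaim:
    claimName: grafana-data

Note that an OCI block volume is ReadWriteOnce, so this pattern only works cleanly with a single-replica Deployment.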
