Henry Xie's blog
Thursday, November 10, 2022
Apex Ords Operator for Kubernetes
Tuesday, November 08, 2022
OKE Admission Control Webhook Sample
Requirement:
Solution:
- Please refer to the GitHub repo and run the steps below (a few verification commands follow the list)
- git clone https://github.com/HenryXie1/oke-admission-webhook
- go build -o oke-admission-webhook
- docker build --no-cache -t repo-url/oke-admission-webhook:v1 .
- rm -rf oke-admission-webhook
- docker push repo-url/oke-admission-webhook:v1
- ./deployment/webhook-create-signed-cert.sh --service oke-admission-webhook-svc --namespace kube-system --secret oke-admission-webhook-secret
- kubectl replace --force -f deployment/validatingwebhook.yaml
- kubectl replace --force -f deployment/deployment.yaml
- kubectl replace --force -f deployment/service.yaml
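Once the manifests are in place, it is worth confirming that the webhook pod is running and that the configuration is registered. A minimal check, assuming the deployment uses a label like app=oke-admission-webhook (the label selector is an assumption; the namespace, service and secret names come from the steps above):
# Check the webhook pod is up in kube-system (label selector is an assumption)
kubectl -n kube-system get pods -l app=oke-admission-webhook
# Confirm the service and the TLS secret created by the cert script exist
kubectl -n kube-system get svc oke-admission-webhook-svc
kubectl -n kube-system get secret oke-admission-webhook-secret
# Confirm the validating webhook is registered with the API server
kubectl get validatingwebhookconfigurations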
Demo:
Tuesday, January 05, 2021
Tip: Change default storageclass in Kubernetes
The example below is for OKE (Oracle Kubernetes Engine); the same concept applies to other Kubernetes distributions.
Change the default storage class from oci to oci-bv:
kubectl patch storageclass oci -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'
kubectl patch storageclass oci-bv -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"}}}'
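To confirm the change, list the storage classes; the default one is shown with "(default)" next to its name. Note that newer Kubernetes versions use the GA annotation storageclass.kubernetes.io/is-default-class, so patch that key instead if the beta annotation has no effect.
# The default class is marked with "(default)" in the output
kubectl get storageclass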
Tuesday, December 29, 2020
Tip: Attach volume conflict error in OKE
Symptom:
Pods with block volumes in OKE (Oracle Kubernetes Engine) report an error like this:
Warning FailedAttachVolume 4m26s (x3156 over 4d11h) attachdetach-controller (combined from similar events): AttachVolume.Attach failed for volume "*******54jtgiq" : attach command failed, status: Failure, reason: Failed to attach volume: Service error:Conflict. Volume *****osr6g565tlxs54jtgiq currently attached. http status code: 409. Opc request id: *********D6D97
Solution:
There are quite a few possible causes. One of them is exactly what the error states: the volume is still attached to another host instance, so it cannot be attached again.
To fix it, find the attachment status and the VM instance details via the volume OCID, then manually detach the volume from that VM via the console or the OCI CLI/SDK; after that the error goes away. A rough CLI sketch is below.
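A rough sketch with the OCI CLI, assuming you have the volume OCID from the event above and the compartment OCID at hand (the OCIDs below are placeholders):
# Find the attachment that still holds the volume
oci compute volume-attachment list --compartment-id <compartment-ocid> --volume-id <volume-ocid>
# Detach it using the attachment OCID returned by the previous command
oci compute volume-attachment detach --volume-attachment-id <attachment-ocid>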
Friday, December 25, 2020
Tip: Nginx ingress controller can't start up
Symptom:
We tried to restart a pod of the NGINX ingress controller. After the restart, the pod can't start up.
The controller log looks like this:
status.go:274] updating Ingress ingress-nginx-internal/prometheus-ingress status from [] to [{100.114.90.8 }]
I1226 02:11:14.106423 6 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-nginx-internal", Name:"prometheus-ingress", UID:"e26f55f2-d87d-4efe-a4dd-5ae02768814a", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"46816813", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-nginx-internal/prometheus-ingress
I1226 02:11:49.153889 6 main.go:153] Received SIGTERM, shutting down
I1226 02:11:49.153931 6 nginx.go:390] Shutting down controller queues
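The log shows the controller coming up normally and then receiving SIGTERM about half a minute later, which typically means something outside the container asked Kubernetes to stop it (for example a failed liveness/readiness probe or a deletion/eviction). A generic first step, assuming the controller runs in the ingress-nginx-internal namespace seen in the log (an assumption) and using a placeholder pod name:
# Look at the restart reason, probe failures and events for the controller pod
kubectl -n ingress-nginx-internal describe pod <controller-pod-name>
# Read the last lines of the previous, terminated container instance
kubectl -n ingress-nginx-internal logs <controller-pod-name> --previous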