Thursday, November 10, 2022

Apex Ords Operator for Kubernetes


We often need to provision APEX and ORDS for Dev, Stage, and Prod.
This operator automates Apex (Oracle Application Express 19.1) and Ords (Oracle REST Data Services) via a Kubernetes CRD: it creates a brand-new Oracle 19c database StatefulSet, an APEX/ORDS Deployment, and a load balancer in the Kubernetes cluster.
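As a sketch, provisioning an instance could be as simple as applying a custom resource like the one below. The API group, kind, and field names here are illustrative guesses, not the operator's real schema; check the GitHub repo for the actual CRD definition.

```shell
# Hypothetical ApexOrds custom resource -- the apiVersion, kind, and spec
# fields are placeholders for illustration; see the repo for the real schema.
cat > apexords-cr.yaml <<'EOF'
apiVersion: example.com/v1
kind: ApexOrds
metadata:
  name: apexords-dev
spec:
  dbName: devcdb
  apexPassword: "change-me"
  loadBalancer: true
EOF

# kubectl apply -f apexords-cr.yaml   # would create the DB StatefulSet, APEX/ORDS deployment, and LB
grep -q 'kind: ApexOrds' apexords-cr.yaml && echo "manifest ready"
```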


Full details and source code are in the GitHub repository.


Tuesday, November 08, 2022

OKE Admission Control Webhook Sample


We need to implement a policy requested by the security team: a Kubernetes Service should carry an annotation set to None, so that no security list will be updated by Kubernetes. This is an example of how to build your own admission controller to implement policies from the security team or others, e.g. allowing only internal load balancers for internal services.
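For illustration, a Service satisfying the policy might look like the manifest below. The annotation name is the standard OKE security-list management annotation; the service name and ports are made up, so confirm the annotation matches what your webhook actually checks.

```shell
# Example Service carrying the annotation the policy checks for.
# The annotation is the OKE security-list management annotation;
# name/selector/ports are placeholders for illustration.
cat > internal-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo-internal-svc
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-security-list-management-mode: "None"
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
EOF

grep -q 'security-list-management-mode: "None"' internal-svc.yaml && echo "annotation present"
```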


  • Please refer to the GitHub repo
  • git clone
  • go build -o oke-admission-webhook
  • docker build --no-cache -t repo-url/oke-admission-webhook:v1 .
  • rm -rf oke-admission-webhook
  • docker push repo-url/oke-admission-webhook:v1
  • ./deployment/ --service oke-admission-webhook-svc --namespace kube-system --secret oke-admission-webhook-secret
  • kubectl replace --force -f deployment/validatingwebhook.yaml
  • kubectl replace --force -f deployment/deployment.yaml
  • kubectl replace --force -f deployment/service.yaml
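Under the hood, a validating webhook denies a request by returning an AdmissionReview response with allowed set to false. A minimal sketch of the JSON this kind of webhook might return when the Service annotation is missing (the UID and message text are placeholders, not output from the real webhook):

```shell
# Sketch of the AdmissionReview response a validating webhook returns to
# deny a request. UID and message are placeholders for illustration.
cat > review-response.json <<'EOF'
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid-from-the-request>",
    "allowed": false,
    "status": {
      "code": 403,
      "message": "service must set the security-list management annotation to None"
    }
  }
}
EOF

# Pretty-print to confirm the JSON is well-formed.
python3 -m json.tool review-response.json
```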


Tuesday, January 05, 2021

Tip: Change default storageclass in Kubernetes

The example below is for OKE (Oracle Kubernetes Engine); the same concept applies to other Kubernetes distributions.

Change default storageclass from oci to oci-bv:

kubectl patch storageclass oci -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

kubectl patch storageclass oci-bv -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Tuesday, December 29, 2020

Tip: Attach volume conflict Error in OKE


    The pods with block volumes in OKE (Oracle Kubernetes Engine) are reporting errors like this:

Warning  FailedAttachVolume  4m26s (x3156 over 4d11h)  attachdetach-controller  (combined from similar events): AttachVolume.Attach failed for volume "*******54jtgiq" : attach command failed, status: Failure, reason: Failed to attach volume: Service error:Conflict. Volume *****osr6g565tlxs54jtgiq currently attached. http status code: 409. Opc request id: *********D6D97


    There are quite a few possible reasons for that. One of them is exactly what the error states: the volume is still attached to another host instance, so it can't be attached again.

    To fix that, find the attachment status and VM instance details via the volume OCID, then manually detach the volume from the VM via the SDK or the console. The error will then be gone.
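With the OCI CLI configured, the manual steps can look roughly like this. The OCIDs below are placeholders, and the commands need a live tenancy, so verify the attachment state in the console before detaching:

```shell
# Sketch only: OCIDs are placeholders and these commands require a
# configured OCI CLI against your own tenancy.
VOLUME_ID="ocid1.volume.oc1..example"
COMPARTMENT_ID="ocid1.compartment.oc1..example"

# List attachments for the volume to find the attachment OCID and the
# instance it is currently attached to.
oci compute volume-attachment list \
  --compartment-id "$COMPARTMENT_ID" \
  --volume-id "$VOLUME_ID"

# Detach using the attachment OCID reported above
# (--force skips the confirmation prompt).
oci compute volume-attachment detach \
  --volume-attachment-id "ocid1.volumeattachment.oc1..example" \
  --force
```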

Friday, December 25, 2020

Tip: Nginx ingress controller can't start up


     We tried to restart a pod of the Nginx ingress controller. After the restart, the pod can't start up.

Errors like:

status.go:274] updating Ingress ingress-nginx-internal/prometheus-ingress status from [] to [{ }]

I1226 02:11:14.106423       6 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-nginx-internal", Name:"prometheus-ingress", UID:"e26f55f2-d87d-4efe-a4dd-5ae02768814a", APIVersion:"", ResourceVersion:"46816813", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-nginx-internal/prometheus-ingress

I1226 02:11:49.153889       6 main.go:153] Received SIGTERM, shutting down

I1226 02:11:49.153931       6 nginx.go:390] Shutting down controller queues


   Somehow the existing ingress rule "prometheus-ingress" is the cause. Remove the rule and the pod starts up fine. We can add the rule back after that.
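The remove-and-restore cycle can be done with kubectl, using the ingress name and namespace from the log above. These commands need access to the affected cluster; back up the rule first so nothing is lost:

```shell
# Back up the offending ingress rule before removing it
# (name and namespace taken from the controller log above).
kubectl get ingress prometheus-ingress -n ingress-nginx-internal -o yaml > prometheus-ingress.yaml

# Remove the rule so the controller pod can start.
kubectl delete ingress prometheus-ingress -n ingress-nginx-internal

# Once the controller is running again, re-apply the saved rule.
kubectl apply -f prometheus-ingress.yaml
```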