Sunday, July 25, 2021

Kubectl Plugin for Oracle Database


We often need to provision new Oracle databases for developers.
This kubectl plugin automates the creation of an Oracle database StatefulSet in a Kubernetes cluster.


Full details and source code are in the GitHub repository.


Wednesday, July 14, 2021

Tip: Understand why there is no user concept in Kubernetes Authentication

 One main authentication mechanism in K8S is the combination of CA, certificate, and key.

There is no "user" object in K8S. Instead, it uses the client's private key to identify the "user".

The workflow is like this:

  • Users (clients) have their private keys. These keys represent their unique IDs in K8S.
  • Go through the CSR approval process with these keys. We can add a CN like "John" into the CSR to have a readable "user" id.
  • Once the CSR is approved, we get a signed certificate of the private key representing the user "John".
  • Then we can authenticate in K8S via CA, certificate, and key.
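The workflow above can be sketched with openssl and kubectl. This is a sketch, not an exact procedure: the username "john" and file names are made up for illustration, and `base64 -w0` assumes GNU coreutils.

```shell
# 1. The user generates a private key -- this is their identity in K8S
openssl genrsa -out john.key 2048

# 2. Create a CSR with CN as the readable "user" id
openssl req -new -key john.key -subj "/CN=john" -out john.csr

# 3. Submit the CSR to Kubernetes and approve it
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  request: $(base64 -w0 < john.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF
kubectl certificate approve john

# 4. Fetch the signed certificate; now CA + certificate + key identify "john"
kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
kubectl config set-credentials john --client-certificate=john.crt --client-key=john.key
```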

Thursday, July 01, 2021

Pass CKA Exam Tips

On Thursday, I passed the CKA exam with a score of 93 and received the certificate. Here are some tips on how I achieved that.

  • 17 questions in 2 hours. 
  • Don't worry about copy and paste. You can copy text by clicking it when your mouse hovers over the "important text".
  • Read each question carefully and always understand a question before starting it. Check the weight of each question: a higher weight means more marks for that question. 
  • Skip the difficult questions and make sure you get the easy marks. Only a score of 66 is needed to pass the exam. 
  • Practise and create examples for each test point in the CKA curriculum.
  • I strongly recommend this Udemy course. The practice and mock exams are great preparation for the CKA exam.
  • Practise all commands in the kubectl cheatsheet.

Monday, June 28, 2021

Tip: Understand Golang methods and interfaces


  • Methods are functions with a special receiver (a named or defined type).
  • So we define a type and create methods based on this type; the logic is: first the type ---> then define methods on it.
  • As we have lots of common types, we use a type to group all sorts of methods.
  • So the type is the centrepiece to think through.


  • A Golang interface is still a type, but not a normal concrete type like string, int, etc. Instead, it is an abstract type built on top of those concrete types.
  • It has two elements: a concrete type + a value of that concrete type.
  • According to the interface definition, Golang automatically matches which concrete types implement this interface and binds them to it.
  • So in the same program, the same interface type can hold different concrete types and values at different times.
  • Concrete types have methods, and we can call these methods; it is the same for the interface.
  • A method can be called via both the interface and the concrete type associated with it.
  • Use a type assertion to get the value out of the interface, then use the value to invoke other methods which are not defined in the interface.
  • One of the reasons why we have interfaces: since we also have lots of common methods (print string, play sports... all sorts of actions are methods), the genius part is that we define an interface as a common signature of methods. So the centrepiece is not a TYPE but methods: an interface uses a common method (e.g. print string) to group all sorts of types.

Friday, June 25, 2021

Kubebuilder Controller-runtime client_go_adapter.go Error


   We hit the below error when we build an operator via Kubebuilder 3.1 + controller-runtime v0.8.3 + Kubernetes 1.20.2:

./../../pkg/mod/ cannot use &latencyAdapter{...} (type *latencyAdapter) as type metrics.LatencyMetric in field value:

*latencyAdapter does not implement metrics.LatencyMetric (wrong type for Observe method)

have Observe(string, url.URL, time.Duration)

want Observe(context.Context, string, url.URL, time.Duration)

../../../pkg/mod/ cannot use &resultAdapter{...} (type *resultAdapter) as type metrics.ResultMetric in field value:

*resultAdapter does not implement metrics.ResultMetric (wrong type for Increment method)

have Increment(string, string, string)

want Increment(context.Context, string, string, string)


 It is because controller-runtime v0.8.3 has compatibility issues with the Kubernetes v0.21 modules. See the link.
To fix it, find go.mod and change the k8s.io modules from v0.21 to v0.20.2.
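For reference, the relevant part of go.mod looks roughly like this; the exact list of k8s.io modules depends on your project, and v0.20.2 is assumed to match the cluster version above:

```
require (
    k8s.io/api v0.20.2
    k8s.io/apimachinery v0.20.2
    k8s.io/client-go v0.20.2
    sigs.k8s.io/controller-runtime v0.8.3
)
```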

Wednesday, June 09, 2021

Tip: Understand Golang Channel Directions

  • A channel is used to communicate among goroutines.
  • Always imagine a channel as a pipe: a pipe of some concrete Golang type, like string, int, etc. This pipe connects goroutines.
  • There is a sending side of this pipe in one goroutine, and a receiving side of this pipe in another goroutine.
  • main() is also a goroutine. 
  • We need to identify which goroutine is the sending side and which is the receiving side.
  • The sending side looks like:

sends chan<- string 
  • chan<- string means strings go into the sending side of the pipe.
  • We use it like: sends <- "my sending string"
  • The receiving side looks like:
ends <-chan string
  • <-chan string means strings come out of the receiving side of the pipe.
  • We use it like: receiving_message := <-ends, or just <-ends
  • for range <channel name> is used on the receiving side of the pipe to fetch values.
  • time.Ticker is a good example of the receiving side of a pipe:

type Ticker struct {
	C <-chan Time // The channel on which the ticks are delivered.
	// contains filtered or unexported fields
}

  • Below is another advanced example of the sending / receiving sides:
    • the OS signal handler (another process or thread from the OS) is the sending side of channel sigs
    • the main() goroutine is the receiving side of channel sigs

// Registering signals - INT and TERM
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
// Process blocks here waiting for a signal (TERM/INT)
<-sigs

Sunday, May 23, 2021

Tip: Can't find docker networking namespace via ip netns list


    On Ubuntu, we start a Docker container and try to find its networking namespace via "ip netns list". The output is empty.


   By default, Docker records its netns under /var/run/docker/netns, while "ip netns list" checks /var/run/netns.


 Fix: stop all containers, then rm -rf /var/run/netns, ln -s /var/run/docker/netns /var/run/netns


To find the netns id of a container, use:

docker ps ---> find the container ID

docker inspect <container ID> | grep netns

Thursday, May 13, 2021

Tip: Bind Error when running multiple schedulers in K8S

Error details: 

I0530 09:25:29.097683       1 serving.go:331] Generated self-signed cert in-memory

failed to create listener: failed to listen on listen tcp bind: address already in use


     It's because the default scheduler is running on the same node, so both try to bind the same port. We can move the 2nd scheduler to another node to fix this. 

Thursday, April 22, 2021

Tip: curl: (23) Failed writing body


When we run 

curl -sSL -o /usr/local/bin/argocd <argocd-release-url>/$VERSION/argocd-linux-amd64

(the release URL is omitted here), we get the error

curl: (23) Failed writing body (0 != 1369)


 It is because "/usr/local/bin/argocd" is in the /usr/local/bin directory, which is owned by root, while we run curl as a normal user, so curl cannot write the output file.

To fix it, change "/usr/local/bin/argocd" to "/tmp/argocd" (or run curl with sudo).

Wednesday, April 14, 2021

Tip: git can't communicate with github after unset http.proxy


    We used to have an HTTP proxy to access GitHub, and it was working fine. We then removed the HTTP proxy via "git config --global -e" and confirmed with "git config --global -l" that it was removed.

   However, git still can't communicate with GitHub, with an error like 

 kex_exchange_identification: Connection closed by remote host fatal: Could not read from remote repository


   It is because we use SSH to communicate with GitHub, and there are extra HTTP proxy settings in the ~/.ssh/config file:

ProxyCommand=socat - PROXY:<proxy-server>:%h:%p,proxyport=80

Removing them fixes the issue. 

Tuesday, April 13, 2021

Tip: When OPA gatekeeper stuck


    We hit an issue where all kubectl commands hang, e.g. kubectl get pod, etc.

    Initially, we thought it was a Kubernetes control plane issue, but the cloud provider confirmed the control plane was having communication issues with a webhook.


  It turns out the OPA Gatekeeper was stuck and caused the webhook issues with the control plane.


1. Delete the webhook

kubectl delete validatingwebhookconfiguration gatekeeper-validating-webhook-configuration

2. This will stabilize the communications with the control plane

3. Delete and redeploy the OPA Gatekeeper deployment 

Thursday, April 08, 2021

Tip: error: failed to load key pair tls: failed to parse private key


    When we run kubectl create secret tls ..., we hit the below error

error: failed to load key pair tls: failed to parse private key


    It is likely the private key file is encrypted with a passphrase.

   Use openssl to decrypt it and use the new key for kubectl 

openssl rsa -in encrypted-private.key -out unencrypted.key

 Enter pass phrase for ...... 



Wednesday, April 07, 2021

Tip: Pods keep crashloopbackoff


 Pods keep going into CrashLoopBackOff. 

"kubectl describe pod ..." does not give meaningful info, nor does "kubectl get events".


One likely reason is the pod security policy. In my case, the existing pod security policy does not allow Nginx or Apache to run, because it lacks:



  # apache or nginx need escalation to root to function well

  allowPrivilegeEscalation: true

So the pods keep crashing with CrashLoopBackOff. The fix is to add the above into the pod security policy.
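A minimal sketch of a PodSecurityPolicy carrying this field; the policy name and the other rule fields are illustrative, so merge the flag into your existing policy rather than applying this as-is:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: web-psp          # illustrative name
spec:
  privileged: false
  # apache or nginx need escalation to root to function well
  allowPrivilegeEscalation: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
```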

Saturday, April 03, 2021

Tip: Istio TLS secrets, Gateway, VirtualService namespace scope

There is some confusion about where we should put Istio objects: in the istio-system namespace or in users' namespaces?

Here are some tips:

For TLS/mTLS CA, certs and key management in Istio, the Kubernetes secrets should be created in istio-system, not in users' namespaces.

Gateway and VirtualService need to be created in the users' namespaces. 

Tuesday, March 09, 2021

Tip: Slow Wi-Fi Speed due to a Second Monitor


  The Wi-Fi speed dropped by half after we connected a second monitor for extended display.


  It is likely the 2 monitors have different refresh rates, 60 Hz vs 59 Hz. Making them the same could help.

How to find which type of VMs pods are running on via PromQL


     Users need to know which type of VMs their pods are running on, e.g. users want to verify that pods are running on GPU VMs.


In Prometheus, we have 2 metrics: kube_pod_info{} and kube_node_labels{}.

kube_node_labels often has a label that tells which type of VM it is. 

We can join these 2 metrics on "node" to provide a report to users:

sum(kube_pod_info{}) by (pod,node) * on(node) group_left(label_beta_kubernetes_io_instance_type) sum(kube_node_labels{}) by (node,label_beta_kubernetes_io_instance_type)

Please refer to the official PromQL doc. 

Tip: create a Grafana API call for it (the Grafana API URL is omitted here):

curl -g -k -H "Authorization: Bearer ******" "<grafana-api-url>?query=sum\(kube_pod_info{}\)by\(pod,node\)*on\(node\)group_left\(label_beta_kubernetes_io_instance_type\)sum\(kube_node_labels{}\)by\(node,label_beta_kubernetes_io_instance_type\)"

Also refer to my blog on how to convert PromQL into a Grafana API call.

Monday, March 08, 2021

How to convert PromQL into Grafana API call


     We use PromQL to fetch some metadata of a Kubernetes cluster, e.g. existing namespaces:

sum(kube_pod_info) by (namespace)

We would like to convert it to a Grafana API call so other apps can consume this metadata.


  • First, we need to generate an API token. Refer to the Grafana doc. 
  • Second, below is a curl example to consume it (the Grafana API URL is omitted here):
curl -k -H "Authorization: Bearer e*****dfwefwef0=" "<grafana-api-url>?query=sum\(kube_pod_info\)by\(namespace\)"

Thursday, February 25, 2021

Istio install against different Docker Repos


       With istioctl, it has built-in manifests. However, these manifests or docker images may not be accessible in the corporate network, or users may use a docker repo other than the default one. How to install it?


  • istioctl manifest generate --set profile=demo > istio_generate_manifests_demo.yaml
  • find the docker image paths in the yaml, then download the images and upload them to your internal docker repo.
  • edit the file with the right docker image paths of the internal docker repo
  • kubectl apply -f istio_generate_manifests_demo.yaml
  • istioctl verify-install -f istio_generate_manifests_demo.yaml
  • to purge the deployment:
    • istioctl x uninstall --purge

Tuesday, February 16, 2021

Tip: Pod FQDN in Kubernetes

Pods from a deployment, statefulset or daemonset exposed by a service:


i.e  172-12-32-12.test-svc.test-namespace.svc.cluster.local


Isolated Pods:


i.e  172-12-32-12.test-namespace.pod.cluster.local

Wednesday, February 03, 2021

Tip: Kubernetes intermittent DNS issues of pods


     The pods get "unknown name" or "no such host" errors for external domain names.

The issues are intermittent.


  • Follow the k8s guide and check that all DNS pods are running well. 
  • One possible reason: one or a few of the nameservers in /etc/resolv.conf of the hosts may not be able to resolve the DNS name.
    • e.g. a corp intranet name needs to be resolved by corp name servers; however, in a normal cloud VM setup, the default nameserver in /etc/resolv.conf has no idea about these corp names, thus we have intermittent issues.
    • To solve this, update the DHCP server and remove the unusable nameserver from /etc/resolv.conf
    • kubectl rollout restart deployment coredns -n kube-system
  • Another possible reason: some of the nodes have network issues so the DNS pods on them are not functioning well. Use the below commands to test the DNS pods. 

kubectl -n kube-system get po -owide|grep coredns |awk '{print $6 }' > /tmp/1.txt

cat /tmp/1.txt  | while read -r line; do echo $line | awk '{print "curl -v --connect-timeout 10 telnet://"$1":53", "\n"}'; done
  • Enable the debug log of the DNS pods per the k8s guide
  • Test the DNS and tail all DNS pod logs to get debug info
kubectl -n kube-system logs -f deployment/coredns --all-containers=true --since=1m |grep testcorp

  • You may get log like

INFO] - 48702 "AAAA IN udp 78 false 512" NXDOMAIN qr,aa,rd 171 0.000300408s

[INFO] - 64047 "A IN udp 78 false 512" NXDOMAIN qr,aa,rd 171 0.000392158s 

  • The /etc/resolv.conf has "options ndots:5", which may impact external domain DNS resolution. Using a fully qualified domain name (with a "." at the end) can mitigate the issue.
  • Disable coredns AAAA (IPv6) queries. It will reduce NXDOMAIN (not found) responses and thus reduce the failure rate returned to the DNS client.
    • Add the below into the coredns config file. Refer to coredns rewrite.
    • rewrite stop type AAAA A
  • Install NodeLocal DNSCache to speed up DNS queries. Refer to the Kubernetes doc.
  • Test dig +all many times; it will show the authority section (the domain and nameserver columns are omitted here):
;; AUTHORITY SECTION:
<domain>    4878    IN    NS    <nameserver>
<domain>    4878    IN    NS    <nameserver>
    • this helps to find out which DNS server times out
  • Add the below parameters in /etc/resolv.conf to improve DNS query performance:
    • options single-request-reopen   (refer to the manual)
    • options single-request   (refer to the manual)
  • Another solution is to use an ExternalName service (the external hostname is omitted here, shown as a placeholder):

    apiVersion: v1
    kind: Service
    metadata:
      name: test-stage
      namespace: default
    spec:
      ports:
      - port: 636
        protocol: TCP
        targetPort: 636
      type: ExternalName
      externalName: <external-host>

Tuesday, February 02, 2021

Tip: A Command to get all resources and subresources in a Kubernetes Cluster

list=($(kubectl get --raw / | jq -r '.paths[] | select(. | startswith("/api"))'))
for tgt in ${list[@]}; do
  aruyo=$(kubectl get --raw ${tgt} | jq .resources)
  if [ "x${aruyo}" != "xnull" ]; then
    echo; echo "===${tgt}==="
    kubectl get --raw ${tgt} | jq -r ".resources[] | .name,.verbs"
  fi
done

Tip: Use oci cli to reboot a VM

oci compute instance action --action SOFTRESET --region us-ashburn-1 --instance-id  <instance id you can get from kubectl describe node>

oci compute instance get  --region us-ashburn-1 --instance-id  <instance id you can get from kubectl describe node>

Sometimes you may get a 404 error if you omit "--region us-ashburn-1".

Tip: Collect console serial Logs of Oracle Cloud Infrastructure

oci compute console-history capture   --region us-ashburn-1 --instance-id <instance-ocid>

--> oci compute console-history get  --region us-ashburn-1 --instance-console-history-id <OCID from the command before> 

--> oci compute console-history get-content --region us-ashburn-1  --length 1000000000 --file /tmp/logfile.txt --instance-console-history-id <OCID from the command before>

Tuesday, January 05, 2021

Tip: Change default storageclass in Kubernetes

The below example is for OKE (Oracle Kubernetes Engine); the same concept applies to other Kubernetes distributions. 

Change default storageclass from oci to oci-bv:

kubectl patch storageclass oci -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

kubectl patch storageclass oci-bv -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'