Wednesday, December 01, 2021

How to expose kube api server via nginx proxy

Requirement:

    Kubernetes API servers (the control plane) often sit behind a firewall. To provide more security and load balancing, we can set up an nginx proxy in front of them. There are two solutions.

Solution 1: Use the L4 TCP proxy_pass of nginx

The nginx stream core module provides L4 TCP/UDP proxy_pass functionality. link
To proxy the K8S API on port 6443 through nginx listening on port 8888, we can add the below block to nginx.conf:
stream {
    upstream api {
        server kubernetes.default.svc.cluster.local:6443;
    }
    server {
        listen 8888;
        proxy_timeout 20s;
        proxy_pass api;
    }
}
The kubeconfig has the below elements (a Go sketch follows this list):
  • the server points to the nginx proxy, i.e. https://myapi.myk8s.com:8888
  • certificate-authority is the CA of the K8S API server (not the CA of myapi.myk8s.com)
  • client-certificate: path/to/my/client/cert
  • client-key: path/to/my/client/key
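As an illustration of these settings, below is a minimal Go sketch that builds a client-go rest.Config pointing at the L4 proxy; the host name and file paths are assumptions based on the example above.

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// Going through the L4 nginx proxy, TLS still terminates at the API server,
// so we keep the API server CA plus the client certificate and key.
cfg := &rest.Config{
    Host: "https://myapi.myk8s.com:8888",
    TLSClientConfig: rest.TLSClientConfig{
        CAFile:   "/path/to/k8s-apiserver-ca.crt", // assumed path to the K8S API server CA
        CertFile: "path/to/my/client/cert",
        KeyFile:  "path/to/my/client/key",
    },
}
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
    // handle the error
}
_ = clientset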

Solution 2: Use the L7 HTTPS proxy_pass of nginx

The nginx HTTP core module provides L7 SSL proxy_pass functionality. link
To proxy the K8S API at https://myapi.myk8s.com/api/ via nginx listening on 443 with SSL, we can add the below block to nginx.conf:
http {
    upstream api {
        server kubernetes.default.svc.cluster.local:6443;
    }
    server {
      listen              443 ssl;
      server_name         myapi.myk8s.com;
      ssl_certificate     /etc/nginx/ssl/tls.crt;
      ssl_certificate_key /etc/nginx/ssl/tls.key;
      location / {
        root /usr/local/nginx/html;
        index index.htm index.html;
      }
      location /api/ {
        rewrite ^/api(/.*)$ $1 break;
        proxy_pass https://api;
      }
    }
}
The kubeconfig has the below elements:
  • the server points to the nginx proxy, i.e. https://myapi.myk8s.com/api/
  • certificate-authority is the CA of myapi.myk8s.com (not the K8S API CA)
  • we can't use client-certificate and client-key like we do with the L4 TCP proxy pass
  • Because the TLS traffic from the nginx proxy to the kube API server on 6443 is regular anonymous TLS, the API server won't allow it. To solve it:
    • Option 1: use a JWT token via OpenID Connect (a Go sketch follows the kubeconfig example below)
users:
- name: testuser
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://openid.myk8s.com/dex
        client-id: oidc-loginapp
        id-token: eyJhbGciOiJSUzI1NiIs....****
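With a token-based kubeconfig like this, a Go client authenticates through the L7 proxy with a bearer token instead of a client certificate. A minimal sketch, assuming client-go accepts the proxy URL (including the /api/ path) as the host; the token value and CA path are placeholders.

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// Going through the L7 nginx proxy, TLS terminates at nginx, so the CA is the
// one for myapi.myk8s.com and authentication is a bearer (OIDC ID) token.
cfg := &rest.Config{
    Host:        "https://myapi.myk8s.com/api/",
    BearerToken: "eyJhbGciOiJSUzI1NiIs....", // placeholder ID token
    TLSClientConfig: rest.TLSClientConfig{
        CAFile: "/path/to/myapi.myk8s.com-ca.crt", // assumed path
    },
}
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
    // handle the error
}
_ = clientset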

    • Option 2: Use mTLS and add the client-certificate and client-key in the nginx proxy_pass settings:

location /api/ {
    rewrite ^/api(/.*)$ $1 break;
    proxy_pass https://api;
    proxy_ssl_certificate     /etc/nginx/k8s-client-certificate.pem;
    proxy_ssl_certificate_key /etc/nginx/k8s-client-key.key;
    proxy_ssl_session_reuse   on;
}



Wednesday, November 17, 2021

Tip: kube-apiserver can't start after adding a parameter

 Symptom:

We added a new OIDC parameter to kube-apiserver to integrate with OpenID (Dex).

The parameter is  --oidc-groups-prefix=oidc:

After that, kubelet can't start the kube-apiserver static pod, and no obvious error is reported.

Solution:

The issue is the ":", which is a special character. It prevents kubelet from parsing the parameter.

The right way is to quote it: "--oidc-groups-prefix=oidc:". See more details in this github thread.

Sunday, October 24, 2021

Tip: Error: vault kv get invalid character - in numeric literal

Symptom:

When we use vault kv get to fetch the CA from the pki endpoint, we hit this error:

 $ vault kv get pki/ca/pem
Error reading pki/ca/pem: invalid character '-' in numeric literal

Workaround:

The proper way to fetch it is:

 $ vault kv get -field=certificate pki/ca/pem

or

$ vault read -field=certificate pki/ca/pem


Wednesday, September 22, 2021

Error: invalid bearer token, oidc: verify token: oidc: expected audience

Symptom:

We implemented dex + github via link. With example-app, we are able to get an ID token via http://127.0.0.1:5555/
With the ID token, we construct a kubeconfig, but when we access the k8s cluster we hit this error:

error: You must be logged in to the server (Unauthorized)

In kube api server logs, we see error:

invalid bearer token, oidc: verify token: oidc: expected audience \"123456\" got [\"example-app\"]]"

 Triage:

Check payload and verify JWT ID-token on https://jwt.io

Check dex container logs 

Find similar issues on github: link1 link2

Solution:

It turns out the client-id does not match.

The client-id set on the K8S API server (--oidc-client-id) link needs to match the client-id in example-app.

In the above example, "123456" is the one I set on the K8S API server, while the client-id in example-app is "example-app", which caused the problem.

Saturday, August 21, 2021

Kubectl Plugin for Oracle Database

 Requirement:

We often need to provision new Oracle databases for developers.
This kubectl plugin automates the creation of an Oracle database statefulset in the Kubernetes cluster.

Solution:

Full details and source code are in the GitHub repository

Demo:



Friday, August 20, 2021

Tip: What are GVK, GVR, CRD, CR, and Scheme in the Kubernetes Core API

GVK:
  • GVK stands for Group Version Kind 
  • Each Kind in K8S has a Group and Version, i.e. Kind "Pod" is in Group "core", Version "v1". Refer to the official API doc
  • GVK  is defined to associate Group, Version and Kind
  • Each GVK maps to a given root Go type in the API package
  • Source code definition is  here 
GVR:
  • GVR stands for Group Version Resource
  • GVR is a "use" or "instance" of GVK in the K8S API
  • The command "kubectl api-resources"  gives us a list of GVRs in the K8S cluster (a small Go sketch of both follows)
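For example, in Go both GVK and GVR are plain structs from the apimachinery schema package; a tiny sketch (the Deployment group/version is just an example):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
    // GVK: Group/Version/Kind, which maps to a Go type such as appsv1.Deployment.
    gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}

    // GVR: Group/Version/Resource, the name used in API URLs and by "kubectl api-resources".
    gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

    fmt.Println(gvk, gvr)
}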
CRD:
  • CRD stands for Custom Resource Definition
  • Each CRD is like Kind in K8S, so it also has Group, Version
  • CRD is the extension of the K8S API. Refer to official doc
  • Once it is defined, it acts like GVK in K8S API.
CR:
  • CR stands for Custom Resource
  • CR is a "use" or "instance" of CRD in the K8S API
  • Once it is instantiated, it acts like GVR in K8S API.
  • The command "kubectl api-resources"  gives us a list including both GVR and CR in the cluster.
Scheme:
  • The Scheme keeps track of the mapping between a given Go type and a given GVK (a runnable sketch follows this list).
    • For example, we define the Go type myexample.io/api/v1.mykind{}
    • The Scheme maps it to the API group we defined in the CRD: batchv1.myexample.io/v1
    • {
          kind:  mykind
          apiVersion: batchv1.myexample.io/v1
      }
  • Source code definition is here.
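To see the Scheme in action, here is a minimal runnable sketch using the client-go scheme, which already has the built-in types registered; it asks the Scheme which GVK is recorded for the Go type corev1.Pod.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes/scheme"
)

func main() {
    // ObjectKinds returns the GVK(s) the Scheme has recorded for this Go type.
    gvks, _, err := scheme.Scheme.ObjectKinds(&corev1.Pod{})
    if err != nil {
        panic(err)
    }
    fmt.Println(gvks) // prints something like [/v1, Kind=Pod]
}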

Wednesday, August 11, 2021

Tip: Git Squash and Force Push Tips

  • Try to avoid changing files on the local branch while simultaneously updating content on the web for the remote branch. We may forget or lose track of what we did on the web and cause conflicts with the local branch.
  • Always use "git rm" to remove files, so git has the index to track them. 
  • git rebase interactive mode is a wonderful tool to squash commits. Refer link
  • git force push is an alternative way to overwrite or squash commits for the remote branch. 
    • git push -f -u origin <branch name> 

Tuesday, July 20, 2021

Tip: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox

Kubelet reports an error like this when you deploy a pod:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox : "*****"  error getting ClusterInformation: connection is unauthorized: Unauthorized

It's because the CNI is not set up properly or is not functioning on the node.

Redeploying the CNI provider (i.e. Flannel or Calico) may help.

Friday, July 16, 2021

Tip: How to get Go Client in Kubernetes Operator Reconcile()

Requirement:

      When we build a K8S operator via kubebuilder, we often need to interact with the control plane. By default, we use the controller-runtime client.

However, when we use this client to implement some kubectl functions, i.e. drain, we hit an error:

 cannot use r.Client (variable of type client.Client) as kubernetes.Interface value in struct literal: missing method AdmissionregistrationV1

Solution:

The error indicates the controller-runtime client does not implement the kubernetes.Interface methods (e.g. AdmissionregistrationV1), so we can't use it there. Instead, we initialize a new Go client in Reconcile(). Sample code:

import (
    "k8s.io/client-go/kubernetes"
    ctrlconfig "sigs.k8s.io/controller-runtime/pkg/client/config"
)

// Load the rest.Config the controller is already running with
// (in-cluster config or the local kubeconfig).
cfg, err := ctrlconfig.GetConfig()
if err != nil {
    log.Log.Error(err, "unable to get kubeconfig")
    return err
}

// Build a standard client-go clientset, which implements kubernetes.Interface.
kubeclientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
    return err
}
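With the clientset in hand, we can call the typed client-go APIs that the controller-runtime client doesn't expose. A small usage sketch, assuming a ctx is available and metav1 is imported from k8s.io/apimachinery/pkg/apis/meta/v1:

// List pods with the typed clientset (just one example of using kubernetes.Interface).
pods, err := kubeclientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
if err != nil {
    return err
}
log.Log.Info("pods in default namespace", "count", len(pods.Items))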


Wednesday, July 14, 2021

Tip: Understand why no user concept in Kubernetes Authentication

 One main authentication mechanism in K8S is:   CA, Certificate, Key

There is no "user" object in K8S. Instead, the client private key is used to identify the "user".

The workflow is like this:

  • Users (clients) have their own private keys. These keys represent their unique IDs in K8S.
  • Go through the CSR approval process with these keys. We can add a CN like "John" into the CSR to get a readable "user" id (a Go sketch of this step follows the list).
  • Once the CSR is approved, we get a signed certificate for the private key representing the user "John".
  • Then we can authenticate to K8S via CA, Certificate, Key.
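A minimal Go sketch of generating the private key and a CSR with CN "John"; the key size, organization and output handling are arbitrary choices for illustration.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "os"
)

func main() {
    // 1. The private key that will represent the user "John" in K8S.
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }

    // 2. A CSR whose CN becomes the "user" name; O (organization) maps to groups.
    csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
        Subject: pkix.Name{CommonName: "John", Organization: []string{"dev-team"}},
    }, key)
    if err != nil {
        panic(err)
    }

    // The PEM-encoded CSR is what gets submitted to the cluster's CSR approval flow.
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
    _ = key // the key stays with the user and is later paired with the signed certificate
}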

Thursday, July 01, 2021

Pass CKA Exam Tips

On Thursday, I passed the CKA exam with a score of 93 and got the certificate. Here are some tips on how I achieved that.

  • 17 questions in 2 hours. 
  • Don't worry about copy and paste. You can copy it by clicking it when your mouse hovers on the "important text".
  • Read each question carefully. Always understand questions before starting to do it. Check the weight of each question. The high weight means more mark points in the question. 
  • Skip the difficult questions and make sure you get the easy marks. Only a score of 66 is needed to pass the exam.
  • Practise and create examples for each test point in the CKA curriculum
  • Strongly recommend this udemy course.  The practice and mock exams are great to prepare for CKA exams.
  • Practise all commands in the kubectl cheatsheet

Monday, June 28, 2021

Tip: Understand Golang methods and interface

Methods:

  • Methods are functions with a special receiver argument (a named/defined type)
  • So we define a type first and then create methods on that type; the logic is: type first ---> then define methods
  • Because we have lots of common types, a type is used to group all sorts of methods
  • So the type is the centrepiece to think through (a tiny example follows)
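A tiny runnable example (the type and method names are made up):

package main

import "fmt"

// Celsius is a defined type; it is the centrepiece we hang methods on.
type Celsius float64

// ToFahrenheit is a method: a function with a Celsius receiver.
func (c Celsius) ToFahrenheit() float64 {
    return float64(c)*9/5 + 32
}

func main() {
    t := Celsius(100)
    fmt.Println(t.ToFahrenheit()) // 212
}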

Interface:

  • A Golang interface is still a type, but not a normal concrete type like string, int, etc. Instead, it is an abstract type built on top of those concrete types.
  • An interface value has two elements: a concrete type + a value of that concrete type
  • Based on the interface definition, Golang automatically determines which concrete types implement (are bound to) the interface; no explicit declaration is needed.
  • So in the same program, the same interface type can hold different concrete types and values.
  • Concrete types have methods. We can call these methods like my-type.my-methods(). It is the same for the interface: my-interface.my-methods()
  • A method can be called through both the interface and its associated concrete type.
  • Use a type assertion to get the value out of the interface, then use that value to invoke other methods which are not defined in the interface.
  • One of the reasons why we have interfaces: we also have lots of common methods, i.e. print string, play sports... all sorts of actions are considered methods. The genius part is that we define an interface as a common signature of methods, so the centrepiece is not a TYPE but the methods; a common method (i.e. print string) is used to group all sorts of types (a tiny example follows this list).
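A tiny runnable example (the interface and type names are made up):

package main

import "fmt"

// Describer is the common method signature that groups unrelated types.
type Describer interface {
    Describe() string
}

type Dog struct{ Name string }
type Car struct{ Model string }

// Both concrete types implement Describe(), so both satisfy Describer implicitly.
func (d Dog) Describe() string { return "dog called " + d.Name }
func (c Car) Describe() string { return "car model " + c.Model }

func main() {
    // The same interface type can hold different concrete types and values.
    var x Describer = Dog{Name: "Rex"}
    fmt.Println(x.Describe())

    x = Car{Model: "Tesla"}
    fmt.Println(x.Describe())

    // A type assertion recovers the concrete value to reach fields
    // that are not part of the interface.
    if car, ok := x.(Car); ok {
        fmt.Println(car.Model)
    }
}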

Friday, June 25, 2021

Kubebuilder Controller-runtime client_go_adapter.go Error

Symptom:

   We hit the below error when we build an operator with kubebuilder 3.1 + controller-runtime v0.8.3 + kubernetes 1.20.2:

./../../pkg/mod/sigs.k8s.io/controller-runtime@v0.8.3/pkg/metrics/client_go_adapter.go:134:3: cannot use &latencyAdapter{...} (type *latencyAdapter) as type metrics.LatencyMetric in field value:

*latencyAdapter does not implement metrics.LatencyMetric (wrong type for Observe method)

have Observe(string, url.URL, time.Duration)

want Observe(context.Context, string, url.URL, time.Duration)

../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.8.3/pkg/metrics/client_go_adapter.go:135:3: cannot use &resultAdapter{...} (type *resultAdapter) as type metrics.ResultMetric in field value:

*resultAdapter does not implement metrics.ResultMetric (wrong type for Increment method)

have Increment(string, string, string)

want Increment(context.Context, string, string, string)

Solution:

 It is because controller-runtime v0.8.3 has compatibility issues with the Kubernetes 0.21 modules. See link
To fix it, find go.mod and change the k8s.io modules from 0.21 to 0.20.2

Wednesday, June 09, 2021

Tip: Understand Golang Channel Directions

  • Channel is used to communicate among Goroutines.
  • Always picture a channel as a pipe: a pipe of some concrete Golang type, like string, int, etc. This pipe connects Goroutines
  • There is a sending side of this pipe in one Goroutine, and an ending (receiving) side of this pipe in another Goroutine
  • main() is also a Goroutine. 
  • We need to identify which Goroutine is the sending side and which is the ending side.
  • Sending side is like:

sends chan<- string
  • the chan<- string means this is the sending side of a string pipe
  • We use it like    sends <- "my sending string"
  • Ending side is like:
ends <-chan string
  • the <-chan string means this is the ending (receiving) side of a string pipe
  • We use it like    ending_message := <-ends  or  <-ends
  • for range <channel name>: it's used on the ending side of the pipe to fetch values
  • time.Ticker is a good example of the ending side of a pipe:
type Ticker struct {
    C <-chan Time // The channel on which the ticks are delivered.
    // contains filtered or unexported fields
}

  • Below is another, more advanced example of the sending / ending sides (a complete runnable example follows)
    • os.Signal is the sending side of the channel sigs
      • another process or thread from the OS
    • the main() goroutine is the ending side of the channel sigs
// Registering signals - INT and TERM
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)
// Process blocked here, waiting for a signal (term/int)
<-sigs
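Putting the pieces together, a minimal runnable sketch (the channel and function names are made up) with one sending Goroutine and main() as the ending side:

package main

import "fmt"

// produce only sees the sending side of the pipe.
func produce(sends chan<- string) {
    for _, msg := range []string{"one", "two", "three"} {
        sends <- msg
    }
    close(sends) // closing lets the receiver's range loop finish
}

func main() {
    pipe := make(chan string)
    go produce(pipe)

    // main() is the ending side: for range fetches values until the channel is closed.
    for msg := range pipe {
        fmt.Println(msg)
    }
}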

Sunday, May 23, 2021

Tip: Can't find docker networking namespace via ip netns list

Symptom:

    In Ubuntu, we start a docker container and try to find the docker networking namespace via "ip netns list". The output is empty.

Reason:

   By default, docker records the netns under /var/run/docker/netns, while "ip netns list" checks /var/run/netns.

Workaround:  

 Stop all containers, rm -rf /var/run/netns, then ln -s /var/run/docker/netns /var/run/netns

Tip:

To find the netns id of a container, use:

docker ps ---> find the container ID

docker inspect <container ID> | grep netns

Thursday, May 13, 2021

Tip: Bind Error when running multiple schedulers in K8S

Error details: 

I0530 09:25:29.097683       1 serving.go:331] Generated self-signed cert in-memory

failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use

Reason:

     It's because the default scheduler is already running on the same node and is bound to 127.0.0.1:10259. We can move the 2nd scheduler to another node to fix this.


Thursday, April 22, 2021

Tip: curl: (23) Failed writing body

Symptom: 

When we run 

curl -sSL -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/$VERSION/argocd-linux-amd64

We get error

curl: (23) Failed writing body (0 != 1369)

Reason:

 It is because "/usr/local/bin/argocd" is in the /usr/local/bin directory, which is owned by the root user, while we run curl as a normal user.

To fix it, change "/usr/local/bin/argocd" to "/tmp/argocd".


Wednesday, April 14, 2021

Tip: git can't communicate with github after unset http.proxy

Symptom:

    We used to have an HTTP proxy to access GitHub, and it was working fine. We then removed the HTTP proxy via "git config --global -e" and used "git config --global -l" to confirm it was removed.

   However, git still can't communicate with GitHub, with an error like:

 kex_exchange_identification: Connection closed by remote host fatal: Could not read from remote repository

Reason:

   It is because we use ssh to communicate with GitHub, and there are extra proxy settings left in the ~/.ssh/config file:

Host=github.com

ProxyCommand=socat - PROXY:<proxy-server>:%h:%p,proxyport=80

Taking them off fixes the issue.


Tuesday, April 13, 2021

Tip: When OPA gatekeeper stuck

Symptom:

    We hit an issue where all kubectl commands get stuck, like kubectl get pod, etc.

    Initially, we thought it was a Kubernetes control plane issue, but the cloud provider confirmed the control plane was having communication issues with a webhook.

Solution:

  It turns out the OPA gatekeeper was stuck and caused webhook issues with the control plane.

Workaround:

1. Delete webhook

kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io gatekeeper-validating-webhook-configuration

2. This stabilizes the communication with the control plane

3. Delete and redeploy the OPA gatekeeper deployment

Thursday, April 08, 2021

Tip: error: failed to load key pair tls: failed to parse private key

 Symptom:

    When we run kubectl create secret tls ..., we hit the below error:

error: failed to load key pair tls: failed to parse private key

Reason:

    It is likely the private key file is encrypted with a passphrase.

   Use openssl to decrypt it and use the new key for kubectl:

openssl rsa -in encrypted-private.key -out unencrypted.key

 Enter pass phrase for ...... 

 

 

Wednesday, April 07, 2021

Tip: Pods keep crashloopbackoff

 Symptom:

 Pods keep going into CrashLoopBackOff.

"kubectl describe pod ..." does not give meaningful info, and neither does "kubectl get events".

Reason:

One likely reason is related to the pod security policy. In my situation, the existing pod security policy does not allow Nginx or Apache to run. It does not have:

allowedCapabilities:
- NET_BIND_SERVICE
# apache or nginx need escalation to root to function well
allowPrivilegeEscalation: true


So the pods keep crashing. The fix is to add the above into the pod security policy.


Saturday, April 03, 2021

Tip: Istio TLS secrets, Gateway, VirtualService namespace scope

There is some confusion about where we should put istio objects: in istio-system or in the user's namespace?

Here are some tips:

For TLS/mTLS CA, cert and key management in istio, the Kubernetes secrets should be created in istio-system, not in the users' namespaces.

Gateway and VirtualService need to be created in the users' namespaces.

Tuesday, March 09, 2021

Tip: Slow Wi-Fi Speed due to a Second Monitor

 Symptom:

  The wifi speed dropped to half after we connected a second monitor for extended display.

Tip:

  It is likely the 2 monitors have different refresh rates, 60 Hz vs 59 Hz. Making them the same could help.





How to find which type of VMs pods are running via promQL

Requirement:

     Users need to know which type of VMs their pods are running on, i.e. users want to verify that pods are running on GPU VMs.

Solution:

In Prometheus, we have 2 metrics: kube_pod_info{} and kube_node_labels{}

kube_node_labels often has a label that tells which type of VM it is.

We can join these 2 metrics on "node" to provide a report to users:

sum( kube_pod_info{}) by(pod,node) *on(node) group_left(label_beta_kubernetes_io_instance_type) sum(kube_node_labels{}) by (node,label_beta_kubernetes_io_instance_type)

Please refer to the official promQL doc.

Tip: create a grafana API call for it:

curl -g -k -H "Authorization: Bearer ******" https://grafana.testtest.com/api/datasources/proxy/1/api/v1/query?query=sum\(kube_pod_info{}\)by\(pod,node\)*on\(node\)group_left\(label_beta_kubernetes_io_instance_type\)sum\(kube_node_labels{}\)by\(node,label_beta_kubernetes_io_instance_type\)

Also refer to my blog post on how to convert promQL into a grafana API call.

Monday, March 08, 2021

How to convert PromQL into Grafana API call

Requirement:

     We use promQL to fetch some metadata of a Kubernetes cluster, i.e. existing namespaces:

sum(kube_pod_info) by (namespace)

We would like to convert it to a grafana API call, so other apps can consume this metadata.

Solution:

  • First, we need to generate an API token. Refer grafana doc 
  • Second, below is a curl example to consume it:
curl -k -H "Authorization: Bearer e*****dfwefwef0=" https://grafana-test.testtest.com/api/datasources/proxy/1/api/v1/query?query=sum\(kube_pod_info\)by\(namespace\)

Thursday, February 25, 2021

Istio install against different Docker Repos

Requirement:

       istioctl has built-in manifests. However, these manifests or docker images may not be accessible from the corporate network, or users may use a docker repo other than docker.io. How do we install it?

Solution:

  • istioctl manifest generate --set profile=demo > istio_generate_manifests_demo.yaml
  • find the docker image paths in the yaml, download the images and upload them to your internal docker repo
  • edit the file with the right docker image paths of the internal docker repo
  • kubectl apply -f istio_generate_manifests_demo.yaml
  • istioctl verify-install -f istio_generate_manifests_iad_demo.yaml
  • to purge the deployment:
    • istioctl x uninstall --purge

Tuesday, February 16, 2021

Tip: Pod FQDN in Kubernetes

Pods from a deployment, statefulset or daemonset exposed by a service:

FQDN is  pod-ip-address.svc-name.my-namespace.svc.cluster.local

i.e  172-12-32-12.test-svc.test-namespace.svc.cluster.local

not 172.12.32.12.test-svc.test-namespace.svc.cluster.local

Isolated Pods:

FQDN is  pod-ip-address.my-namespace.pod.cluster.local

i.e  172-12-32-12.test-namespace.pod.cluster.local

Wednesday, February 03, 2021

Tip: Kubernetes intermittent DNS issues of pods

 Symptom:

     The pods get "unknown name" or "no such host" for external domain names, i.e. test.testcorp.com.

The issues are intermittent.

Actions:

  • Follow k8s guide and check all  DNS pods are running well. 
  • One possible reason is that one or a few of the nameservers in /etc/resolv.conf on the hosts may not be able to resolve the DNS name test.testcorp.com
    • i.e. *.testcorp.com is a corp intranet name; it needs to be resolved by corp name servers. However, in a normal cloud VM setup we have the name server 169.254.169.254 in /etc/resolv.conf, and 169.254.169.254 has no idea about *.testcorp.com, thus we have intermittent issues
    • To solve this, we need to update the DHCP server settings and remove 169.254.169.254 from /etc/resolv.conf, then:
    • kubectl rollout restart deployment coredns -n kube-system
  • Another possible reason is that some of the nodes have network issues and the DNS pods on them are not functioning well. Use the below commands to test the DNS pods:

kubectl -n kube-system get po -owide|grep coredns |awk '{print $6 }' > /tmp/1.txt

cat /tmp/1.txt  | while read -r line; do echo $line | awk '{print "curl -v --connect-timeout 10 telnet://"$1":53", "\n"}'; done
  • Enable debug log of DNS pods per  k8s guide
  • Test the DNS and tail all DNS pods with kubectl to get debug info:
kubectl -n kube-system logs -f deployment/coredns --all-containers=true --since=1m |grep testcorp

  • You may get logs like:

[INFO] 10.244.2.151:43653 - 48702 "AAAA IN test.testcorp.com.default.svc.cluster.local. udp 78 false 512" NXDOMAIN qr,aa,rd 171 0.000300408s

[INFO] 10.244.2.151:43653 - 64047 "A IN test.testcorp.com.default.svc.cluster.local. udp 78 false 512" NXDOMAIN qr,aa,rd 171 0.000392158s

  • The /etc/resolv.conf has "options ndots:5", which may impact external domain DNS resolution. Using the fully qualified name can mitigate the issue: test.testcorp.com --> test.testcorp.com.  (there is a "." at the end)
  • Disable coredns AAAA (IPv6) queries. This will reduce NXDOMAIN (not found) responses, thus reducing the failure rate back to the DNS client
    • Add below into coredns config file. refer coredns rewrite
    • rewrite stop type AAAA A
  • Install node local DNS to speed DNS queries. Refer kubernetes doc
  • Test dig test.testcorp.com +all many times; it will show the authority section:
;; AUTHORITY SECTION:
test.com.     4878    IN      NS      dnsmaster1.test.com.
test.com.     4878    IN      NS      dnsmaster5.test.com.
    • to find out which DNS server times out
  • Add below parameter in /etc/resolv.conf to improve DNS query performance
    • options single-request-reopen   refer manual
    • options single-request   refer manual
  • Another solution is to use an external name:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
      name: test-stage
      namespace: default
    spec:
      externalName: test-stage.testcorp.com
      ports:
      - port: 636
        protocol: TCP
        targetPort: 636
      type: ExternalName

Tuesday, February 02, 2021

Tip: A Command to Get All Resources and Subresources in a Kubernetes Cluster

 list=($(kubectl get --raw / | jq -r '.paths[] | select(. | startswith("/api"))'))
 for tgt in "${list[@]}"; do
   aruyo=$(kubectl get --raw ${tgt} | jq .resources)
   if [ "x${aruyo}" != "xnull" ]; then
     echo
     echo "===${tgt}==="
     kubectl get --raw ${tgt} | jq -r ".resources[] | .name,.verbs"
   fi
 done

Tip: Use oci cli to reboot a VM

oci compute instance action --action SOFTRESET --region us-ashburn-1 --instance-id  <instance id you can get from kubectl describe node>

oci compute instance get  --region us-ashburn-1 --instance-id  <instance id you can get from kubectl describe node>

Sometimes you may get a 404 error if you omit "--region us-ashburn-1".

Tip: Collect console serial Logs of Oracle Cloud Infrastructure

oci compute console-history capture   --region us-ashburn-1 --instance-id <instance-ocid>

--> oci compute console-history get  --region us-ashburn-1 --instance-console-history-id <OCID from the command before> 

--> oci compute console-history get-content --region us-ashburn-1  --length 1000000000 --file /tmp/logfile.txt --instance-console-history-id <OCID from the command before>


Tuesday, January 05, 2021

Tip: Change default storageclass in Kubernetes

The below example is for OKE (Oracle Kubernetes Engine); the same concept applies to other Kubernetes distributions.

Change default storageclass from oci to oci-bv:

kubectl patch storageclass oci -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"false"}}}'


kubectl patch storageclass oci-bv -p '{"metadata": {"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"}}}'