Tuesday, December 29, 2020

Tip: Attach volume conflict Error in OKE

Symptom:

    The pods with block volumes in OKE (Oracle Kubernetes Engine) report an error like this:

Warning  FailedAttachVolume  4m26s (x3156 over 4d11h)  attachdetach-controller  (combined from similar events): AttachVolume.Attach failed for volume "*******54jtgiq" : attach command failed, status: Failure, reason: Failed to attach volume: Service error:Conflict. Volume *****osr6g565tlxs54jtgiq currently attached. http status code: 409. Opc request id: *********D6D97

Solution:

    There are quite a few possible reasons. One of them is exactly what the error states: the volume is still attached to another host instance, so it can't be attached again.

    To fix it, look up the attachment status and the VM instance details via the volume ID, then manually detach the volume from that VM via the SDK, CLI, or console. The error goes away after that.
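A minimal sketch with the OCI CLI, assuming you have the volume and compartment OCIDs at hand (all OCIDs below are placeholders):

oci compute volume-attachment list --compartment-id <compartment-ocid> --volume-id <volume-ocid>   # find the attachment and the instance it is attached to
oci compute volume-attachment detach --volume-attachment-id <attachment-ocid>                      # detach it, then let the attach-detach controller retry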

Friday, December 25, 2020

Tip: Nginx ingress controller can't start up

Symptom:

     We restart a pod of the nginx ingress controller. After the restart, the pod can't start up.

The log shows errors like:

status.go:274] updating Ingress ingress-nginx-internal/prometheus-ingress status from [] to [{100.114.90.8 }]

I1226 02:11:14.106423       6 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-nginx-internal", Name:"prometheus-ingress", UID:"e26f55f2-d87d-4efe-a4dd-5ae02768814a", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"46816813", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress ingress-nginx-internal/prometheus-ingress

I1226 02:11:49.153889       6 main.go:153] Received SIGTERM, shutting down

I1226 02:11:49.153931       6 nginx.go:390] Shutting down controller queues

Workaround:

   Somehow the existing ingress rule "prometheus-ingress" is the cause. Remove the rule and the pod starts up fine. We can add the rule back after that.
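A minimal sketch of that workaround with kubectl (the namespace and rule name are taken from the log above):

kubectl -n ingress-nginx-internal get ingress prometheus-ingress -o yaml > prometheus-ingress.yaml   # save a copy of the rule
kubectl -n ingress-nginx-internal delete ingress prometheus-ingress                                  # remove it so the controller pod can start
kubectl apply -f prometheus-ingress.yaml                                                             # add the rule back once the controller is running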

Wednesday, November 18, 2020

Tip: Change status of PV from Released to Available

Symptoms:

    When users delete a PVC in Kubernetes, the PV status stays on "Released". Users would like to recreate the PVC with the same PV but fail: the new PVC status always stays on "Pending".

Solution:

   We need to manually clear the old claim reference to make the PV "Available" via the command below:

 kubectl patch pv  <pv name> -p '{"spec":{"claimRef": null}}'
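After the patch, the PV should report "Available" again and the pending PVC can bind to it:

kubectl get pv <pv name>    # STATUS should change from Released to Available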

Tuesday, November 10, 2020

Tip: OpenSSL SSL_connect: SSL_ERROR_SYSCALL

 Symptoms:

We use curl -v https://<domain> to test whether the network traffic is allowed.
The expected output would be like:
*  Trying 12.12.12.12:443...
* TCP_NODELAY set
* Connected to ***  port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS alert, unknown CA (560):
However, we see this error:
*  Trying 12.12.12.12:443...
* TCP_NODELAY set
* Connected to *** port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to *** :443

Solution:

From the output, we see port 443 is open but the TLS handshake "Server hello" is missing. We have mid-tiers handling the TLS certificates, so it is very likely that the network is interrupted between the LB and the mid-tiers where TLS is terminated. It is a good approach to double-check the firewall ports between them. :)

Another possible reason is that the ingress controller pods were stuck; bouncing them or scaling them up can work around it.
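A minimal sketch of that workaround, assuming the controller is the nginx-ingress-controller deployment in the ingress-nginx namespace (adjust the names to your setup):

kubectl -n ingress-nginx rollout restart deployment nginx-ingress-controller      # bounce the controller pods
kubectl -n ingress-nginx scale deployment nginx-ingress-controller --replicas=2   # or scale up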


Sunday, October 11, 2020

Tip: Error HTTP 504 gateway timeout on ingress controller

 Symptom:

    We have microservices behind the ingress controller in our Kubernetes cluster. We are intermittently hitting HTTP 504 errors in the ingress controller logs.

100.112.95.12 - - [01/Oct/2020:20:32:13 +0000] "GET /mos/products?limit=50&offset=0&orderBy=Name%3Aasc HTTP/2.0" 504 173 "https://ep******" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:78.0) Gecko/20100101 Firefox/78.0" 1578 180.004 [ingress-nginx-external2-mag-oke-products-svc-8080] [] 10.96.63.211:8080, 10.96.63.211:8080, 10.96.63.211:8080 0, 0, 0 60.001, 60.001, 60.002 504, 504, 504 c5b8cb67927d3997b4019e9830762694

  Bouncing the ingress controller fixes the issue temporarily.

Solution:

  We find the issues are caused by nginx parameters, as described in

https://github.com/kubernetes/ingress-nginx/issues/4567

Add the annotations below into the ingress rules to fix it:

nginx.ingress.kubernetes.io/proxy-connect-timeout: "5"

nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "10"
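The same annotations can also be applied to an existing ingress rule with kubectl (the ingress name and namespace below are placeholders):

kubectl -n <namespace> annotate ingress <ingress-name> --overwrite \
  nginx.ingress.kubernetes.io/proxy-connect-timeout="5" \
  nginx.ingress.kubernetes.io/proxy-next-upstream-timeout="10"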


Friday, October 02, 2020

Tip: Node and Namespace drop-down menus missing node names in Grafana

 Symptom:

      We have a Prometheus and Grafana setup running well. Suddenly the node and namespace drop-down lists disappeared. No config changes were made.


Solution:

   It is very likely the kube-state-metrics service has problems. That's where Grafana gets the info from.

   Bounce the pod or recreate the deployment to fix it, for example:
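A minimal sketch, assuming kube-state-metrics runs as a deployment in the monitoring namespace (as in the Prometheus setup described later in this blog):

kubectl -n monitoring rollout restart deployment kube-state-metrics   # bounce kube-state-metrics
kubectl -n monitoring get pods | grep kube-state-metrics              # confirm it is back up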



Tuesday, September 22, 2020

RMAN-04022: target database mount id % does not match channel's mount id %

Symptom:

  When we run "rman target /" and then "crosscheck archivelog all", we hit an error:

RMAN-04022: target database mount id ****  does not match channel's mount id ***

Solution:

   It is quite possible the DB was duplicated with RMAN. The default channel is still on the old mount, which is not the current DB. Explicitly allocate a disk channel to fix it:

run
{
  allocate channel disk1 device type disk;
  crosscheck archivelog all;
}

Sunday, August 30, 2020

Tip: A few commands to debug Issues with Kubelet

sudo systemctl status -l kubelet
kubectl describe node <name>
sudo journalctl -u kubelet | grep ready
sudo systemctl restart docker
kubectl cluster-info dump    # dump detailed cluster info

Tip: Impersonate users on kubectl

We can impersonate users with the --as= and the --as-group= flags.

kubectl auth can-i create pods --as=me
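A couple more impersonation examples (the user, group, and service account names here are hypothetical):

kubectl auth can-i delete deployments --as=me --as-group=developers
kubectl get pods --as=system:serviceaccount:default:test-sa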

Monday, August 17, 2020

Tip: remove linux files with special characters

ls -ltr

-rw-rw-r-- 1 henryxie henryxie    0 Apr 22 12:14 --header

-rw-rw-r-- 1 henryxie henryxie    0 Apr 22 12:14 -d


 To remove these two files:

rm -v -- "-d"

rm -v -- "--header"

Wednesday, August 05, 2020

Tip: Pods are not created although the deployment is created

Symptom:

  We have a normal deployment that was working fine. When we test it on a new Kubernetes cluster, the deployment is created, but the pod is not. There are no warning or error messages.
 "kubectl describe deployment" does not show any hints. The pod security policy check is good and the RBAC privilege check is good.

OldReplicaSets:    <none>
NewReplicaSet:     livesqlstg-admin-678df959b4 (0/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  16s   deployment-controller  Scaled up replica set livesqlstg-admin-678df959b4 to 1

Solution:

  The reason is that we have a resource quota implemented on the namespace:
 spec:
  hard:
    configmaps: "10"
    limits.cpu: "10"
    limits.memory: 20Gi
    persistentvolumeclaims: "10"
    ....

Because of that, we need an additional resources section in the deployment yaml file, i.e.:
      resources:
              requests:
                  memory: "10Gi"
                  cpu: "1"
              limits:
                  memory: "10Gi"
                  cpu: "1"
 It would be good for Kubernetes to give users a warning for that; the rejection does show up in the ReplicaSet events, as sketched below.
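A sketch of where the rejection actually surfaces, using the ReplicaSet name from the describe output above (the namespace is a placeholder):

kubectl -n <namespace> describe rs livesqlstg-admin-678df959b4                # FailedCreate events mention the quota requirement
kubectl -n <namespace> get events --sort-by=.lastTimestamp | grep -i quota    # or search the namespace events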

Wednesday, July 29, 2020

Tip: No route to host issues in Kubernetes Pods

Symptom:

    We see intermittent network issues in OKE (Oracle Kubernetes Engine). Ingress controller pods have difficulty accessing other services. When we use curl to test the network port, we get an error like below:
 
$ curl -v telnet://10.244.97.24:9090
* Expire in 0 ms for 6 (transfer 0x560b9cdd7dd0)
*   Trying 10.244.97.24...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x560b9cdd7dd0)
* connect to 10.244.97.24 port 9090 failed: No route to host

Solution:

   There are quite a few possible reasons for that; check my other blog posts.
   In this case, it is related to which firewall ports are open.
  By default, the network team opens all ingress and egress ports within the worker node subnet, which means there is no firewall among the worker nodes. However, the rules were set as stateful. Because the Kubernetes overlay network heavily depends on UDP, which is stateless, we need to open the ports as stateless rules.
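A quick (though not fully reliable) way to check UDP reachability between two worker nodes, assuming the flannel VXLAN backend, which uses UDP port 8472 by default (replace the IP with another node's address):

nc -vz -u <other-node-ip> 8472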

Thursday, July 02, 2020

How To RMAN Backup Oracle Database 19c running in Kubernetes

Requirement:

   We have an Oracle Database 19c running in OKE (Oracle Kubernetes Engine). We would like to use RMAN to back up the DB to cloud object storage. We use Oracle Cloud Infrastructure (OCI) as an example; the same concept applies to other clouds.

Steps:

  • Create a docker image with Python 3 and the Oracle OCI CLI installed. Please refer to the official doc on how to install the OCI CLI. The Dockerfile can also be found via the GitHub repo.
  • Create a statefulset using the docker image. Yaml files can be found via GitHub Repo
  • Download the RMAN backup module of OCI (link).
  • Follow the installation instructions (link).
    • Attention: when we set up the oci cli, the config file should not be inside the docker image but on the persistent block storage volume, i.e. /opt/oracle/diag/.oci/config, with export OCI_CLI_CONFIG_FILE=/opt/oracle/diag/.oci/config (see the sketch after these steps).
    • Attention: when we set up the RMAN backup module and create the wallet files, none of the config files should be put in the docker image either; keep them on the persistent block storage volume, i.e. /opt/oracle/diag/.
      java -jar oci_install.jar \
        -host https://objectstorage.us-phoenix-1.oraclecloud.com \
        -pvtKeyFile /opt/oracle/diag/.oci/testuser_ww-oci_api_key.pem \
        -pubFingerPrint 52:b6:0e:2e:***:a1 \
        -uOCID "ocid1.user.oc1..aaaaahjia***adfe" \
        -tOCID "ocid1.tenancy.oc1..aanh7gl5**dfe" \
        -walletDir /opt/oracle/diag/.oci/opc_wallet \
        -configFile /opt/oracle/diag/.oci/opc_wallet/opcAUTOCDB.ora \
        -libDir $ORACLE_HOME/lib \
        -bucket BUK-OBJECT-STORAGE-BAK-TEMP \
        -proxyHost yourproxy.com \
        -proxyPort 80
    • Use java -jar oci_install.jar -h for more details.
    • Tip: If libopc.so is already in place in $ORACLE_HOME/lib (it is in the docker image), we can ignore the warning from the download part of the process.
    • Tip: You can copy opc_wallet to other servers or OKE clusters without repeating the oci cli and java -jar oci_install.jar steps.
    • Tip: If you see the error "KBHS-00713: HTTP client error", check the http_proxy and https_proxy settings. The RMAN backup-to-object-storage module uses the HTTP/HTTPS protocols.
    • Tip: If you see the error "KBHS-01012: ORA-28759 occurred during wallet operation; WRL file:/home/oracle/opc_wallet", it may be due to old opc<sid>.ora config files in $ORACLE_HOME/dbs. The DB always tries to read the config file in ./dbs instead of using the parameters. Removing those files should clear it.
    • To avoid the error "KBHS-01006: Parameter OPC_HOST was not specified", we need to put all the parameters from opcAUTOCDB.ora into the RMAN script.
  • Test RMAN backup inside your statefulset DB pod
      rman target /
      SET ENCRYPTION ON IDENTIFIED BY 'testtest' ONLY;
      run {
        SET ENCRYPTION ON IDENTIFIED BY 'changeme' ONLY;
        ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS "SBT_LIBRARY=/opt/oracle/product/19c/dbhome_1/lib/libopc.so ENV=(OPC_HOST=https://objectstorage.us-phoenix-1.oraclecloud.com/n/testnamespace, OPC_WALLET='LOCATION=file:/opt/oracle/diag/.oci/opc_wallet CREDENTIAL_ALIAS=alias_oci', OPC_CONTAINER=TEST-OBJECT-STORAGE-RMAN, OPC_COMPARTMENT_ID=ocid1.compartment.oc1..aa****sddfeq, OPC_AUTH_SCHEME=BMC)";
        backup current controlfile;
      }
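A minimal sketch of pointing the OCI CLI at the config on the persistent volume and verifying it from inside the DB pod (paths follow the steps above):

export OCI_CLI_CONFIG_FILE=/opt/oracle/diag/.oci/config   # keep the config on the block volume, not in the image
oci os ns get                                             # should return the tenancy's object storage namespace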

Monday, June 15, 2020

Dockerfile for Oracle Database 19.5 image with patches applied

Summary:

Here is the GitHub link for the Dockerfile of the Oracle Database 19.5 image with patches applied:

https://github.com/HenryXie1/Dockerfile/tree/master/OracleDatabase

The docker image has 19.3 installed and applies the patches below to bring it to 19.5:
OCT_RU_DB_p30125133_190000_Linux-x86-64.zip  OCT_RU_OJVM_p30128191_190000_Linux-x86-64.zip  
p30083488_195000DBRU_Linux-x86-64.zip

The docker image has updates to facilitate automated block storage provisioning in OKE (Oracle Kubernetes Engine).

The docker image creates three different volumes for Oradata, the Fast Recovery Area (FRA), and the Diagnostic area (diag). The three help keep datafiles safe, provide dedicated space for recovery, and give diagnostics a separate place so they don't fill up the Data and FRA volumes.

The testdb yaml files use the oci-bv storage class (Container Storage Interface, CSI based) of OKE.

Sunday, June 14, 2020

Tip: "Sending build context to Docker daemon" when running docker build

Symptom:

  When we run docker build, it prints
Sending build context to Docker daemon...
   and after a while, we hit an out-of-space issue.

Solution:

When building a large image like an Oracle database, it is better to keep only one version of the downloaded DB binary in the docker build directory.
By default, the build context sent to the docker daemon includes all the zip files in that directory (including unused versions), which can cause unnecessary space pressure.
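A .dockerignore file next to the Dockerfile can also keep unused archives out of the build context. A minimal sketch (the zip file name is just an example):

# .dockerignore -- exclude all zips except the DB binary the Dockerfile actually needs
*.zip
!LINUX.X64_193000_db_home.zip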

Tip: Error: OCI runtime create failed: container_linux.go:349 starting container process caused "exec: \"/bin/sh\": stat /bin/sh: permission denied"


Symptom:

When we run docker build for an image, we get the error below:
OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: permission denied": unknown
The error happens on the "FROM base" line of the Dockerfile.

Solution:

The reason is that the base image is somehow unavailable or corrupted. Refresh the base image and the issue is gone.
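A minimal sketch of refreshing the base image (the image name "base" here is just the placeholder used in the Dockerfile above):

docker rmi base                    # drop the broken local copy
docker pull base                   # pull a fresh one
docker build --pull -t myimage .   # or force a fresh pull of the base image during the build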


Friday, June 12, 2020

Kustomize Error: trouble configuring builtin PatchTransformer with config

Symptom:

When we run kustomize build, we get the error below:
Error: trouble configuring builtin PatchTransformer with config: `
patch: users_namespace_label_required_tp_patch.yaml
target:
  kind: K8sRequiredLabelsPodtemplate
`: unable to parse SM or JSON patch from [users_namespace_label_required_tp_patch.yaml]

Solution:

In the kustomization.yaml, there is a patches section like below:
resources:
- ../../new-namespace-base/

patches:
- patch: users_namespace_label_required_tp_patch.yaml
  target:
    kind: K8sRequiredLabelsPodtemplate

"patch" is the typo; it should be "path":

patches:
- path: users_namespace_label_required_tp_patch.yaml
  target:
    kind: K8sRequiredLabelsPodtemplate


Tuesday, May 26, 2020

Tip: Add Resources via Kustomize edit add

kustomize edit add supports glob patterns

Add directory
kustomize edit add resource <dir>

Add Yaml files
kustomize edit add resource  ./*.yaml

Tip: List Only Filenames, One Per Line

ls  *.yaml  --format single-column

Monday, May 25, 2020

Tip: remote: error: Your commit has been blocked due to certain binary file(s) being oversized or not allowed.

Symptom:

  When we git push a large file to GitLab, we hit this error:
remote: error: Your commit has been blocked due to certain binary file(s) being oversized or not allowed.

Solution:

git reset --soft HEAD~1              # use HEAD~2 or HEAD~3 depending on how far to roll back
git rm --cached <oversized file>     # unstage the oversized file so it is not committed again
git commit -m "your message"
git pull                             # to merge the changes
git push

Tip: Add Password Into Existing Private Keys

Add Password:

openssl rsa -aes256 -in your-private-key.pem -out your-private-key-encrypted.pem
writing RSA key
Enter PEM pass phrase:   ****
Verifying - Enter PEM pass phrase: ****

Remove Password:

openssl rsa -in  your-private-key-encrypted.pem -out your-private-key.pem
Private key passphrase:  ****


Saturday, May 23, 2020

Tip: find sed awk egrep

find ./ -type f -exec sed -i -e 's/oldstring/newstring/g' {} \;
find ./ -name "*.yaml" | while read -r filename; do echo "test" >> "$filename"; done
kubectl get rolebindings  --all-namespaces |egrep 'strings1|strings' |grep psp |awk '{print $2" -n "$2 }'

Goal: convert string from "adc@adc.com  ocid.aa******" to
"
- apiGroup: rbac.authorization.k8s.io 
  kind: User 
  name: ocid.aa*** # adc@adc.com
"
Tip:
Put all strings in 1.txt
cat 1.txt  | while read -r line; do echo $line | awk '{print "- apiGroup: rbac.authorization.k8s.io","\n", " kind: User","\n"," name: "$2" # "$1, "\n"}'; done


Tip: Find List of Changed Files after Git Commit

git diff-tree --no-commit-id --name-only -r ab8776d..a7d508b
ab8776d and  a7d508b are git commit SHA

Wednesday, May 20, 2020

Kubectl Stops Working After Upgrading Ubuntu

Symptom:

   After we upgrade Ubuntu to 20.04, kubectl stops working with OKE (Oracle Kubernetes Engine). The error is like:
$kubectl version
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Fatal Python error: initfsencoding: Unable to get the locale encoding
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007f2d902e0740 (most recent call first):
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: getting credentials: exec: signal: aborted (core dumped)

Solution:

  The issue is related to the kube config of OKE. In OKE, we use the oci cli to authenticate users against the OKE control plane. The Ubuntu upgrade somehow breaks the Python environment that the oci cli depends on. To fix it, re-install the Oracle Cloud oci cli; please refer to the link. A reinstall sketch follows below.
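A minimal sketch, assuming the standard installer script from Oracle's oci-cli GitHub repo:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
oci --version      # verify the CLI works again
kubectl version    # kubectl should now reach the OKE control plane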

Sunday, May 17, 2020

A few simple Tips on Kubernetes


  • CoreDNS is a Deployment and Flannel is a DaemonSet
  • A Persistent Volume (PV) is not namespaced
  • A Persistent Volume Claim (PVC) is namespaced
  • Deleting a PVC will delete the PV automatically by default in OKE (Oracle Kubernetes Engine). Change the reclaim policy if necessary (see the sketch after this list)
  • Drain the node before rebooting a worker node
  • Both TCP and UDP must be open in the worker node subnet.
  • If UDP is only opened after the VMs are up and running, we may need to recreate the VMs to let the docker daemon pick up the new settings
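A minimal sketch of changing the reclaim policy so the PV survives PVC deletion (the PV name is a placeholder):

kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv <pv-name>    # RECLAIM POLICY should now show Retain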

Tip: Name or service not known issues in Kubernetes

Symptom:

    We get the error below when we try to psql into Postgres in Kubernetes pods. The error is intermittent.
psql: could not translate host name “test-dbhost” to address: Name or service not known

We use this command to test the kube-dns service: curl -v telnet://10.96.5.5:53
The result is also intermittent; DNS resolution is kind of working but very slow.

Solution:

   We have checked that the kube-dns service is up and running, and the CoreDNS pods are up and well.
We also found that if the pods are on the same worker node, they work fine. However, when traffic crosses nodes, we hit issues. It looks like a node-to-node communication problem.

   Finally, we found that TCP is open but UDP is not open in the node subnet. We had to open UDP.
After UDP was opened, the intermittent issues still existed. It is quite possibly related to the docker daemon being stuck with the old settings. We need to rolling-restart the worker nodes to fix it.

To rolling restart worker node:

1. Assuming you have nodes available in the same AD of OKE, kubectl drain will move the PV/PVC to the new node automatically for statefulsets and deployments
2. kubectl drain <node> --ignore-daemonsets  --delete-local-data
3. reboot the node
4. kubectl uncordon <node>

Tip: Clean Oracle DB Diagnostic Home Automatically

Requirement:

  Oracle DB can generate a huge number of trace files and fill up the file system. Tired of cleaning Oracle DB trace files and incident files manually?

Solution:

SHORTP_POLICY : retention (in hours) for ordinary trace files
LONGP_POLICY : retention (in hours) for long-lived files such as incident files

adrci> set control (SHORTP_POLICY = 360) ===>15days
adrci> set control (LONGP_POLICY = 2160) ===>90 Days
adrci> show control

Purging Trace files manually:

The following commands manually purge files older than a given age in minutes, e.g. trace files older than 4880 minutes:
adrci> purge -age 4880 -type trace
adrci> purge -age 129600 -type ALERT ===> purging ALERT older than 90 days
adrci> purge -age 43200 -type INCIDENT ===> purging INCIDENT older than 30 days
adrci> purge -age 43200 -type TRACE ===> purging TRACE older than 30 days
adrci> purge -age 43200 -type CDUMP ===> purging CDUMP older than 30 days
adrci> purge -age 43200 -type HM ===> purging HM older than 30 days
adrci> show tracefile -rt

Crontab to purge files Automatically
00 20 * * * adrci exec="set home diag/rdbms/****;purge -age 4880 -type trace;purge -age 43200 -type INCIDENT;purge -age 43200 -type CDUMP"

Thursday, April 30, 2020

Steps to implement ConfigOverride for JDBC DataSource in WebLogic via WebLogic Kubernetes Operator

Here are the steps for how we implement a config override for a JDBC DataSource in WebLogic via the WebLogic Kubernetes Operator.
  • For more details, please refer to https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/configoverrides/
  • Create secret for db connections
    • kubectl -n test-poc-dev create secret generic dbsecret --from-literal=username=weblogic --from-literal=password=**** --from-literal=url=jdbc:oracle:thin:@(DESCRIPTION=(ENABLE=BROKEN)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=testdb.oraclevcn.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=testdb)))
    • kubectl -n test-poc-dev label secret dbsecret weblogic.domainUID=test-poc-dev-domain1
  • Create the DataSource module jdbc-AppDatasource when you build the docker images
  • Some good examples are on GitHub (link)
  • Create a configmap for jdbc-MODULENAME.xml; in this case it is jdbc-AppDatasource.xml
    • cd kubernetes/samples/scripts/create-weblogic-domain/domain-home-in-image/weblogic-domains/test-poc-dev-domain1/
    • Put the 2 files jdbc-AppDatasource.xml and version.txt in the configoverride directory
    • kubectl -n test-poc-dev create cm test-poc-dev-domain1-override-cm --from-file ./configoverride
      kubectl -n test-poc-dev label cm test-poc-dev-domain1-override-cm weblogic.domainUID=test-poc-dev-domain1

    • Add below in the spec section of domain.yaml
      • spec:
            ....
            configOverrides: test-poc-dev-domain1-override-cm
            configOverrideSecrets: [dbsecret]
    • Bounce the WebLogic servers to make it effective.
    • Debugging please refer https://oracle.github.io/weblogic-kubernetes-operator/userguide/managing-domains/configoverrides/#debugging

Wednesday, April 15, 2020

Error: ExternalName not working in Kubernetes

Symptom:

   We have an ExternalName service for our DB: test-db-svc:

apiVersion: v1
kind: Service
metadata:
  name: test-db-svc
  namespace: test-stage-ns
spec:
  externalName: 10.10.10.10
  ports:
  - port: 1521
    protocol: TCP
    targetPort: 1521
  sessionAffinity: None
  type: ExternalName

   After we upgrade the Kubernetes master nodes, the DNS service stops resolving the ExternalName:

 curl -v telnet://test-db-svc:1521
* Could not resolve host: test-db-svc; Name or service not known
* Closing connection 0
curl: (6) Could not resolve host: test-db-svc; Name or service not known

Solution:

   It is because the new version of Kubernetes doesn't support an IP address in ExternalName. We need to replace it with an FQDN:

apiVersion: v1
kind: Service
metadata:
  name: test-db-svc
  namespace: test-stage-ns
spec:
  externalName: testdb.testdomain.com
  ports:
  - port: 1521
    protocol: TCP
    targetPort: 1521
  sessionAffinity: None
  type: ExternalName


Tuesday, April 14, 2020

Tip: use curl to test network port and DNS service in the Pod

curl is installed in most docker images by default, so most pods have it.
We can use curl to test whether network ports are open and the DNS service is working.

Example:  To test DB service port 1521
curl -v telnet://mydb.testdb.com:1521

*   Trying 1.1.1.1:1521...
*   TCP_NODELAY set
*   connect to 1.1.1.1:1521 port 1521 failed: Connection timed out
*   Failed to connect to port 1521: Connection timed out
*  Closing connection 0
curl: (7) Failed to connect to port 1521: Connection timed out


It tells us DNS is working, as we see the IP address 1.1.1.1, but the port is not open.

Sunday, April 12, 2020

Oracle Non-RAC DB StatefulSet HA 1 Command Failover Test in OKE

Requirement:

OKE has very powerful block volume management built in. It can find, detach, and reattach block storage volumes among different worker nodes seamlessly. Here is what we are going to test:
we create an Oracle DB statefulset on OKE, imagine we have a hardware or OS issue on the worker node, and test HA failover to another worker node with only 1 command (kubectl drain).
The following things happen automatically when draining the node:
  • OKE will shutdown DB pod
  • OKE will detach PV on the worker node
  • OKE will find a new worker node in the same AD
  • OKE will attach PV in the new worker node
  • OKE will start DB pod in the new worker node
The DB in the statefulset is not RAC, but with the power of OKE, we can fail over a DB to a new VM in just a few minutes.

Solution:

  • Create service for DB statefulset
    $ cat testsvc.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
         name: oradbauto-db-service
      name: oradbauto-db-svc
    spec:
      ports:
      - port: 1521
        protocol: TCP
        targetPort: 1521
      selector:
         name: oradbauto-db-service
  • Create a DB statefulset, wait about 15 min to let DB fully up
    $ cat testdb.yaml 
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: oradbauto
      labels:
        app: apexords-operator
        name: oradbauto
    spec:
      selector:
         matchLabels:
            name: oradbauto-db-service
      serviceName: oradbauto-db-svc
      replicas: 1
      template:
        metadata:
            labels:
               name: oradbauto-db-service
        spec:
          securityContext:
             runAsUser: 54321
             fsGroup: 54321
          containers:
            - image: iad.ocir.io/espsnonprodint/autostg/database:19.2
              name: oradbauto
              ports:
                - containerPort: 1521
                  name: oradbauto
              volumeMounts:
                - mountPath: /opt/oracle/oradata
                  name: oradbauto-db-pv-storage
              env:
                - name: ORACLE_SID
                  value: "autocdb"
                - name: ORACLE_PDB
                  value: "autopdb"
                - name:  ORACLE_PWD
                  value: "whateverpass"
      volumeClaimTemplates:
      - metadata:
          name: oradbauto-db-pv-storage
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 50Gi

  • Imagine we have hardware issues on this node and need to fail over to a new node
    • Before failover: check the status of the PV and pod; the pod is running on node 1.1.1.1
    • Check whether any other pods running on the node will be affected
    • We have a node ready in the same AD as the statefulset pod
    • kubectl get pv,pvc
      kubectl get po -owide
      NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
      oradbauto-0 1/1 Running 0 20m 10.244.3.40 1.1.1.1 <none> <none>
    • 1 command to failover DB to new worker node
      • kubectl drain  <node name> --ignore-daemonsets --delete-local-data
      • kubectl drain  1.1.1.1    --ignore-daemonsets --delete-local-data
      • No need to update the MT connection string, as the DB service name is untouched and transparent to the new DB pod
    • After failover: Check the status of PV and Pod. and the pod is running on the new node 
      • kubectl get pv,pvc
      • kubectl get pod -owide
  • The movement of PV/PVC works for volumeClaimTemplates as well as for PV/PVC created via yaml files with the storage class "oci"

Monday, April 06, 2020

Tip: Helm v3.x Error: timed out waiting for the condition

Error: timed out waiting for the condition

Use --debug to find out more trace information
helm  ********  --wait --debug


Error: container has runAsNonRoot and image has non-numeric user , cannot verify user is non-root

Symptom:

When we enable Pod Security Policy in OKE (Oracle Kubernetes Engine), we only allow non-root users to run in the pods. However, we build an application with an Oracle Linux base docker image and the user oracle. We still get:
Error: container has runAsNonRoot and image has non-numeric user , cannot verify user is non-root

Solution:

The error is fairly clear: "oracle" is non-numeric, so we need to change it to a numeric UID such as 1000.
In the Dockerfile: USER oracle --> USER 1000

Thursday, March 19, 2020

Error: The required information to complete authentication was not provided

Symptom:

    When we run "oci os ns get" in Oracle Cloud, we get the error below:
WARNING: Your computer time: 2020-03-19T08:21:13.667808+00:00 differs from the server time: 2020-03-19T08:27:28+00:00 by more than 5 minutes. This can cause authentication errors connecting to services.
ServiceError:
{
    "code": "NotAuthenticated",
    "message": "The required information to complete authentication was not provided.",
    "opc-request-id": "iad-1:LadKpOv52VZyJpLcapW1oD_MfXrxSEpICkJh90iR5Xke2k437wa7PQUaP99kuGSQ",
    "status": 401
}

Solution:

   We often ignore the warning message, but the box we run oci on indeed had more than 5 minutes of time difference from the server.
   Sync the local time to fix the issue, for example:
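A minimal sketch of syncing the clock, assuming a systemd-based Linux box (which time daemon is in use varies by host):

sudo timedatectl set-ntp true    # enable NTP sync via the system time service
sudo chronyc makestep            # if chrony is the time daemon, step the clock immediately
oci os ns get                    # should succeed once the clock is within tolerance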

Wednesday, March 04, 2020

How To Add TLS Certificate Into Ingress in OKE

Requirement:

         In order to secure the traffic, we need to deploy  TLS certificates into our ingress running in OKE.  We are going to use self-signed certificates to demonstrate it. 

Solution:

  • Generate self-signed certificates via openssl
    • openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt  -config req.conf -extensions 'v3_req'
    • req.conf:
      
      [req]
      distinguished_name = ingress_tls_prometheus_test
      x509_extensions = v3_req
      prompt = no
      [ingress_tls_prometheus_test]
      C = US
      ST = VA
      L = NY
      O = BAR
      OU = BAR
      CN = www.bar.com
      [v3_req]
      keyUsage = keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = prometheus.bar.com
      DNS.2 = grafana.bar.com
      DNS.3 = alertmanager.bar.com

    • To verify it:    openssl x509 -in tls.crt -noout -text
  • Create Kubernetes TLS secret for that
    • kubectl create secret tls tls-prometheus-test --key tls.key --cert tls.crt -n monitoring
  • Add TLS section into the ingress yaml file. Example:
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: prometheus-ingress
        namespace: monitoring
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        tls:
        - hosts:
          - prometheus.bar.com
          secretName: tls-prometheus-test
        - hosts:
          - grafana.bar.com
          secretName: tls-prometheus-test
        - hosts:
          - alertmanager.bar.com
          secretName: tls-prometheus-test
        rules:
        - host: prometheus.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: prometheus-k8s
                servicePort: 9090
        - host: grafana.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: grafana
                servicePort: 3000
        - host: alertmanager.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: alertmanager-main
                servicePort: 9093
      
      
  • The ingress controller will redirect HTTP traffic to HTTPS automatically for these 3 domains.
  • Spoof the IP addresses for the DNS names via a hosts file entry (a sketch follows below) and take off the www proxy of the browser if necessary.
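A minimal sketch of the spoofing entry, reusing the example load balancer IP shown in the ingress controller note later in this blog (replace it with your own LB IP):

echo "123.123.123.123  prometheus.bar.com grafana.bar.com alertmanager.bar.com" | sudo tee -a /etc/hosts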

Thursday, February 20, 2020

Example of Pod Security Policy for Apache Httpd in OKE

Requirement:

As Pod Security Policy is enabled in the Kubernetes cluster, we need a PSP (Pod Security Policy) for the Apache httpd server. For how to create an Apache httpd docker image, please refer to the note. An HTTP server needs some special capabilities compared to normal applications.
Here is a PSP example which is tested in OKE (Oracle Kubernetes Engine).

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: oke-restricted-psp
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
    seccomp.security.alpha.kubernetes.io/defaultProfileName:  'runtime/default'
spec:
  privileged: false
  allowedCapabilities:
  - NET_BIND_SERVICE
  # Note: the usual restricted policy sets this to false to prevent escalation to root;
  # it is left enabled here for httpd.
  allowPrivilegeEscalation: true
  # Allow core volume types.
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    # Assume that persistentVolumes set up by the cluster admin are safe to use.
    - 'persistentVolumeClaim'
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    # Require the container to run without root privileges.
    rule: 'MustRunAsNonRoot'
  seLinux:
    # This policy assumes the nodes are using AppArmor rather than SELinux.
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  fsGroup:
    rule: 'MustRunAs'
    ranges:
      # Forbid adding the root group.
      - min: 1
        max: 65535
  readOnlyRootFilesystem: false
---
# Cluster role which grants access to the restricted pod security policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oke-restricted-psp-clsrole
rules:
- apiGroups:
  - extensions
  resourceNames:
  - oke-restricted-psp
  resources:
  - podsecuritypolicies
  verbs:
  - use
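The policy only takes effect for workloads whose service account (or user) is allowed to "use" it, so a binding is usually needed as well. A minimal sketch, assuming the httpd pods run under the default service account in an application namespace (both names are placeholders):

kubectl -n <app-namespace> create rolebinding httpd-psp-binding \
  --clusterrole=oke-restricted-psp-clsrole \
  --serviceaccount=<app-namespace>:default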

Wednesday, February 19, 2020

Error: cannot delete Pods with local storage when kubectl drain in OKE

Symptom

In the Kubernetes world, we often need to upgrade the Kubernetes master nodes and the worker nodes themselves.
In OKE (Oracle Kubernetes Engine) we follow the master node upgrade guide and the worker node upgrade guide.

When we run kubectl drain <node name>  --ignore-daemonsets

We get:
error: cannot delete Pods with local storage (use --delete-local-data to override): monitoring/grafana-65b66797b7-d8gzv, monitoring/prometheus-adapter-8bbfdc6db-pqjck

Solution:

 The error is due to local storage (emptyDir: {}) attached to the pods.

For Statefulset 

Please use volumeClaimTemplates. OKE supports automatic movement of the PV/PVC to a new worker node: it will detach and reattach the block storage to the new worker node (assuming they are in the same availability domain). We don't need to worry about data migration, which is excellent for statefulsets. An example is:

volumeClaimTemplates:
  - metadata:
      name: prometheus-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 50Gi
      selector:
        matchLabels:
          app: grafana
      storageClassName: oci
      volumeMode: Filesystem

For Deployment,

Because a deployment is stateless, we can delete the local data:
kubectl drain <node name> --ignore-daemonsets --delete-local-data
If the deployment has a PV/PVC attached, then, same as for a statefulset, OKE supports automatic movement of the PV/PVC to a new worker node. It will detach and reattach the block storage to the new worker node (assuming they are in the same availability domain). We don't need to worry about data migration.

Tuesday, February 18, 2020

Tip: A few commands to triage Kubernetes Network related Issues

tcptraceroute 100.95.96.3 53

nc -vz 147.154.4.38 53        # test a TCP port

nc -vz -u 147.154.4.38 53     # test a UDP port

tcpdump -ni ens3 port 31530

for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done

Thursday, February 13, 2020

Tip: Calico BGP IP AUTODETECTION Issue and Troubleshoot in OKE

Symptoms:

  We follow the Calico doc to deploy a Calico instance. After that, the Calico nodes are always not ready, though the pods are up and running.

Error in logs:
Unable to open configuration file /etc/calico/confd/config/bird6.cfg: No such file or directory 

Verify commands
# ./calico-node -bird-ready
2020-02-13 20:27:38.757 [INFO][5132] health.go 114: Number of node(s) with BGP peering established = 0
calico/node is not ready: BIRD is not ready: BGP not established with 10.244.16.1,10.244.15.1

Solutions:

     One possible reason is that there are several network interfaces on the host, and Calico may choose the wrong one with the auto-detection method. For details refer to
https://docs.projectcalico.org/networking/ip-autodetection#change-the-autodetection-method
     It can be solved by setting an environment variable on the calico-node daemonset. In OKE VMs with Oracle Linux, the primary network interface is ens*, i.e. ens3:
     kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=interface=ens*

Wednesday, February 12, 2020

Cross Namespace Ingress Usage Example in OKE

Requirement:

      The normal use case is to create the ingress in the application namespace where the application services and TLS certificates/keys sit.
      In the enterprise world, the security team is not comfortable storing TLS private keys in the application namespace; TLS private keys need to be stored securely in the namespace of the ingress controller. In this case, we need to create the ingress in the "ingress controller" namespace instead of the application namespace, and find a way to let that ingress point to services in the application namespace (cross-namespace services). Below is how we can achieve that in OKE (Oracle Kubernetes Engine).

Solution:

  • Create TLS secrets in ingress controller namespace. Refer doc
    • $ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt  -config req.conf -extensions 'v3_req'
      
      req.conf:
      [req]
      distinguished_name = ingress_tls_prometheus_test
      x509_extensions = v3_req
      prompt = no
      [ingress_tls_prometheus_test]
      C = US
      ST = VA
      L = NY
      O = BAR
      OU = BAR
      CN = www.bar.com
      [v3_req]
      keyUsage = keyEncipherment, dataEncipherment
      extendedKeyUsage = serverAuth
      subjectAltName = @alt_names
      [alt_names]
      DNS.1 = prometheus.bar.com
      DNS.2 = grafana.bar.com
      DNS.3 = alertmanager.bar.com
    • kubectl create secret tls tls-prometheus-test  --key tls.key --cert tls.crt -n ingress-nginx
  • The key to using services in different namespaces is ExternalName. It works in OKE, but may not work with other cloud providers. One ExternalName example is:
    • apiVersion: v1
      kind: Service
      metadata:
        annotations:
        name: prometheus-k8s-svc
        namespace: ingress-nginx
      spec:
        externalName: prometheus-k8s.monitoring.svc.cluster.local
        ports:
        - port: 9090
          protocol: TCP
          targetPort: 9090
        type: ExternalName
  • Create ingress in ingress controller namespace.
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: prometheus-ingress
        namespace: ingress-nginx
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        tls:
        - hosts:
          - prometheus.bar.com
          - grafana.bar.com
          - alertmanager.bar.com
          secretName: tls-prometheus-test
        rules:
        - host: prometheus.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: prometheus-k8s-svc
                servicePort: 9090
        - host: grafana.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: grafana-svc
                servicePort: 3000
        - host: alertmanager.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: alertmanager-main-svc
                servicePort: 9093

Tuesday, February 11, 2020

How To Generate Self-Signed Multiple SAN Certificate Using OpenSSL

Requirement:

    In our development services, we often need self-signed certificates. Sometimes we need to add SANs (Subject Alternative Names) to such a certificate. Below is how we use OpenSSL to achieve it.

Solution:

$ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout tls.key -out tls.crt  -config req.conf -extensions 'v3_req'

req.conf:

[req]
distinguished_name = ingress_tls_prometheus_test
x509_extensions = v3_req
prompt = no
[ingress_tls_prometheus_test]
C = US
ST = VA
L = NY
O = BAR
OU = BAR
CN = www.bar.com
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = prometheus.bar.com
DNS.2 = grafana.bar.com
DNS.3 = alertmanager.bar.com

To verify it:    openssl x509 -in tls.crt -noout -text

Tuesday, February 04, 2020

Sample Nginx Ingress Controller Integrate With Prometheus Grafana in OKE (Oracle Kubernetes Engine)

Requirement:

Ingress has proved to be very useful and efficient in the Kubernetes world. The concept of ingress can be found in the official Kubernetes doc. Ingress is similar to BIG-IP on premise: it provides rich functions for routing and controlling ingress traffic in OKE. It is also adopted by the OKE team.
This note is based on https://kubernetes.github.io/ingress-nginx/ version 0.26.
The Kubernetes version needs to be at least v1.14.0.
You would need the cluster-admin role to proceed.

Installation Steps:

  • git clone https://github.com/HenryXie1/Prometheus-Granafa-Ingress-OKE.git
  • cd  Prometheus-Granafa-Ingress-OKE/ingress-controllers/nginx
  • kubectl create -f ingress-controller.yaml
  • It will create an internal load balancer in OKE
  • Typical output is:
    • kubectl get po -n ingress-nginx
      NAME                                       READY   STATUS    RESTARTS   AGE
      nginx-ingress-controller-d7976cdbd-d2zr6   1/1     Running   0          71m
      kubectl get svc -n ingress-nginx
      NAME            TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
      ingress-nginx   LoadBalancer   10.96.197.52   123.123.123.123   80:32155/TCP,443:31641/TCP   70m

Access Prometheus via Ingress Controller:

  • For how to install Prometheus, please refer to "Install Prometheus and Grafana with High Availability in OKE (Oracle Kubernetes Engine)"
  • Steps of accessing Prometheus via ingress controller
    • Spoof the IP addresses for the DNS names via the entry below and take off the www proxy of the browser
      • 123.123.123.123         prometheus.bar.com  grafana.bar.com  alertmanager.bar.com
    • prometheus.bar.com
    • grafana.bar.com  
    • alertmanager.bar.com
  • Ingress for Grafana
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: grafana-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        rules:
        - host: grafana.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: grafana
                servicePort: 3000
  • Ingress for Alert manager
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: alertmanager-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        rules:
        - host: alertmanager.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: alertmanager-main
                servicePort: 9093
  • Ingress for Prometheus
    • apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        name: prometheus-ingress
        annotations:
          kubernetes.io/ingress.class: "nginx"
      spec:
        rules:
        - host: prometheus.bar.com
          http:
            paths:
            - path: /
              backend:
                serviceName: prometheus-k8s
                servicePort: 9090

Install Prometheus and Grafana with High Availability in OKE (Oracle Kubernetes Engine)

Requirement:

To monitor and get metrics of a containerized environment managed by Kubernetes, we are going to use Prometheus and Grafana. They provide visualized dashboards for Kubernetes systems with useful charts.
We also update the Prometheus kind to use the storage class "oci", where the Prometheus TSDB data is stored.
Prometheus is a statefulset with replicas = 2 by default, which provides high availability.
The Kubernetes version needs to be at least v1.14.0.
You would need the cluster-admin role to proceed.

Installation Steps:

  • git clone https://github.com/HenryXie1/Prometheus-Granafa-Ingress-OKE.git
  • cd  Prometheus-Granafa-Ingress-OKE
  • kubectl create -f manifests/setup
  • kubectl create -f manifests/
  • The storage section of the yaml asks Prometheus to use OCI block storage. In the future, we need to adopt CSI with the OKE storage class "oci-bv".
    •     volumeClaimTemplate:
            spec:
              storageClassName: "oci"
              selector:
                matchLabels:
                  app: prometheus
              resources:
                requests:
                  storage: 100Gi
  • Typical output is
    • $ kubectl get po -n monitoring
      NAME                                   READY   STATUS    RESTARTS   AGE
      alertmanager-main-0                    1/2     Running   9          35m
      alertmanager-main-1                    2/2     Running   0          35m
      alertmanager-main-2                    2/2     Running   0          23m
      grafana-65b66797b7-zdntc               1/1     Running   0          34m
      kube-state-metrics-6cf548479-w9dtq     3/3     Running   0          34m
      node-exporter-2kw4v                    2/2     Running   0          34m
      node-exporter-9wv7j                    2/2     Running   0          34m
      node-exporter-lphfg                    2/2     Running   0          34m
      node-exporter-s2f2f                    2/2     Running   0          34m
      prometheus-adapter-8bbfdc6db-6pnsk     1/1     Running   0          34m
      prometheus-k8s-0                       3/3     Running   0          34m
      prometheus-k8s-1                       3/3     Running   1          23m
      prometheus-operator-65fbfd78b8-7dq5r   1/1     Running   0          35m

Test Access the Dashboards

  • Prometheus

$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Then access via http://localhost:9090


  • Grafana

$ kubectl --namespace monitoring port-forward svc/grafana 3000
Then access via http://localhost:3000 and use the default grafana user:password of admin:admin.

  • Alert Manager

$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
Then access via http://localhost:9093

Integrate Ingress with Prometheus:

Uninstallation Steps:

  • kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup