Tuesday, October 30, 2018

Where To Modify Starting Arguments of K8S Core Components

Requirement:

  We have a Kubernetes cluster running on docker images (details in the github doc for Oracle K8S manual installation). Sometimes we need to add or modify the starting arguments of Kubernetes core components, ie we need to add TaintBasedEvictions=true to the kube-controller-manager component to enable an alpha feature, or we need to add an argument to the etcd component.

Solution:

    By default, the manifest files are on the master node in /etc/kubernetes/manifests. You would see these 4 yaml files. Back up the files, make the changes, and restart the Kubernetes cluster.
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
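For example, a minimal sketch of enabling the TaintBasedEvictions feature gate in kube-controller-manager.yaml (the exact command list in your manifest may differ; the flag below is an assumption based on the standard --feature-gates syntax):
# cd /etc/kubernetes/manifests
# cp kube-controller-manager.yaml /root/kube-controller-manager.yaml.bak
# vim kube-controller-manager.yaml
#   add one line under the container's command list:
#     - --feature-gates=TaintBasedEvictions=true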

Where To Find kube-controller-manager Cmd from Kubernetes Cluster

Requirement:

   We need to find the kube-controller-manager binary from a Kubernetes cluster which is running on docker images (refer to the github doc for Oracle K8S manual installation).
    All core binaries are stored inside the docker images.

Solution:

Find the controller docker instance

#docker ps |grep controller
7fc8302ccfe3        887b8144f94f                                                         "kube-controller-man…"   15 hours ago        Up 15 hours                             k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0

docker exec into the docker instance and find kube-controller-manager

#docker exec -it k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0 /bin/bash
bash-4.2# which kube-controller-manager
/usr/local/bin/kube-controller-manager

Use docker cp to copy the file out of the docker instance.

#docker cp k8s_kube-controller-manager_kube-controller-manager-instance-cas-mt2_kube-system_d739245871cdd71020650b11ac854d60_0:/usr/local/bin/kube-controller-manager /bin/
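To confirm the copied binary is usable on the host, a quick hedged check is to print its version:

#/bin/kube-controller-manager --version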



Saturday, October 27, 2018

Golang OCI SDK Create Bucket and Upload Files Into OCI Object Storage

See details in Github link

How To Run Tcpdump With Logs Rotating

Requirement:

    We need to capture TCP traffic on busy systems to diagnose network related issues. Tcpdump is a great tool, but it also dumps a huge amount of data which fills up the disk easily.

Solution:

tcpdump has rotation built in. Use the command below:
-C 8000 : rotate the capture file when it reaches 8000 × 1,000,000 bytes --> around 8G per file
-W 9 : keep 9 files in total, overwriting the oldest

nohup tcpdump -i bond0 -C 8000 -W 9 port 5801 -w tcpdump-$(hostname -s).pcap -Z root &
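Since the capture runs in the background under nohup, here is a hedged example of checking the rotated files (tcpdump appends a numeric suffix per -C/-W) and stopping the capture when done:

# ls -lh tcpdump-$(hostname -s).pcap*
# pkill -f 'tcpdump -i bond0'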

Tuesday, October 23, 2018

Turn Off Checksum Offload For K8S with Oracle UEK4 Kernel

Symptom:

     We create K8S via the Oracle doc in Oracle OCI. The mysql server, service, phpadmin server and service are created fine. However, we have problems where Pods can't communicate with other Pods. We created a debug container (refer to the blog here) with network tools and attached it to the network stack of the phpadmin pod. We find we can't access the port: nc -vz <mysql ip> 3306 times out, however ping <mysql ip> is fine.

Solution:

   Diving deeper, we see the docker0 network interface (ip addr) still has its original IP address (172.17.*.*); it does not have the flannel network IP address we created when we initialized K8S (192.168.*.*). It means the docker daemon has issues working with the flannel network and is not associated with the flannel CNI well.

   By default, they should be. It turns out to be related to the broadcom driver with the UEK4 kernel.
Refer: github doc (see terraform-kubernetes-installer):
######################################
## Disable TX checksum offloading so we don't break VXLAN
######################################
BROADCOM_DRIVER=$(lsmod | grep bnxt_en | awk '{print $1}')
if [[ -n "$${BROADCOM_DRIVER}" ]]; then
   echo "Disabling hardware TX checksum offloading"
   ethtool --offload $(ip -o -4 route show to default | awk '{print $5}') tx off
fi

So we need to turn off checksum offload and bounce K8S.
Here are steps (run on all K8S nodes) :
#ethtool --offload $(ip -o -4 route show to default | awk '{print $5}') tx off
Actual changes:
tx-checksumming: off
        tx-checksum-ipv4: off
        tx-checksum-ipv6: off
tcp-segmentation-offload: off
        tx-tcp-segmentation: off [requested on]
        tx-tcp6-segmentation: off
#kubeadm-setup.sh stop
#kubeadm-setup.sh restart
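To verify the change took effect, ethtool can show the current offload settings on the default-route interface (same interface expression as above):

#ethtool -k $(ip -o -4 route show to default | awk '{print $5}') | grep checksum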

Monday, October 22, 2018

Datapatch CDB / PDB hits ORA-06508

Symptom:

     When we apply a PSU patch on CDB / PDB, we need to run ./datapatch -verbose under OPatch.

     It reports:
      Patch 26963039 apply (pdb PDB$SEED): WITH ERRORS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/26963039/21649415/26963039_apply_CASCDBSB_PDBSEED_2018Mar08_01_32_17.log (errors)
    Error at line 113749: sddvffnc: factor=Database_Hostname,error=ORA-06508: PL/SQL: could not find ......

Reason:

     Patch 21555660 (Database PSU 12.1.0.2.5, Oracle JavaVM Component) is not in place in the CDB/PDBs. An outage is needed to upgrade this OJVM component so datapatch can pass through.

     Check both the CDB and the PDBs, as the component applies to each PDB

    sql: select comp_name, version from dba_registry where comp_name like '%JAVA Virtual Machin%' and status = 'VALID';
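A minimal sketch of running that check in the CDB root and in PDB$SEED from the shell (container name per the log above; adjust for your own PDBs):

$ sqlplus -s / as sysdba <<EOF
select comp_name, version from dba_registry where comp_name like '%JAVA Virtual Machin%' and status = 'VALID';
alter session set container=PDB\$SEED;
select comp_name, version from dba_registry where comp_name like '%JAVA Virtual Machin%' and status = 'VALID';
EOF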

Solution:

 Upgrade OJVM in the CDB and PDBs if it is not in place, to make sure they are all on the same page.

Saturday, October 20, 2018

How to Create Docker Images For Oracle DB 18.3 APEX 18.1 and ORDS 18.2

Scope:

We would like to containerize the livesql sandbox. The purpose is to create docker images for Oracle Database 18.3, APEX 18.1 and ORDS 18.2.

Database Part:

  • Go to github and download all the scripts of Database 18.3 from Oracle Github
    • Refer to the readme doc on the github to understand how the Dockerfile works for the DB
    • Put them into a directory (ie /u01/build/db18.3)
  • Download LINUX.X64_180000_db_home.zip from OTN and put it in the same directory as the scripts from github (ie /u01/build/db18.3)
  • If your servers are behind a proxy, add the below 2 lines into the Dockerfile to let the new image access the internet (change the proxy name if necessary)
    • ENV HTTP_PROXY=http://yourproxy.com:80
    • ENV HTTPS_PROXY=http://yourproxy.com:80
  • cd /u01/build/db18.3 and docker build -t oracle/database:18.3.0-ee .
  • It will build the image for Database 18.3 (use docker images to check)
  • Create volumes outside docker to hold all datafiles and related config files
    • mkdir -p /u01/build/db18.3/oradata
    • chown -R 54321:54321 /u01/build/db18.3/oradata   (54321 is the UID of the oracle user from the Docker image)
docker run -itd --name testdb  -p 1528:1521 -p 5500:5500  -e ORACLE_SID=LTEST  -e ORACLE_PDB=ltestpdb  -e ORACLE_PWD=<password>  -v /u01/build/db18.3/oradata:/opt/oracle/oradata   oracle/database:18.3.0-ee
    • It will create a new CDB named LTEST and a new PDB named ltestpdb for you
    • We can run this command again and again; it will detect that the DB was already created and will not create a new one
    • use 'docker logs testdb' to check status
    • use 'docker exec -it testdb /bin/bash' to get into the docker container to inspect
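Once docker logs testdb shows the database is ready, a hedged connectivity check from the host uses the mapped port 1528:

$ sqlplus system/<password>@//localhost:1528/ltestpdb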

APEX 18.1 Part:

  • Go to OTN
    • Download the APEX 18.1 zip
    • Upload it to /u01/build/db18.3/oradata/ and unzip it
    • chown -R 54321:54321 ./apex
    • use 'docker exec -it livesql_testdb /bin/bash' to get into the docker container
    • cd  /opt/oracle/oradata/apex
    • sqlplus / as sysdba
    • alter session set container=ltestpdb;
    • install APEX inside the docker container
@apexins SYSAUX SYSAUX TEMP /i/
— Run the apex_rest_config command
@apex_rest_config.sql

  • Change and unlock the apex related accounts
  • alter user APEX_180100 identified by <password>;
  • alter user APEX_INSTANCE_ADMIN_USER identified by <password>;
  • alter user APEX_LISTENER identified by <password>;
  • alter user APEX_PUBLIC_USER identified by <password>;
  • alter user APEX_REST_PUBLIC_USER identified by <password>;
  • alter user APEX_180100 account unlock;
  • alter user APEX_INSTANCE_ADMIN_USER account unlock;
  • alter user APEX_LISTENER account unlock;
  • alter user APEX_PUBLIC_USER account unlock;
  • alter user APEX_REST_PUBLIC_USER account unlock;

ORDS 18.2 Part:

  • Go to github and download all the scripts of ORDS 18.2 from Oracle GitHub
    • Refer to the readme doc on the github to understand how the Dockerfile works for ORDS
    • Download ORDS 18.2 from OTN
    • Put them into a directory (ie /u01/build/ords)
    • cd /u01/build/ords and docker build -t oracle/restdataservices:v1 .
    • It will build the docker image for ORDS
    • Create volumes outside docker to hold the related config files
      • mkdir -p /u01/build/ords/config/ords
      • chown -R 54321:54321 /u01/build/ords/config/ords   (54321 is the UID of the oracle user from the Docker image)
docker run -itd --name testords1 \
--network=ltest_network \
-p 7777:8888 \
-e ORACLE_HOST=<hostname> \
-e ORACLE_PORT=1528 \
-e ORACLE_SERVICE=ltestpdb \
-e ORACLE_PWD=<password> \
-e ORDS_PWD=<password> \
-v /u01/build/ords/config/ords:/opt/oracle/ords/config/ords \
oracle/restdataservices:v1
      • It will create a new ORDS standalone instance and install the ORDS schema for you
      • We can run this command again and again; it will detect the config file which was created and will not create a new one
      • use 'docker logs testords1' to check status
      • use 'docker exec -it testords1 /bin/bash' to get into the docker container to inspect
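As a hedged smoke test of the standalone ORDS from the host (port 7777 as mapped above; the exact URL path depends on your ORDS config):

$ curl -I http://localhost:7777/ords/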

Thursday, October 18, 2018

How To Associate Pod With Service In Kubernetes

The answer is to use a selector:

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
    selector:
      app: mysql
    ports:
      - port: 3306
        targetPort:  3306
        nodePort:  30301
    clusterIP: None

This specification will create a Service which targets TCP port 3306 on Pods with the app: mysql label. In other words, any pod with the label app: mysql would be associated with this mysql-service automatically in Kubernetes.
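To see which pods the selector actually picked up, a hedged check is to list the service's endpoints:

$ kubectl get endpoints mysql-service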

Tuesday, October 16, 2018

High availability of Oracle DB Pod Practice via Kubernetes Statefulset

Requirement:

     It is similar to the Oracle RAC One architecture.
     The target is to use Kubernetes to manage Oracle DB Pods like RAC One. It has most of the benefits RAC One has, but K8S can't start 2 db pods simultaneously to enable zero downtime. For details of RAC One benefits, please refer to the Oracle RAC One official website.
    When one db pod dies or its node dies, Kubernetes will start a new DB pod on the same node or another node. The datafiles are on an Oracle File System (NFS) and can be accessed by all nodes associated with the Oracle DB Pods. In this example the pod is labeled ha=livesqlsb

Solution:

  • Need to make sure the nodes which can run DB pods have the same access to the NFS
  • Label the nodes with ha=livesqlsb; in our case we have 2 nodes labeled
kubectl label  node instance-cas-db2 ha=livesqlsb
node "instance-cas-db2" labeled
kubectl label  node instance-cas-mt2  ha=livesqlsb
node "instance-cas-mt2" labeled
  • Need to create a StatefulSet with replicas: 1, yaml is like
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: livesqlsb-db
  labels:
    app: livesqlsb-db
spec:
  selector:
        matchLabels:
           ha: livesqlsb
  serviceName: livesqlsb-db-service
  replicas: 1
  template:
    metadata:
        labels:
           ha: livesqlsb
    spec:
      terminationGracePeriodSeconds: 30
      volumes:
        - name: livesqlsb-db-pv-storage1
          persistentVolumeClaim:
            claimName: livesql-pv-nfs-claim1
      containers:
        - image: oracle/database:18.3v2
          name: livesqldb
          ports:
            - containerPort: 1521
              name: livesqldb
          volumeMounts:
            - mountPath: /opt/oracle/oradata
              name: livesqlsb-db-pv-storage1
          env:
            - name: ORACLE_SID
              value: "LTEST"
            - name: ORACLE_PDB
              value: "ltestpdb"
  • We use kubectl drain <nodename> --ignore-daemonsets --force to test node eviction. It shuts the pod down gracefully and waits 30s before starting a new pod on another node
  • Or kubectl delete pod <db pod name> to test pod eviction. It shuts the pod down gracefully and waits 30s before starting a new pod on the same node
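To watch the failover happen during these tests, a hedged example using the label from this post:

$ kubectl get pods -l ha=livesqlsb -o wide -w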


Monday, October 15, 2018

Differences among Port TargetPort nodePort containerPort in Kubernetes

We use the below yaml to explain:

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: mysql
spec:
    selector:
      app: mysql
    ports:
      - port: 3309
        targetPort:  3306
        nodePort:  30301
    clusterIP: None


This specification will create a Service which targets TCP port 3306 on any Pod with the app: mysql label, and exposes it on an abstracted Service port 3309 (targetPort 3306 is the port the container accepts traffic on; port 3309 is the abstracted Service port, which can be any port other pods use to access the Service). nodePort 30301 exposes the service outside the Kubernetes cluster via kube-proxy.

  • The port is 3309, which means the service can be accessed by other services in the cluster at port 3309 (it is advised to keep it the same as targetPort). However, when the type is LoadBalancer, port 3309 has a different scope: it is the service port on the LoadBalancer, i.e. the port the LoadBalancer listens on, because the type is not ClusterIP any more.
  • The targetPort is 3306, which means the service actually forwards to port 3306 on the pods
  • The nodePort is 30301, which means the service can be accessed via kube-proxy on port 30301
  • containerPort is similar to targetPort; it is used in the pod definition yaml
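For example, with the nodePort above, the mysql service can be reached from outside the cluster via any node (hedged; substitute a real node IP):

$ nc -vz <node-ip> 30301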

How To Use Python To Backup DB Files To Oracle OCI Object Storage

Please refer to my other related blogs below. Not just for DB files, archivelogs, etc, but for all files in the OS.

Python3 OCI SDK Create Bucket and Upload Files Into OCI Object Storage

Python3 OCI SDK Download And Delete Files From OCI Object Storage

Saturday, October 13, 2018

How To Add PersistentVolume of K8S From Oracle OCI File System(NFS)

You need to create a File System and mount targets in OCI first; then we can let K8S mount and use them. Please refer to the official Oracle Doc.

Then to create NFS PV , PVC in K8S

  • Create a Persistent Volume for DB NFS file storage. /cas-data is the mount target created in the OCI File System. yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-nfs-volume1
spec:
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/cas-data"
    server: 100.106.148.12

  • Create a Persistent Volume Claim for DB NFS file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-nfs-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Gi
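After creating both, a hedged check that the claim bound to the volume:

$ kubectl get pv,pvc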

How To Create Oracle 18.3 DB on NFS In Kubernetes

Requirement:

   We have an existing docker image for Oracle DB 18.3 which is running fine.
   We need to move it to the Kubernetes cluster which is running on the same hosts.

Solution:

  • Label nodes for nodeSelector usages
kubectl label nodes instance-cas-db2 dbhost=livesqlsb
kubectl label nodes instance-cas-mt2 mthost=livesqlsb
  • To Create:  kubectl create -f <yaml file>
  • Create a Persistent Volume for DB NFS file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-nfs-volume1
spec:
  capacity:
    storage: 300Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/cas-data"
    server: 100.106.148.12

  • Create a Persistent Volume Claim for DB NFS file storage. yaml is like
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-nfs-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Gi

  • Create Service for DB to be accessed by other Apps  in the K8S cluster. yaml is like
apiVersion: v1
kind: Service
metadata:
  labels:
    app: livesqlsb-db
  name: livesqlsb-db-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: livesqlsb-db 

  • Create DB Pod in the K8S cluster. yaml is like
apiVersion: v1
kind: Pod
metadata:
  name: livesqlsb-db
  labels:
    app: livesqlsb-db
spec:
  volumes:
    - name: livesqlsb-db-pv-storage1
      persistentVolumeClaim:
       claimName: livesql-pv-nfs-claim1
  containers:
    - image: oracle/database:18.3v2
      name: livesqldb
      ports:
        - containerPort: 1521
          name: livesqldb
      volumeMounts:
        - mountPath: /opt/oracle/oradata
          name: livesqlsb-db-pv-storage1
      env:
        - name: ORACLE_SID
          value: "LTEST"
        - name: ORACLE_PDB
          value: "ltestpdb"
  nodeSelector:
          dbhost: livesqlsb
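After kubectl create -f on the yaml files above, a hedged verification that the pod landed on the labeled node and the DB is coming up:

$ kubectl get pods -o wide
$ kubectl logs livesqlsb-db | tail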

How To Push/Pull Docker Images Into Oracle OKE Registry

Requirement:

   We have built some customized docker images for our apps. We need to upload them to the OKE registry to be used by OKE later. Please refer to the official oracle doc.

Solution:

  • Make sure you have correct privileges to push images to OCI registry. You need your tenancy admin to update the policies to allow you to do that
  • Generate Auth Token from OCI  user settings. see details in official oracle doc
  • On the host where your docker images are, use docker to login
docker login phx.ocir.io   (we use the phoenix region)
If users are federated with another directory service:
Username: <tenancy-namespace>/<federation name>/test.test@oracle.com
i.e. mytenancy-namespace/corp_login_federate/test.test@oracle.com
If there is no federation, remove <federation name>
Password: <the Auth token you generated before>
Login Succeeded
  • Tag the images you would like to upload
docker tag hello-world:latest <region-code>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>
docker tag hello-world:latest phx.ocir.io/peo/engops/hello-world:latest
  • Remember to add "repo-name"
  • Push the image to registry
docker push  phx.ocir.io/peo-namespace/engops/hello-world:latest
  • Pull the image
 docker pull phx.ocir.io/peo-namespace/engops/hello-world
  • To use it in K8S yaml file, we need to add secret for docker login. Refer k8s doc and oci doc for details
kubectl create secret docker-registry iad-ocir-secret --docker-server=iad.ocir.io --docker-username='<tenancy-namespace>/<federation name>/test.test@oracle.com' --docker-password='******' --docker-email='test@test.com'

 Part of a sample yaml is like:

spec:
      containers:
      - name: helloworld
    # enter the path to your image, be sure to include the correct region prefix 
        image: <region-code>.ocir.io/<tenancy-namespace>/<repo-name>/<image-name>:<tag>
        ports:
        - containerPort: 80
      imagePullSecrets:
    # enter the name of the secret you created
      - name: <secret-name>
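Then create the deployment and confirm the node can pull from the registry using the secret (hedged example; the yaml file name is illustrative):

$ kubectl create -f helloworld.yaml
$ kubectl describe pod <pod-name>    (the Events section should show the image being pulled successfully)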


Python3 OCI SDK Download And Delete Files From OCI Object Storage

Requirement:

We need to use OCI object storage for our backup purposes. We need to download backup files, and we also need to delete obsolete backup files.
Before we do that, we need to set up the config file for the OCI SDK to get the correct user credential, tenancy, compartment_id, etc. Refer to my blog for an example:

Solution:

Download files example:

#!/u01/python3/bin/python3
import oci
import argparse
parser = argparse.ArgumentParser(description= 'Download files from Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to download from ')
parser.add_argument('files_location',help='The full path of location to save downloaded files, ie  /u01/archivelogs')
parser.add_argument('prefix_files',nargs='*',help='The filenames to download, No wildcard needed, ie livesql will match livesql*')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
mybucketname = args.bucketname
retrieve_files_loc = args.files_location
prefix_files_name = args.prefix_files
print(args)
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
listfiles = object_storage.list_objects(namespace,mybucketname,prefix=prefix_files_name)
#print(listfiles.data.objects)
for filenames in listfiles.data.objects:
   get_obj = object_storage.get_object(namespace, mybucketname,filenames.name)
   with open(retrieve_files_loc+'/'+filenames.name,'wb') as f:
       for chunk in get_obj.data.raw.stream(1024 * 1024, decode_content=False):
           f.write(chunk)
   print(f'downloaded "{filenames.name}" in "{retrieve_files_loc}" from bucket "{mybucketname}"')
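Usage is like below (hedged; assuming the script is saved as download_files.py and made executable):

$ ./download_files.py mybucket /u01/archivelogs livesql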

Delete files example:

#!/u01/python3/bin/python3
import oci
import sys
import argparse
parser = argparse.ArgumentParser(description= 'Delete files from Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to delete from ')
parser.add_argument('prefix_files',nargs='*',help='The filenames to delete, No wildcard needed, ie livesql will match livesql*')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
mybucketname = args.bucketname
prefix_files_name = args.prefix_files

config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
listfiles = object_storage.list_objects(namespace,mybucketname,prefix=prefix_files_name)
#print(listfiles.data.objects)
#bool(listfiles.data.objects)
if not listfiles.data.objects:
   print('No files found to be deleted')
   sys.exit()
else:
   for filenames in listfiles.data.objects:
      print(f'File in Bucket "{mybucketname}" to be deleted: "{filenames.name}"')

deleteconfirm = input('Are you sure to delete above files? answer y or n :')
if deleteconfirm.lower() == 'y':
    for filenames in listfiles.data.objects:
        object_storage.delete_object(namespace, mybucketname,filenames.name)
        print(f'deleted "{filenames.name}" from bucket "{mybucketname}"')
else:
    print('Nothing deleted')

Make the Script Executable Without a Python Interpreter

pip install pyinstaller
pyinstaller -F <your python script>
In the dist folder, you will see the executable file of your python script.
Remember it needs ~/.oci/config and the ~/.oci/oci api key, these 2 files, to login to oracle OCI.
Otherwise you may get an error like:

oci.exceptions.ConfigFileNotFound: Could not find config file at /root/.oci/config

Python3 OCI SDK Create Bucket and Upload Files Into OCI Object Storage

Requirement:

We need to use OCI object storage for our backup purposes. First we create a bucket, then we put all our backup files into the bucket.
Before we do that, we need to set up the config file for the OCI SDK to get the correct user credential, tenancy, compartment_id, etc. Refer to my blog for an example:

Solution:

Create bucket example:

#!/u01/python3/bin/python3
import oci
import argparse
#set bucket name to create
parser = argparse.ArgumentParser(description= 'Create Bucket in Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to be created')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
bucket_name = args.bucketname
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
from oci.object_storage.models import CreateBucketDetails
request = CreateBucketDetails()
request.compartment_id = compartment_id
request.name = bucket_name
bucket = object_storage.create_bucket(namespace, request)
print(f'Create bucket "{bucket_name}" in compartment_id "{compartment_id}"')

Upload files into the bucket example:

#!/u01/python3/bin/python3
import oci
import glob
import os
import argparse
parser = argparse.ArgumentParser(description= 'Upload files into Oracle cloud Object Storage')
parser.add_argument('bucketname',help='The name of bucket to upload')
parser.add_argument('files',nargs='*',help='The Full path files to upload, wildcard can be used to match multiple files, ie *livesql-archivelog*')
parser.add_argument('--version', '-v', action='version', version='%(prog)s 1.0')
args = parser.parse_args()
mybucketname = args.bucketname
upload_files_loc = args.files
config = oci.config.from_file()
identity = oci.identity.IdentityClient(config)
compartment_id = config["compartment_id"]
object_storage = oci.object_storage.ObjectStorageClient(config)
namespace = object_storage.get_namespace().data
for path in upload_files_loc:
    with open(path,'rb') as f:
       obj = object_storage.put_object(namespace,mybucketname,os.path.basename(path),f)
       print(f'uploaded "{path}" to bucket "{mybucketname}"')
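Usage is like below (hedged; assuming the script is saved as upload_files.py and made executable):

$ ./upload_files.py mybucket /u01/backup/*livesql-archivelog*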

Make the Script Executable Without a Python Interpreter

pip install pyinstaller
pyinstaller -F <your python script>
In the dist folder, you will see the executable file of your python script.
Remember it needs ~/.oci/config and the ~/.oci/oci api key, these 2 files, to login to oracle OCI.
Otherwise you may get an error like:

oci.exceptions.ConfigFileNotFound: Could not find config file at /root/.oci/config

Friday, October 12, 2018

Prepare Config File For Python3 OCI SDK

Refer official doc
  • mkdir ~/.oci/
  • vim ~/.oci/config   --- an example would be like
[DEFAULT]
user=ocid1.user.oc1..testaaaaaaaa2rwnm5zt2q3kvhjfgrd3w
fingerprint=85:7e:55:e3:cd:63:6a:87:d7:c5:e2:87:40:4e:71:95
key_file=/home/oracle/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..testestjizbiv4zjm763rbrtd3dfhpjq
compartment_id=ocid1.compartment.oc1..testaaaaatesttest
region=us-phoenix-1
  • Use the below code to load the config file for the python OCI SDK
>>> from oci.config import from_file
>>> config = from_file()
# Using a different profile from the default location
>>> config = from_file(profile_name="integ-beta")
# Using the default profile from a different file
>>> config = from_file(file_location="~/.oci/config")
#display config setting to see if they are in place
>>> config
{'log_requests': False, 'additional_user_agent': '', 'pass_phrase': None, 'user': 'ocid1.user.oc1..aaaaaaaa2rwntegn3vjf5eg5bp2zuvl7ria', 'fingerprint': '85:7e:03:e3:cd:63:6a:87:d7:66:e2:87:40:4e:71:95', 'key_file': '/home/oracle/.oci/oci_api_key.pem', 'tenancy': 'ocid1.tenancy.oc1..aaaaaatestsetiv4zjm76fhpjq7cmnka', 'compartment_id': 'ocid1.compartment.oc1..aaaaaaaanwshxt3acxczvfsy3ykq', 'region': 'us-phoenix-1'}

Python 3: TypeError: 'bytes' object does not support item assignment

Symptom:

    There is a function in the python code

import urllib.parse
import urllib.request

mesg = { 'clientid':'eeeeeee','token':'ttttttt','channelid':'hhhhh','text':'' }
def mesg2slack(mesgtext):
    """
    send notification mesg to slack channel
    """
    global mesg
    mesg['text'] = mesgtext
    mesg = urllib.parse.urlencode(mesg).encode("utf-8")
    slackurl = 'https://test.test.com/apex/test/v1/push.message'
    req = urllib.request.Request(slackurl, data=mesg) # this will make the method "POST"
    resp = urllib.request.urlopen(req)

The first invocation mesg2slack('first try') works fine.
The second invocation mesg2slack('2nd try') errors out with: TypeError: 'bytes' object does not support item assignment


Diagnosis:

      Be careful when we use a global variable in python. We use "global mesg" in the function, and mesg was set as a dict. However, when we do mesg = urllib.parse.urlencode(mesg).encode("utf-8"), it rebinds mesg to a bytes object, so the next call's mesg['text'] = mesgtext fails.
type(mesg) was 'dict' when mesg = { 'clientid':'eeeeeee','token':'ttttttt','channelid':'hhhhh','text':'' }
type(mesg) is 'bytes' after mesg = urllib.parse.urlencode(mesg).encode("utf-8")

Solution:

    Use a local variable mesgutf for urllib:

def mesg2slack(mesgtext):
    """
    send notification mesg to slack channel
    """
    global mesg
    mesg['text'] = mesgtext
    mesgutf = urllib.parse.urlencode(mesg).encode("utf-8")
    slackurl = 'https://test.test.com/apex/smi/v1/push.message'
    req = urllib.request.Request(slackurl, data=mesgutf) # this will make the method "POST"
    resp = urllib.request.urlopen(req)

Wednesday, October 10, 2018

How To Make Your Own Container Tools To Debug Kubernetes network Issue

Requirement:

   Sometimes we need to get into a docker container to check network, storage, etc for debugging. However, a base image won't have tools like ip, curl, ssh, sftp, wget, netstat, nc, ping installed, as we mean to keep images as slim as possible. How can we debug in the container without such tools?

Solution:

   Create our own container with all the tools we need and attach it to the network stack of the app's container.
Here are some details

  • docker run -itd --name debug oraclelinux:7-slim
  • docker exec -it debug /bin/bash
  • <debug container># yum install -y openssh-clients curl iproute ... (etc, the tools you need)
  • exit
  • docker commit debug henry-swiss-knife:v1
  • later you can add more tools into your own container image
Then use this henry-swiss-knife image to attach to the network stack of a kubernetes pod
  • use docker ps |grep apex   (find the container id of the K8S pod of apex, which is the example here). In this case it is 44c780d348bd (the pod container running "/pause")
[root@instance-cas-mt2 ~]# docker ps|grep apex
340722fe6f77        4b39de352b36                                                         "/bin/sh -c $ORDS_HO…"   18 hours ago        Up 18 hours                                  k8s_apexords_apexords_default_8b06d971-cb89-11e8-a112-000017010a8f_0
44c780d348bd        container-registry.oracle.com/kubernetes_developer/pause-amd64:3.1   "/pause"                 18 hours ago        Up 18 hours                                  k8s_POD_apexords_default_8b06d971-cb89-11e8-a112-000017010a8f_0
  • docker run -itd --name debug --net=container:44c780d348bd henry-swiss-knife:v1
[root@instance-cas-mt2 ~]# docker run -itd --name debug --net=container:44c780d348bd henry-swiss-knife:v1
904180885ae527b3fc4f34a319ab6dfae39af960e29b2dbf7a5902ead55684e8

  • docker exec -it 90418 /bin/bash    (get into the debug container to debug K8S network stack)
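Inside the debug container you now share the pod's network namespace, so the earlier connectivity checks work directly, for example (hedged):

# ip addr
# nc -vz <mysql service ip> 3306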

Tuesday, October 09, 2018

How To Change Kubernetes POD CIDR

Requirement:

  Sometimes the default K8S Pod CIDR overlaps a host CIDR in the intranet, causing problems with pod-to-pod communication. We need to update the Pod CIDR so it does not overlap any CIDR in the host network.

Solution:

   It is based on Oracle K8S v1.10.5+2.0.2.el7 (from yum.oracle.com). In this case we would like to change the Pod CIDR to 192.168.0.0/16 (it is advised to use a /16 CIDR).
On the master node:

  • ATTEN: The command below will stop/erase the existing K8S cluster and recreate a new one. All configurations of K8S will be lost, so please take a backup. However, it won't touch our own docker images.
  • as root 
  • # export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes_developer
  • # kubeadm-setup.sh up --pod-network-cidr  192.168.0.0/16
  • Follow the instructions to get all other worker node to join the new cluster

[===> PLEASE DO THE FOLLOWING STEPS BELOW: <===]
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of machines by running the following on each node
as root:
  export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes_developer && kubeadm-setup.sh join 100.106.146.3:6443 --token xxiaxy.132mis6d38xg6y3b --discovery-token-ca-cert-hash sha256:dc958ac229e213ca5f04fd609a44aa09606a459da06800224b01fa
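After the cluster is back, a hedged check of the new Pod CIDR assigned to each node:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'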

Monday, October 08, 2018

How To Manually Install Kubernetes for Oracle Linux in Oracle OCI

Requirement:

To manually install Kubernetes in Oracle OCI via the Oracle released Docker and Kubernetes versions from the Oracle Container Registry.
Refer to the official doc

Preparation(All Kubernetes Nodes):

  • Assume the master node and worker nodes are in the same VCN. Otherwise we need to add access rules in the OCI policy to let the nodes communicate with each other. Details in the doc
  • #yum-config-manager --enable ol7_addons
  • #yum install docker-engine
  • #systemctl enable docker
  • #systemctl start docker
  • #docker login container-registry.oracle.com/kubernetes_developer   (we can get a free account from the Oracle Container Registry)
  • # iptables -P FORWARD ACCEPT
  • # firewall-cmd --add-masquerade --permanent
  • # firewall-cmd --add-port=10250/tcp --permanent
  • # firewall-cmd --add-port=8472/udp --permanent
  • On Master Node only:  # firewall-cmd --add-port=6443/tcp --permanent
  • # systemctl restart firewalld
  • # /usr/sbin/setenforce 0
  • #vim /etc/selinux/config  and set SELINUX=permissive

Setting Master Node

  • #yum install kubeadm
  • # kubeadm-setup.sh up
.......
Please wait ...
\ - 75% completed
Waiting for the control plane to become ready ...
................
100% completed
.......
[===> PLEASE DO THE FOLLOWING STEPS BELOW: <===]
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can now join any number of machines by running the following on each node
as root:
  export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes_developer && kubeadm-setup.sh join 100.106.146.3:6443 --token ********** --discovery-token-ca-cert-hash sha256:****************


  • groupadd k8sgroup; useradd -G k8sgroup k8suser
  • visudo --- to add "k8suser ALL=(ALL)       ALL" below "root ALL=(ALL)       ALL"
  • su - k8suser
  • mkdir -p $HOME/.kube
  • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  • sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • echo 'export KUBECONFIG=$HOME/.kube/config' >> $HOME/.bashrc
  • use this command to verify: $ kubectl get pods -n kube-system, output would be like
[k8suser@instance-cas-mt2 .kube]$ kubectl get pods -n kube-system
NAME                                       READY     STATUS    RESTARTS   AGE
etcd-instance-cas-mt2                      1/1       Running   0          2h
kube-apiserver-instance-cas-mt2            1/1       Running   1          2h
kube-controller-manager-instance-cas-mt2   1/1       Running   0          2h
kube-dns-5c57c4787c-xzsgz                  3/3       Running   0          2h
kube-flannel-ds-87xb9                      1/1       Running   0          2h
kube-proxy-mwn46                           1/1       Running   0          2h
kube-scheduler-instance-cas-mt2            1/1       Running   0          2h
kubernetes-dashboard-7df769d745-m4mgx      1/1       Running   0          2h

 Setting Worker Node:

  • #yum install kubeadm
  • export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes_developer && kubeadm-setup.sh join 100.106.146.3:6443 --token ******* --discovery-token-ca-cert-hash sha256:*********
Starting to initialize worker node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry.oracle.com/kubernetes_developer ...
Trying to pull repository container-registry.oracle.com/kubernetes_developer/kube-proxy-amd64 ...
v1.10.5: Pulling from container-registry.oracle.com/kubernetes_developer/kube-proxy-amd64
Digest: sha256:*****
Status: Image is up to date for container-registry.oracle.com/kubernetes_developer/kube-proxy-amd64:v1.10.5
Checking whether docker can run container ...
Checking firewalld settings ...
Checking iptables default rule ...
Checking br_netfilter module ...
Checking sysctl variables ...
Enabling kubelet ...
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
Check successful, ready to run 'join' command ...
[preflight] Running pre-flight checks.
[validation] WARNING: kubeadm doesn't fully support multiple API Servers yet
[discovery] Trying to connect to API Server "100.106.146.3:6443"
[discovery] Trying to connect to API Server "100.106.146.3:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://100.106.146.3:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://100.106.146.3:6443"
[discovery] Requesting info from "https://100.106.146.3:6443" again to validate TLS against the pinned public key
[discovery] Requesting info from "https://100.106.146.3:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "100.106.146.3:6443"
[discovery] Successfully established connection with API Server "100.106.146.3:6443"
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "100.106.146.3:6443"
[discovery] Successfully established connection with API Server "100.106.146.3:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.




  • kubectl get nodes

NAME                STATUS    ROLES     AGE       VERSION
instance-test-db2   Ready     <none>    5m        v1.10.5+2.0.2.el7
instance-test-mt2   Ready     master    3h        v1.10.5+2.0.2.el7


Sunday, October 07, 2018

How To Sftp Into Cloud VMs via Private Key

Symptom:

   Assume we can already putty into the VMs with key pairs. We need to use sftp to upload files into the cloud VMs. We have to use the public/private key pair to login to the OS


Solution:


  • First we need to export an openssh private key from PuTTYgen
  • Load your private key via puttygen
  • Go to Conversions->Export OpenSSH key and export your private key (ie d:\myprivatekey)
  • Then use sftp -oIdentityFile=myprivatekey opc@<hostname or ip>
  • sftp> put <file>

Thursday, October 04, 2018

How to Send Archivelogs From Primary to Another Primary

Symptom:

   We need to send archive logs from an Oracle primary open read-write database to a remote primary open read-write database which works as a Goldengate log miner or serves other purposes

Solution:

    We need to set 2 init parameters

  • log_archive_config
ie log_archive_config='dg_config=(primary,dr,newprimary,newdr......)'
We need to include all related db_unique_name values in this parameter.
This parameter controls who I can send archive logs to and who I allow to receive them.
see oracle doc

  • log_archive_dest_n
ie log_archive_dest_2='service=TEST_DR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name=TEST_DR net_timeout=30 valid_for=(online_logfile,all_roles)'

It specifies the details of the destinations the archive logs are sent to.
see oracle doc
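A minimal sketch of setting both parameters on the sending primary (values quoted from above; adjust the service and db_unique_name list to your environment):

$ sqlplus -s / as sysdba <<'EOF'
alter system set log_archive_config='dg_config=(primary,dr,newprimary,newdr)' scope=both;
alter system set log_archive_dest_2='service=TEST_DR ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name=TEST_DR net_timeout=30 valid_for=(online_logfile,all_roles)' scope=both;
EOF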

Monday, October 01, 2018

kubeadm-setup.sh Issues Of Connecting Oracle Container Registry

Symptom:

   We follow the Oracle Kubernetes doc to install kubeadm. kubeadm-setup.sh has "region" logic added in the code to connect to the container registry of each region,
    ie for Phoenix it will contact container-registry-phx.oracle.com/kubernetes
    However, we have problems pulling images from it for the latest 1.10.5, which is a developer release.
The error is like:
#kubeadm-setup.sh up
Starting to initialize master node ...
Checking if env is ready ...
Checking whether docker can pull busybox image ...
Checking access to container-registry-phx.oracle.com/kubernetes ...
Trying to pull repository container-registry-phx.oracle.com/kubernetes/kube-proxy-amd64 ...
manifest for container-registry-phx.oracle.com/kubernetes/kube-proxy-amd64:v1.10.5 not found
[ERROR] docker cannot pull kube-proxy-amd64:v1.10.5 from container-registry-phx.oracle.com/kubernetes registry

Solution:

    Set an env variable to specify the registry we would like to connect to:
#export KUBE_REPO_PREFIX=container-registry.oracle.com/kubernetes_developer
and rerun kubeadm-setup.sh
Meanwhile, if we want to install 1.9.1 instead, we probably have to disable the preview and developer yum repos to let yum find the older but stable version


Docker Login Issues of Oracle Container Registry

Symptom:

   When we try to docker pull images from the Oracle Container Registry, we get an error like:
[ERROR] Please login with valid credential to the container-registry.oracle.com/kubernetes_developer
        # docker login container-registry.oracle.com/kubernetes_developer

   Then we input username and password, and it logs in successfully. The docker config.json has the 'auth' recorded. But when we use docker to pull images again, we still get the same error.

Solution:

    The reason behind this is that we need to push the button on the Oracle Container Registry site to agree to the terms for every component we would like to pull. It is for legal purposes. The text is like below. Once you click that, we are able to pull images.

You must agree to and accept the Oracle Standard Terms and Restrictions prior to downloading from the Oracle Container Registry. Please read the license agreement on the following page carefully.