Tuesday, December 25, 2018
How To Upload K8S Pods Logs Into OCI Object Storage via Fluentd
Please refer to the GitHub doc for details.
Sunday, December 23, 2018
How To Restart Fluentd Process in K8S Without Changing Pod Name
Requirement:
When we debug K8S apps running in Pods, we often delete the Pods to force the apps inside to restart and reload their configuration. For a StatefulSet, the Pod names don't change. For a Deployment or DaemonSet, however, new Pod names are generated after we delete the Pods, which makes it harder to track which Pod we are debugging in. Here is an easier way to restart an app in a Pod while keeping the same Pod name.
Solution:
We use fluentd as an example.
- Update the fluentd config file in the pod, i.e. the /etc/fluent/fluent.conf file
- Back up the existing docker image via docker tag:
docker tag k8s.gcr.io/fluentd-elasticsearch:v2.0.4 k8s.gcr.io/fluentd-elasticsearch:backup
- Commit the changes to the conf file into the docker image; otherwise all your changes will be lost after a bounce. Use docker ps | grep fluent to find the correct container name for the K8S pod.
docker commit <full container name> k8s.gcr.io/fluentd-elasticsearch:v2.0.4
- Use kubectl exec -it <pod name> -- /bin/bash to get into the pod
- Since ps is not installed by default in many standard pod images, we can use the find command to discover which PID the fluentd process is running as:
find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null
- We can use kill to send signals to the fluentd process (see the doc link), e.g. send SIGHUP to ask the process to reload the conf file. It is quite possible that fluentd just restarts and triggers a pod bounce by itself; that is fine, because we have committed our changes into the docker image.
kill -SIGHUP 8
- In this way the pod name is kept: you will have the same pod name after the pod bounces.
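The /proc scan above can also be wrapped in a small helper that prints the PIDs of processes matching a name. A sketch (find_pids is our own name, not a standard tool):

```shell
# Scan /proc for processes whose command name matches a pattern.
# Useful inside slim container images where ps is not installed.
find_pids() {
  name="$1"
  for d in /proc/[0-9]*; do
    # /proc/<pid>/comm holds the short command name of the process
    if grep -q "$name" "$d/comm" 2>/dev/null; then
      echo "${d#/proc/}"
    fi
  done
}

# Inside the fluentd pod we would look for the fluentd process:
find_pids fluentd
```

Once the PID is known, kill -SIGHUP <pid> asks fluentd to reload its configuration.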
How to Enable Debug of Fluentd Daemonset in K8S
Requirement:
We have many K8S Pods running; for example, we have a fluentd pod running on each worker node as a DaemonSet. We need to enable fluentd's debug mode to get more information in the logs. The same requirement applies to any other application running in Pods: as long as the application accepts parameters to enable debug mode, or to put more trace into its log files, we should be able to enable it in K8S pods.
Solution:
- First we need to find what parameters we can pass to the application to enable debug output. fluentd has two such parameters, -v and -vv, which increase the verbosity of its output; please refer to the fluentd official website.
- We need to get the yaml of the daemonset from kubectl. The same concept applies to a deployment or statefulset.
kubectl get daemonset -n <namespace> <daemonset name> -o yaml > /tmp/temp.yaml
- Edit this temp.yaml file and find the section that passes parameters to fluentd. For fluentd it looks like:
- name: FLUENTD_ARGS
  value: --no-supervisor -q
- Update -q to be -v or -vv, like
- name: FLUENTD_ARGS
  value: --no-supervisor -vv
- Save the temp.yaml and apply it
kubectl apply -f /tmp/temp.yaml
- It won't take effect right away. The config is stored in the etcd data store; when you delete the pods, the new pods will read the latest config and start with the -vv parameter.
- Then we can use kubectl logs -n devops <pod name> to see the debug info from the pods.
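The same change can be made without round-tripping the full yaml, using a strategic merge patch. A sketch, assuming the container in the DaemonSet is named fluentd (check your own spec):

```yaml
# patch.yaml -- strategic-merge patch; containers are merged by name,
# so only the FLUENTD_ARGS env var on the "fluentd" container changes.
spec:
  template:
    spec:
      containers:
      - name: fluentd
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -vv
```

Apply it with kubectl patch daemonset <daemonset name> -n <namespace> -p "$(cat patch.yaml)". As with kubectl apply above, the pods pick up the change when they are recreated.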
Tuesday, December 18, 2018
How To Use Openssl Generate PEM files
PEM format Key Pair with Passphrase
openssl genrsa -des3 -out private.pem 2048    (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem    (public key pem file)
PEM format Key Pair without Passphrase
openssl genrsa -out private.pem 2048    (private key pem file)
openssl rsa -in private.pem -outform PEM -pubout -out public.pem    (public key pem file)
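To sanity-check that the two files really form a pair, compare the key moduli. A sketch that regenerates a passphrase-free pair and prints the two digests, which must match:

```shell
# Generate a 2048-bit RSA key pair in PEM format without a passphrase.
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -outform PEM -pubout -out public.pem

# The moduli must be identical if public.pem was derived from private.pem.
openssl rsa -in private.pem -noout -modulus | openssl md5
openssl rsa -pubin -in public.pem -noout -modulus | openssl md5
```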
How To Move Existing DB Docker Image To Kubernetes
Requirement:
We have an existing docker image for Oracle DB 18.3 which is running fine. The docker command is:
docker run -itd --name livesql_testdb1 \
-p 1521:1521 -p 5501:5500 \
-e ORACLE_SID=LTEST \
-e ORACLE_PDB=ltestpdb \
-v /u03/LTEST/oradata:/opt/oracle/oradata \
-v /u03/ALTEST/oradata:/u02/app/oracle/oradata \
oracle/database:18.3v2
We need to move it to a kubernetes cluster running on the same host.
Solution:
- Label the nodes for nodeSelector usage:
kubectl label nodes instance-cas-db2 dbhost=livesqlsb
kubectl label nodes instance-cas-mt2 mthost=livesqlsb
- To create each object below: kubectl create -f <yaml file>
- Create Persistent Volumes for DB file storage. The yaml is like:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-volume1
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/u03/LTEST/oradata"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: livesqlsb-pv-volume2
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/u03/ALTEST/oradata"
- Create Persistent Volume Claims for DB file storage. The yaml is like:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-claim2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: livesql-pv-claim1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
- Create a Service so the DB can be accessed by other apps in the K8S cluster. The yaml is like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: livesqlsb-db
  name: livesqlsb-db-service
  namespace: default
spec:
  clusterIP: None
  ports:
  - port: 1521
    protocol: TCP
    targetPort: 1521
  selector:
    app: livesqlsb-db
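Because clusterIP is None, this is a headless service: inside the cluster, its DNS name resolves directly to the DB pod's IP. A client app could then use a connect descriptor like the following sketch (the ltestpdb service name comes from ORACLE_PDB; the alias LIVESQLSB is made up for illustration):

```
LIVESQLSB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = livesqlsb-db-service.default.svc.cluster.local)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ltestpdb))
  )
```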
- Create the DB Pod in the K8S cluster. The yaml is like:
apiVersion: v1
kind: Pod
metadata:
  name: livesqlsb-db
  labels:
    app: livesqlsb-db
spec:
  volumes:
    - name: livesqlsb-db-pv-storage1
      persistentVolumeClaim:
        claimName: livesql-pv-claim1
    - name: livesqlsb-db-pv-storage2
      persistentVolumeClaim:
        claimName: livesql-pv-claim2
  containers:
    - image: oracle/database:18.3v2
      name: livesqldb
      ports:
        - containerPort: 1521
          name: livesqldb
      volumeMounts:
        - mountPath: /opt/oracle/oradata
          name: livesqlsb-db-pv-storage1
        - mountPath: /u02/app/oracle/oradata
          name: livesqlsb-db-pv-storage2
      env:
        - name: ORACLE_SID
          value: "LTEST"
        - name: ORACLE_PDB
          value: "ltestpdb"
  nodeSelector:
    dbhost: livesqlsb