
Tuesday, September 22, 2020

RMAN-04022: target database mount id % does not match channel's mount id %

Symptom:

  When we run "rman target /" and then run "crosscheck archivelog all", we hit an error:

RMAN-04022: target database mount id ****  does not match channel's mount id ***

Solution:

   It is quite possible the DB was duplicated with RMAN. The default channel still points to the old mount, which is not the current DB. To fix it, explicitly allocate a disk channel:

run
{
  allocate channel disk1 device type disk;
  crosscheck archivelog all;
}
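
If the crosscheck marks archive logs as EXPIRED (for example, logs already removed at the OS level), a follow-up run block can clean them out of the catalog; a minimal sketch:

run
{
  allocate channel disk1 device type disk;
  crosscheck archivelog all;
  delete noprompt expired archivelog all;
}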

Thursday, July 02, 2020

How To RMAN Backup Oracle Database 19c running in Kubernetes

Requirement:

   We have an Oracle Database 19c running in OKE (Oracle Kubernetes Engine). We would like to use RMAN to back up the DB to the Object Storage of the Cloud. We use Oracle Cloud Infrastructure (OCI) as an example; the same concept applies to other Clouds.

Steps:

  • Create a docker image with Python 3 and the Oracle OCI CLI installed. Please refer to the official doc on how to install the Oracle OCI CLI. The Dockerfile can also be found via the GitHub repo
  • Create a statefulset using the docker image. Yaml files can be found via the GitHub repo
  • Download the rman backup module of OCI. link
  • Follow the installation instructions. link
    • Attention: when we set up the OCI CLI, the config file should not live inside the docker image but on the persistent block storage volume, i.e. /opt/oracle/diag/.oci/config, and export OCI_CLI_CONFIG_FILE=/opt/oracle/diag/.oci/config
    • Attention: when we set up the RMAN backup module and create wallet files, all config files should likewise not be put in the docker image but on the persistent block storage volume, i.e. /opt/oracle/diag/
      java -jar oci_install.jar \
        -host https://objectstorage.us-phoenix-1.oraclecloud.com \
        -pvtKeyFile /opt/oracle/diag/.oci/testuser_ww-oci_api_key.pem \
        -pubFingerPrint 52:b6:0e:2e:***:a1 \
        -uOCID "ocid1.user.oc1..aaaaahjia***adfe" \
        -tOCID "ocid1.tenancy.oc1..aanh7gl5**dfe" \
        -walletDir /opt/oracle/diag/.oci/opc_wallet \
        -configFile /opt/oracle/diag/.oci/opc_wallet/opcAUTOCDB.ora \
        -libDir $ORACLE_HOME/lib \
        -bucket BUK-OBJECT-STORAGE-BAK-TEMP \
        -proxyHost yourproxy.com \
        -proxyPort 80
    • Use java -jar oci_install.jar -h for more details
    • Tip: If libopc.so is already in place in $ORACLE_HOME/lib (which is in the docker image), we can ignore the download warning during the installation process
    • Tip: You can copy opc_wallet to other servers or OKE clusters without repeating the OCI CLI setup and java -jar oci_install.jar steps
    • Tip: If you see error " KBHS-00713: HTTP client error '', check http_proxy and https_proxy settings. Rman backup to object storage module uses  HTTP HTTPS protocols. 
    • Tip: If you see error " KBHS-01012: ORA-28759 occurred during wallet operation; WRL file:/home/oracle/opc_wallet ",  it maybe due to there are some old opc<sid>.ora config files in $ORACLE_HOME/dbs. DB always try to read the config file in ./dbs instead of using parameters. Remove the files should clear it
    • To avoid the error "KBHS-01006: Parameter OPC_HOST was not specified", we need to put all the parameters from opcAUTOCDB.ora inline in the RMAN script.
  • Test the RMAN backup inside your statefulset DB pod (a full database backup sketch follows this list):
    rman target /
    run {
      SET ENCRYPTION ON IDENTIFIED BY 'changeme' ONLY;
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS "SBT_LIBRARY=/opt/oracle/product/19c/dbhome_1/lib/libopc.so ENV=(OPC_HOST=https://objectstorage.us-phoenix-1.oraclecloud.com/n/testnamespace, OPC_WALLET='LOCATION=file:/opt/oracle/diag/.oci/opc_wallet CREDENTIAL_ALIAS=alias_oci', OPC_CONTAINER=TEST-OBJECT-STORAGE-RMAN, OPC_COMPARTMENT_ID=ocid1.compartment.oc1..aa****sddfeq, OPC_AUTH_SCHEME=BMC)";
      backup current controlfile;
    }
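
Once the controlfile test succeeds, a full database backup follows the same pattern; a minimal sketch, reusing the channel parameters above (assumes the DB is in archivelog mode):

    rman target /
    run {
      SET ENCRYPTION ON IDENTIFIED BY 'changeme' ONLY;
      ALLOCATE CHANNEL t1 DEVICE TYPE sbt PARMS "SBT_LIBRARY=/opt/oracle/product/19c/dbhome_1/lib/libopc.so ENV=(OPC_HOST=https://objectstorage.us-phoenix-1.oraclecloud.com/n/testnamespace, OPC_WALLET='LOCATION=file:/opt/oracle/diag/.oci/opc_wallet CREDENTIAL_ALIAS=alias_oci', OPC_CONTAINER=TEST-OBJECT-STORAGE-RMAN, OPC_COMPARTMENT_ID=ocid1.compartment.oc1..aa****sddfeq, OPC_AUTH_SCHEME=BMC)";
      backup database plus archivelog;
      release channel t1;
    }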

Monday, June 15, 2020

Dockerfile for Oracle Database 19.5 image with patches applied

Summary:

Here is the GitHub link for the Dockerfile of an Oracle Database 19.5 image with patches applied:

https://github.com/HenryXie1/Dockerfile/tree/master/OracleDatabase

The docker image has 19.3 installed and applies the patches below to bring it to 19.5:
OCT_RU_DB_p30125133_190000_Linux-x86-64.zip
OCT_RU_OJVM_p30128191_190000_Linux-x86-64.zip
p30083488_195000DBRU_Linux-x86-64.zip

The docker image has updates to facilitate automated block storage provisioning in OKE (Oracle Kubernetes Engine)

The docker image creates three different volumes for Oradata, the Fast Recovery Area (FRA) and the diagnostic area (diag). The three help keep the datafiles safe, give recovery its own dedicated space, and keep diagnostics in a separate place so they cannot fill up the data and FRA volumes.

The testdb yaml files utilize the oci-bv storage class (Container Storage Interface, CSI based) of OKE
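
For reference, here is a minimal sketch of what the three volume claims could look like in a statefulset; the claim names and sizes are illustrative assumptions, not the exact contents of the repo:

volumeClaimTemplates:
- metadata:
    name: oradata            # datafiles
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: oci-bv
    resources:
      requests:
        storage: 50Gi
- metadata:
    name: fra                # Fast Recovery Area
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: oci-bv
    resources:
      requests:
        storage: 50Gi
- metadata:
    name: diag               # diagnostic area
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: oci-bv
    resources:
      requests:
        storage: 20Gi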

Sunday, May 17, 2020

Tip: Clean Oracle DB Diagnostic Home Automatically

Requirement:

  Oracle DB can generate a huge amount of trace files and fill up the file system. Tired of cleaning up Oracle DB trace files and incident files by hand?

Solution:

SHORTP_POLICY: retention, in hours, for short-lived files such as ordinary trace files
LONGP_POLICY: retention, in hours, for long-lived files such as incident files

adrci> set control (SHORTP_POLICY = 360) ===> 15 days
adrci> set control (LONGP_POLICY = 2160) ===> 90 days
adrci> show control

Purging trace files manually:

The purge age is given in minutes. For example, the following command manually purges all trace files older than 4880 minutes (about 3.4 days):
adrci> purge -age 4880 -type trace
adrci> purge -age 129600 -type ALERT ===> purge ALERT entries older than 90 days
adrci> purge -age 43200 -type INCIDENT ===> purge INCIDENT files older than 30 days
adrci> purge -age 43200 -type TRACE ===> purge TRACE files older than 30 days
adrci> purge -age 43200 -type CDUMP ===> purge CDUMP files older than 30 days
adrci> purge -age 43200 -type HM ===> purge HM runs older than 30 days
adrci> show tracefile -rt

Crontab to purge files Automatically
00 20 * * * adrci exec="set home diag/rdbms/****;purge -age 4880 -type trace;purge -age 43200 -type INCIDENT;purge -age 43200 -type CDUMP"
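
If the host runs several databases, each ADR home must be purged separately; a minimal shell sketch that loops over every home adrci reports (purge ages mirror the crontab above):

#!/bin/bash
# Purge old trace, incident and cdump files in every ADR home.
for h in $(adrci exec="show homes" | grep -v "ADR Homes:"); do
  adrci exec="set home $h; purge -age 4880 -type trace; purge -age 43200 -type incident; purge -age 43200 -type cdump"
done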

Sunday, April 12, 2020

Oracle Non-RAC DB StatefulSet HA 1 Command Failover Test in OKE

Requirement:

OKE has very powerful built-in Block Volume management. It can find, detach, and reattach block storage volumes among different worker nodes seamlessly. Here is what we are going to test:
We create an Oracle DB statefulset on OKE, imagine we have a hardware or OS issue on the worker node, and test HA failover to another worker node with only 1 command (kubectl drain).
The following things happen automatically when draining the node:
  • OKE will shut down the DB pod
  • OKE will detach the PV from the worker node
  • OKE will find a new worker node in the same AD
  • OKE will attach the PV to the new worker node
  • OKE will start the DB pod on the new worker node
The DB in the statefulset is not RAC, but with the power of OKE we can fail over a DB to a new VM in just a few minutes.

Solution:

  • Create service for DB statefulset
    $ cat testsvc.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      labels:
         name: oradbauto-db-service
      name: oradbauto-db-svc
    spec:
      ports:
      - port: 1521
        protocol: TCP
        targetPort: 1521
      selector:
         name: oradbauto-db-service
  • Create a DB statefulset and wait about 15 minutes for the DB to come fully up
    $ cat testdb.yaml 
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: oradbauto
      labels:
        app: apexords-operator
        name: oradbauto
    spec:
      selector:
         matchLabels:
            name: oradbauto-db-service
      serviceName: oradbauto-db-svc
      replicas: 1
      template:
        metadata:
            labels:
               name: oradbauto-db-service
        spec:
          securityContext:
             runAsUser: 54321
             fsGroup: 54321
          containers:
            - image: iad.ocir.io/espsnonprodint/autostg/database:19.2
              name: oradbauto
              ports:
                - containerPort: 1521
                  name: oradbauto
              volumeMounts:
                - mountPath: /opt/oracle/oradata
                  name: oradbauto-db-pv-storage
              env:
                - name: ORACLE_SID
                  value: "autocdb"
                - name: ORACLE_PDB
                  value: "autopdb"
                - name:  ORACLE_PWD
                  value: "whateverpass"
      volumeClaimTemplates:
      - metadata:
          name: oradbauto-db-pv-storage
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 50Gi

  • Imagine we have hardware issues on this node and need to fail over to a new node
    • Before failover: check the status of the PV and pod; the pod is running on node 1.1.1.1
    • Check whether any other pods running on the node will be affected
    • Make sure we have a node ready in the same AD as the statefulset pod
    • kubectl get pv,pvc
      kubectl get po -owide
      NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
      oradbauto-0 1/1 Running 0 20m 10.244.3.40 1.1.1.1 <none> <none>
    • One command to fail over the DB to a new worker node
      • kubectl drain  <node name> --ignore-daemonsets --delete-local-data
      • kubectl drain  1.1.1.1    --ignore-daemonsets --delete-local-data
      • No need to update the MT connection string, as the DB service name is untouched and transparent to the new DB pod
    • After failover: check the status of the PV and pod; the pod is now running on the new node
      • kubectl get pv,pvc
      • kubectl get pod -owide
  • The movement of the PV and PVC works with volumeClaimTemplates, as well as with PVs and PVCs created via yaml files with storage class "oci"
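
After the faulty node is repaired, it remains cordoned (unschedulable) until we return it to service; a short sketch of the cleanup, using the example node name from above:

# allow pods to be scheduled on the repaired node again
kubectl uncordon 1.1.1.1
# confirm the node shows Ready without SchedulingDisabled
kubectl get nodes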

Monday, June 10, 2019

Tip: Example to Create a Tablespace in the Same Location


Requirement:

 We need to create a new tablespace. The default datafile will be created under the Oracle DB home if db_create_file_dest is not set. We don't want that to happen; we would like to use the existing datafile location for the new tablespace.

Solution:

The sample SQL we use is:

declare
  v_datafile VARCHAR2(100);
begin
  -- take the directory path of an existing datafile and append the new file name
  select (select regexp_substr(name, '^.*/') from v$datafile where rownum = 1) || 'livesqldata01.dbf'
    into v_datafile
    from dual;
  execute immediate 'create tablespace LIVESQL datafile '''||v_datafile||''' size 50M reuse autoextend on';
end;
/
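
To confirm the new datafile landed in the same directory as the existing ones, a quick check against DBA_DATA_FILES:

select tablespace_name, file_name
  from dba_data_files
 where tablespace_name = 'LIVESQL';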

Thursday, March 07, 2019

Easy Way To Unplug and Plug PDB After 12.2

After Oracle DB 12.2, it is much easier to move data or migrate a DB to a different place. Two simple SQL statements can achieve that.

Unplug

ALTER PLUGGABLE DATABASE TESTDB CLOSE;
ALTER PLUGGABLE DATABASE TESTDB UNPLUG INTO '/u02/databak/testdb.pdb';

Copy the .pdb file to the target location...

Plug

CREATE PLUGGABLE DATABASE TESTDBNEW USING '/u110/datatarget/testdb.pdb'
  create_file_dest='/opt/oracle/oradata/testdbnew/'
  file_name_convert=('/u110/datatarget/','/opt/oracle/oradata/testdbnew/DATAFILE/');

ALTER PLUGGABLE DATABASE TESTDBNEW OPEN READ WRITE;
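
A quick sanity check that the plugged-in PDB is registered and open:

select name, open_mode from v$pdbs;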