Monday, December 17, 2012

How To Check a Dead Process in Linux

Use the lsof command to check whether a running process is still holding deleted files open:

    * lsof -p PID
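A quick way to see the same thing without lsof is via /proc (a minimal sketch; the temp file and the tail process are only for illustration):

```shell
# Create a file, hold it open with tail, then delete it:
# the open file descriptor survives the unlink
tmpfile=$(mktemp)
tail -f "$tmpfile" & pid=$!
sleep 1
rm "$tmpfile"
# The fd symlink in /proc shows the file is gone but still held open
ls -l /proc/$pid/fd | grep '(deleted)'
kill $pid
```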

Use strace to check whether a PID is actually doing anything:

    * time strace -p PID

Use top, then press H, to see all the threads.

Or use the -m option of ps:

ps -efwm | grep httpd

To save the trace output to a file:

strace -o /tmp/strace.output -p PID

How to check which process is occupying a port
For example, Apache can't start on port 80:

lsof -i TCP:80
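If lsof is not available, ss (from iproute2) can answer the same question; port 80 here is just the example from above:

```shell
# List the listening TCP socket on port 80; -p (needs privileges
# for other users' processes) also shows the owning process
ss -ltnp 'sport = :80'
```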

Sunday, December 02, 2012

How to Recover Datafile From Standby DB

Goal

Problem Statement

How to recover the primary database's datafile using a copy of a standby database's datafile.

and

How to recover the standby database's datafile using a copy of a primary database's datafile.
Solution
Recovering the Primary's Datafile

How to recover the primary database's datafile using a copy of a standby database's datafile.

1) copy the standby database's datafile

2) ftp the file to the primary site

3) catalog the datafile copy on the primary site

4) on the primary site, restore and recover the datafile copy

This procedure will work for all file systems - cooked, raw or ASM.
Example

Throughout this example we will use datafile 25.

1) On standby database, copy datafile from ASM to a cooked file system:

       9i:  RMAN> copy datafile 25 to '/tmp/df25.cpy';

       10g: RMAN> backup as copy datafile 25 format '/tmp/df25.cpy';


2) FTP the file to the primary server's cooked file system

On primary database
~~~~~~~~~~~~~~~~~~~
3) Take the datafile offline and catalog the copy:
       SQL> alter database datafile 25 offline;
       RMAN> catalog datafilecopy '/tmp/df25.cpy';

4) Confirm that datafile exists:
       RMAN> list copy of datafile 25;

5) Restore the datafile:
       RMAN> restore datafile 25;

6) Recover the datafile:
       RMAN> recover datafile 25;


Optionally, recovery can stop at a point in time, for example:

       RMAN> recover datafile 7 until time '09-AUG-2010 14:00:00';

       RMAN> recover datafile 7 until time "to_date('09-aug-2010 14:00:00','dd-mon-rrrr hh24:mi:ss')";

7) Place the datafile online:
       SQL> alter database datafile 25 online;


Recovering the Standby's Datafile

If recovering the standby, reverse the steps. That is:

1) copy the primary database's datafile

2) ftp the file to the standby site

3) catalog the datafile copy on the standby site

4) stop Redo Apply on the Physical Standby Database

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

5) on the standby site, restore and recover the datafile copy
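Put together, the standby-side sequence mirrors the primary example above (a sketch only: datafile 25 and '/tmp/df25.cpy' are the illustrative values reused from that example, and the final DISCONNECT line assumes you want to restart Redo Apply afterwards):

```
RMAN> backup as copy datafile 25 format '/tmp/df25.cpy';   -- on the primary

-- transfer /tmp/df25.cpy to the standby, then on the standby:
SQL>  alter database recover managed standby database cancel;
RMAN> catalog datafilecopy '/tmp/df25.cpy';
RMAN> restore datafile 25;
RMAN> recover datafile 25;
SQL>  alter database recover managed standby database disconnect from session;
```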

Thursday, November 22, 2012

A Useful SQL Query To Check Tablespace Growth

To get a tablespace growth report for the latest 30 days:

set heading on
set linesize 5500
set pages 999

SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days
     , ts.tsname
     , MAX(ROUND((tsu.tablespace_size * dt.block_size) / (1024 * 1024), 2)) cur_size_MB
     , MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) usedsize_MB
  FROM DBA_HIST_TBSPC_SPACE_USAGE tsu
     , DBA_HIST_TABLESPACE_STAT ts
     , DBA_HIST_SNAPSHOT sp
     , DBA_TABLESPACES dt
 WHERE tsu.tablespace_id = ts.ts#
   AND tsu.snap_id = sp.snap_id
   AND ts.tsname = dt.tablespace_name
   AND ts.tsname IN ('TEST1', 'TEST2')
   AND sp.begin_interval_time BETWEEN SYSDATE - 30 AND SYSDATE
 GROUP BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
 ORDER BY ts.tsname, days;

To get the report for a specific date, e.g. 01-09-2012 or 11-11-2012:

SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days
     , ts.tsname
     , MAX(ROUND((tsu.tablespace_size * dt.block_size) / (1024 * 1024), 2)) cur_size_MB
     , MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) usedsize_MB
  FROM DBA_HIST_TBSPC_SPACE_USAGE tsu
     , DBA_HIST_TABLESPACE_STAT ts
     , DBA_HIST_SNAPSHOT sp
     , DBA_TABLESPACES dt
 WHERE tsu.tablespace_id = ts.ts#
   AND tsu.snap_id = sp.snap_id
   AND ts.tsname = dt.tablespace_name
   AND ts.tsname IN ('TEST1', 'TEST2')
   AND TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') = '01-09-2012'
 GROUP BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
 ORDER BY ts.tsname, days;

Then you can easily figure out how much the space grew between 01-09-2012 and 11-11-2012.
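The growth is just the difference of the two usedsize_MB figures from the two reports (the values below are made up for illustration, not real report output):

```shell
# Hypothetical usedsize_MB values read from the two dated reports
start_mb=1200.50   # 01-09-2012
end_mb=1545.25     # 11-11-2012
# Subtract with awk, since plain shell arithmetic is integer-only
awk -v a="$start_mb" -v b="$end_mb" 'BEGIN { printf "growth: %.2f MB\n", b - a }'
# prints "growth: 344.75 MB"
```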

Sunday, November 18, 2012

How To View Oracle Text Query Exec Plan

An Oracle Text query is not like normal SQL, where we can get the plan via "explain plan for ..." followed by "select * from table(dbms_xplan.display());".

Oracle Text has a specific API for this:

Creating the Explain Table
To create an explain table called test_explain for example, use the following SQL
statement:
create table test_explain (
  explain_id  varchar2(30),
  id          number,
  parent_id   number,
  operation   varchar2(30),
  options     varchar2(30),
  object_name varchar2(64),
  position    number,
  cardinality number);

Running CTX_QUERY.EXPLAIN
To obtain the expansion of a query expression such as comp% OR ?smith, use CTX_QUERY.EXPLAIN as follows:

begin
  ctx_query.explain(
    index_name    => 'TEST_IDCTEXT2',
    text_query    => 'comp% OR ?smith',
    explain_table => 'test_explain',
    sharelevel    => 0,
    explain_id    => 'Test');
end;
/
A longer real-world example:

exec ctx_query.explain(index_name => 'TEST_IDCTEXT2',text_query => '((((DEFINESCORE((test), RELEVANCE * .1)) )))  and  (((((EMPL%) WITHIN dDocAccount)) or (((CATEST1%) WITHIN dDocAccount)) or (((GIT/GLOBAL#0023IT#0023ONLY%) WITHIN dDocAccount)) or (((GIT/ALL%) WITHIN dDocAccount)) or (((idcnull) WITHIN dDocAccount))))',explain_table => 'test_explain',sharelevel => 0,explain_id => 'Test');

Retrieving Data from Explain Table
To read the explain table, you can select the columns as follows:

select lpad('   ', level - 1) || operation || ' ' || options || ' ' || object_name "Plan"
  from test_explain
connect by prior id = parent_id
 start with id = 1
 order siblings by position;

Wednesday, November 14, 2012

How Exadata Offloads Some RMAN Incremental Backup Load To the Cells

It's well known that since 10g, Oracle Database has had a block change tracking feature. It does a great job of saving RMAN from scanning whole datafiles, by tracking only the data that has changed since the last backup.

At that time Exadata was not out yet. On Exadata, with its storage cells, Oracle enhances block change tracking and offloads some of the incremental backup load to the cells. Here is how it works.

The key is the hidden parameter _bct_chunk_size, whose default is 0. In practice, however, CTWR (the Change Tracking Writer) tracks changes in 32 KB chunks; the effective chunk size is never really 0.
With an 8 KB DB block size, the minimum tracking unit of CTWR is therefore 4 blocks:
if any one of the 4 changes, all 4 are marked in the tracking file.

Before Exadata, RMAN had to do some work to pick the changed blocks out of each chunk. With Exadata, this work can be offloaded to the cells. On Exadata, Oracle recommends an AU size of 4 MB.
A 4 MB AU holds quite a few 32 KB chunks. That gives the cell enough scope to filter and inspect the blocks in the AU and return only the changed blocks to the database, offloading some of RMAN's work.
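The chunk arithmetic above can be sanity-checked (the 32 KB chunk and 8 KB block sizes come from the text; 4 MB is Oracle's recommended AU size):

```shell
au_kb=$((4 * 1024))   # 4 MB ASM allocation unit, in KB
chunk_kb=32           # CTWR change-tracking chunk
block_kb=8            # DB block size
echo "blocks per chunk: $((chunk_kb / block_kb))"   # 4
echo "chunks per AU: $((au_kb / chunk_kb))"         # 128
```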

Checking the ASM AU size is simple:
select ALLOCATION_UNIT_SIZE,NAME from v$asm_diskgroup;