Sunday, March 29, 2009

Data Pump




Oracle introduced Data Pump in version 10g. Data Pump enables very high-speed loading and unloading of data and metadata in Oracle databases.
Oracle added some new features to the Data Pump utility in 11g. Since I haven't shared my experience using Data Pump yet, let me talk about some Data Pump features first and then move on to the 11g enhancements.

It was a big relief for many DBAs and database developers when the Data Pump feature was made available in Oracle 10g. I used to work in an environment where the database size was around 20TB. An application used a schema in the database that was around 400GB. The schema data was scattered across multiple tablespaces shared by other schemas (I'm ashamed of myself for being the DBA who supports this database). I get requests to refresh this schema from production. Since my data is not logically separated, I often tend to use the export/import feature. This is a week-long process with some babysitting.

When we upgraded the database to Oracle10g, I really wanted to try out the Data pump option. The result was amazing. It was blazing fast compared to 7 days of traditional export/import. I could complete the data pump export/import in 10-12 hours.

I have fallen in love with Data Pump!!!



Oracle 10g provides two command-line tools named "expdp" and "impdp". These tools support all the functionality of the original exp and imp tools as well as some new features.

Data Pump export is a utility for unloading data and metadata into a set of operating system files called a dump file set. Data Pump import is used to load the metadata and data in export dump files into a target database.

The heart of a Data Pump operation is the master table, a table created in the schema of the user executing the job. The master table maintains all aspects of the job. It lets Data Pump report the status of a job and restart it in the event of an abnormal stoppage. The master table is dropped when the job finishes normally.


Data Pump actually runs on the database server. You can initiate the process from a client machine and exit the client; the job still runs on the database server. You can attach to the job and check its status at any point, as long as the job is running.



Let us look at some examples of Data Pump export and import.



You need to create a logical directory for the datapump job to place the dump files and log files.



$> sqlplus / as sysdba

sql> CREATE DIRECTORY export_dir as '/u01/oracle/export_dir';

sql> GRANT READ, WRITE ON DIRECTORY export_dir TO patent; (here patent is the name of the database user executing the expdp job)



Start the data pump export



E.g.: to export just one table from the schema patent:



nohup expdp patent/xxxxxxx@binfo directory=export_dir tables=SEQUENCE_QUALITY job_name=patent_exp dumpfile=export_dir:sequence_quality_%u.dmp logfile=export_dir:sequence_quality.log filesize=80530636800 status=5 parallel=5 &



Here filesize is the maximum size a dump file can grow to. When a file reaches the maximum size, Data Pump automatically creates a new file and writes to it.



job_name lets you attach to the job again if you disconnect from the session.



If you need to import the above export dump file into another schema (patent_test):



$> sqlplus / as sysdba
sql> CREATE DIRECTORY imp_dir as '/u01/oracle/imp_dir';
sql> GRANT READ, WRITE ON DIRECTORY imp_dir TO patent_test; (here patent_test is the name of the database user executing the impdp job)




nohup impdp patent_test/xxxxxxx@dev directory=imp_dir job_name=patent_import dumpfile=imp_dir:sequence_quality_%u.dmp logfile=imp_dir:patentimp.log remap_schema=PATENT:PATENT_TEST status=5 remap_tablespace=seqdb_quality_data:SMS_DATA full=y &



Here the keyword remap_schema tells the job that the export was done on schema patent and the imported data has to go to schema "patent_test".



The keyword remap_tablespace tells the job to move all the objects in the seqdb_quality_data tablespace on the export database to the sms_data tablespace.



If you need to skip grants, constraints and indexes in the same impdp job:



nohup impdp patent_test/********@dev directory=imp_dir job_name=patent_import dumpfile=imp_dir:sequenceS_%u.dmp logfile=imp_dir:patentimp.log remap_schema=PATENT:PATENT_TEST status=5 remap_tablespace=seqdb_sequences_data:SMS_DATA full=y exclude=GRANT,CONSTRAINT,INDEX parallel=5 &



As with exp and imp, you can still use parameter files (parfiles) here.



Sample parfile

--------------

import

------

directory=IMP_DIR

dumpfile=IMP_DIR:ecg%U.dmp

job_name=ecg_import

logfile=imp_dir:ecgimp.log

status=5

parallel=4

TABLES=ECG.TRAN_ERR

CONTENT=DATA_ONLY

TABLE_EXISTS_ACTION=APPEND

REMAP_SCHEMA=ECG:ECG_TEST



Export

------

directory=EXP_DIR

dumpfile=EXP_DIR:ecg%u.dmp

job_name=ecg_export

logfile=exp_dir:ecgexp.log

filesize=10G

status=5

parallel=4

exclude=index

TABLES=TRAN_ERR,TRAN_DETAIL_STATUS,TRAN_HEADER_STATUS,TRAN_PARTY



Usage

------



$> nohup expdp patent/xxxxxxx@binfo parfile=exp.par &

$> nohup impdp patent/xxxxxxx@binfo parfile=imp.par &





How do you attach to the above impdp job from another client computer and watch the status?



$> impdp patent_test/********@dev attach=patent_import --> (name of the job)


Connected to: Oracle10i Enterprise Edition Release 10.1.0.0.0 - 64bit Beta
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
Job: patent_imp
Owner: patent_test
Operation: import
Creator Privs: FALSE
GUID: H245BD7LM2C78A6EE03400306E0F6291
Start Time: Monday, 02 March, 2009 3:11
Mode: SCHEMA
Instance: dev
MaxDegree: 1
Import Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND patent_test/******** job_name=patent_imp directory=imp_dir
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
KEEP_MASTER 0
LOG_FILE_DIRECTORY IMP_DIR
LOG_FILE_NAME patentimp.log
TABLE_CONSISTENCY 0
Job: PATENT_IMP
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u01/oracle/imp_dir/sequences_01.dmp
bytes written = 2048
Worker 1 Status:
State: EXECUTING
Import>



How do you stop/start a job.

Attach to the executing job first.

$>impdp patent_test/********@dev attach=patent_import

(the same job status output shown above is displayed)
Import> stop_job (to stop a running Job)


Are you sure you wish to stop this job ([y]/n): y

To start a Job

Import> start_job

Import> status (to view the status of the current job)

How to terminate a Running Job

Attach to the running job first

Import> kill_job

Are you sure you wish to stop this job ([y]/n): y

$

Once the job is killed, the master table gets dropped and all client sessions are automatically detached.

You can view the status of a Data Pump job in the dba_datapump_jobs view.
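For example (the column names below come from the standard 10g dictionary view; output varies per job):

SQL> select owner_name, job_name, operation, job_mode, state
  2  from dba_datapump_jobs;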

Conclusion

Data Pump is a great addition to the Oracle toolset. The best part is that it's very easy to use and the commands are very similar to the traditional export/import tools. I'll talk about some of the 11g Data Pump enhancements in the next post.






Thursday, March 26, 2009

Oracle 11g new Features - Database Capture and Replay

Database Capture and Replay

Oracle 11g comes with a cool new feature to capture the workload on a database over a specified interval and replay it on a different database. This is similar to recording a macro of all the changes on your computer screen and playing the macro back on other computers.

Oracle uses two new DBMS packages to perform this: "DBMS_WORKLOAD_CAPTURE" and "DBMS_WORKLOAD_REPLAY".

In the example below I'll be performing the capture and replay on the same database. I'll set a restore point before the capture process. Once the capture process is done, I'll flash the database back to the restore point and replay the captured workload on the same database.

Enable flashback on the database first. This is not a requirement for capture and replay; I am doing it only to be able to flash the database back.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount exclusive
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2121800 bytes
Variable Size 2566918072 bytes
Database Buffers 1694498816 bytes
Redo Buffers 12242944 bytes
Database mounted.

SQL> alter database archivelog;
Database altered.

Elapsed: 00:00:00.10
SQL> alter database flashback on;
Database altered.

Elapsed: 00:00:01.40
SQL> alter database open;
Database altered.

Elapsed: 00:00:15.39

SQL> SELECT flashback_on, log_mode
2 FROM v$database;

FLASHBACK_ON LOG_MODE
------------------ ------------
YES ARCHIVELOG
Elapsed: 00:00:00.00


Capture Process

Lets start with creating a directory to hold the captured information.

SQL> create directory capture_dir as 'd:\database_capture';
Directory created.
Elapsed: 00:00:00.04

Here I am creating a restore point to flashback the database for replay.

SQL> create restore point praveen;
Restore point created.
Elapsed: 00:00:00.09

We need to use the DBMS_WORKLOAD_CAPTURE.start_capture procedure to start the capture process. start_capture captures everything that's happening in the database; in this example I am only interested in the actions performed by user "SCOTT", so I have to use a filter for my capture process. The procedure "ADD_FILTER" in the same package is used to filter the capture process.

Let's look at the parameters required for the "ADD_FILTER" procedure.

SQL> desc DBMS_WORKLOAD_CAPTURE
PROCEDURE ADD_FILTER
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
FNAME VARCHAR2 IN
FATTRIBUTE VARCHAR2 IN
FVALUE VARCHAR2 IN

Fname - any name for your filter

Fattribute - the attribute the filter is applied to. The possible values are INSTANCE_NUMBER (type NUMBER), USER (type STRING), MODULE (type STRING), ACTION (type STRING), PROGRAM (type STRING) and SERVICE (type STRING)

Fvalue - the value for the attribute (wildcards are permitted)

sqlplus / as sysdba
SQL> exec DBMS_WORKLOAD_CAPTURE.ADD_FILTER ('CAPTURE_FILTER1','USER','SCOTT');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.06


Now lets start the capture process by executing the DBMS_WORKLOAD_CAPTURE.START_CAPTURE package.

Procedure definition
DBMS_WORKLOAD_CAPTURE.START_CAPTURE (
name IN VARCHAR2,
dir IN VARCHAR2,
duration IN NUMBER DEFAULT NULL, (in seconds)
default_action IN VARCHAR2 DEFAULT 'INCLUDE',
auto_unrestrict IN BOOLEAN DEFAULT TRUE);



SQL> execute DBMS_WORKLOAD_CAPTURE.start_capture ('db_capture1','CAPTURE_DIR',NULL,'EXCLUDE',FALSE);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.75
SQL>


Here the parameter value 'EXCLUDE' means: exclude all the changes from other users, except the changes from user scott as specified in the filter.

Perform some actions on the database as user scott.

SQL> connect scott/xxxxxxx
Connected.
SQL> ED
Wrote file afiedt.buf
1 CREATE USER PRAVEEN1 IDENTIFIED BY PRAVEEN
2 DEFAULT TABLESPACE USERS
3 TEMPORARY TABLESPACE TEMP
4* QUOTA UNLIMITED ON USERS
SQL> /
User created.

SQL> CREATE TABLE SCOTT.PRAVEEN AS SELECT * FROM SCOTT.TEST;
Table created.
Elapsed: 00:00:00.12

SQL> GRANT CONNECT, DBA TO PRAVEEN1;
Grant succeeded.
Elapsed: 00:00:00.03
SQL>


Now log in as sysdba and stop the capture process

SQL> EXECUTE DBMS_WORKLOAD_CAPTURE.finish_capture;
PL/SQL procedure successfully completed.
Elapsed: 00:00:34.01
SQL>


Here are the contents of my capture_directory

D:\database_capture>dir
Volume in drive D is Data
Volume Serial Number is 9820-5C5B

Directory of D:\database_capture

03/24/2009 05:23 PM .
03/24/2009 05:23 PM ..
03/24/2009 05:18 PM 0 wcr_4mkawa80025q8.rec
03/24/2009 05:18 PM 0 wcr_4mkawac002708.rec
03/24/2009 05:19 PM 0 wcr_4mkawah0023q0.rec
03/24/2009 05:19 PM 0 wcr_4mkawaw00252c.rec
03/24/2009 05:19 PM 0 wcr_4mkawb400254h.rec
03/24/2009 05:19 PM 736 wcr_4mkawd40025nw.rec
03/24/2009 05:19 PM 0 wcr_4mkawds0025r4.rec
03/24/2009 05:19 PM 0 wcr_4mkawfn002408.rec
03/24/2009 05:19 PM 0 wcr_4mkawfn0026k0.rec
03/24/2009 05:19 PM 68 wcr_4mkawhs0024gs.rec
03/24/2009 05:20 PM 970 wcr_4mkawph00248n.rec
03/24/2009 05:20 PM 1,390 wcr_4mkawqc0027ws.rec
03/24/2009 05:20 PM 68 wcr_4mkawsc0022zc.rec
03/24/2009 05:20 PM 0 wcr_4mkawsh0027ch.rec
03/24/2009 05:21 PM 970 wcr_4mkawx40027rh.rec
03/24/2009 05:21 PM 68 wcr_4mkawzw0026fh.rec
03/24/2009 05:22 PM 0 wcr_4mkax58002330.rec
03/24/2009 05:22 PM 68 wcr_4mkax7c0023fn.rec
03/24/2009 05:22 PM 992 wcr_4mkax880022cw.rec
03/24/2009 05:23 PM 43,490 wcr_cr.html
03/24/2009 05:23 PM 22,006 wcr_cr.text
03/24/2009 05:23 PM 192 wcr_fcapture.wmd
03/24/2009 05:18 PM 128 wcr_scapture.wmd
23 File(s) 71,146 bytes
2 Dir(s) 96,016,297,984 bytes free

The data dictionary view dba_workload_captures provides information about the capture process.
SQL> col name for a15
SQL> select id, name from dba_workload_captures;
ID NAME
---------- ---------------
1 db_capture1
Elapsed: 00:00:00.01

You are done with the capture process

Replay

Here I am planning to replay on the same database on which I performed the capture. I have created a new directory named database_replay and copied all the contents from the database_capture directory. This step is optional.

SQL> create directory replay_dir as 'd:\database_replay';
Directory created.
Elapsed: 00:00:00.04
SQL>

The steps below are required only for testing and apply only to this scenario; they are not a requirement for replay.

Here we shut down the database and flash it back to a point before the capture process.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2121800 bytes
Variable Size 2566918072 bytes
Database Buffers 1694498816 bytes
Redo Buffers 12242944 bytes
Database mounted.

Flash the database back to the restore point praveen (created before the capture process started).

SQL> flashback database to restore point praveen;
Flashback complete.
Elapsed: 00:00:06.31
SQL> alter database open resetlogs;
Database altered.

Query the database to see if the user "praveen1" exists (it was created during the capture process).

SQL> select username from dba_users where username like 'PRAVEEN1';
no rows selected
Elapsed: 00:00:00.00
SQL> select count(*) from dba_tables where table_name like 'PRAVEEN' and owner like 'SCOTT';
COUNT(*)
----------
0

Three procedures under the package dbms_workload_replay are used for replay. The steps below are not optional.

PROCEDURE PROCESS_CAPTURE
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
CAPTURE_DIR VARCHAR2 IN


PROCEDURE INITIALIZE_REPLAY
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
REPLAY_NAME VARCHAR2 IN
REPLAY_DIR VARCHAR2 IN

PROCEDURE PREPARE_REPLAY
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SYNCHRONIZATION BOOLEAN IN DEFAULT
CONNECT_TIME_SCALE NUMBER IN DEFAULT
THINK_TIME_SCALE NUMBER IN DEFAULT
THINK_TIME_AUTO_CORRECT BOOLEAN IN DEFAULT

The first step is to initialize and prepare the database for replay

sqlplus / as sysdba

SQL> execute DBMS_WORKLOAD_REPLAY.process_capture('REPLAY_DIR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.98
SQL> EXECUTE DBMS_WORKLOAD_REPLAY.initialize_replay ('DATABASE_REPLAY1','REPLAY_DIR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
SQL> EXECUTE DBMS_WORKLOAD_REPLAY.prepare_replay;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.25

Oracle 11g ships with a replay client named "wrc", available in both Unix and Windows environments. The wrc client reads the captured workload and replays it.

You have to open up a new window or terminal session and type the command at the OS prompt.

D:\>wrc userid=scott password=xxxxx replaydir='d:\database_replay' mode=replay

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Mar 26 17:35:23 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (17:35:23)


The session will wait until you manually start the replay.

Go back to sqlplus in the other window and log in as sysdba.

SQL> execute DBMS_WORKLOAD_REPLAY.start_replay;

PL/SQL procedure successfully completed.

You can check the status of the replay in the wrc session.

Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Mar 26 17:35:23 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (17:35:23)
Replay started (17:35:45)
Replay finished (17:39:17)
D:\>

At this point you are done with Replay.

Test your database to see that the captured changes exist in the database.
-------------------------------------------------------------------------------
SQL> select username from dba_users where username like 'PRAVEEN1';

USERNAME
------------------------------
PRAVEEN1
Elapsed: 00:00:00.01
SQL>
SQL> select count(*) from SCOTT.PRAVEEN;
COUNT(*)
----------
14
Elapsed: 00:00:00.01

Conclusion
This feature allows the database administrator to test database-level changes by recreating and replaying actual workloads. You can use OEM Grid Control to perform the same steps through a GUI. I'll discuss other new features in a future post.

Wednesday, March 25, 2009

Oracle 11g New Features...

In the next few posts I'll be discussing some of the new features Oracle 11g offers. Oracle 11g has a lot of new features for developers and DBAs. This article lists some of the features with examples.

DDL Wait Option

This feature automatically retries a DDL operation at certain intervals if the operation couldn't get an exclusive lock on the object.

E.g.: you try to lock a table and receive a "resource busy" error (ORA-00054). As a DBA you retry the operation manually until you succeed.

In Oracle 11g you have the ddl_lock_timeout parameter, which retries the operation at regular intervals until it succeeds or the specified time expires.

Sql> alter session set ddl_lock_timeout = 30;

This prevents the timeout error: if the session can't get an exclusive lock on the object, it retries the operation until it succeeds or 30 seconds have elapsed.

Eg :

Session 1. I have grabbed a lock on the table.

SQL> select * from scott.test for update;

Session 2: I am trying to drop a column from the same table.

SQL> alter table scott.test drop column testing;
alter table scott.test drop column testing
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

SQL> alter session set ddl_lock_timeout=120;

Session altered.

Elapsed: 00:00:00.00

Session 1

SQL>Rollback;

Elapsed: 00:00:00.00

Session 2

SQL> alter table scott.test drop column testing;

Table altered.

Elapsed: 00:00:12.82
SQL>

Session 2 didn’t throw any resource busy errors this time. Idid a rollback on session 1. The command executed successfully after 12 seconds in session 2.

Virtual Columns

Oracle 11g lets you create virtual columns on tables and populates their values automatically. In previous releases you needed to physically add a new column and fill in the data or allow nulls in the new column, and you additionally had to recompile all the dependent objects. With virtual columns, the application team doesn't have to write new code or create triggers to populate the new column.

Let's take a look at the emp table in the scott schema:

Sql> create table scott.test as select * from scott.emp;

SQL> SELECT * FROM SCOTT.TEST;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.

Here I'll create a virtual column named income_category, which classifies rows into high, medium and low income groups based on salary.

SQL> ALTER TABLE SCOTT.TEST ADD INCOME_CATEGORY VARCHAR2(10)
2 GENERATED ALWAYS AS
3 (
4 CASE
5 WHEN SAL <= 2000 THEN 'LOW'
6 WHEN SAL > 2000 AND SAL <= 3000 THEN 'MEDIUM'
7 WHEN SAL > 3000 THEN 'HIGH'
8 END
9 ) VIRTUAL;

Table altered.

SQL> SET LINESIZE 400
SQL> SELECT * FROM SCOTT.TEST;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7369 SMITH CLERK 7902 17-DEC-80 800 20 LOW
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 LOW
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 LOW
7566 JONES MANAGER 7839 02-APR-81 2975 20 MEDIUM
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 LOW
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 MEDIUM
7782 CLARK MANAGER 7839 09-JUN-81 2450 10 MEDIUM
7788 SCOTT ANALYST 7566 19-APR-87 3000 20 MEDIUM
7839 KING PRESIDENT 17-NOV-81 5000 10 HIGH
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 LOW
7876 ADAMS CLERK 7788 23-MAY-87 1100 20 LOW
7900 JAMES CLERK 7698 03-DEC-81 950 30 LOW
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM
7934 MILLER CLERK 7782 23-JAN-82 1300 10 LOW

You can create indexes on virtual columns.
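For example, an index on the virtual column created above (the index name is my own choice):

SQL> create index scott.test_income_cat on scott.test (income_category);

Index created.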

Invisible indexes

Indexes are created to enhance search performance. Indexes help in the majority of cases, but I have seen cases where the existence of an index really hurts performance. Your application may have SQL statements that use an index effectively and also SQL statements that perform poorly because of the same index. Oracle 11g provides an option to make an index invisible to the optimizer. This way you don't have to drop an index or mark it unusable for certain queries and recreate it for another set of queries.

Let's take a look at the scott.test table from the previous example.

SQL> select * from dba_indexes where table_name like 'TEST' and owner like 'SCOTT';
no rows selected
Elapsed: 00:00:00.04

SQL> create index scott.test_empno on scott.test (empno);
Index created.



SQL> set linesize 300
SQL> set autotrace on
SQL> select * from scott.test where empno=7902;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
Plan hash value: 3658239601

------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
------------------------------------------------------------------------------------------
0 SELECT STATEMENT 1 42 2 (0) 00:00:01
1 TABLE ACCESS BY INDEX ROWID TEST 1 42 2 (0) 00:00:01
* 2 INDEX RANGE SCAN TEST_EMPNO 1 1 (0) 00:00:01

The execution plan shows that the newly created index is used. Let's make the index invisible to the optimizer.

SQL> alter index scott.test_empno invisible;
Index altered.
Elapsed: 00:00:00.01
SQL>


SQL> select visibility from dba_indexes where owner like 'SCOTT' and INDEX_NAME LIKE 'TEST_EMPNO';
VISIBILIT
---------
INVISIBLE
Elapsed: 00:00:00.01

SQL> select * from scott.test where empno=7902;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020


--------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
--------------------------------------------------------------------------
0 SELECT STATEMENT 1 42 3 (0) 00:00:01
* 1 TABLE ACCESS FULL TEST 1 42 3 (0) 00:00:01
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("EMPNO"=7902)


Statistics
----------------------------------------------------------
193 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
1070 bytes sent via SQL*Net to client
520 bytes received via SQL*Net from client

You can see that the optimizer didn't use the newly created index.

You can make the optimizer use invisible indexes by specifying an index hint or by setting "OPTIMIZER_USE_INVISIBLE_INDEXES" to TRUE at the session or database level.
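For example, using the table and index from above:

SQL> alter session set optimizer_use_invisible_indexes = true;

Session altered.

SQL> select /*+ index(t test_empno) */ * from scott.test t where empno=7902;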

To make index Visible again.

SQL> alter index scott.test_empno visible;
Index altered.
Elapsed: 00:00:00.01
SQL> select visibility from dba_indexes where owner like 'SCOTT' and INDEX_NAME LIKE 'TEST_EMPNO';

VISIBILIT
---------
VISIBLE
Elapsed: 00:00:00.00

Case sensitive passwords

Case-sensitive passwords are a new feature in Oracle 11g. Developers using SQL Server databases are familiar with case-sensitive passwords. Oracle 11g gives you the flexibility to turn password case sensitivity on and off.


SQL> SHOW PARAMETER SEC_CASE_SENSITIVE_LOGON

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon boolean TRUE
SQL>

SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;
System altered.
SQL> create user praveen identified by Praveen;
User created.
SQL> grant create session to praveen;

Grant succeeded.
SQL> connect praveen/praveen
Connected.
SQL> connect / as sysdba
Connected.
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;
System altered.
SQL> connect praveen/praveen
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect praveen/Praveen
Connected.
SQL>


To create a case-insensitive password in the password file, recreate the password file using the ignorecase=y option.
$ orapwd file=orapwDB11Gb entries=100 ignorecase=y password=mypassword

-----------------------------------------------------------------------------------------------------------------------

I'll discuss more features in next post.

------------------------------------------------------------------------------------------------------------------------

Thursday, March 19, 2009

Basic CRS and SRVCTL commands

Purpose

This document provides some basic CRS and srvctl commands for an Oracle 10g RAC environment.

What is CRS? (Cluster Ready Services)

CRS is Oracle Clusterware, which provides many system management services and interacts with the vendor clusterware to coordinate cluster membership information.

Oracle has provided some utilities to help administer and troubleshoot the RAC environment. Let's look at some of the basic commands.

COMMANDS

1) The most basic thing to check for is the existence of the "crs" processes on the Unix/Linux server.
$> ps -ef | grep crs

Here is a list of required CRS processes.

ocssd.bin, crsd.bin, evmd.bin, evmlogger.bin

2) crs_stat

This command lists all the resources that are registered with CRS, with the name of each resource, the type of resource and its current state (online/offline).

· The crs_stat command lists all the resources in a linear fashion; if you would like to see the resources in a tabular format you can use crs_stat -t, which makes the output easy to read.
E.g.:
$> crs_stat -t

oracle@amro503:fimsq11> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....03.lsnr application ONLINE ONLINE amro503
ora....503.gsd application ONLINE ONLINE amro503
ora....503.ons application ONLINE ONLINE amro503
ora....503.vip application ONLINE ONLINE amro503
ora....05.lsnr application ONLINE ONLINE amro505
ora....505.gsd application ONLINE ONLINE amro505
ora....505.ons application ONLINE ONLINE amro505
ora....505.vip application ONLINE ONLINE amro505
ora.fimsq1.db application ONLINE ONLINE amro503
ora....11.inst application ONLINE ONLINE amro503
ora....12.inst application ONLINE ONLINE amro505

· crs_stat -v lists the status of all the resources in verbose mode. This really comes in handy during troubleshooting.

3) Crs_getperm

This command helps identify the permissions associated with a resource.

Syntax: crs_getperm <resource_name> (where resource_name is the name you get from the crs_stat command)

4) Crs_setperm

Sets and modifies the permissions associated with a resource name. This is similar to the change mode (chmod) command in Unix.
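E.g., using a resource name from the crs_stat output above (the -o flag sets the resource owner; treat the exact flags as illustrative):

$> crs_getperm ora.fimsq1.db
$> crs_setperm ora.fimsq1.db -o oracle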

5) Crsctl check

Helps you verify the health of your clusterware.

oracle@amro503:fimsq11> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy


6) Find out the version of CRS you are running

$> crsctl query crs softwareversion

E.g.:- oracle@amro503:fimsq11> crsctl query crs softwareversion
CRS software version on node [amro503] is [10.2.0.3.0]

Or
$> crsctl query crs activeversion

E.g.:- oracle@amro503:fimsq11> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]

7) Command to start and stop CRS


Usually the "root" user has permissions to start and stop CRS on the cluster.
The oracle user can be given sudo permissions to stop and start CRS.
Use this command to stop CRS. It's better to shut down the database and the nodeapps before performing this action.

$> /u01/oracle/cluster/crs/bin/crsctl stop crs

Use this command to start the crs

$> /u01/oracle/cluster/crs/bin/crsctl start crs


8) Crs_start

This command starts all the registered crs resources in the right order.

9) Crs_stop

This command stops all the registered crs resources in the right order.

SRVCTL COMMANDS

10) Commands to start and stop a database

Set your oracle rdbms environment.
To start/stop the database and all the instances in RAC with one command.
At the command prompt type

$> srvctl start/stop instance -d <database_name> -i <instance_1>,<instance_2>,...

Or

$> srvctl start/stop database -d <database_name>

E.g.: srvctl start/stop instance -d fimsp1 -i fimsp11,fimsp12

To start/stop one instance:

$> srvctl start/stop instance -d <database_name> -i <instance_name>


11) To start and stop nodeapps

This can be done from any node in the cluster.

Syntax: $> srvctl stop/start nodeapps -n <node_name>

E.g.: to stop/start all nodeapps on the amro503/505 cluster:

Amro505:oracle:fimsq11> srvctl stop/start nodeapps -n amro503 -> to stop/start all nodeapps on amro503

Amro505:oracle:fimsq11> srvctl stop/start nodeapps -n amro505 -> to stop/start all nodeapps on amro505

To see if all the nodeapps are down, use the command "crs_stat -t".

12) To add a new instance to CRS

srvctl add instance -d <database_name> -i <instance_name> -n <node_name>

13) To remove a resource from CRS

srvctl remove database -d <database_name>


14) To trace srvctl issues, set the debug flag on. The trace information is very helpful in debugging issues.


In ksh
$> export srvm_trace=true

The trace output will be verbose and descriptive.

oracle@:fimsp21> srvctl start nodeapps -n amro501

/fisc/oracle/cluster/crs/jdk/jre/bin/java -classpath /fisc/oracle/cluster/crs/jlib/netcfg.jar:/fisc/oracle/cluster/crs/jdk/jre/lib/rt.jar:/fisc/oracle/cluster/crs/jdk/jre/lib/i18n.jar:/fisc/oracle/cluster/crs/jlib/srvm.jar:/fisc/oracle/cluster/crs/jlib/srvmhas.jar:/fisc/oracle/cluster/crs/jlib/srvmasm.jar:/fisc/oracle/cluster/crs/srvm/jlib/srvctl.jar -DTRACING.ENABLED=true -DTRACING.LEVEL=2 oracle.ops.opsctl.OPSCTLDriver start nodeapps -n amro501
[main] [23:32:20:471] [OPSCTLDriver.setInternalDebugLevel:165] tracing is true at level 2 to file null
[main] [23:32:20:662] [OPSCTLDriver.:95] Security manager is set
[main] [23:32:20:677] [CommandLineParser.parse:174] parsing cmdline args
[main] [23:32:20:678] [CommandLineParser.parse2WordCommandOptions:948] parsing 2-word cmdline
[main] [23:32:20:791] [HASContext.getInstance:197] Module init : 16
[main] [23:32:20:792] [HASContext.getInstance:221] Local Module init : 19
[main] [23:32:20:792] [HASContext.:90] moduleInit = 19
[main] [23:32:20:810] [Library.getInstance:95] Created instance of Library.
[main] [23:32:20:811] [Library.load:195] Loading libsrvmhas10.a...
[main] [23:32:20:811] [Library.load:201] oracleHome null
[main] [23:32:20:812] [Library.load:236] Loading library srvmhas10 from path=
/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/jdk/jre/bin/classic:/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/lib32:/fisc/oracle/cluster/crs/srvm/lib32:/usr/lib
[main] [23:32:20:864] [Library.load:244] Loaded library srvmhas10 from path=
/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/jdk/jre/bin/classic:/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/lib32:/fisc/oracle/cluster/crs/srvm/lib32:/usr/lib
[main] [23:32:20:886] [has.HASContextNative.Native] prsr_trace: no lsf ctx, line=Native: allocHASContext

Wednesday, March 18, 2009

Partitioning - Maintenance

Add a Partition.

Adding a partition has different side effects for different partition types. For range partitions, a new partition is added at the end, mainly to specify a new high-end value. You cannot add a partition if a MAXVALUE partition exists. Adding a partition does not mark a global index unusable. For a hash partition, or a hash subpartition in a composite partition, an added partition receives rows redistributed from the other partitions.

Example

Alter table sales_range
Add partition sales_mar2009 values less
than (to_date('03/01/2009','dd/mm/yyyy'))
Tablespace tsp1;

Alter table q1_sales_by_region
Add partition q1_outerregion values ('HI','PR')
Tablespace tbsp2;

Drop and Truncate Partition


Dropping a partition discards the rows quickly, uses few system resources and doesn't use rollback. Only range and list partitions can be dropped. If a table contains only one partition, that partition cannot be dropped; you must drop the table. For a range partition, if you want to remove the range key but keep the data, you should merge the partitions rather than drop the partition. Only one partition can be dropped at a time. You can truncate a partition to discard the data rows in the partition without removing the partition itself; the corresponding local indexes are also truncated. If you truncate the table, it discards rows from all partitions.

Example

Alter table sales drop partition jan2000;

Alter table sales truncate partition jan2000;

Split, Merge and Coalesce Partition

Splitting a partition creates two new partitions filled with the rows of the split partition. Merging partitions collects the rows from two partitions and puts them into one partition. Hash partitions cannot be split or merged. Coalesce is used on hash partitions; it is the equivalent of merge on non-hash partitioned tables. Coalescing is also used to reorganize a partition of an IOT table.


Example


Alter table sales_list
Split partition sales_central values ('texas')
Into (partition sales_south, partition sales_southwest);

Alter table sales_range
Split partition sales_jan2009 at (to_date('16/01/2009','dd/mm/yyyy'))
Into (partition sales_jan2009_1, partition sales_jan2009_2);

Alter table sales_range
Merge partitions sales_jan2009, sales_feb2009 into partition sales_feb2009;

Alter table sales_list
Merge partitions sales_east, sales_central into partition sales_central;

Alter table sales_hash coalesce partition;

Move and Rename Partition

Moving a partition is generally used to relocate a partition to a new tablespace. To move an entire partitioned table you have to move all of its partitions. Global indexes are marked unusable unless there is no data in the partition being moved or the UPDATE GLOBAL INDEXES clause is used. Renaming a partition changes the name of the partition. There are no restrictions on renaming a partition as long as the new name is unique within the partitioned table or index.

Example

Alter table sales_list move partition sales_east tablespace sales_new;

Alter table sales_list rename partition sales_west to sales_west_north;

Exchange Partition

Exchange partition swaps the identities of a partition and a table. You can exchange a partition with a non-partitioned table. This operation does not move rows. The non-partitioned table must have the same structure as the partitioned table.

Example

Alter table sales_list
Exchange partition sales_west with table sales_west_temp;



The DBMS_REDEFINITION package can be used to take a non-partitioned table and change it into a partitioned table while it is being accessed.
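A minimal sketch of that flow, assuming scott.sales is the existing table and scott.sales_part is an interim partitioned table you created with the same columns (names here are illustrative; a real run would also copy dependents such as indexes and constraints with COPY_TABLE_DEPENDENTS between the start and finish steps):

BEGIN
DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT','SALES');
DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT','SALES','SALES_PART');
DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT','SALES','SALES_PART');
END;
/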

Tuesday, March 17, 2009

Table partitioning - II

Range Partitioning

Range partitioning maps data to partitions based on ranges of partition key values that you establish for each partition. It is the most common type of partitioning and is often used with dates. For example, you might want to partition sales data on a monthly basis. You can also partition by a range of alphabetic characters.

Example.

Create TABLE sales_range
(salesman_id Number(5),
salesman_name varchar2(30),
sales_amount Number(10),
sales_date Date)
Partition by Range(sales_date)
(
Partition sales_jan2009 values less than (to_date('02/01/2009','DD/MM/YYYY')),
Partition sales_feb2009 values less than (to_date('03/01/2009','DD/MM/YYYY')),
Partition sales_mar2009 values less than (to_date('04/01/2009','DD/MM/YYYY'))
)
Enable row movement;

Create Table Students (
Student_id Number(6),
Student_fn Varchar2(25),
Student_ln Varchar2(35),
Primary key (student_id))
Partition by range (student_ln)
(partition student_ae Values less than ('F%'),
partition student_fl Values less than ('M%'),
partition student_mr Values less than ('S%'),
partition student_sz Values less than (MAXVALUE))
ENABLE row movement;


Oracle 11g Feature – Interval Partitioning

In the sales_range example above, the DBA has to manually create a new partition every month to accommodate values beyond the specified range. With interval partitioning, the system automatically creates a new partition when a value above the existing range is inserted.


Example:

Create TABLE sales_range
(salesman_id Number(5),
salesman_name varchar2(30),
sales_amount Number(10),
sales_date Date)
Partition by Range(sales_date)
Interval (numtoyminterval(1,'month'))
(
Partition sales_jan2009 values less than (to_date('02/01/2009','DD/MM/YYYY'))
);

List Partitioning

List partitioning enables you to control explicitly how rows map to partitions. You do this by specifying a list of discrete values for the partitioning column in the description of each partition. List partitioning allows partitions to reflect real-world groupings (e.g. business units and regions). It differs from range partitioning in that the groupings in list partitioning are not side by side or in a logical range. List partitioning gives us the ability to group seemingly unrelated data into a specific partition.

Example

Create table sales_list
(salesman_id number(5),
salesman_name varchar2(30),
salesman_state varchar2(20),
sales_amount number(10),
sales_date date)
Partition by list (salesman_state)
(
partition sales_west values ('california','hawaii'),
partition sales_east values ('New York','virginia','florida'),
partition sales_central values ('texas','illinois'),
partition sales_other values (default)
)
enable row movement;

Composite Partitioning

There are two types of composite partitioning. They are composite Range-hash partitioning and Composite Range-List Partitioning

Composite range-hash partitioning is used to range partition first, then use a hashing algorithm to further divide the data into subpartitions within each range partition. It combines both the ease of range partitioning and the benefits of hashing for data placement, striping and parallelism.

Possible usage : Range partition by date of birth then hash partition by name.

Composite range-list partitioning is used to range partition first, then divide the data into subpartitions within each range partition based on the explicit list you choose. It combines both the ease of range partitioning and the benefits of list partitioning at the subpartition level.

Possible usage :- Range partition by date of birth then list partition by state.

Example

Create table sales_composite
(salesman_id number(5),
salesman_name varchar2(30),
sales_amount number(10),
sales_date date)
Partition by range (sales_date) subpartition by hash (salesman_id)
Subpartition template (
Subpartition sp1 tablespace data1,
Subpartition sp2 tablespace data2,
Subpartition sp3 tablespace data3,
Subpartition sp4 tablespace data4)
(partition sales_jan2009 values less
than (to_date('02/01/2009','dd/mm/yyyy')),
partition sales_feb2009 values less
than (to_date('03/01/2009','dd/mm/yyyy')),
partition sales_mar2009 values less
than (to_date('04/01/2009','dd/mm/yyyy')),
partition sales_apr2009 values less
than (to_date('05/01/2009','dd/mm/yyyy')),
partition sales_may2009 values less
than (to_date('06/01/2009','dd/mm/yyyy')))
enable row movement;

Create table bimonthly_regional_sales
(deptno number,
item_no varchar2(20),
txn_date date,
txn_amount number,
state varchar2(2))
Partition by Range (txn_date)
Subpartition by list (state)
Subpartition template (
Subpartition east values ('NY','VA','FL') tablespace ts1,
Subpartition west values ('CA','OR','HI') tablespace ts2,
Subpartition central values ('IL','TX','KS') tablespace ts3)
(Partition janfeb2009 values less than (to_date('1-MAR-2009','DD-MON-YYYY')),
Partition marapr2009 values less than (to_date('1-MAY-2009','DD-MON-YYYY')),
Partition mayjun2009 values less than (to_date('1-JUL-2009','DD-MON-YYYY')))
ENABLE ROW MOVEMENT;



Hash Partitioning

Hash partitioning distributes data by applying a proprietary hashing algorithm to the partition key and then assigning the data to the appropriate partition. With hash partitioning you can partition data that may not have any logical ranges. Oracle handles all of the distribution of data once the partition key is identified. Hash partitioning is used to spread data evenly over partitions.

Possible usage:
Data has no logical groupings.

Example

Create table sales_hash
(salesman_id number(5),
salesman_name varchar2(30),
sales_amount number(10),
week_no number(2))
partition by hash (salesman_id)
partitions 4
store in (ts1, ts2, ts3, ts4);

In the next post I’ll talk about partitioned table maintenance activities and some guide lines.

Sunday, March 15, 2009

Table Partitioning in Oracle

In this section I’ll be discussing about the some types of partitioning options available in Oracle , its advantages and also provide some examples.

Why Partitioning?

Oracle partitioning is mainly used for the manageability, availability and performance of Oracle tables. Partitioning allows tables, indexes, materialized views and index-organized tables to be divided into smaller, manageable pieces. Partitioning enables these database objects to be managed and accessed at a finer level of granularity.

Partitioning for manageability

The partitioning option allows indexes and tables to be partitioned into smaller, manageable units. Using partitioned tables, DBAs can perform maintenance on certain partitions while the rest of the partitions are still accessed by the applications.

A typical use of partitioning for manageability is to support a "rolling window" load process in a data warehouse. Imagine you have to load a table with data on a monthly basis. You can take advantage of the range partition option so that each partition contains a month's worth of data.
If you have to purge 6-month-old data from a table on a monthly basis, range partitioning offers a better solution: you can just drop a partition rather than issuing a DELETE command, which creates additional load on the database.
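For example, the oldest month is removed in one quick dictionary operation (table and partition names here are illustrative):

Alter table sales drop partition sales_sep2008;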

Partitioning for Performance

By limiting the amount of data to be examined or operated on and by enabling parallel execution, the Oracle Partitioning option provides a number of performance benefits.

Partition Pruning
Partition pruning is the simplest and also the most substantial means of improving performance. Partition pruning can often improve query performance by several orders of magnitude.
Imagine an orders table containing historical records of orders, with the table data partitioned by week. A query requesting data for a single week would only access a single partition of the orders table, improving performance by orders of magnitude. Partition pruning works with all of Oracle's other performance features: Oracle will utilize partition pruning in conjunction with any indexing technique, join technique or parallel access method.
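For example, assuming the orders table above is range partitioned by week on order_date (column names here are illustrative), a query like this only reads the one partition covering that week:

select sum(order_amount)
from orders
where order_date >= to_date('02/02/2009','dd/mm/yyyy')
and order_date < to_date('09/02/2009','dd/mm/yyyy');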

Partition Wise Joins

Partitioning can also improve the performance of multi-table joins by using a technique known as a partition-wise join. A partition-wise join can be applied when two tables are being joined and both tables are partitioned on the join key. It breaks a large join into smaller joins that occur between each pair of partitions, completing the overall join in less time. This offers significant performance benefits for both serial and parallel execution.
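A minimal sketch: both tables are hash partitioned on the join key cust_id with the same number of partitions, which is what allows Oracle to join them partition by partition (table and column names are my own):

create table customers
(cust_id number, cust_name varchar2(30))
partition by hash (cust_id) partitions 4;

create table cust_orders
(order_id number, cust_id number, order_amount number)
partition by hash (cust_id) partitions 4;

select c.cust_name, sum(o.order_amount)
from customers c, cust_orders o
where c.cust_id = o.cust_id
group by c.cust_name;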

Parallel Execution

Partitioning enables parallel execution of UPDATE, DELETE and MERGE statements. Oracle will parallelize SELECT statements and INSERT statements when accessing both partitioned and non-partitioned database objects. UPDATE, DELETE and MERGE statements can be parallelized for both partitioned and non-partitioned database objects when no bitmap indexes are present; in order to parallelize these operations on objects with bitmap indexes, the target table must be partitioned. Parallel execution of SQL statements can vastly improve performance, particularly for UPDATE, DELETE or MERGE operations involving large volumes of data.
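For example, a parallel bulk update (the degree of 4 and the table name are illustrative):

alter session enable parallel dml;

update /*+ parallel(sales, 4) */ sales
set sales_amount = sales_amount * 1.1;

commit;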

Partitioning for Availability

The DBA can store different partitions in different tablespaces which would allow him/her to perform backup/recovery operations on each individual partition, independent of the other partitions in the table.

Partitioned database objects provide partition independence. If any one partition becomes unavailable, all other partitions of the table remain online and available. Applications can still use the available partitions while the DBA works on fixing the failed partition(s).


Types of Partitioning

· Range Partitioning
· Hash Partitioning
· List Partitioning
· Composite partitioning

In the next post, I’ll be discussing the above-mentioned partitioning options in detail.

Thursday, March 12, 2009

Unidirectional streams setup

Oracle Streams allows for back-and-forth replication between two active database servers.

Here I'll discuss the basic steps involved in setting up a streams replication in oracle 10g.

In the example provided I am setting up unidirectional replication of the table customer residing in the binfo schema at source_db to binfo.customer at dest_db.

Init parameters
--------------------

Certain initialization parameters need to be set on both the source and destination instances before setting up Streams.

Here are the parameters.

compatible - this parameter should be the same on both sides.
global_names = TRUE --> database links should match the global names.
job_queue_processes --> this parameter should be set to at least 2 (two).
parallel_max_servers --> this parameter should be set to at least 6 (six).
sga_target --> should be set; otherwise you need to set "streams_pool_size".
db_domain --> make sure this value is set.
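A quick check-and-set session for the dynamic parameters (the values are the minimums mentioned above; compatible cannot be changed this way):

SQL> show parameter job_queue_processes
SQL> alter system set job_queue_processes = 2;
SQL> alter system set parallel_max_servers = 6;
SQL> alter system set global_names = true;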

Tns Names
-----------


Add Tnsnames entries for the source and destination databases on both sides.

Source_db = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xx.xx.xx.xx)(PORT = 1521)) ) (CONNECT_DATA = (service_name = source_db) ) )
dest_db = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xx.xx.xx.xx)(PORT = 1521)) ) (CONNECT_DATA = (service_name = dest_db) ) )

Create Tablespace
-----------------

Create separate tablespace for streams admin user objects on both sides.

create tablespace streams_ts datafile '/u01/oradata/source_db/streams_ts_001.dbf' size 250M;

and

create tablespace streams_ts datafile '/u01/oradata/dest_db/streams_ts_001.dbf' size 250M;

Create streams administrator user on both sides
------------------------------------------------

$> sqlplus / as sysdba

create user "STRMADMIN" identified by "xxxxxxx"
default tablespace streams_ts
temporary tablespace temp
quota unlimited on streams_ts;


Grant the following privileges to the strmadmin user at dest_db by logging in as sysdba

-----------------------------------------------------------------------------------
$> sqlplus / as sysdba
grant CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to "STRMADMIN";
GRANT DBA TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_aq TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_aqadm TO STRMADMIN;


BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE
);
END;
/
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE
);
END;
/

BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE
);
END;
/

BEGIN
DBMS_AQADM.GRANT_TYPE_ACCESS
(
user_name => 'STRMADMIN'
);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE
);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE
);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE
(
privilege => DBMS_RULE_ADM.EXECUTE_ON_EVALUATION_CONTEXT,
object_name => 'SYS.STREAMS$_EVALUATION_CONTEXT',
grantee => 'STRMADMIN',
grant_option => FALSE
);
END;
/


GRANT EXECUTE ON sys.dbms_capture_adm TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_apply_adm TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_rule_adm TO STRMADMIN;
GRANT SELECT_CATALOG_ROLE TO STRMADMIN;


BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
(
privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE
);
END;
/

GRANT EXECUTE ON SYS.dbms_streams_adm TO STRMADMIN;
GRANT ALL PRIVILEGES TO STRMADMIN;


Create application user(schema) on both sides

------------------------------------------------------


create user binfo identified by binfo
default tablespace binfo_data
temporary tablespace temp
quota unlimited on binfo_data;

grant connect, dba to binfo;


Connect to Binfo at source_db

---------------------------------------
create table customer
( cust_id number(10) not null,
cust_Fn varchar2(20) not null,
cust_Ln Varchar2(20) not null,
Addr Varchar2 (50),
state varchar2(2));


alter table customer add constraint customer_pk primary key (cust_id);


Connect to strmadmin at dest_db

--------------------------------------------------
$> sqlplus strmadmin/<password>@dest_db
Create streams queue
BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE( queue_user => 'STRMADMIN');
END;
/

Add apply rules for tables at the destination database
-------------------------------------------------------------

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES
( table_name => '"BINFO"."CUSTOMER"',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_SOURCE_DB_',
queue_name => '"STRMADMIN"."STREAMS_QUEUE"',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE_DB.XXX.COM');
END;
/


Connect to sysdba at the source database

-------------------------------------------------------
Enable supplemental logging at the table level (optional). Supplemental logging captures additional information into the redo logs for row identification.

alter table binfo.customer add supplemental log group cust_loggrp (cust_id, cust_fn, cust_ln);
Switch log file

sqlplus> ALTER SYSTEM SWITCH LOGFILE;


Grant the required privileges to strmadmin user

--------------------------------------------------------------


grant CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE to "STRMADMIN";

GRANT DBA TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_aq TO STRMADMIN;

GRANT EXECUTE ON sys.dbms_aqadm TO STRMADMIN;


BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
( privilege => 'ENQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
/
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
( privilege => 'DEQUEUE_ANY',
grantee => 'STRMADMIN',
admin_option => FALSE);
END;
/
BEGIN
DBMS_AQADM.GRANT_SYSTEM_PRIVILEGE
( privilege => 'MANAGE_ANY',
grantee => 'STRMADMIN',
admin_option => TRUE);
END;
/
BEGIN
DBMS_AQADM.GRANT_TYPE_ACCESS
( user_name => 'STRMADMIN');
END;
/
BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE)
;
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.CREATE_RULE_OBJ,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.CREATE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.ALTER_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE_SET,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.CREATE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.ALTER_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);


DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.EXECUTE_ANY_RULE,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/

BEGIN
DBMS_RULE_ADM.GRANT_OBJECT_PRIVILEGE
( privilege => DBMS_RULE_ADM.EXECUTE_ON_EVALUATION_CONTEXT,
object_name => 'SYS.STREAMS$_EVALUATION_CONTEXT',
grantee => 'STRMADMIN',
grant_option => FALSE );
END;
/


GRANT EXECUTE ON sys.dbms_capture_adm TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_apply_adm TO STRMADMIN;
GRANT EXECUTE ON sys.dbms_rule_adm TO STRMADMIN;
GRANT SELECT_CATALOG_ROLE TO STRMADMIN;

BEGIN
DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE
( privilege => DBMS_RULE_ADM.EXECUTE_ANY_EVALUATION_CONTEXT,
grantee => 'STRMADMIN',
grant_option => TRUE);
END;
/


GRANT EXECUTE ON SYS.dbms_streams_adm TO STRMADMIN;

GRANT ALL PRIVILEGES TO STRMADMIN;

Connect to strmadmin at source_db

----------------------------------------------


Create a database link

CREATE DATABASE LINK
"dest_db.xxx.COM" connect to "STRMADMIN" identified by "STRMADMIN" using '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=xxx.xxx.xxx.xx)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=dest_db.xx.com)))';

Create streams queue

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE
( queue_user => 'STRMADMIN');
END;
/


Add capture rules for tables at the source site

-------------------------------------------------------


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES
( table_name => '"BINFO"."CUSTOMER"',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => '"STRMADMIN"."STREAMS_QUEUE"',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE_DB.XX.COM');
END;
/


Add propagation rules for tables at the source database


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES
( table_name => '"BINFO"."CUSTOMER"',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => '"STRMADMIN"."STREAMS_QUEUE"',
destination_queue_name => '"STRMADMIN"."STREAMS_QUEUE"@DEST_DB.XXX.COM',
include_dml => true,
include_ddl => true,
source_database => 'source_db.xx.COM');
END;
/


Export the table from source_db and import it into dest_db.

--------------------------------------------------------------------------------


Here you instantiate the SCN at dest_db as part of the import. You can use Data Pump as well.


exp USERID="STRMADMIN"@source_db TABLES="binfo"."customer" FILE=tables.dmp GRANTS=Y ROWS=N LOG=exportTables.log OBJECT_CONSISTENT=Y INDEXES=Y


imp USERID="STRMADMIN"@dest_db FULL=Y CONSTRAINTS=Y FILE=tables.dmp IGNORE=Y GRANTS=Y ROWS=N COMMIT=Y LOG=importTables.log STREAMS_CONFIGURATION=N STREAMS_INSTANTIATION=Y

Startup operations

----------------------------
You are pretty much done with the setup at this point. You need to start the capture and apply processes.
Connect to strmadmin@dest_db and start the apply process first.

DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started
FROM DBA_APPLY WHERE APPLY_NAME = 'STRMADMIN_source_db_';
if (v_started = 0) then
DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_source_db_');
end if;
END;
/


Connect to strmadmin at the source database and start the capture process

-------------------------------------------------------------------------------------------


DECLARE
v_started number;
BEGIN
SELECT decode(status, 'ENABLED', 1, 0) INTO v_started FROM DBA_CAPTURE WHERE CAPTURE_NAME = 'STRMADMIN_CAPTURE';
if (v_started = 0) then
DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
end if;
END;
/

Now test the replication.

-------------------------------


Insert some values into the source table and see if they get replicated.

I'll discuss more about Streams troubleshooting in future posts.