Monday, April 20, 2009
Oracle acquires Sun Microsystems
Oracle software is widely used on the Solaris platform, so Oracle has a clear business interest in Solaris. Oracle is also a major stakeholder and a prominent player in the Java space, so for Java lovers this is pleasant news.
It isn't long since Sun acquired MySQL, and this deal may be less welcome news for MySQL fans. We'll have to wait and see what the future holds for MySQL.
It's hard to predict the impact now, but there will certainly be one.
Sunday, April 19, 2009
Data Pump continued … Remap_data
Syntax
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
Example
Look at table scott.dp_test
SQL> desc scott.dp_test;
Name Null? Type
----------------------------------------- -------- ----------------------------
NAME VARCHAR2(10)
AGE NUMBER
SQL> select * from scott.dp_test;
NAME AGE
---------- ----------
ahmad 25
Gairik 31
Ryan 18
Greg 29
SQL>
I need to export the data and increment everyone's age by 10. The package below has a function named modify_age which accepts an age as input and adds 10 to it.
SQL> ed
Wrote file afiedt.buf
1 create or replace package add_age as
2 function modify_age (roll_in number) return number;
3* end;
SQL> /
Package created.
SQL> create or replace package body scott.add_age as
2 function modify_age (roll_in number) return number
3 as
4 roll_out number;
5 begin
6 roll_out := roll_in + 10;
7 return(roll_out);
8 end;
9 end;
10 /
Package body created.
I am planning to import the table into the scott1 schema, so I have created a new user named scott1 and a table named dp_test with the same structure as scott.dp_test.
SQL> create user scott1 identified by xxxxxxxx;
User created.
SQL> grant create session, imp_full_database to scott1;
Grant succeeded.
SQL> create table scott1.dp_test as select * from scott.dp_test;
Table created.
SQL> delete from scott1.dp_test;
4 rows deleted.
SQL> commit;
Commit complete.
Grant read/write privileges on the exp_dir directory where the dump file will reside.
SQL> grant read, write on directory exp_dir to scott1;
Grant succeeded.
Export the data from scott.dp_test using the remap_data parameter and the function created above.
H:\>expdp scott/xxxxxxx directory=exp_dir dumpfile=exp_dir:dp_test.dmp logfile=exp_dir:dp_test.log tables=scott.dP_test remap_data=scott.dp_test.age:scott.add_age.modify_age
Export: Release 11.1.0.6.0 - 64bit Production on Tuesday, 14 April, 2009
16:13:08
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** directory=exp_dir dumpfile=exp_dir:dp_test.dmp logfile=exp_dir:dp_test.log table
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DP_TEST" 5.453 KB 4 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
D:\EXP_DIR\DP_TEST.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 16:13:26
Export completed successfully
I am importing the dump file to scott1.dp_test.
H:\>impdp scott1/xxxxxxxx directory=exp_dir dumpfile=exp_dir:dp_test.dmp logfile=exp_dir:dp_testimp.log remap_schema=scott:scott1 table_exists_action=append
Import: Release 11.1.0.6.0 - 64bit Production on Wednesday, 15 April, 2009 11:24:46
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SCOTT1"."SYS_IMPORT_FULL_01": scott1/******** directory=exp_dir dumpfile=exp_dir:dp_test.dmp logfile=exp_dir:dp_testimp.log remap
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "SCOTT1"."DP_TEST" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT1"."DP_TEST" 5.453 KB 4 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SCOTT1"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 11:24:48
Verify the Result
Let's query the data in scott1.dp_test and see if everyone's age has been incremented by 10.
SQL> select * from scott1.dp_test;
NAME AGE
---------- ----------
ahmad 35
Gairik 41
Ryan 28
Greg 39
SQL>
Similarly, you can write your own functions to remap the data any way you want, as sketched below.
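For example, here is a minimal sketch of a masking function you could plug into remap_data; the package name mask_pkg, the function mask_name and the dump file dp_test2.dmp are hypothetical, and the function must accept and return the same datatype as the remapped column (VARCHAR2 here).
SQL> create or replace package scott.mask_pkg as
  2    function mask_name (name_in varchar2) return varchar2;
  3  end;
  4  /
SQL> create or replace package body scott.mask_pkg as
  2    function mask_name (name_in varchar2) return varchar2
  3    as
  4    begin
  5      -- keep the first character and overwrite the rest with 'X'
  6      return substr(name_in, 1, 1) || rpad('X', length(name_in) - 1, 'X');
  7    end;
  8  end;
  9  /
H:\>expdp scott/xxxxxxx directory=exp_dir dumpfile=exp_dir:dp_test2.dmp tables=scott.dp_test remap_data=scott.dp_test.name:scott.mask_pkg.mask_name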
Conclusion:
Data Pump is a great tool for DBAs and developers, and Oracle 11g has made it even more powerful. The more you use it, the more you love it.
Wednesday, April 8, 2009
Data Pump.. Contd
------------------------
Oracle has added some new features to Data Pump in Oracle 11g. One of the things that caught my attention is compression for the data portion of the dump file. Compression of the metadata was already available in Oracle 10g; Oracle 11g extends this to compress the data while exporting, and lets you compress only the data, only the metadata, or both.
Here are some examples.
Metadata compression only
H:> expdp scott/xxxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1.log compression=metadata_only job_name=scott1_exp status=5
Worker 1 Status:
Process Name: DW01
State: WORK WAITING
Job "SCOTT"."SCOTT1_EXP" successfully completed at 10:18:31
Data compression
H:> expdp scott/xxxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott2.dmp logfile=exp_dir:scott2.log compression=data_only job_name=scott2_exp status=5
Worker 1 Status:
Process Name: DW01
State: WORK WAITING
Job "SCOTT"."SCOTT2_EXP" successfully completed at 10:25:23
Full compression
H:>expdp scott/xxxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott3.dmp logfile=exp_dir:scott3.log compression=all job_name=scott3_exp status=5
Worker 1 Status:
Process Name: DW01
State: WORK WAITING
Job "SCOTT"."SCOTT3_EXP" successfully completed at 10:30:11
Now have a look at the size of the dump files.
D:\exp_dir>dir *.dmp
Volume in drive D is Data
Volume Serial Number is 9820-5C5B
Directory of D:\exp_dir
04/02/2009 10:18 PM 299,008 SCOTT1.DMP
04/02/2009 10:25 PM 188,416 SCOTT2.DMP
04/02/2009 10:30 PM 143,360 SCOTT3.DMP
Encryption
----------------
Another great feature is the encryption option. Oracle 10g could handle encrypted columns during export/import; Oracle 11g adds the capability to encrypt entire dump file sets. Oracle 11g provides three modes of encryption: 1) transparent encryption, using an Oracle wallet, 2) password-based encryption, and 3) a combination of the two.
How do you secure your data if you need to send it across the network? Here is the answer.
If you look at the expdp help, you can see four encryption-related parameters.
D:\exp_dir>expdp help=y
Export: Release 11.1.0.6.0 - 64bit Production on Wednesday, 08 April, 2009 11:25:13
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Keyword Description (Default)
------------------------------------------------------------------------------
ENCRYPTION Encrypt part or all of the dump file where valid keyword
values are: ALL, DATA_ONLY, METADATA_ONLY,
ENCRYPTED_COLUMNS_ONLY, or NONE.
ENCRYPTION_ALGORITHM Specify how encryption should be done where valid
keyword values are: (AES128), AES192, and AES256.
ENCRYPTION_MODE Method of generating encryption key where valid keyword
values are: DUAL, PASSWORD, and (TRANSPARENT).
ENCRYPTION_PASSWORD Password key for creating encrypted column data
Encryption_Algorithm - lets you choose the algorithm from the list of values. This parameter requires you to specify the encryption or encryption_password parameter along with it.
Example:
expdp scott/xxxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1.log job_name=scott1_exp encryption=all encryption_password=hahahehe encryption_algorithm=AES256 ENCRYPTION_MODE=PASSWORD
Export: Release 11.1.0.6.0 - 64bit Production on Wednesday, 08 April, 2009 13:25:12
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SCOTT1_EXP": scott/********@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1.log jo
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT" 5.945 KB 4 rows
. . exported "SCOTT"."EMP" 8.578 KB 14 rows
. . exported "SCOTT"."PRAVEEN" 9.062 KB 14 rows
. . exported "SCOTT"."SALARY" 6.632 KB 2 rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5 rows
. . exported "SCOTT"."TEST" 8.992 KB 14 rows
. . exported "SCOTT"."TEST_CASE" 5.562 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SCOTT"."SCOTT1_EXP" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SCOTT1_EXP is:
D:\EXP_DIR\SCOTT1.DMP
Job "SCOTT"."SCOTT1_EXP" successfully completed at 13:25:30
H:\>
Let's try importing the file.
First I tried importing without providing the encryption password.
impdp praveen1/xxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1_imp.log job_name=scott1_imp remap_schema=scott:praveen1
Import: Release 11.1.0.6.0 - 64bit Production on Wednesday, 08 April, 2009 13:47:36
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39174: Encryption password must be supplied.
The import failed with an encryption password error.
Let me try again, this time providing the password.
>impdp praveen1/xxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1_imp.log job_name=scott1_imp encryption_password=hahahehe remap_schema=scott:praveen1
Import: Release 11.1.0.6.0 - 64bit Production on Wednesday, 08 April, 2009 13:46:45
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "PRAVEEN1"."SCOTT1_IMP" successfully loaded/unloaded
Starting "PRAVEEN1"."SCOTT1_IMP": praveen1/********@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scot
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"PRAVEEN1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "PRAVEEN1"."DEPT" 5.945 KB 4 rows
. . imported "PRAVEEN1"."EMP" 8.578 KB 14 rows
. . imported "PRAVEEN1"."PRAVEEN" 9.062 KB 14 rows
. . imported "PRAVEEN1"."SALARY" 6.632 KB 2 rows
. . imported "PRAVEEN1"."SALGRADE" 5.875 KB 5 rows
. . imported "PRAVEEN1"."TEST" 8.992 KB 14 rows
. . imported "PRAVEEN1"."TEST_CASE" 5.562 KB 5 rows
. . imported "PRAVEEN1"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
ORA-39082: Object type ALTER_PROCEDURE:"PRAVEEN1"."F_REPORT" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "PRAVEEN1"."SCOTT1_IMP" completed with 2 error(s) at 13:46:50
Please ignore the errors in the import job (the user already existed and one procedure compiled with warnings); the data import itself was successful.
I haven't tried Data Pump export/import encryption using wallets yet; I'll discuss that in a future post.
Reuse Parameter
This option can be handy and saves you some time: you don't have to check the dump file directory for an existing file before exporting, because it lets you overwrite the dump file if it already exists.
The parameter to use is "reuse_dumpfiles=y".
Eg: C:\>expdp scott/xxxxxxxx@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1.log job_name=scott1_exp encryption=all encryption_password=hahahehe encryption_algorithm=AES256 ENCRYPTION_MODE=PASSWORD reuse_dumpfiles=y
Export: Release 11.1.0.6.0 - 64bit Production on Wednesday, 08 April, 2009 16:13:36
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SCOTT1_EXP": scott/********@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfil
umpfiles=y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT" 5.945 KB 4 rows
. . exported "SCOTT"."EMP" 8.578 KB 14 rows
. . exported "SCOTT"."PRAVEEN" 9.062 KB 14 rows
. . exported "SCOTT"."SALARY" 6.632 KB 2 rows
. . exported "SCOTT"."SALGRADE" 5.875 KB 5 rows
. . exported "SCOTT"."TEST" 8.992 KB 14 rows
. . exported "SCOTT"."TEST_CASE" 5.562 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SCOTT"."SCOTT1_EXP" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SCOTT1_EXP is:
D:\EXP_DIR\SCOTT1.DMP
Job "SCOTT"."SCOTT1_EXP" successfully completed at 16:13:54
REMAP_TABLES
This option helps if you need to rename a table during the import process.
Example
C:\>impdp praveen1/zzzzzz@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1_imp.log job_name=scott1_imp encryption_password=hahahehe remap_schema=scott:praveen1 remap_table=scott.praveen:praveen1
Data Options
This option lets you skip constraint errors during an import process.
Example:
C:\>impdp praveen1/zzzzzz@digdb directory=exp_dir dumpfile=exp_dir:scott1.dmp logfile=exp_dir:scott1_imp.log job_name=scott1_imp encryption_password=hahahehe remap_schema=scott:praveen1 data_options= SKIP_CONSTRAINT_ERRORS
Another great feature I liked in Oracle 11g Data Pump is remap_data. I'll explain this with an example in the next post.
Sunday, March 29, 2009
Data Pump
Oracle introduced Data Pump in version 10g. Data Pump enables very high-speed loading and unloading of data and metadata for Oracle databases.
Oracle has added some new features to the Data Pump utility in 11g. Since I haven't shared my experience using Data Pump yet, let me talk about some basic Data Pump features and then move on to the 11g enhancements.
It was a big relief for many DBAs and database developers when the Data Pump feature was made available in Oracle 10g. I used to work in an environment where the database size was around 20TB. An application used a schema in that database which was around 400GB. The schema data was scattered across multiple tablespaces shared by other schemas (I'm ashamed to be a DBA supporting such a database). I would get requests to refresh this schema from production, and since the data was not logically separated, I often had to use the export/import feature. This was a week-long process with some babysitting.
When we upgraded the database to Oracle 10g, I really wanted to try out the Data Pump option. The result was amazing: it was blazing fast compared to 7 days of traditional export/import. I could complete the Data Pump export/import in 10-12 hours.
I have fallen in love with Data Pump!
Oracle 10g provides two command-line tools named "expdp" and "impdp". These tools support all the functionality of the original exp and imp tools as well as some new features.
Data Pump export is a utility for unloading data and metadata into a set of operating system files called a dump file set. Data Pump import is used to load the metadata and data in an export dump file set into a target database.
The heart of a Data Pump operation is the master table, a table created in the schema of the user executing the job. The master table maintains all aspects of the job; it lets Data Pump report the status of a job and restart it in the event of an abnormal stoppage, and it is dropped when the job finishes normally.
Data Pump actually runs on the database server. You can initiate the process from a client machine and exit the client; the job still runs on the database server, and you can attach to the job and check its status at any point as long as it is running.
Let us see some examples for datapump export and import.
You need to create a logical directory where the Data Pump job can place its dump files and log files.
$> sqlplus / as sysdba
sql> CREATE DIRECTORY export_dir as '/u01/oracle/export_dir';
sql> GRANT READ, WRITE ON directory export_dir TO patent; (here patent is the name of the database user who is executing the expdp job)
Start the data pump export
Eg: to export just one table from the patent schema
nohup expdp patent/xxxxxxx@binfo directory=export_dir tables=SEQUENCE_QUALITY job_name=patent_exp dumpfile=export_dir:sequence_quality_%u.dmp logfile=export_dir:sequence_quality.log filesize=80530636800 status=5 parallel=5 &
Here filesize is the maximum size a dump file can grow to; when it reaches that size, Data Pump automatically creates a new file and writes to it.
job_name lets you attach to the job again if you disconnect from the session.
If you need to import the above export dump file into another schema (patent_test):
$> sqlplus / as sysdba
sql> CREATE DIRECTORY imp_dir as '/u01/oracle/imp_dir';
sql> GRANT READ, WRITE ON directory imp_dir TO patent_test; (here patent_test is the name of the database user who is executing the impdp job)
nohup impdp patent_test/xxxxxxx@dev directory=imp_dir job_name=patent_import dumpfile=imp_dir:sequence_quality_%u.dmp logfile=imp_dir:patentimp.log remap_schema=PATENT:PATENT_TEST status=5 remap_tablespace=seqdb_quality_data:SMS_DATA full=y &
Here the keyword remap_schema tells the job that the export was done on schema patent and the imported data has to go to schema patent_test.
The keyword remap_tablespace tells the job to move all the objects from the seqdb_quality_data tablespace in the export database to the sms_data tablespace.
If you need to skip grants, constraints and indexes in the same impdp job:
nohup impdp patent_test/********@dev directory=imp_dir job_name=patent_import dumpfile=imp_dir:sequenceS_%u.dmp logfile=imp_dir:patentimp.log remap_schema=PATENT:PATENT_TEST status=5 remap_tablespace=seqdb_sequences_data:SMS_DATA full=y exclude=GRANT,CONSTRAINT,INDEX parallel=5 &
As with exp and imp, you can still use par files here.
sample Parfile
--------------
import
------
directory=IMP_DIR
dumpfile=IMP_DIR:ecg%U.dmp
job_name=ecg_import
logfile=imp_dir:ecgimp.log
status=5
parallel=4
TABLES=ECG.TRAN_ERR
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=APPEND
REMAP_SCHEMA=ECG:ECG_TEST
Export
------
directory=EXP_DIR
dumpfile=EXP_DIR:ecg%u.dmp
job_name=ecg_export
logfile=exp_dir:ecgexp.log
filesize=10G
status=5
parallel=4
exclude=index
TABLES=TRAN_ERR,TRAN_DETAIL_STATUS,TRAN_HEADER_STATUS,TRAN_PARTY
Usage
------
$> nohup expdp patent/xxxxxxx@binfo parfile=exp.par &
$> nohup impdp patent/xxxxxxx@binfo parfile=imp.par &
How do you attach to the above impdp job from another client computer and watch its status?
$> impdp patent_test/********@dev attach=patent_import --> (name of the job)
Connected to: Oracle10i Enterprise Edition Release 10.1.0.0.0 - 64bit Beta
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
Job: patent_imp
Owner: patent_test
Operation: import
Creator Privs: FALSE
GUID: H245BD7LM2C78A6EE03400306E0F6291
Start Time: Monday, 02 March, 2009 3:11
Mode: SCHEMA
Instance: dev
MaxDegree: 1
Import Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND patent_test/******** job_name=patent_imp directory=imp_dir
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
KEEP_MASTER 0
LOG_FILE_DIRECTORY IMP_DIR
LOG_FILE_NAME patentimp.log
TABLE_CONSISTENCY 0
Job: PATENT_IMP
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u01/oracle/imp_dir/sequences_01.dmp
bytes written = 2048
Worker 1 Status:
State: EXECUTING
Import>
How do you stop/start a job?
Attach to the executing job first.
$>impdp patent_test/********@dev attach=patent_import
Connected to: Oracle10i Enterprise Edition Release 10.1.0.0.0 - 64bit Beta
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
Job: patent_imp
Owner: patent_test
Operation: import
Creator Privs: FALSE
GUID: H245BD7LM2C78A6EE03400306E0F6291
Start Time: Monday, 02 March, 2009 3:11
Mode: SCHEMA
Instance: dev
MaxDegree: 1
Import Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND patent_test/******** job_name=patent_imp directory=imp_dir
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
KEEP_MASTER 0
LOG_FILE_DIRECTORY IMP_DIR
LOG_FILE_NAME patentimp.log
TABLE_CONSISTENCY 0
Job: PATENT_IMP
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Degree: 1
Job Error Count: 0
Dump file: /u01/oracle/imp_dir/sequences_01.dmp
bytes written = 2048
Worker 1 Status:
State: EXECUTING
Import> stop_job (to stop a running Job)
Are you sure you wish to stop this job ([y]/n): y
To start a Job
Import> start_job
Import> status (to view the status of the current job)
How to terminate a Running Job
Attach to the running job first
Import> kill_job
Are you sure you wish to stop this job ([y]/n): y
$
Once the job is killed, the master table gets dropped and all client sessions are automatically detached.
You can view the status of a Data Pump job by querying the dba_datapump_jobs view.
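A minimal sketch of such a query (keeping only a few of the view's columns):
SQL> select owner_name, job_name, operation, job_mode, state
  2  from dba_datapump_jobs;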
Conclusion
Data Pump is a great addition to Oracle's tools. The best part is that it's very easy to use, and the commands are very similar to the traditional export/import tools. I'll talk about some of the 11g Data Pump enhancements in the next post.
Thursday, March 26, 2009
Oracle 11g new Features - Database Capture and Replay
Oracle 11g comes with a cool new feature to capture the workload on a database over a specified interval and replay it on a different database. This is similar to running a macro to capture all the changes on a computer screen and then playing the macro back on other computers.
Oracle uses two new packages to perform this: DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY.
In the example below I'll be performing the capture and replay on the same database. I'll set a restore point before the capture process; once the capture is done, I'll flash the database back to the restore point and replay the captured workload on the same database.
Enable flashback on the database first. This is not a requirement for capture/replay; I am doing it only so I can flash the database back later.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount exclusive
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2121800 bytes
Variable Size 2566918072 bytes
Database Buffers 1694498816 bytes
Redo Buffers 12242944 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
Elapsed: 00:00:00.10
SQL> alter database flashback on;
Database altered.
Elapsed: 00:00:01.40
SQL> alter database open;
Database altered.
Elapsed: 00:00:15.39
SQL> SELECT flashback_on, log_mode
2 FROM v$database;
FLASHBACK_ON LOG_MODE
------------------ ------------
YES ARCHIVELOG
Elapsed: 00:00:00.00
Capture Process
Let's start by creating a directory to hold the captured information.
SQL> create directory capture_dir as 'd:\database_capture';
Directory created.
Elapsed: 00:00:00.04
Here I am creating a restore point so I can flash the database back before the replay.
SQL> create restore point praveen;
Restore point created.
Elapsed: 00:00:00.09
SQL> desc DBMS_WORKLOAD_CAPTURE
PROCEDURE ADD_FILTER
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
FNAME VARCHAR2 IN
FATTRIBUTE VARCHAR2 IN
FVALUE VARCHAR2 IN
Fname - Any name for your Filter
Fattribute - The attribute to which the filter is applied.
The possible values are INSTANCE_NUMBER (NUMBER), USER (STRING), MODULE (STRING), ACTION (STRING), PROGRAM (STRING) and SERVICE (STRING).
Fvalue - The value for the attribute (wild cards are permitted)
sqlplus / as sysdba
SQL> exec DBMS_WORKLOAD_CAPTURE.ADD_FILTER ('CAPTURE_FILTER1','USER','SCOTT');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.06
Now let's start the capture process by executing the DBMS_WORKLOAD_CAPTURE.START_CAPTURE procedure.
Procedure definition
DBMS_WORKLOAD_CAPTURE.START_CAPTURE (
name IN VARCHAR2,
dir IN VARCHAR2,
duration IN NUMBER DEFAULT NULL,(in sec)
default_action IN VARCHAR2 DEFAULT 'INCLUDE',
auto_unrestrict IN BOOLEAN DEFAULT TRUE);
SQL> execute DBMS_WORKLOAD_CAPTURE.start_capture ('db_capture1','CAPTURE_DIR',NULL,'EXCLUDE',FALSE);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.75
SQL>
Here the EXCLUDE default action means: exclude the changes from all other users, except the changes from user scott as specified in the filter.
Now perform some actions in the database as user scott.
SQL> connect scott/xxxxxxx
Connected.
SQL> ED
Wrote file afiedt.buf
1 CREATE USER PRAVEEN1 IDENTIFIED BY PRAVEEN
2 DEFAULT TABLESPACE USERS
3 TEMPORARY TABLESPACE TEMP
4* QUOTA UNLIMITED ON USERS
SQL> /
User created.
SQL> CREATE TABLE SCOTT.PRAVEEN AS SELECT * FROM SCOTT.TEST;
Table created.
Elapsed: 00:00:00.12
SQL> GRANT CONNECT, DBA TO PRAVEEN1;
Grant succeeded.
Elapsed: 00:00:00.03
SQL>
Now log in as sysdba and stop the capture process
SQL> EXECUTE DBMS_WORKLOAD_CAPTURE.finish_capture;
PL/SQL procedure successfully completed.
Elapsed: 00:00:34.01
SQL>
Here are the contents of my capture_directory
D:\database_capture>dir
Volume in drive D is Data
Volume Serial Number is 9820-5C5B
Directory of D:\database_capture
03/24/2009 05:23 PM .
03/24/2009 05:23 PM ..
03/24/2009 05:18 PM 0 wcr_4mkawa80025q8.rec
03/24/2009 05:18 PM 0 wcr_4mkawac002708.rec
03/24/2009 05:19 PM 0 wcr_4mkawah0023q0.rec
03/24/2009 05:19 PM 0 wcr_4mkawaw00252c.rec
03/24/2009 05:19 PM 0 wcr_4mkawb400254h.rec
03/24/2009 05:19 PM 736 wcr_4mkawd40025nw.rec
03/24/2009 05:19 PM 0 wcr_4mkawds0025r4.rec
03/24/2009 05:19 PM 0 wcr_4mkawfn002408.rec
03/24/2009 05:19 PM 0 wcr_4mkawfn0026k0.rec
03/24/2009 05:19 PM 68 wcr_4mkawhs0024gs.rec
03/24/2009 05:20 PM 970 wcr_4mkawph00248n.rec
03/24/2009 05:20 PM 1,390 wcr_4mkawqc0027ws.rec
03/24/2009 05:20 PM 68 wcr_4mkawsc0022zc.rec
03/24/2009 05:20 PM 0 wcr_4mkawsh0027ch.rec
03/24/2009 05:21 PM 970 wcr_4mkawx40027rh.rec
03/24/2009 05:21 PM 68 wcr_4mkawzw0026fh.rec
03/24/2009 05:22 PM 0 wcr_4mkax58002330.rec
03/24/2009 05:22 PM 68 wcr_4mkax7c0023fn.rec
03/24/2009 05:22 PM 992 wcr_4mkax880022cw.rec
03/24/2009 05:23 PM 43,490 wcr_cr.html
03/24/2009 05:23 PM 22,006 wcr_cr.text
03/24/2009 05:23 PM 192 wcr_fcapture.wmd
03/24/2009 05:18 PM 128 wcr_scapture.wmd
23 File(s) 71,146 bytes
2 Dir(s) 96,016,297,984 bytes free
The data dictionary view dba_workload_captures provides information about the capture process.
SQL> col name for a15
SQL> select id, name from dba_workload_captures;
ID NAME
---------- ---------------
1 db_capture1
Elapsed: 00:00:00.01
That completes the capture process.
Replay
Here I am planning to replay on the same database on which I performed the capture. I have created a new directory named database_replay and copied all the contents from the database_capture directory; this step is optional.
SQL> create directory replay_dir as 'd:\database_replay';
Directory created.
Elapsed: 00:00:00.04
SQL>
The steps below apply only to this test scenario and are not a requirement for replay: here we shut the database down and flash it back to a point before the capture process.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 4275781632 bytes
Fixed Size 2121800 bytes
Variable Size 2566918072 bytes
Database Buffers 1694498816 bytes
Redo Buffers 12242944 bytes
Database mounted.
Flash the database back to the restore point praveen (created before the capture process started).
SQL> flashback database to restore point praveen;
Flashback complete.
Elapsed: 00:00:06.31
SQL> alter database open resetlogs;
Database altered.
Query the database to see if the user "praveen1" (created during the capture process) still exists.
SQL> select username from dba_users where username like 'PRAVEEN1';
no rows selected
Elapsed: 00:00:00.00
SQL> select count(*) from dba_tables where table_name like 'PRAVEEN' and owner like 'SCOTT';
COUNT(*)
----------
0
Three procedures under the package dbms_workload_replay are used for replay. The steps below are not optional.
PROCEDURE PROCESS_CAPTURE
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
CAPTURE_DIR VARCHAR2 IN
PROCEDURE INITIALIZE_REPLAY
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
REPLAY_NAME VARCHAR2 IN
REPLAY_DIR VARCHAR2 IN
PROCEDURE PREPARE_REPLAY
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SYNCHRONIZATION BOOLEAN IN DEFAULT
CONNECT_TIME_SCALE NUMBER IN DEFAULT
THINK_TIME_SCALE NUMBER IN DEFAULT
THINK_TIME_AUTO_CORRECT BOOLEAN IN DEFAULT
The first step is to initialize and prepare the database for replay
sqlplus / as sysdba
SQL> execute DBMS_WORKLOAD_REPLAY.process_capture('REPLAY_DIR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.98
SQL> EXECUTE DBMS_WORKLOAD_REPLAY.initialize_replay ('DATABASE_REPLAY1','REPLAY_DIR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
SQL> EXECUTE DBMS_WORKLOAD_REPLAY.prepare_replay;
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.25
Oracle 11g ships a replay client named "wrc", available in both Unix and Windows environments. The wrc client reads the captured workload and replays it.
Open a new window or terminal session and type the command at the OS prompt.
D:\>wrc userid=scott password=xxxxx replaydir='d:\database_replay' mode=replay
Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Mar 26 17:35:23 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (17:35:23)
The session will wait until you manually start the replay.
Go back to SQL*Plus in the other window and log in as sysdba.
SQL> execute DBMS_WORKLOAD_REPLAY.start_replay;
PL/SQL procedure successfully completed.
Workload Replay Client: Release 11.1.0.6.0 - Production on Thu Mar 26 17:35:23 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Wait for the replay to start (17:35:23)
Replay started (17:35:45)
Replay finished (17:39:17)
D:\>
At this point you are done with the replay.
Test your database to verify that the captured changes now exist.
-------------------------------------------------------------------------------
SQL> select username from dba_users where username like 'PRAVEEN1';
USERNAME
------------------------------
PRAVEEN1
Elapsed: 00:00:00.01
SQL>
SQL> select count(*) from SCOTT.PRAVEEN;
COUNT(*)
----------
14
Elapsed: 00:00:00.01
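If I remember correctly, the replay itself is also recorded in the dictionary; a quick sketch to confirm it (view name and columns from memory, so verify on your release):
SQL> select id, name, status from dba_workload_replays;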
This feature allows the database administrator to test database-level changes by recreating and replaying actual production workloads. You can also do the same through the OEM Grid Control GUI. I'll discuss other new features in a future post.
Wednesday, March 25, 2009
Oracle 11g New Features...
DDL Wait Option
This feature lets a DDL operation automatically wait and retry at intervals if it cannot immediately get an exclusive lock on the object.
Eg: You try to alter a table and receive a "resource busy" error (ORA-00054). As a DBA you would retry the operation manually until you succeed.
In Oracle 11g you have the ddl_lock_timeout parameter, which keeps retrying the operation at regular intervals until it succeeds or the specified time expires.
Sql> alter session set ddl_lock_timeout = 30;
This prevents the immediate resource-busy error: if the session cannot get an exclusive lock on the object, it keeps retrying the operation until it succeeds or 30 seconds have elapsed.
Eg :
Session 1. I have grabbed a lock on the table.
SQL> select * from scott.test for update;
Session 2: I am trying to drop a column from the same table.
SQL> alter table scott.test drop column testing;
alter table scott.test drop column testing
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
SQL> alter session set ddl_lock_timeout=120;
Session altered.
Elapsed: 00:00:00.00
Session 1
SQL>Rollback;
Elapsed: 00:00:00.00
Session 2
SQL> alter table scott.test drop column testing;
Table altered.
Elapsed: 00:00:12.82
SQL>
Session 2 didn't throw a resource-busy error this time. I did a rollback in session 1, and the command in session 2 completed successfully after about 12 seconds.
Virtual Columns
Oracle 11g lets you create virtual columns on tables and computes their values automatically. In previous releases you had to physically add a new column, fill in the data (or allow nulls), and recompile all the dependent objects. With virtual columns, the application team doesn't have to write new code or create triggers to populate the new column.
Let's take a look at a copy of the emp table in the scott schema:
Sql> create table scott.test as select * from scott.emp;
SQL> SELECT * FROM SCOTT.TEST;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
14 rows selected.
Here I'll create a virtual column named income_category, which classifies each employee into a low, medium or high income group based on salary.
SQL> ALTER TABLE SCOTT.TEST ADD INCOME_CATEGORY VARCHAR2(10)
2 GENERATED ALWAYS AS
3 (
4 CASE
5 WHEN SAL <=2000 THEN 'LOW'
6 WHEN SAL >2000 AND SAL <=3000 THEN 'MEDIUM'
7 WHEN SAL >3000 THEN 'HIGH'
8 END
9 ) VIRTUAL;
Table altered.
SQL> SET LINESIZE 400
SQL> SELECT * FROM SCOTT.TEST;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7369 SMITH CLERK 7902 17-DEC-80 800 20 LOW
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 LOW
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 LOW
7566 JONES MANAGER 7839 02-APR-81 2975 20 MEDIUM
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 LOW
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 MEDIUM
7782 CLARK MANAGER 7839 09-JUN-81 2450 10 MEDIUM
7788 SCOTT ANALYST 7566 19-APR-87 3000 20 MEDIUM
7839 KING PRESIDENT 17-NOV-81 5000 10 HIGH
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 LOW
7876 ADAMS CLERK 7788 23-MAY-87 1100 20 LOW
7900 JAMES CLERK 7698 03-DEC-81 950 30 LOW
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM
7934 MILLER CLERK 7782 23-JAN-82 1300 10 LOW
You can also create indexes on virtual columns, as shown below.
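For example, a minimal sketch against the table above (the index name test_income_idx is just an illustration):
SQL> create index scott.test_income_idx on scott.test (income_category);
SQL> select ename, sal, income_category from scott.test where income_category = 'HIGH';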
Invisible indexes
Indexes are created to enhance search performance, and they help in the majority of cases, but I have seen cases where the existence of an index really hurts performance. Your application may have SQL statements that use an index effectively alongside statements that perform poorly because of the same index. Oracle 11g provides an option to make an index invisible to the optimizer, so you no longer have to drop an index or mark it unusable for certain queries and then recreate it for others.
Let's take a look at the scott.test table from the previous example.
SQL> select * from dba_indexes where table_name like 'TEST' and owner like 'SCOTT';
no rows selected
Elapsed: 00:00:00.04
SQL> create index scott.test_empno on scott.test (empno);
Index created.
SQL> set linesize 300
SQL> set autotrace on
SQL> select * from scott.test where empno=7902;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 3658239601
------------------------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
------------------------------------------------------------------------------------------
0 SELECT STATEMENT 1 42 2 (0) 00:00:01
1 TABLE ACCESS BY INDEX ROWID TEST 1 42 2 (0) 00:00:01
* 2 INDEX RANGE SCAN TEST_EMPNO 1 1 (0) 00:00:01
The execution plan shows that the newly created index is used. Let's make the index invisible to the optimizer.
SQL> alter index scott.test_empno invisible;
Index altered.
Elapsed: 00:00:00.01
SQL>
SQL> select visibility from dba_indexes where owner like 'SCOTT' and INDEX_NAME LIKE 'TEST_EMPNO';
VISIBILIT
---------
INVISIBLE
Elapsed: 00:00:00.01
SQL> select * from scott.test where empno=7902;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO INCOME
---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ------
7902 FORD ANALYST 7566 03-DEC-81 3000 20 MEDIUM
Elapsed: 00:00:00.00
Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020
--------------------------------------------------------------------------
Id Operation Name Rows Bytes Cost (%CPU) Time
--------------------------------------------------------------------------
0 SELECT STATEMENT 1 42 3 (0) 00:00:01
* 1 TABLE ACCESS FULL TEST 1 42 3 (0) 00:00:01
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("EMPNO"=7902)
Statistics
----------------------------------------------------------
193 recursive calls
0 db block gets
31 consistent gets
0 physical reads
0 redo size
1070 bytes sent via SQL*Net to client
520 bytes received via SQL*Net from client
You can see that the optimizer didn't use the newly created index.
You can still make the optimizer use invisible indexes by specifying an index hint or by setting "OPTIMIZER_USE_INVISIBLE_INDEXES" to TRUE at the session or database level.
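A minimal sketch of the session-parameter approach against the invisible index created above:
SQL> alter session set optimizer_use_invisible_indexes = true;
SQL> select * from scott.test where empno = 7902;
With the parameter set to TRUE, the execution plan should go back to the INDEX RANGE SCAN on TEST_EMPNO.
To make the index visible again: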
SQL> alter index scott.test_empno visible;
Index altered.
Elapsed: 00:00:00.01
SQL> select visibility from dba_indexes where owner like 'SCOTT' and INDEX_NAME LIKE 'TEST_EMPNO';
VISIBILIT
---------
VISIBLE
Elapsed: 00:00:00.00
Case sensitive passwords
Case-sensitive passwords are a new feature in Oracle 11g. Developers using SQL Server databases are already familiar with case-sensitive passwords. Oracle 11g gives you the flexibility to turn password case sensitivity on and off.
SQL> SHOW PARAMETER SEC_CASE_SENSITIVE_LOGON
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon boolean TRUE
SQL>
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;
System altered.
SQL> create user praveen identified by Praveen;
User created.
SQL> grant create session to praveen;
Grant succeeded.
SQL> connect praveen/praveen
Connected.
SQL> connect / as sysdba
Connected.
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;
System altered.
SQL> connect praveen/praveen
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect praveen/Praveen
Connected.
SQL>
To make the passwords in the password file case-insensitive, recreate the password file using the ignorecase=y option.
$ orapwd file=orapwDB11Gb entries=100 ignorecase=y password=mypassword
-----------------------------------------------------------------------------------------------------------------------
I'll discuss more features in the next post.
------------------------------------------------------------------------------------------------------------------------
Thursday, March 19, 2009
Basic CRS and SRVCTL commands
This post provides some basic CRS and srvctl commands for an Oracle 10g RAC environment.
What is CRS? (Cluster Ready services)
CRS (Cluster Ready Services) is Oracle's clusterware; it provides many system management services and interacts with any vendor clusterware to coordinate cluster membership information.
Oracle provides several utilities to help administer and troubleshoot the RAC environment. Let's look at some of the basic commands.
COMMANDS
1) The most basic thing to check for is the existence of the CRS processes on the Unix/Linux server.
$> ps -ef | grep crs
Here is a list of required CRS processes.
Ocssd.bin, crsd.bin, evmd.bin, evmlogger.bin
2) crs_stat
This command lists all the resources registered with CRS, along with each resource's name, type and current state (online/offline).
· crs_stat lists the resources in a linear fashion; if you would like to see them in tabular form, use crs_stat -t, which is easier to read.
E.g.:-
$> crs_stat –t
oracle@amro503:fimsq11> crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....03.lsnr application ONLINE ONLINE amro503
ora....503.gsd application ONLINE ONLINE amro503
ora....503.ons application ONLINE ONLINE amro503
ora....503.vip application ONLINE ONLINE amro503
ora....05.lsnr application ONLINE ONLINE amro505
ora....505.gsd application ONLINE ONLINE amro505
ora....505.ons application ONLINE ONLINE amro505
ora....505.vip application ONLINE ONLINE amro505
ora.fimsq1.db application ONLINE ONLINE amro503
ora....11.inst application ONLINE ONLINE amro503
ora....12.inst application ONLINE ONLINE amro505
· crs_stat -v lists the status of all the resources in verbose mode. This becomes really handy during troubleshooting.
3) Crs_getperm
This command helps to identify the permissions associated with a resource.
Syntax: crs_getperm <resource_name>
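For example, using one of the resource names from the crs_stat output above (the exact options may vary by release; crs_getperm -h lists them):
$> crs_getperm ora.fimsq1.db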
4) Crs_setperm
Sets and modifies the permissions associated with a resource. This is similar to the chmod command in Unix.
5) Crsctl check
Helps you verify the health of your clusterware.
oracle@amro503:fimsq11> crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
6) Find out the version of CRS you are running
$> crsctl query crs softwareversion
E.g.:- oracle@amro503:fimsq11> crsctl query crs softwareversion
CRS software version on node [amro503] is [10.2.0.3.0]
Or
$> crsctl query crs activeversion
E.g.:- oracle@amro503:fimsq11> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]
7) Command to start and stop CRS
Usually the “root” user has permissions to start and stop CRS on the cluster.
The oracle user can be given sudo permission to stop and start CRS.
Use this command to stop CRS. It's better to shut down the database and the nodeapps before performing this action.
$> /u01/oracle/cluster/crs/bin/crsctl stop crs
Use this command to start the crs
$> /u01/oracle/cluster/crs/bin/crsctl start crs.
8) Crs_start
This command starts all the registered crs resources in the right order.
9) Crs_stop
This command stops all the registered crs resources in the right order.
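If I remember right, both commands also accept the -all flag to act on every registered resource, for example:
$> crs_start -all
$> crs_stop -all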
SRVCTL COMMANDS
10) Commands to start and stop a database
Set your oracle rdbms environment.
To start/stop the database and all its instances in RAC with one command, type at the command prompt:
$> srvctl start/stop database -d <db_name>
Or, to start/stop specific instances:
$> srvctl start/stop instance -d <db_name> -i <instance_name_list>
E.g.:- srvctl start/stop instance -d fimsp1 -i fimsp11,fimsp12
To start/stop one instance:
$> srvctl start/stop instance -d <db_name> -i <instance_name>
11) To start and stop nodeapps
This can be done from any node in the cluster.
Syntax:- $> srvctl stop/start nodeapps -n <node_name>
E.g.:- to stop/start all nodeapps on amro503/505 cluster
Amro505:oracle:fimsq11> srvctl stop/start nodeapps -n amro503 --> to stop/start all nodeapps on amro503.
Amro505:oracle:fimsq11> srvctl stop/start nodeapps -n amro505 --> to stop/start all nodeapps on amro505.
To see whether all the nodeapps are down, use the command "crs_stat -t".
12) To add a new instance to CRS
srvctl add instance -d <db_name> -i <instance_name> -n <node_name>
13) To remove a resource from CRS
srvctl remove database -d <db_name>
14) To trace srvctl issues, turn the debug flag on. The trace information is very helpful in debugging issues.
In ksh
$> export srvm_trace=true
The trace output will be large and descriptive.
oracle@:fimsp21> srvctl start nodeapps -n amro501
/fisc/oracle/cluster/crs/jdk/jre/bin/java -classpath /fisc/oracle/cluster/crs/jlib/netcfg.jar:/fisc/oracle/cluster/crs/jdk/jre/lib/rt.jar:/fisc/oracle/cluster/crs/jdk/jre/lib/i18n.jar:/fisc/oracle/cluster/crs/jlib/srvm.jar:/fisc/oracle/cluster/crs/jlib/srvmhas.jar:/fisc/oracle/cluster/crs/jlib/srvmasm.jar:/fisc/oracle/cluster/crs/srvm/jlib/srvctl.jar -DTRACING.ENABLED=true -DTRACING.LEVEL=2 oracle.ops.opsctl.OPSCTLDriver start nodeapps -n amro501
[main] [23:32:20:471] [OPSCTLDriver.setInternalDebugLevel:165] tracing is true at level 2 to file null
[main] [23:32:20:662] [OPSCTLDriver.
[main] [23:32:20:677] [CommandLineParser.parse:174] parsing cmdline args
[main] [23:32:20:678] [CommandLineParser.parse2WordCommandOptions:948] parsing 2-word cmdline
[main] [23:32:20:791] [HASContext.getInstance:197] Module init : 16
[main] [23:32:20:792] [HASContext.getInstance:221] Local Module init : 19
[main] [23:32:20:792] [HASContext.
[main] [23:32:20:810] [Library.getInstance:95] Created instance of Library.
[main] [23:32:20:811] [Library.load:195] Loading libsrvmhas10.a...
[main] [23:32:20:811] [Library.load:201] oracleHome null
[main] [23:32:20:812] [Library.load:236] Loading library srvmhas10 from path=
/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/jdk/jre/bin/classic:/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/lib32:/fisc/oracle/cluster/crs/srvm/lib32:/usr/lib
[main] [23:32:20:864] [Library.load:244] Loaded library srvmhas10 from path=
/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/jdk/jre/bin/classic:/fisc/oracle/cluster/crs/jdk/jre/bin:/fisc/oracle/cluster/crs/lib32:/fisc/oracle/cluster/crs/srvm/lib32:/usr/lib
[main] [23:32:20:886] [has.HASContextNative.Native] prsr_trace: no lsf ctx, line=Native: allocHASContext