Document 1492129
Modified: 21-Aug-2013 Type: REFERENCE
In this Document
Purpose
Overview
Oracle Identity Manager 11g R2 Database Schemas
OIM 11gR2 Database Backup and Restoration
Logical Backup of OIM Schema
Restoration of OIM Schema
Remote Restoration Vs Local Restoration
What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)
Generic High Level steps in Logical Export of OIM 11g R2 Schema(s)
Generic High Level steps in Logical Import of OIM 11g R2 Schema(s)
Details
Detailed Steps for Export of OIM 11g R2 Schema(s)
Pre-Export Steps
Data Pump Export
Post-Export Steps
Detailed Steps for Restoration of OIM Schema(s)
Pre-Import Steps
Data Pump Import
Post-Import Steps
OIM 11gR2 mid-tier side JDBC Connection configuration
Data Source Configuration editing
MBean Configuration changes
ANNEXURE
FAQs
References
APPLIES TO:
Identity Manager - Version 11.1.2 and later
Information in this document applies to any platform.
PURPOSE
Overview
The Database Layer (OIM and its dependent Schemas) is an integral part of the OIM Application Infrastructure.
This document provides a detailed, step-by-step process for the logical backup and restoration of the OIM Application Schema(s), particularly focusing on the OIM 11g R2
Database Schema Topology, using Oracle's Data Pump utility.
The methodology depicted is not intended as a production-level Backup, Restoration, and Recovery solution for an enterprise customer deployment.
A similar Oracle Support tech note for the OIM 11gR1/PS1 Schemas (Version 11.1.1.3.0 to 11.1.1.5.0 [Release 11g R1/PS1]) is available at support.oracle.com as “OIM 11gR1:
Schema Backup and Restoration using Data Pump Client Utility ( Document 1359656.1 )”.
Following are the possible scenarios of restoration based on the location of the restore:
a) Local restoration [restoration in the same Database Instance after a Schema drop]
b) Remote restoration [restoration in a different Database Instance, possibly on a different machine]
Data Pump command line clients are available out-of-the-box with Oracle Database Client/Server (Oracle DB 10g and onwards) with no additional licensing cost.
NOTE – Check the compatibility of the Data Pump Client with respect to the Target Database in use.
A newer version of the Data Pump Client cannot be used to export data from a lower database version.
For example, to export data from database version 10.2.0.3 or 9.2, the 11.1.0.7 Data Pump Client would not suffice.
a) In Remote Restoration, schemas are imported into a different Database Instance (which can be on a different machine as well). This Database may or may not have any prior OIM 11g
Schemas, or even any FMW Schema, created on it; depending on this, some additional steps may be required.
b) In the case of Remote DB restoration, the JDBC connection information must be modified as per the steps in the section ‘OIM 11gR2 mid-tier side JDBC Connection configuration’.
What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)
a) In the case of export/import of dependent schemas
b) On the OIM side in 11gR2, the Catalog and ADF features have introduced three DBMS_SCHEDULER jobs in the Database:
a) FAST_OPTIMIZE_CAT_TAGS
b) REBUILD_OPTIMIZE_CAT_TAGS
c) PURGE_ADF_BC_TXN_TABLE
The order of export of the OIM, MDS, SOAINFRA, ORASDPM, and OPSS Schemas does not matter from the OIM 11g R2 Application functionality perspective.
For ease of subsequent restoration, the following details are to be retained for later reference along with the OIM export dump:
DETAILS
Data Pump operations require a directory object to be created; it is a logical mapping in the DB to a disk location on the Oracle DB Server machine. Ensure it has sufficient disk space for
intermediate storage of all the dump files.
SQL> CREATE DIRECTORY <directory_name> AS '<absolute path of the dir or folder on the DB machine>';
If you take this location as your dump file location, all your log files etc. can be created here; use an appropriate location with sufficient disk space.
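For illustration, a minimal sketch with assumed values (the directory name oim_dp_dir and the path are placeholders, not fixed values):
SQL> CREATE DIRECTORY oim_dp_dir AS '/u01/app/oracle/dp_exports';
SQL> GRANT READ, WRITE ON DIRECTORY oim_dp_dir TO system;
(The explicit GRANT may be unnecessary for a privileged user such as SYSTEM; it is shown for completeness.)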
NOTE – Oracle RDBMS 10gR2 and onwards provides a default directory out of the box by the name of DATA_PUMP_DIR. The following SQL can be used to see each directory name and its
corresponding server file path.
SQL> SELECT directory_name,directory_path FROM dba_directories; [run as SYS/SYSTEM user]
(Data Pump jobs can also be initiated and monitored via EM Grid Control, in addition to the command-line clients.)
3) Stop all the SOA Queues.
To identify the Queues, connect as SOAINFRA user:
SQL> SELECT name, enqueue_enabled, dequeue_enabled FROM user_queues WHERE queue_type = 'NORMAL_QUEUE';
To stop all the Queues, use the PL/SQL API methods of DBMS_AQADM.
Sample PL/SQL Block –
BEGIN
DBMS_AQADM.STOP_QUEUE ('B2B_BAM_QUEUE');
DBMS_AQADM.STOP_QUEUE ('EDN_OAOO_QUEUE');
DBMS_AQADM.STOP_QUEUE ('EDN_EVENT_QUEUE');
DBMS_AQADM.STOP_QUEUE ('IP_IN_QUEUE');
DBMS_AQADM.STOP_QUEUE ('IP_OUT_QUEUE');
DBMS_AQADM.STOP_QUEUE ('TASK_NOTIFICATION_Q');
END;
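Alternatively, instead of naming each Queue, a sketch that stops every NORMAL_QUEUE owned by the SOAINFRA user (derived from the USER_QUEUES query above):
BEGIN
  -- stop enqueue and dequeue on every normal (non-exception) queue
  FOR q IN (SELECT name FROM user_queues WHERE queue_type = 'NORMAL_QUEUE') LOOP
    DBMS_AQADM.STOP_QUEUE(queue_name => q.name);
  END LOOP;
END;
/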
4) Stop any running DBMS_SCHEDULER jobs (Catalog sync and ADF related).
Connect as OIM user:
Run the following SQLs to identify any running DBMS_SCHEDULER jobs:
SQL> SELECT job_name, session_id, running_instance, elapsed_time FROM user_scheduler_running_jobs;
If any job is running, either wait until its completion or stop the job gracefully using:
BEGIN
DBMS_SCHEDULER.stop_job('REBUILD_OPTIMIZE_CAT_TAGS');
END;
BEGIN
DBMS_SCHEDULER.stop_job('FAST_OPTIMIZE_CAT_TAGS');
END;
BEGIN
DBMS_SCHEDULER.stop_job('PURGE_ADF_BC_TXN_TABLE');
END;
NOTE – If the job is not running, ORA-27366 is returned on executing the above blocks; this exception can be ignored.
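If preferred, a sketch that stops only the jobs that are actually running, sidestepping ORA-27366 entirely (run as the OIM user):
BEGIN
  -- USER_SCHEDULER_RUNNING_JOBS lists only jobs currently executing
  FOR j IN (SELECT job_name FROM user_scheduler_running_jobs) LOOP
    DBMS_SCHEDULER.stop_job(j.job_name);
  END LOOP;
END;
/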
5) MDS has implemented certain VPD-based access policies which cause an export Data Pump job initiated from the SYSTEM user to fail with the ORA-39181 error.
To circumvent this, grant EXEMPT ACCESS POLICY to SYSTEM from the SYS user before initiating the export Data Pump job.
As SYS user:
SQL>GRANT EXEMPT ACCESS POLICY TO SYSTEM;
NOTE – Also see the following support note for additional information: “ORA-39181: Only Partial Table Data Exported Due To Fine Grain Access Control ( Document 422480.1 )”
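To confirm the grant is in place before starting the export, a quick check (run as SYS or another user with access to the DBA views):
SQL> SELECT grantee, privilege FROM dba_sys_privs WHERE grantee = 'SYSTEM' AND privilege = 'EXEMPT ACCESS POLICY';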
6) After shutting down the OIM and SOA server instances, the SOA Queues, and the OIM 11g R2 DBMS_SCHEDULER jobs, do a clean shutdown and then startup of the OIM Database Instance. This is to
eliminate any possibility of functional inconsistency in the data being exported.
Figure 1 – Sample SQL*Plus command output for SHUTDOWN IMMEDIATE and STARTUP of the Database
Data Pump Export
1) Sample command (exporting the OIM and dependent Schemas):
expdp system/password@<SID>
DIRECTORY=<dir name>
SCHEMAS=<OIMSchema,SOASchema,MDSSchema,ORASDPMSchema,OPSSSchema>
DUMPFILE=<filename.dmp> PARALLEL=4
LOGFILE=<log file name>
JOB_NAME= <job name>
EXCLUDE=STATISTICS
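For illustration only, the same command with hypothetical values filled in (the DEV_-prefixed schema names, SID, directory, and file names are assumptions, not values from your environment); the %U substitution lets PARALLEL=4 write multiple dump files:
expdp system/password@ORCL DIRECTORY=oim_dp_dir SCHEMAS=DEV_OIM,DEV_SOAINFRA,DEV_MDS,DEV_ORASDPM,DEV_OPSS DUMPFILE=oim11gr2_%U.dmp PARALLEL=4 LOGFILE=oim11gr2_exp.log JOB_NAME=oim11gr2_exp EXCLUDE=STATISTICS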
2) Sample command (exporting the SCHEMA_VERSION_REGISTRY view and its underlying table from the SYSTEM Schema):
expdp system/password@<SID>
DIRECTORY=<dir name>
SCHEMAS=SYSTEM
DUMPFILE=<filename.dmp>
INCLUDE=VIEW:"IN('SCHEMA_VERSION_REGISTRY')"
INCLUDE=TABLE:"IN('SCHEMA_VERSION_REGISTRY$')"
LOGFILE=<log file name>
JOB_NAME= <job name>
NOTE – Verify the Data Pump Export log file for any significant errors encountered during the operation.
Post-Export Steps
After the Export completes successfully, do the following:
1) Document the name of the OIM Schema and the dependent Schemas
2) Document the default and temporary tablespace names of the OIM Schema, along with the names of any other tablespaces involved in OIM Schema objects:
SQL> SELECT DISTINCT tablespace_name, owner
FROM dba_segments
WHERE owner IN ('<OIM Schema>','<SOA Schema>','<ORASDPM Schema>','<MDS Schema>','<OPSS Schema>');
3) Generate the database schema tablespace and user creation script (should contain the user default tablespace, temporary tablespace etc.).
The following method using DBMS_METADATA can be used to generate the tablespace/user creation scripts directly from the data dictionary:
SET LONG 10000
SET LINES 200
SET PAGES 400
SQL> SELECT DBMS_METADATA.GET_DDL('TABLESPACE','<TablespaceName>') FROM DUAL;
Run this, replacing the tablespace names as per the output of the SQL in step 2) above.
SQL> SELECT DBMS_METADATA.GET_DDL('USER','<Schema Name>') FROM DUAL;
Run this, replacing the Schema user names of OIM, MDS, SOAINFRA, ORASDPM, and OPSS.
NOTE – Copy the output of the SQLs and append ';' after each statement.
Save the output as a '.sql' file to run later on the destination database.
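As an alternative to appending the ';' by hand, the DBMS_METADATA SQLTERMINATOR transform can emit the terminator automatically; a sketch (run in the same SQL*Plus session before the GET_DDL calls):
SQL> EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
Combined with SPOOL <file>.sql ... SPOOL OFF, the output can then be saved directly as a runnable script.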
4) Capture all the System and Object Grants to the OIM, MDS, and SOA Schema users from the data dictionary using the DBMS_METADATA package.
Capturing SYSTEM GRANTS:
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<OIM Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<MDS Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<SOAINFRA Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<ORASDPM Schema Name>') FROM DUAL;
NOTE – Copy the output of the SQLs and append ';' after each statement. Save the output as a '.sql' file to run on the destination database.
Capturing OBJECT GRANTS:
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<OIM Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<MDS Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<SOAINFRA Schema Name>') FROM DUAL;
SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<ORASDPM Schema Name>') FROM DUAL;
NOTE – Copy the output of the SQLs and append ';' after each statement. Save the output as a '.sql' file to run on the destination database.
Steps 1) to 4) are required in case of restoration on a different database instance. In case of restoration into the same database, steps 1) to 4) can be ignored
and DROP USER <username> CASCADE; should suffice.
Detailed Steps for Restoration of OIM Schema(s)
Pre-Import Steps
1) In case of a fresh import of this dump (on the destination database), create the tablespaces as captured in ‘Post-Export Steps -> Step 2’. This is where the OIM Schema export
dump is to be restored:
SQL> CREATE TABLESPACE <tablespace name> DATAFILE '<file path>/<datafile>.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M;
The datafile path can be fetched as follows, as SYS user:
SQL> SELECT file_name FROM dba_data_files;
The FILE_NAME column gives the existing data files' location for the current database instance.
Create the data and temp tablespaces for all the schemas i.e. OIM, MDS, SOA, ORASDPM and OPSS.
2) Check for a count of 2 for each of the following SQLs on the target database where the OIM Schema export dump is to be restored. If the count is less than 2, perform step 3);
otherwise skip it and go to step 4).
To verify the creation of the DBMS_SHARED_POOL package:
SQL> SELECT COUNT(*)
FROM dba_objects
WHERE owner = 'SYS'
AND object_name = 'DBMS_SHARED_POOL'
AND object_type IN ('PACKAGE','PACKAGE BODY');
To verify the creation of the v$xatrans$ and v$pending_xatrans$ views:
SQL> SELECT COUNT(*)
FROM dba_objects
WHERE owner = 'SYS'
AND object_name LIKE '%XATRANS%';
3) Create the required generic database objects, such as the DBMS_SHARED_POOL package and the xatrans views.
To create the DBMS_SHARED_POOL package and the xatrans views, run the following scripts from the RDBMS location:
a) For DBMS_SHARED_POOL creation:
SQL> @<$ORACLE_HOME>/rdbms/admin/dbmspool
SQL> @<$ORACLE_HOME>/rdbms/admin/prvtpool.plb
b) For xatrans$ views creation:
SQL> @<$ORACLE_HOME>/rdbms/admin/xaview
Replace <$ORACLE_HOME> with the corresponding Oracle home path.
4) Create the Schema users with the same default and temporary tablespaces, along with the System and Object Grants, as per the instructions in ‘Post-Export Steps’, steps 3) and 4).
Data Pump Import
1) Import the OIM and dependent Schemas.
Run the sample import Data Pump command:
impdp system/password@<SID>
DIRECTORY=<dir name>
SCHEMAS=<OIMSchema,SOASchema,MDSSchema,ORASDPMSchema,OPSSSchema>
DUMPFILE=<filename.dmp> PARALLEL=4
LOGFILE=<log file name>
JOB_NAME= <job name>
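For illustration only, the import command with the same hypothetical values used in the export example:
impdp system/password@ORCL DIRECTORY=oim_dp_dir SCHEMAS=DEV_OIM,DEV_SOAINFRA,DEV_MDS,DEV_ORASDPM,DEV_OPSS DUMPFILE=oim11gr2_%U.dmp PARALLEL=4 LOGFILE=oim11gr2_imp.log JOB_NAME=oim11gr2_imp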
2) Import the SCHEMA_VERSION_REGISTRY view into the SYSTEM Schema.
Run the sample import data pump command:
impdp system/password@<SID>
DIRECTORY=<dir name>
SCHEMAS=SYSTEM
DUMPFILE=<filename.dmp>
LOGFILE=<log file name>
JOB_NAME= <job name>
TABLE_EXISTS_ACTION=APPEND
After a successful import, do the following:
Connect as SYSTEM user:
SQL> CREATE PUBLIC SYNONYM schema_version_registry FOR system.schema_version_registry;
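Optionally, verify that the synonym resolves and the registry lists the expected components:
SQL> SELECT * FROM schema_version_registry;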
Post-Import Steps
1) Change all the Schema passwords (preferably back to the old passwords).
2) Identify and compile INVALID Schema objects.
To identify INVALID Schema objects:
SELECT owner,object_type,object_name, status
FROM dba_objects
WHERE status = 'INVALID'
AND owner IN ('<Schema Name1>','<Schema Name2>', ...)
ORDER BY owner, object_type, object_name;
To compile INVALID Schema Objects:
Any appropriate method can be used; one sample method using UTL_RECOMP is shown below.
Execute the block for each of the affected Schemas:
BEGIN
UTL_RECOMP.recomp_serial('<Schema Name>');
END;
NOTE – Rerun the above block for all the Schemas to make sure no objects remain INVALID.
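For convenience, a sketch that recompiles all five Schemas in one block (run as SYS; the DEV_-prefixed names are placeholders for the actual Schema names):
BEGIN
  -- recompile each restored schema serially
  UTL_RECOMP.recomp_serial('DEV_OIM');
  UTL_RECOMP.recomp_serial('DEV_SOAINFRA');
  UTL_RECOMP.recomp_serial('DEV_MDS');
  UTL_RECOMP.recomp_serial('DEV_ORASDPM');
  UTL_RECOMP.recomp_serial('DEV_OPSS');
END;
/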
3) Start all the SOA Queues.
Connect as SOAINFRA user:
SQL> SELECT name, enqueue_enabled, dequeue_enabled FROM user_queues WHERE queue_type = 'NORMAL_QUEUE';
To start all the Queues, use the PL/SQL API methods of DBMS_AQADM.
Sample PL/SQL Block –
BEGIN
DBMS_AQADM.START_QUEUE ('B2B_BAM_QUEUE');
DBMS_AQADM.START_QUEUE ('EDN_OAOO_QUEUE');
DBMS_AQADM.START_QUEUE ('EDN_EVENT_QUEUE');
DBMS_AQADM.START_QUEUE ('IP_IN_QUEUE');
DBMS_AQADM.START_QUEUE ('IP_OUT_QUEUE');
DBMS_AQADM.START_QUEUE ('TASK_NOTIFICATION_Q');
END;
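As in the Pre-Export steps, a sketch that starts every NORMAL_QUEUE dynamically, substituting START_QUEUE for STOP_QUEUE (run as the SOAINFRA user):
BEGIN
  FOR q IN (SELECT name FROM user_queues WHERE queue_type = 'NORMAL_QUEUE') LOOP
    DBMS_AQADM.START_QUEUE(queue_name => q.name);
  END LOOP;
END;
/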
4) Collect the DB Schema stats [optional]: Refer to Oracle® Fusion Middleware Performance and Tuning Guide 11g Release 2 (11.1.2) (26 Oracle Identity Manager Performance
Tuning) [ http://docs.oracle.com/cd/E27559_01/doc.1112/e28552/oim.htm#autoId17 ]
OIM 11gR2 mid-tier side JDBC Connection configuration
Data Source Configuration editing
http://hostname:weblogic_adminconsole_portno/em
5) Edit the OIMSchemaPassword attribute in case there is a change in the OIM Schema password
MBean Configuration changes
7) Go to the System MBean Browser -> Configuration MBeans -> Security -> myrealmOIMAuthenticationProvider -> Attributes