
OIM 11gR2: Schema Backup and Restoration using Data Pump Client Utility (Doc ID 1492129.1)

Modified: 21-Aug-2013 Type: REFERENCE

In this Document

Purpose
Overview
Oracle Identity Manager 11g R2 Database Schemas
OIM 11gR2 Database Backup and Restoration
Logical Backup of OIM Schema
Restoration of OIM Schema
Remote Restoration Vs Local Restoration
What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)
Generic High Level steps in Logical Export of OIM 11g R2 Schema(s)
Generic High Level steps in Logical Import of OIM 11g R2 Schema(s)
Details
Detailed Steps for Export of OIM 11g R2 Schema(s)
Pre-Export Steps
Data Pump Export
Post-Export Steps
Detailed Steps for Restoration of OIM Schema(s)
Pre-Import Steps
Data Pump Import
Post-Import Steps
OIM 11gR2 mid-tier side JDBC Connection configuration
Data Source Configuration editing
MBean Configuration changes
ANNEXURE
FAQs
References

APPLIES TO:
Identity Manager - Version 11.1.2 and later
Information in this document applies to any platform.

PURPOSE

Overview
The Database Layer (OIM and its dependent Schemas) is an integral part of the OIM Application Infrastructure.

This document provides a detailed, step-by-step process for the Logical Backup and Restoration of the OIM Application Schema(s), focusing on the OIM 11g R2 Database Schema topology, using Oracle's Data Pump utility.

What this Technical Note covers

A) Backup of OIM 11g R2 and its dependant Schemas of a running deployment.

B) Restoration from scratch on a different1 or the same database instance.

C) The methodology depicted is not intended as a Production-level Backup, Restoration, and Recovery solution for an enterprise customer deployment.

1 - a different instance on the same or on a different physical machine.

What this Technical Note does not cover

A) Backup/Restoration methodology for the OIM 11g Upgrade environments

B) Backup/Restoration methodology for T2P scenarios.

A similar Oracle Support tech note for OIM 11gR1/PS1 Schemas (Version 11.1.1.3.0 to 11.1.1.5.0 [Release 11g R1/PS1]) is available at support.oracle.com as "OIM 11gR1: Schema Backup and Restoration using Data Pump Client Utility ( Document 1359656.1 )".

Oracle Identity Manager 11g R2 Database Schemas


From the operational perspective, the OIM 11g R2 Database Schema topology consists of the following FIVE mandatory schemas:

a) Oracle Identity Manager (OIM) Schema [OIM 11gR2]

b) Meta Data Service (MDS) Schema

c) Service Oriented Architecture Infrastructure (SOAINFRA) Schema

d) Oracle SDP Messaging (ORASDPM) Schema

e) Oracle Platform Security Services (OPSS) Schema

OIM 11gR2 Database Backup and Restoration


Logical Backup of OIM Schema
For the Logical Backup of the OIM 11g Schema(s) (and its subsequent Restoration), the recommended tool is the Oracle 11g R1/R2 Data Pump Export utility.

Restoration of OIM Schema


For the restoration of the Logical Backup (taken using the Oracle 11g/10g Data Pump Export utility), the corresponding utility, i.e. the Data Pump Import utility, must be used.

Following are the possible scenarios of restoration based on the location of restore:

a) Local restoration [Restoration in the same Database Instance post Schema drop]

b) Remote restoration [Restoration in a different Database Instance]


Data Pump is the tool of choice over the conventional export/import utility: it not only provides a platform-independent data dump, but is also faster owing to its architectural enhancements.

Data Pump command line clients are available out-of-the-box with Oracle Database Client/Server (Oracle DB 10g and onwards) with no additional licensing cost.

NOTE - Check the compatibility of the Data Pump Client with respect to the Target Database in use.

Compatibility between the DB version and the Data Pump Client:

A newer version of the Data Pump Client cannot be used to export data from a lower database version.

For example, the 11.1.0.7 Data Pump Client cannot export data from database version 10.2.0.3 or 9.2.

Remote Restoration Vs Local Restoration


Steps for restoration on Local or Remote DB Instances are almost the same, except for:

a) In Remote Restoration, schemas are imported into a different database instance (which can be on a different machine as well). This database may or may not already have OIM 11g Schemas or even any FMW Schemas created on it; depending on this, some additional steps may be required.

b) In the case of Remote DB restoration, the JDBC connection information must be modified as per the steps in the section 'OIM 11gR2 mid-tier side JDBC Connection configuration'.

What has changed for the Logical Export/Import in OIM 11g R2 Schema(s)
a) In the case of export/import of dependent schemas

1. There is a new dependent schema OPSS introduced in OIM 11gR2.


2. MDS has introduced VPD-based access policies for a few tables.
3. SOAINFRA has a new DBMS_AQ job.

b) On the OIM side in 11gR2, the Catalog and ADF features have introduced three DBMS_SCHEDULER jobs in the Database:

a) FAST_OPTIMIZE_CAT_TAGS

b) REBUILD_OPTIMIZE_CAT_TAGS

c) PURGE_ADF_BC_TXN_TABLE

Generic High Level steps in Logical Export of OIM 11g R2 Schema(s)


NOTE

Order of Export of OIM, MDS, SOAINFRA, ORASDPM, and OPSS Schemas does not matter from the OIM 11g R2 Application functionality perspective.

What to retain along with the exported data dump:

For ease of subsequent restoration, the following details should be retained for later reference along with the OIM export dump:

1. Name of the OIM/dependent Schemas


2. Default and Temporary Tablespace names of OIM and its dependent schemas, along with the names of any other tablespaces involved in OIM/other Schema objects.
3. Data Pump/Conventional Export log file

Generic High Level steps in Logical Import of OIM 11g R2 Schema(s)


NOTE
Steps 2) and 4) can be skipped if restoring on the same DB Instance (using the same schema names).

DETAILS

Detailed Steps for Export of OIM 11g R2 Schema(s)


Pre-Export Steps
1) Database Directory creation [DATA PUMP specific]

Data Pump operations require a directory object to be created; it is a logical mapping in the DB to a disk location on the Oracle DB Server machine. Ensure it has sufficient disk space for intermediate storage of all the dump files.

SQL> CREATE DIRECTORY <directory_name> AS   '<absolute path of the dir or folder on the DB m/c>';

If you use this location as your dump file location, all your log files etc. can be created here as well; use an appropriate location with sufficient disk space.

NOTE - Oracle RDBMS 10gR2 and onwards provides a default directory out of the box by the name of DATA_PUMP_DIR. The following SQL can be used to see the directory name and its corresponding server file path.

SQL> SELECT directory_name,directory_path FROM dba_directories;  [run as SYS/SYSTEM user]

2) Shutdown OIM and SOA Managed Server Instances:

i) Shutdown the OIM Weblogic Managed Server Instance

ii) Shutdown the SOA Weblogic Managed Server Instance

Either of the following can be used to shut down the managed instances:

a) EM Grid Control

b) Command line shell script

[./stopManagedWebLogic.sh <OIM/SOA Managed Server Instance Name>]
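The two shutdowns can be scripted together; a minimal sketch, assuming a domain at DOMAIN_HOME with managed servers named oim_server1 and soa_server1 (all three names are placeholders for your deployment's values):

```shell
#!/bin/sh
# Sketch: stop the OIM and SOA managed servers via the domain's
# stopManagedWebLogic.sh. DOMAIN_HOME, ADMIN_URL, and the server
# names are assumptions -- substitute your own values.
DOMAIN_HOME=${DOMAIN_HOME:-/u01/oracle/middleware/user_projects/domains/base_domain}
ADMIN_URL=${ADMIN_URL:-t3://adminhost:7001}

STOP_CMDS=""
for server in oim_server1 soa_server1; do
  cmd="$DOMAIN_HOME/bin/stopManagedWebLogic.sh $server $ADMIN_URL"
  STOP_CMDS="$STOP_CMDS $cmd"
  # Remove the leading 'echo' to actually invoke the stop script.
  echo "$cmd"
done
```

The same loop, with startManagedWebLogic.sh, restarts the servers once the restore is complete.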


3) Stop the SOA queues

Connect as SOAINFRA user:

SQL> SELECT name, enqueue_enabled, dequeue_enabled FROM user_queues WHERE queue_type = 'NORMAL_QUEUE';

To stop all the Queues: use the PL/SQL API Methods of DBMS_AQADM

Sample PL/SQL Block –

BEGIN

DBMS_AQADM.STOP_QUEUE ('B2B_BAM_QUEUE');

DBMS_AQADM.STOP_QUEUE ('EDN_OAOO_QUEUE');

DBMS_AQADM.STOP_QUEUE ('EDN_EVENT_QUEUE');

DBMS_AQADM.STOP_QUEUE ('IP_IN_QUEUE');

DBMS_AQADM.STOP_QUEUE ('IP_OUT_QUEUE');

DBMS_AQADM.STOP_QUEUE ('TASK_NOTIFICATION_Q');

END;
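The same list of queues is started again in the post-import steps, so the PL/SQL block can be generated rather than hand-written; a sketch that only prints the block (run the printed output in SQL*Plus as the SOAINFRA user):

```shell
#!/bin/sh
# Sketch: generate the DBMS_AQADM block for the standard SOAINFRA
# queues listed above. Set ACTION=START to produce the post-import
# start block instead of the pre-export stop block.
ACTION=${ACTION:-STOP}
BLOCK="BEGIN"
for q in B2B_BAM_QUEUE EDN_OAOO_QUEUE EDN_EVENT_QUEUE \
         IP_IN_QUEUE IP_OUT_QUEUE TASK_NOTIFICATION_Q; do
  BLOCK="$BLOCK
  DBMS_AQADM.${ACTION}_QUEUE ('$q');"
done
BLOCK="$BLOCK
END;"
echo "$BLOCK"
```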

4) Stop any running DBMS_SCHEDULER jobs (Catalog sync and ADF related).

Connect as OIM user:

Run the following SQLs to identify any running DBMS_SCHEDULER jobs:

SQL> SELECT job_name, session_id, running_instance, elapsed_time FROM user_scheduler_running_jobs;

In case of any running jobs, either wait for the job to complete or stop it 'gracefully' using:

BEGIN

DBMS_SCHEDULER.stop_job('REBUILD_OPTIMIZE_CAT_TAGS');

END;

BEGIN

DBMS_SCHEDULER.stop_job('FAST_OPTIMIZE_CAT_TAGS');

END;

BEGIN

DBMS_SCHEDULER.stop_job('PURGE_ADF_BC_TXN_TABLE');

END;

NOTE - If a job is not running, ORA-27366 is returned on executing the above blocks; this exception can be ignored.

5) MDS has implemented certain VPD-based access policies which cause an export Data Pump job initiated from the SYSTEM user to fail with the ORA-39181 error.

To circumvent this, grant EXEMPT ACCESS POLICY to SYSTEM from the SYS user before initiating the export Data Pump job.

As SYS user:

SQL>GRANT EXEMPT ACCESS POLICY TO SYSTEM;

NOTE

Also see Metalink Support Note for additional information: “ORA-39181:Only Partial Table Data Exported Due To Fine Grain Access Control [ Document 422480.1 ]”

6) Shutdown/Startup of DB [Optional Step]

After shutting down the OIM and SOA server instances, the SOA queues, and the OIM 11g R2 DBMS_SCHEDULER jobs, do a clean shutdown and then a startup of the OIM Database Instance; this is to eliminate any possibility of a functional inconsistency in the data being exported.
Figure 1 - Sample SQL*Plus command output for SHUTDOWN IMMEDIATE and STARTUP of Database

Data Pump Export


1) Sample command (exporting the OIM, SOA, ORASDPM, MDS, and OPSS Schemas from the SYSTEM user; edit to add the required values):

expdp system/password@<SID>

DIRECTORY=<dir name>

SCHEMAS=<OIM-Schema,SOA-Schema,MDS-Schema,ORASDPM-Schema,OPSS-Schema>

DUMPFILE=<filename.dmp> PARALLEL=4

 LOGFILE=<log file name>

JOB_NAME= <job name>

EXCLUDE=STATISTICS
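For repeatable exports, the parameters above can be written to a Data Pump parameter file first, which avoids shell-quoting problems and keeps the settings auditable. A sketch, where the directory object name, schema names, and connect string are all placeholders to be replaced:

```shell
#!/bin/sh
# Sketch: build a parameter file for the full OIM 11gR2 schema export,
# then invoke expdp with it. The DEV_* schema names, OIM_DP_DIR, and
# the connect string are placeholders for your own values.
DP_DIR=OIM_DP_DIR                      # directory object from pre-export step 1
SCHEMAS=DEV_OIM,DEV_SOAINFRA,DEV_MDS,DEV_ORASDPM,DEV_OPSS
STAMP=$(date +%Y%m%d_%H%M%S)
PARFILE=oim_export_$STAMP.par

cat > "$PARFILE" <<EOF
DIRECTORY=$DP_DIR
SCHEMAS=$SCHEMAS
DUMPFILE=oim_full_$STAMP.dmp
LOGFILE=oim_full_$STAMP.log
PARALLEL=4
EXCLUDE=STATISTICS
JOB_NAME=OIM_EXP_$STAMP
EOF

cat "$PARFILE"
# Remove the leading 'echo' to run the export for real:
echo "expdp system/password@SID PARFILE=$PARFILE"
```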

2) Sample command (exporting the SCHEMA_VERSION_REGISTRY view and its underlying table from the SYSTEM Schema):

expdp system/password@<SID>

DIRECTORY=<dir name>

SCHEMAS=SYSTEM

DUMPFILE=<filename.dmp>
INCLUDE=VIEW:"IN('SCHEMA_VERSION_REGISTRY')"
INCLUDE=TABLE:"IN('SCHEMA_VERSION_REGISTRY$')"
LOGFILE=<log file name>

JOB_NAME= <job name>

Figure 2 - Data Pump Export sample console output

Figure 3 - Data Pump Export sample console output contd.

NOTE - Verify the Data Pump Export log file for any significant errors encountered during the operation.

Post-Export Steps
After the Export completes successfully, do the following:

1) Document the name of the OIM Schema and the dependent Schemas

2) Document the default and temporary tablespace names of the OIM Schema, along with the names of any other tablespaces involved in OIM Schema objects

SQL> SELECT DISTINCT tablespace_name, owner
       FROM dba_segments
      WHERE owner IN ('<OIM Schema>','<SOA Schema>','<ORASDPM Schema>','<MDS Schema>','<OPSS Schema>');

3) Generate the database schema tablespace and user creation script (should contain the user default tablespace, temporary tablespace etc.).

Following method using DBMS_METADATA can be used to generate the tablespace/user creation script from the data dictionary directly:

SET LONG 10000

SET LINES 200

SET PAGES 400

SQL> SELECT DBMS_METADATA.GET_DDL('TABLESPACE','<TablespaceName>') FROM DUAL

Run this, replacing the tablespace names as per the output of the SQL in point 2) above.

SQL> SELECT DBMS_METADATA.GET_DDL('USER','<Schema Name>') FROM DUAL

Run this, replacing the schema user name, for each of the OIM, MDS, SOAINFRA, ORASDPM, and OPSS users.

NOTE - Copy the output of the SQLs and append ';' after each statement.

Save the output as a .sql file to run later on the destination database.

4) Capture all the System and Object Grants to the OIM, MDS, SOA Schema Users from the data dictionary using DBMS_METADATA Package.

Capturing SYSTEM GRANTS

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<OIM Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<MDS Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<SOAINFRA Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','<ORASDPM Schema Name>') FROM DUAL;

NOTE

Copy the output of the SQLs and append ‘;’ after each statement. Save the output as a ‘.sql’ file to run on the destination database.

Capturing OBJECT GRANTS

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<OIM Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<MDS Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<SOAINFRA Schema Name>') FROM DUAL;

SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','<ORASDPM Schema Name>') FROM DUAL;

NOTE

Copy the output of the SQLs and append ';' after each statement. Save the output as a '.sql' file to run on the destination database.
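The DDL extraction in steps 2) through 4) can also be driven from one generated SQL*Plus script; the sketch below only writes that script to disk (the DEV_* schema and tablespace names are placeholders for your recorded names). Setting the SQLTERMINATOR transform parameter makes DBMS_METADATA append the ';' itself, so the spooled output runs as-is:

```shell
#!/bin/sh
# Sketch: generate a SQL*Plus script that extracts the tablespace DDL,
# user DDL, and grant DDL for the OIM schema set via DBMS_METADATA.
# Run the generated file as a DBA user in SQL*Plus.
SCHEMAS="DEV_OIM DEV_SOAINFRA DEV_MDS DEV_ORASDPM DEV_OPSS"
TABLESPACES="DEV_OIM DEV_SOAINFRA"   # from the dba_segments query in step 2
OUT=gen_ddl.sql

{
  echo "SET LONG 10000 LINES 200 PAGES 400"
  # Have DBMS_METADATA append ';' to each statement it emits:
  echo "EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR',TRUE)"
  echo "SPOOL recreate_oim_users.sql"
  for ts in $TABLESPACES; do
    echo "SELECT DBMS_METADATA.GET_DDL('TABLESPACE','$ts') FROM DUAL;"
  done
  for s in $SCHEMAS; do
    echo "SELECT DBMS_METADATA.GET_DDL('USER','$s') FROM DUAL;"
    # GET_GRANTED_DDL may raise ORA-31608 if a schema has no grants of
    # a given type; such errors can be ignored.
    echo "SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT','$s') FROM DUAL;"
    echo "SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','$s') FROM DUAL;"
  done
  echo "SPOOL OFF"
} > "$OUT"
echo "Generated $OUT"
```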

Detailed Steps for Restoration of OIM Schema(s)


Pre-Import Steps

Steps 1) to 4) are required in case of restoration on a different database instance. In case of restoration into the same database, steps 1) to 4) can be skipped, and DROP USER <username> CASCADE; for each schema should suffice.

1) OIM and dependent Schemas' Tablespace creation:

In case of a fresh import of this dump (on the destination database), create the tablespaces as captured in 'Post-Export Steps -> Step 2'. This is where the OIM Schema export dump is to be restored:

SQL> CREATE TABLESPACE <tablespace name> DATAFILE '<file path>/<datafile>.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M;

The datafile path can be fetched as follows:

As SYS user: SQL> SELECT * FROM dba_data_files;

FILE_NAME column will give the existing data files' location for the current database instance.

 Create the data and temp tablespaces for all the schemas i.e. OIM, MDS, SOA, ORASDPM and OPSS.
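If the tablespace DDL captured during the export is not at hand, equivalent CREATE TABLESPACE statements can be generated in bulk; a sketch with placeholder tablespace names and datafile path:

```shell
#!/bin/sh
# Sketch: emit CREATE TABLESPACE statements for the OIM schema set.
# The DEV_* tablespace names and the datafile path are placeholders --
# use the names recorded in the post-export steps and a path taken
# from dba_data_files on the target instance.
DATAFILE_DIR=/u01/app/oracle/oradata/ORCL
OUT=create_tablespaces.sql

: > "$OUT"
for ts in DEV_OIM DEV_SOAINFRA DEV_MDS DEV_ORASDPM DEV_OPSS; do
  echo "CREATE TABLESPACE $ts DATAFILE '$DATAFILE_DIR/${ts}01.dbf' SIZE 100M AUTOEXTEND ON NEXT 10M;" >> "$OUT"
done
echo "Wrote $OUT"
```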

2) Check for a count of 2 for each of the following SQLs on the target database where the OIM Schema export dump is to be restored. If the count is less than 2, perform step 3); otherwise skip it and go to step 4).

-- To verify the creation of the DBMS_SHARED_POOL Package

SQL> SELECT COUNT(*)

FROM dba_objects

WHERE owner = 'SYS'

AND object_name = 'DBMS_SHARED_POOL'

AND object_type IN ('PACKAGE','PACKAGE BODY')

-- To verify the creation of the v$xatrans$ and v$pending_xatrans$ views

SELECT COUNT(*)

FROM dba_objects

WHERE owner = 'SYS'

AND object_name like '%XATRANS%'

3) Create the required generic database objects like DBMS_SHARED_POOL/xatrans view etc.

To create the DBMS_SHARED_POOL and xatrans view, run the following script from the RDBMS location:

a) For DBMS_SHARED_POOL creation:

SQL> @<$ORACLE_HOME>/rdbms/admin/dbmspool

SQL> @<$ORACLE_HOME>/rdbms/admin/prvtpool.plb

b) For xatrans$ views creation:

SQL> @<$ORACLE_HOME>/rdbms/admin/xaview

Replace <$ORACLE_HOME> with the corresponding Oracle Home path.

4) Create the Schema Users with the same default data and temp tablespaces, along with the system and object grants, as per the instructions in 'Post-Export Steps', Steps 3) and 4).

Data Pump Import


1) The data dump (generated using the Data Pump Export) can be imported using the following sample command:

impdp system/password@<SID>

DIRECTORY=<dir name>

SCHEMAS=<OIM-Schema,SOA-Schema,MDS-Schema,ORASDPM-Schema,OPSS-Schema>

DUMPFILE=<filename.dmp> PARALLEL=4

LOGFILE=<log file name>

JOB_NAME=<job name>
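As with the export, the import parameters can be kept in a parameter file; a sketch, where the directory object, schema names, and file names are placeholders that must match those used for the export:

```shell
#!/bin/sh
# Sketch: parameter file for the Data Pump import of the OIM schema set.
# OIM_DP_DIR, the DEV_* schema names, and the dump/log file names are
# placeholders -- they must match the export.
PARFILE=oim_import.par
cat > "$PARFILE" <<EOF
DIRECTORY=OIM_DP_DIR
SCHEMAS=DEV_OIM,DEV_SOAINFRA,DEV_MDS,DEV_ORASDPM,DEV_OPSS
DUMPFILE=oim_full.dmp
LOGFILE=oim_import.log
PARALLEL=4
JOB_NAME=OIM_IMP
EOF
# Remove the leading 'echo' to run the import for real:
echo "impdp system/password@SID PARFILE=$PARFILE"
```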
2) Import the SCHEMA_VERSION_REGISTRY view in the SYSTEM Schema

Run the sample import data pump command:

impdp system/password@<SID>

DIRECTORY=<dir name>

SCHEMAS=SYSTEM

DUMPFILE=<filename.dmp>

LOGFILE=<log file name>

JOB_NAME= <job name>

TABLE_EXISTS_ACTION=APPEND

After a successful import, do the following:

Connect as SYSTEM user:

SQL> CREATE PUBLIC SYNONYM schema_version_registry FOR system.schema_version_registry;

3) IGNORE the following type of errors if any:

a) Procedure/Package/Function/Trigger compilation warnings

b) DBMS_AQ errors if any

Post-Import Steps
1) Change all the Schema Passwords (preferably to the old passwords)

2) Compile INVALID Schema Objects

To identify INVALID Schema Objects:

SELECT owner,object_type,object_name, status

FROM   dba_objects
WHERE  status = 'INVALID'

AND owner IN ('<Schema Name1>','<Schema Name2>',...)

ORDER BY owner, object_type, object_name;

To compile INVALID Schema Objects:

Any appropriate method can be used; one sample method using UTL_RECOMP is shown below:

Execute the block for each of the affected Schemas:

BEGIN

 UTL_RECOMP.recomp_serial('<Schema Name>');

END;

NOTE

Rerun the above block for all Schemas to make sure no objects remain INVALID.
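The recompilation block can be emitted once per schema instead of being retyped; a sketch that only prints the calls (the DEV_* names are placeholders; run the output as a DBA user in SQL*Plus):

```shell
#!/bin/sh
# Sketch: emit one UTL_RECOMP call per schema in the OIM set.
# The DEV_* schema names are placeholders.
OUT=recompile_all.sql
: > "$OUT"
for s in DEV_OIM DEV_SOAINFRA DEV_MDS DEV_ORASDPM DEV_OPSS; do
  printf "EXEC UTL_RECOMP.RECOMP_SERIAL('%s')\n" "$s" >> "$OUT"
done
cat "$OUT"
```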

3) Start the SOAINFRA DBMS Queues:

Connect as SOAINFRA user:

SQL> SELECT name, enqueue_enabled, dequeue_enabled FROM user_queues WHERE queue_type = 'NORMAL_QUEUE';

To start all the Queues: use the PL/SQL API Methods of DBMS_AQADM

Sample PL/SQL Block –

BEGIN

DBMS_AQADM.START_QUEUE ('B2B_BAM_QUEUE');

DBMS_AQADM.START_QUEUE ('EDN_OAOO_QUEUE');

DBMS_AQADM.START_QUEUE ('EDN_EVENT_QUEUE');

DBMS_AQADM.START_QUEUE ('IP_IN_QUEUE');

DBMS_AQADM.START_QUEUE ('IP_OUT_QUEUE');

DBMS_AQADM.START_QUEUE ('TASK_NOTIFICATION_Q');

END;

4) Collect the DB Schema stats [optional]: Refer to Oracle® Fusion Middleware Performance and Tuning Guide 11g Release 2 (11.1.2) (26 Oracle Identity Manager Performance
Tuning) [ http://docs.oracle.com/cd/E27559_01/doc.1112/e28552/oim.htm#autoId17 ]
OIM 11gR2 mid-tier side JDBC Connection configuration

Data Source Configuration editing

1) Login to the Oracle FMW EM:

http://hostname:weblogic_adminconsole_portno/em

2) Go to Weblogic Domain -> base_domain -> JDBC Data Sources:


3) Edit ALL the above Data Sources with the new connection information.
4) Go to Security -> Credentials

5) Edit the OIMSchemaPassword attribute in case there is a change in the OIM Schema password

MBean Configuration changes

6) Go to the System MBean Browser

7) Go to System MBean Browser -> Configuration MBeans -> Security -> myrealmOIMAuthenticationProvider -> Attributes

8) Edit the DBUrl/DBUser attributes with the new connection-related information

9) Go to System MBean Browser -> Application Defined MBeans -> oracle.iam -> Server:oim_server1 -> XML Config -> DirectDB

10) Go to the Attributes tab and look for Url and UserName

11) Edit the values with the new connection information
