Known Issues for Cross Platform Transportable Tablespaces (XTTS): Document 2311677.1
Consider using the new release of this procedure, version 4. This version has drastically simplified the steps and procedure.
Before proceeding, review:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup Note 2471245.1
When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems that have different endian formats, the
amount of downtime required can be substantial because it is directly proportional to the size of the data set being moved. However,
combining XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime required to move data between
platforms.
With the traditional cross platform transportable tablespace procedure, the data being transported must be made read only at the very
start of the procedure, so the application that owns the data is effectively unavailable to users for the entire duration of the procedure.
Because the steps are serial, the downtime required is proportional to the amount of data: if the data set is large, datafile transfer and
convert times can be long, and thus downtime can be long.
To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental
backups, to work in a cross-platform scenario. By using a series of incremental backups, each smaller than the last, the data at the
destination system can be brought almost current with the source system, before any downtime is required. The downtime required for
datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block
changes in the source system.
The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as
metadata export and import. Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross
Platform Incremental Backup since migration time is typically dominated by metadata operations, not datafile transfer and conversion.
Only those database objects that are physically located in the tablespaces being transported will be copied to the destination
system. If you need to transport other objects that are located in different tablespaces (for example, PL/SQL objects, sequences,
and so on, located in the SYSTEM tablespace), you can use Data Pump to copy those objects to the destination system.
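As one illustration, such objects can be exported with a metadata-only Data Pump job. The following parameter file is a sketch only; the schema, object types, and file names are placeholders, not taken from this note:

```
# expdp parameter file: metadata-only export of objects that live outside
# the transported tablespaces (PL/SQL objects, sequences, etc.).
# All names below are illustrative.
content=metadata_only
schemas=APP_OWNER
include=PROCEDURE,FUNCTION,PACKAGE,SEQUENCE
directory=DATA_PUMP_DIR
dumpfile=app_metadata.dmp
logfile=app_metadata.log
```

The resulting dump file would then be imported into the destination database with impdp after the tablespace transport completes.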
The high-level steps using the cross platform incremental backup capability are the following:
1. Prepare phase (source data remains online) - datafile copies of the tablespaces being transported are transferred to the destination
system and converted.
2. Roll Forward phase (source data remains online) - repeat this phase as many times as necessary to catch destination datafile
copies up to the source database.
3. Transport phase (source data READ ONLY) - a final incremental backup is created and applied, and the tablespace metadata is
exported from the source database and imported into the destination database.
NOTE: In Version 3, if a datafile is added to the tablespace OR a new tablespace name is added to the xtt.properties file, a warning and
additional instructions will be required.
The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to
reduce downtime when transporting tablespaces across platforms.
The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met.
If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data
Guard. See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and
Oracle Linux.
This method can also be used with 12c databases; however, for an alternative method for 12c, see:
Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.
NOTE: Neither method supports 12c multitenant databases. Enhancement bug 22570430 addresses this limitation.
This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to
an Oracle Exadata Database Machine running Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce
downtime.
After performing the Initial Setup phase, moving the data is performed in the following three phases:
Prepare phase
During the Prepare phase, datafile copies of the tablespaces to be transported are transferred to the destination system and
converted. The application being migrated is fully accessible during the Prepare phase. The Prepare phase can be performed
using RMAN backups or dbms_file_transfer. Refer to the Selecting the Prepare Phase Method section for details about
choosing the Prepare phase method.
Roll Forward phase
During the Roll Forward phase, incremental backups are created from the source database, transferred to the destination
system, converted, and applied to the destination datafile copies to roll them forward. This phase may be repeated as many
times as necessary to keep the destination datafile copies nearly current with the source database. The application being
migrated is fully accessible during the Roll Forward phase.
Transport phase
During the Transport phase, the tablespaces being transported are put into READ ONLY mode, and a final incremental backup
is taken from the source database and applied to the datafile copies on the destination system, making the destination datafile
copies consistent with source database. Once the datafiles are consistent, the tablespaces are TTS-exported from the source
database and TTS-imported into the destination database. Finally, the tablespaces are made READ WRITE for full access on
the destination database. The application being migrated cannot receive any updates during the Transport phase.
The Cross Platform Incremental Backup core functionality is delivered in Oracle Database 11.2.0.4 and later. See the Requirements and
Recommendations section for details. In addition, a set of supporting scripts attached to this document is used to manage the procedure
required to perform XTTS with Cross Platform Incremental Backup. The two primary supporting script files are the following:
Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup
procedure.
Parameter file xtt.properties - the file that contains your site-specific configuration.
Prerequisites
Selecting the Prepare Phase Method
Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance
If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document
can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle
Database 11g.
During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the
xttdriver.pl script. There are two possible methods:
The dbms_file_transfer method uses the dbms_file_transfer.get_file() subprogram to transfer the datafiles from the source system to the
target system over a database link. The dbms_file_transfer method has the following advantages over the RMAN method: 1) it does not
require staging area space on either the source or destination system; 2) datafile conversion occurs automatically during transfer - there is
not a separate conversion step. The dbms_file_transfer method requires the following:
A destination database running 11.2.0.4. Note that an incremental convert home or instance does not participate in dbms_file_transfer
file transfers.
A database directory object in the source database from where the datafiles are copied.
A database directory object in the destination database to where the datafiles are placed.
A database link in the destination database referencing the source database.
The RMAN backup method runs RMAN on the source system to create backups on the source system of the datafiles to be transported.
The backups files must then be manually transferred over the network to the destination system. On the destination system the datafiles are
converted by RMAN, if necessary. The output of the RMAN conversion places the datafiles in their final location where they will be used by
the destination database. In the original version of xttdriver.pl, this was the only method supported. The RMAN backup method requires the
following:
Staging areas are required on both the source and destination systems for the datafile copies created by RMAN. The staging areas
are referenced in the xtt.properties file using the parameters dfcopydir and stageondest. The final destination where converted
datafiles are placed is referenced in the xtt.properties file using the parameter storageondest. Refer to the Description of Parameters
in Configuration File xtt.properties section for details and sizing guidelines.
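As a rough sizing aid for these staging areas, the datafile sizes of the tablespaces being transported can be summed in the source database. This is a query sketch only (TS1 and TS2 are the example tablespaces used throughout this document):

```sql
select sum(bytes)/1024/1024/1024 as size_gb
  from dba_data_files
 where tablespace_name in ('TS1','TS2');
```

The staging areas must be at least this large, since RMAN creates full datafile copies of every transported datafile.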
Details of using each of these methods are provided in the instructions below. The recommended method is the dbms_file_transfer method.
The Cross Platform Incremental Backup core functionality (i.e. incremental backup conversion) is delivered in Oracle Database 11.2.0.4 and
later. If the destination database version is 11.2.0.4 or later, then the destination database can perform this function. However, if the
destination database version is 11.2.0.3 or earlier, then, for the purposes of performing incremental backup conversion, a separate 11.2.0.4
software home, called the incremental convert home, must be installed, and an instance, called the incremental convert instance, must be
started in NOMOUNT state using that home. The incremental convert home and incremental convert instance are temporary and are used
only during the migration.
Note that because the dbms_file_transfer Prepare Phase method requires destination database 11.2.0.4, which can be used to perform the
incremental backup conversions function (as stated above), an incremental convert home and incremental convert instance are usually only
applicable when the Prepare Phase method is RMAN backup.
For details about setting up a temporary incremental convert instance, see instructions in Phase 1.
To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable XTTDEBUG=1 before running xttdriver.pl. Debug
mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option.
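For example, debug mode can be enabled in either of the two ways described above (the specific xttdriver.pl step flag shown in the comment is illustrative):

```shell
# Environment-variable form: all subsequent xttdriver.pl runs in this
# shell produce debug output.
export XTTDEBUG=1
echo "XTTDEBUG=$XTTDEBUG"
# Per-run form (equivalent):
#   $ORACLE_HOME/perl/bin/perl xttdriver.pl -i -d
```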
"
1. If the source database contains nested IOTs with key compression, then the fix for Bug 14835322 must be installed in the destination
database home (where the tablespace plug operation occurs).
2. If you wish to utilize block change tracking on the source database when incremental backups are created, then the fix for Bug
16850197 must be installed in the source database home.
3. If using ASM in both source and destination, see XTTS Creates Alias on Destination when Source and Destination use ASM (Note
2351123.1)
4. If the roll forward phase (xttdriver.pl -r) fails with the following errors, then verify RMAN DEVICE TYPE DISK is not configured
COMPRESSED.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: RestoreSetPiece
DECLARE
*
ERROR at line 1:
ORA-19624: operation failed, retry possible
ORA-19870: error while restoring backup piece
/dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup
piece
ORA-19837: invalid blocksize 0 in backup piece header
ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
ORA-06512: at line 40
5. This document can also be consulted for known issues: Note 17866999.8. If the source contains cluster objects, then run "analyze
cluster &cluster_name validate structure cascade" after XTTS has been completed in the target database. If it reports an
ORA-1499, open the trace file and review whether it has entries like:
kdcchk: index points to block 0x01c034f2 slot 0x1 chain length is 256
kdcchk: chain count wrong 0x01c034f2.1 chain is 1 index says 256
last entry 0x01c034f2.1 blockcount = 1
kdavls: kdcchk returns 3 when checking cluster dba 0x01c034a1 objn 90376
If, after repairing the inconsistency, the "analyze cluster &cluster_name validate structure cascade" still reports issues, then recreate the
affected cluster, which involves recreating its tables.
Note that the fix for bug 17866999 is a workaround fix to repair the index cluster; it will not avoid the problem. Oracle did not find a
complete fix for this situation, so it can affect any RDBMS version.
The XTTS with Cross Platform Incremental Backups procedure is divided into the following four phases:
Phase 1 - Initial Setup
Phase 2 - Prepare
Phase 3 - Roll Forward
Phase 4 - Transport
Phase 1 - Initial Setup
Perform the following steps to configure the environment to use Cross Platform Incremental Backups:
Step 1.1 - Install the Destination Database Software and Create the Destination Database
Install the desired Oracle Database software on the destination system that will run the destination database. It is highly
recommended to use Oracle Database 11.2.0.4 or later. Note that the dbms_file_transfer Prepare Phase method requires the
destination database to be 11.2.0.4.
Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for
the tablespace transport.
Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.
Step 1.2 - Configure an Incremental Convert Home and Instance (if necessary)
See the Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance section for details.
Skip this step if the destination database software version is 11.2.0.4 or later. Note that the dbms_file_transfer Prepare Phase method
requires the destination database to be 11.2.0.4.
If the destination database is 11.2.0.3 or earlier, then you must configure a separate incremental convert instance by performing the
following steps:
Install a new 11.2.0.4 database home on the destination system. This is the incremental convert home.
Using the incremental convert home startup an instance in the NOMOUNT state. This is the incremental convert instance. A
database does not need to be created for the incremental convert instance. Only a running instance is required.
The following steps may be used to create an incremental convert instance named xtt running out of incremental convert home
/u01/app/oracle/product/11.2.0.4/xtt_home:
[oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home
[oracle@dest]$ export ORACLE_SID=xtt
[oracle@dest]$ echo "db_name=xtt" > $ORACLE_HOME/dbs/initxtt.ora
[oracle@dest]$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> startup nomount
If ASM storage is used for the xtt.properties parameter backupondest (described below), then the COMPATIBLE initialization
parameter setting for this instance must be equal to or higher than the compatible.rdbms setting for the ASM disk group used.
Step 1.3 - Identify Tablespaces to be Transported
Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this
document. As indicated above, the limitations and considerations for transportable tablespaces must still be followed.
Step 1.4 - Create Database Objects Needed by the dbms_file_transfer Method (if applicable)
Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.
If using dbms_file_transfer as the Prepare Phase method, then three database objects must be created:
1. A database directory object in the source database from where the datafiles are copied
2. A database directory object in the destination database to where the datafiles are placed
3. A database link in the destination database referencing the source database
The source database directory object references the location where the datafiles in the source database currently reside. For example,
to create directory object sourcedir that references datafiles in ASM location +DATA/prod/datafile, connect to the source database and
run the following SQL command:
SQL> create directory sourcedir as '+DATA/prod/datafile';
The destination database directory object references the location where the datafiles will be placed on the destination system. This
should be the final location where the datafils will reside when in use by the destination database. For example, to create directory
object dstdir that will place transferred datafiles in ASM location +DATA/prod/datafile, connect to the destination database and run the
following SQL command:
SQL> create directory dstdir as '+DATA/prod/datafile';
The database link is created in the destination database, referencing the source database. For example, to create a database link
named ttslink, run the following SQL command:
SQL> create database link ttslink connect to <username> identified by <password> using '<source database service>';
Verify the database link can properly access the source system:
SQL> select * from dual@ttslink;
Step 1.5 - Create the Staging Areas
Create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: backupformat,
backupondest.
Also, if using RMAN backups in the Prepare phase, create the staging areas on the source and destinations systems as defined by the
following xtt.properties parameters: dfcopydir, stageondest.
Step 1.6 - Install xttconvert Scripts on the Source System
On the source system, as the oracle software owner, download and extract the supporting scripts attached as
to this document.
[oracle@source]$ unzip <supporting scripts zip file>
Archive: <supporting scripts zip file>
inflating: xtt.properties
inflating: xttdriver.pl
inflating: <remaining supporting scripts>
Step 1.7 - Configure xtt.properties on the Source System
Edit the xtt.properties file on the source system with your site-specific configuration. For more information about the parameters
in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below.
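A minimal illustrative xtt.properties fragment follows. Only parameters mentioned elsewhere in this note are shown, and all values are site-specific placeholders:

```
tablespaces=TS1,TS2
# dbms_file_transfer method (Phase 2A):
srcdir=SOURCEDIR
dstdir=DSTDIR
srclink=TTSLINK
getfileparallel=4
# RMAN backup method (Phase 2B) staging areas and final location:
dfcopydir=/stage_source
backupformat=/stage_source
stageondest=/stage_dest
backupondest=/stage_dest
storageondest=+DATA
```

Only the parameters for the Prepare phase method actually chosen need to be configured.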
Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System
As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system. For example:
[oracle@source]$ scp <script location>/* oracle@dest:<script location>
Step 1.9 - Set TMPDIR Environment Variable
In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the
supporting scripts exist. Use this shell to run the xttdriver.pl Perl script as shown in the steps below. If TMPDIR is not set, output
files are created in, and input files are expected to be in, /tmp.
[oracle@source]$ export TMPDIR=<location of extracted scripts>
[oracle@dest]$ export TMPDIR=<location of extracted scripts>
")
During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the
xttdriver.pl script. There are two possible methods:
Select and use one of these methods based upon the information provided in the Requirements and Recommendations section above.
NOTE: For a large number of files, dbms_file_transfer has been found to be the fastest method for transferring datafiles to the
destination.
"2 ) - -
Only use the steps in Phase 2A if the Prepare Phase method chosen is dbms_file_transfer and the setup instructions have been completed,
particularly those in Step 1.4.
During this phase datafiles of the tablespaces to be transported are transferred directly from source system and placed on the destination
system in their final location to be used by the destination database. If conversion is required, it is performed automatically during transfer.
No separate conversion step is required. The steps in this phase are run only once. The data being transported is fully accessible in the
source database during this phase.
"2! ) *
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment
variables) pointing to the source database, run the prepare step as follows:
! " #!% %/ % )
The prepare step performs the following actions on the source system:
Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
Creates the following files used later in this procedure:
xttnewdatafiles.txt
getfile.sql
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE. The
Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY. If
a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the
traditional cross platform transportable tablespace process. No incremental apply is needed for those files.
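The same preconditions can be checked manually before running the prepare step. This is a query sketch only, using the example tablespace names from this document:

```sql
select tablespace_name, status
  from dba_tablespaces
 where tablespace_name in ('TS1','TS2');

select file_name, status, online_status
  from dba_data_files
 where tablespace_name in ('TS1','TS2');
```

The tablespaces should report ONLINE status and the datafiles should not be OFFLINE before proceeding.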
"2!" ) *
On the destination system, log in as the oracle user and set the environment (ORACLE_HOME and ORACLE_SID
environment variables) to the destination database (it is invalid to attempt to use an incremental convert instance). Copy the
xttnewdatafiles.txt and getfile.sql files created in step 2A.1 from the source system and run the -G get_file step as
follows:
NOTE: This step copies all datafiles being transported from the source system to the destination system. The length of
time for this step to complete is dependent on datafile size, and may be substantial. Use getfileparallel option for
parallelism.
[oracle@dest]$ scp oracle@source:$TMPDIR/xttnewdatafiles.txt $TMPDIR
[oracle@dest]$ scp oracle@source:$TMPDIR/getfile.sql $TMPDIR
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G
When this step is complete, the datafiles being transported will reside in the final location where they will be used by the
destination database. Note that endian conversion, if required, is performed automatically during this step.
" ) 28
Only use the steps in Phase 2B if the Prepare Phase method chosen is RMAN backup and the setup instructions have been completed,
particularly those in Step 1.5.
During this phase datafile copies of the tablespaces to be transported are created on the source system, transferred to the destination
system, converted, and placed in their final location to be used by the destination database. The steps in this phase are run only once. The
data being transported is fully accessible in the source database during this phase.
" ! ) *
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment
variables) pointing to the source database, run the prepare step as follows:
! " #!% %/ %
The prepare step performs the following actions on the source system:
Creates datafile copies of the tablespaces that will be transported in the location specified by the xtt.properties
parameter dfcopydir.
Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
Creates the following files used later in this procedure:
xttplan.txt
rmanconvert.cmd
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE. The
Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY. If
a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the
traditional cross platform transportable tablespace process. No incremental apply is needed for those files.
" !" ) *
On the destination system, logged in as the oracle user, transfer the datafile copies created in the previous step from the
source system. Datafile copies on the source system are created in the location defined in xtt.properties parameter
dfcopydir. The datafile copies must be placed in the location defined by xtt.properties parameter stageondest.
Any method of transferring the datafile copies from the source system to the destination system that results in a bit-for-bit copy
is supported.
If the dfcopydir location on the source system and the stageondest location on the destination system refer to the
same NFS storage location, then this step can be skipped since the datafile copies are already available in the expected
location on the destination system.
In the example below, scp is used to transfer the copies created by the previous step from the source system to the destination
system.
[oracle@dest]$ scp oracle@source:<dfcopydir>/* <stageondest>
Note that due to current limitations with cross-endian support in DBMS_FILE_TRANSFER and ASMCMD, you must use
OS-level commands, such as scp or ftp, to transfer the copies from the source system to the destination system.
" !$ ) & *
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID
environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the
source system and run the convert datafiles step as follows:
[oracle@dest]$ scp oracle@source:$TMPDIR/rmanconvert.cmd $TMPDIR
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c
The convert datafiles step converts the datafile copies in the stageondest location to the endian format of the destination
system. The converted datafile copies are written in the location specified by the xtt.properties parameter storageondest.
This is the final location where datafiles will be accessed when they are used by the destination database.
When this step is complete, the datafile copies in the stageondest location are no longer needed and may be removed.
Phase 3 - Roll Forward Phase
During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the
destination system endian format, then applied to the converted destination datafile copies to roll them forward. This phase may be run
multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination
datafile copies more current with the source database. The data being transported is fully accessible during this phase.
Step 3.1 - Create an Incremental Backup of the Tablespaces on the Source System
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment
variables) pointing to the source database, run the create incremental step as follows:
! " #!% %/ %
The create incremental step executes RMAN commands to generate incremental backups for all tablespaces listed in
xtt.properties. It creates the following files used later in this procedure:
tsbkupmap.txt
incrbackups.txt
$!" ) *
Transfer the incremental backup(s) created during the previous step to the stageondest location on the destination system.
The list of incremental backup files to copy is found in the incrbackups.txt file on the source system.
[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:<stageondest>
If the backupformat location on the source system and the stageondest location on the destination system refer to the
same NFS storage location, then this step can be skipped since the incremental backups are already available in the
expected location on the destination system.
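If a single scp of the whole list is not convenient, the transfer can also be scripted one backup piece at a time from the incrbackups.txt list. The following is a runnable local sketch only: cp stands in for the remote scp/sftp copy, and every path is illustrative, not from this note:

```shell
# Demo: copy each backup piece named in incrbackups.txt to a staging area.
# In a real migration, replace `cp` with an scp/sftp transfer to the
# stageondest location on the destination host.
set -e
demo=/tmp/xtt_demo
mkdir -p "$demo/src" "$demo/stageondest"
# Create two empty stand-in backup pieces for the demo.
: > "$demo/src/piece1.bkp"
: > "$demo/src/piece2.bkp"
# xttdriver.pl -i writes one backup piece path per line into incrbackups.txt;
# here we fabricate an equivalent list for the demo.
printf '%s\n' "$demo/src/piece1.bkp" "$demo/src/piece2.bkp" > "$demo/incrbackups.txt"
while read -r piece; do
  cp "$piece" "$demo/stageondest/"
done < "$demo/incrbackups.txt"
ls "$demo/stageondest"
```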
Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID
environment variables) pointing to the destination database, copy the xttplan.txt and tsbkupmap.txt files from the
source system and run the rollforward datafiles step as follows:
[oracle@dest]$ scp oracle@source:$TMPDIR/xttplan.txt $TMPDIR
[oracle@dest]$ scp oracle@source:$TMPDIR/tsbkupmap.txt $TMPDIR
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r
The rollforward datafiles step connects to the incremental convert instance as SYS, converts the incremental backups, then
connects to the destination database and applies the incremental backups for each tablespace being transported.
Note:
1. You must copy the xttplan.txt and tsbkupmap.txt files each time that this step is executed, because their
content is different for each iteration.
2. Do NOT modify or rename any of the files generated by the script.
3. The destination instance will be shut down and restarted by this process.
Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment
variables) pointing to the source database, run the determine new FROM_SCN step as follows:
! " #!% %/ %
The determine new FROM_SCN step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN
when the next incremental backup is created in step 3.1.
Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Proceed to the Transport Phase (Phase 4)
1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll
Forward phase, starting with step 3.1.
2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport
phase.
NOTE: If a datafile has been added to a tablespace since the last incremental backup, and/or a new tablespace name is added to
the xtt.properties file, the following will appear:
Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace:
2. Copy backups:
<backup list>
from <source location> to the <stage_dest> in destination
NOTE: Before running incremental backup, delete FAILED in source temp dir or
run xttdriver.pl with -L option:
These instructions must be followed exactly as listed. The next incremental backup will include the new datafile.
Phase 4 - Transport Phase
During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by
creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace
steps are performed to export object metadata from the source database and import it into the destination database. The data being
transported is accessible only in READ ONLY mode until the end of this phase.
Step 4.1 - Make the Source Tablespaces READ ONLY
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment
variables) pointing to the source database, make the tablespaces being transported READ ONLY.
SQL> alter tablespace TS1 read only;
Tablespace altered.
SQL> alter tablespace TS2 read only;
Tablespace altered.
,!" ) 9 + + & + 2 *
Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the
destination datafiles.
! " #!% %/ %
C / 3 C ?% <
?%( % %
/3 ?%( % %
! " #!% %/ %
Step 4.3 - Import the Tablespace Metadata Using Data Pump
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID
environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:
! " #!% %/ %
The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the
file < with the transportable tablespaces parameters TRANSPORT_TABLESPACES and
TRANSPORT_DATAFILES correctly set. Note that network_link mode initiates an import over a database link that refers to
the source database. A separate export or dump file is not required. If you choose to perform the tablespace transport with
this command, then you must edit the import command to replace import parameters DIRECTORY, LOGFILE, and
NETWORK_LINK with site-specific values.
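An illustrative final command is shown below. The directory, log file, and link values are hypothetical placeholders; the actual transport_datafiles list comes from the generated xttplugin.txt file:

```
impdp system parfile=xttplugin.par

# xttplugin.par (illustrative values):
directory=DATA_PUMP_DIR
logfile=tts_import.log
network_link=ttslink
transport_tablespaces=TS1,TS2
transport_datafiles='<destination datafile path 1>','<destination datafile path 2>'
```

Because network_link mode reads the metadata over the database link, no export dump file is created or transferred.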
After the object metadata being transported has been extracted from the source database, the tablespaces in the source
database may be made READ WRITE again, if desired.
Database users that own objects being transported must exist in the destination database before performing the
transportable import.
If you do not use network_link import, then perform the tablespace transport by running transportable mode Data Pump Export
on the source database to export the object metadata being transported into a dump file, then transfer the dump file to the
destination system, then run transportable mode Data Pump Import to import the object metadata into the destination
database. Refer to the Oracle Database Utilities guide for details on Data Pump export and import.
Step 4.4 - Validate the Transported Data
At this step, the transported data is READ ONLY in the destination database. Perform application specific validation to verify
the transported data.
Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:
RMAN> validate tablespace TS1, TS2 check logical;
Step 4.5 - Make the Destination Tablespaces READ WRITE
The final step is to make the destination tablespace(s) READ WRITE in the destination database.
SQL> alter tablespace TS1 read write;
Tablespace altered.
SQL> alter tablespace TS2 read write;
Tablespace altered.
Phase 5 - Cleanup
If a separate incremental convert home and instance were created for the migration, then the instance may be shutdown and
the software removed.
Files created by this process are no longer required and may now be removed. They include the backups and datafile copies in
the staging areas on the source and destination systems, and the files created in $TMPDIR on both systems.
Appendix
Description of Perl Script xttdriver.pl Options
The following table describes the options available for the main supporting script xttdriver.pl.
Option Description
-S prepare source for transfer
Prepare step is run once on the source system during Phase 2A with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database. This step creates files
xttnewdatafiles.txt and getfile.sql.
-G get datafiles from source
Get datafiles step is run once on the destination system during Phase 2A with the environment
(ORACLE_HOME and ORACLE_SID) set to the destination database. The -S option must be run
beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.
This option connects to the destination database and runs script getfile.sql. getfile.sql invokes
dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database
directory object (defined by parameter srcdir) to the destination database directory object (defined by
parameter dstdir) over a database link (defined by parameter srclink).
-p prepare Prepare step is run once on the source system during Phase 2B with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database.
This step connects to the source database and runs the xttpreparesrc.sql script once for each
tablespace to be transported, as configured in xtt.properties. xttpreparesrc.sql does the following:
1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
2. Identifies the SCN that will be used for the first iteration of the incremental backup step and
writes it into file $TMPDIR/xttplan.txt.
3. Creates the initial datafile copies on the source system in the location specified by the
parameter dfcopydir set in xtt.properties. These datafile copies must be transferred manually to
the destination system.
4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies
to the required endian format on the destination system.
-c convert datafiles
-c option is used only when Prepare phase method is RMAN backup.
Convert datafiles step is run once on the destination system during Phase 2B with the environment
(ORACLE_HOME and ORACLE_SID) set to the destination database.
This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to
the proper endian format. Converted datafile copies are written on the destination system to the
location specified by the parameter storageondest set in xtt.properties.
-i create incremental Create incremental step is run one or more times on the source system with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database.
This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will
be used to roll forward the datafile copies on the destination system.
-r rollforward datafiles Rollforward datafiles step is run once for every incremental backup created with the environment
(ORACLE_HOME and ORACLE_SID) set to the destination database.
This step connects to the incremental convert instance using the parameters cnvinst_home and
cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then
connects to the destination database and rolls forward the datafile copies by applying the incremental
backup for each tablespace being transported.
-s determine new FROM_SCN Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and
ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the
next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall
clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command Generate Data Pump TTS command step is run once on the destination system with the
environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step creates the template of a Data Pump Import command that uses a network_link to import
metadata of objects that are in the tablespaces being transported.
-d debug -d option enables debug mode for xttdriver.pl and RMAN commands it executes. Debug mode can
also be enabled by setting environment variable XTTDEBUG=1.
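As an illustration, a typical sequence of xttdriver.pl invocations with the RMAN backup prepare method might look like the following. The TMPDIR location is illustrative, and the file transfers between systems are omitted here:

```
[source]$ export TMPDIR=/tmp/xtt
[source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p     # prepare: datafile copies + rmanconvert.cmd

[dest]$ export TMPDIR=/tmp/xtt
[dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c       # convert the transferred datafile copies

[source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i     # create incremental backup
[dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r       # roll forward the datafile copies
[source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s     # determine next FROM_SCN

# repeat -i / -r / -s until ready for the final incremental and the transport phase
```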
Description of Parameters in Configuration File xtt.properties
The following table describes the parameters defined in the xtt.properties file that is used by xttdriver.pl.
srcdir
Directory object in the source database that defines where the source datafiles currently reside. Multiple locations can be used,
separated by ",". The srcdir to dstdir mapping can be either N:1 or N:N; i.e., there can be multiple source directories and the files
will be written to a single destination directory, or files from a particular source directory can be written to a particular
destination directory.
Example: srcdir=SOURCEDIR or srcdir=SRC1,SRC2
dstdir
Directory object in the destination database that defines where the destination datafiles will be created. If multiple source
directories are used (srcdir), then multiple destinations can be defined so a particular source directory is written to a particular
destination directory.
Example: dstdir=DESTDIR or dstdir=DST1,DST2
srclink
Database link in the destination database that refers to the source database. Datafiles will be transferred over this database link
using dbms_file_transfer.
Example: srclink=TTSLINK
dfcopydir
Location on the source system where datafile copies are created during the "-p prepare" step. This location must have sufficient
free space to hold copies of all datafiles being transported.
Example: dfcopydir=/stage_source
backupformat
Location on the source system where incremental backups are created. This location must have sufficient free space to hold the
incremental backups created for one iteration through the process documented above.
Example: backupformat=/stage_source
stageondest
Location on the destination system where datafile copies are placed by the user when they are transferred manually from the
source system. This location must have sufficient free space to hold copies of all datafiles being transported.
Example: stageondest=/stage_dest
storageondest
Location on the destination system where the converted datafile copies will be written during the "-c convert datafiles" step.
This is the final location of the datafiles, where they will be used by the destination database. This location must have sufficient
free space to permanently hold the datafiles that are transported.
Example: storageondest=+DATA or storageondest=/oradata/prod/%U
backupondest
Location on the destination system where incremental backups are placed after being transferred from the source system. This
location must have sufficient free space to hold the incremental backups created for one iteration through the process
documented above.
cnvinst_home
ORACLE_HOME of the separate incremental convert instance. Only set this parameter if a separate incremental convert home is
in use.
Example: cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home
cnvinst_sid
ORACLE_SID of the separate incremental convert instance. Only set this parameter if a separate incremental convert home is in
use.
Example: cnvinst_sid=xtt
asm_home
ORACLE_HOME for the ASM instance that runs on the destination system.
Example: asm_home=/u01/app/11.2.0.4/grid
parallel
Defines the degree of parallelism set in the RMAN CONVERT command file rmanconvert.cmd. This file is created during the
prepare step and used by RMAN in the convert datafiles step to convert the datafile copies on the destination system. If this
parameter is unset, xttdriver.pl uses parallel=8.
NOTE: RMAN parallelism used for the datafile copies created in the RMAN backup prepare phase and the incremental backups
created in the rollforward phase is controlled by the RMAN configuration on the source system. It is not controlled by this
parameter.
Example: parallel=3
rollparallel
Defines the level of parallelism for the -r rollforward operation.
Example: rollparallel=2
getfileparallel
Defines the level of parallelism for the -G operation.
Example: getfileparallel=4
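Pulling the parameters above together, a minimal xtt.properties for the RMAN backup prepare method might look like the following. All locations and values are illustrative; the template file shipped in the rman-xttconvert package documents the complete parameter set, including the list of tablespaces to transport:

```
dfcopydir=/stage_source
backupformat=/stage_source
stageondest=/stage_dest
storageondest=+DATA
asm_home=/u01/app/11.2.0.4/grid
parallel=3
rollparallel=2
getfileparallel=4
```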
"
" $ $ # # ( )*++,--.+
' *
Date          Change
2017-Jun-06   rman_xttconvert_v3.zip released - adds support for added datafiles
2015-Apr-20   rman-xttconvert_2.0.zip released - adds support for multiple source and destination directories
2014-Nov-14   rman-xttconvert_1.4.2.zip released - adds parallelism support for the -G get file from source operation
2014-Feb-21   rman-xttconvert_1.4.zip released - removes the staging area requirement, adds parallel rollforward, and eliminates the conversion instance requirement when using 11.2.0.4
2013-Apr-10   rman-xttconvert_1.3.zip released - improves handling of large databases with a large number of datafiles