XTTS Migration to Exadata
Author: ohsdba
Version: V1.0
Revision History
Table of Contents
Introduction
XTTS improvement
    Traditional XTTS steps
    Parameters
    Troubleshooting
Pre-requisites
    Building staging area
        On Exadata
        On AIX
    Database check
        Check archive log on source
        Check additional
    Prepare exadata
    Modify environment
    Configure xtt.properties
    Determine the from scn for the next incremental backup
        Repeat roll forward steps in order if necessary
Phase 5 – Cleanup
XTTS Reference
    xttdriver.pl options
    xtt.properties parameters
    tts_create_seq.sql
Reference
Introduction
There is no migration utility (script or DBUA) to perform a cross-platform migration of an Oracle
Database. Changing platforms requires the database to be rebuilt and/or the data moved using
one of the following methods:
1. Export / Import, including the use of Data Pump. All versions support Export/Import,
but Data Pump requires 10.1.0.2 or higher
2. Transportable Tablespaces (10g or later)
3. RMAN Convert Database (10g or later)
4. RMAN Duplicate
5. Streams Replication
6. Create Table As Select (CTAS)
7. Data Guard Heterogeneous Primary and Physical Standbys
8. Data Guard Transient Logical Standby
9. Oracle GoldenGate (for assistance with Oracle GoldenGate, an SR needs to be opened
with the correct team)
Each choice has its own strengths and limitations with respect to supported data types, time
required, and potential cost. Which one fits depends on both the operating system and the Oracle
version of the source and the destination.
Generally speaking, Oracle XTTS is typically used when you are migrating from a platform with a
different database endian (byte ordering) format and using Data Pump export/import does not
meet the availability service level requirement.
Oracle Exadata Database Machine uses the Linux x86 64-bit operating system which is little
endian format. XTTS can also be used to move from an older release of the database to a
newer release starting with database release 10.2.0.3 on the source system.
When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems
that have different endian formats, the amount of downtime required can be substantial
because it is directly proportional to the size of the data set being moved. However, combining
XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime
required to move data between platforms.
To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to
roll forward datafile copies using incremental backups, to work in a cross-platform scenario. By
using a series of incremental backups, each smaller than the last, the data at the destination
system can be brought almost current with the source system, before any downtime is required.
The downtime required for datafile transfer and convert when combining XTTS with Cross
Platform Incremental Backup is now proportional to the rate of data block changes in the
source system.
The Cross Platform Incremental Backup feature does not affect the amount of time it takes to
perform other actions for XTTS, such as metadata export and import. Hence, databases that
have very large amounts of metadata (DDL) will see limited benefit from Cross Platform
Incremental Backup since migration time is typically dominated by metadata operations, not
datafile transfer and conversion.
XTTS improvement
The TTS feature has been available since Oracle 8i; cross-platform support was added in Oracle
10g. The Cross Platform Incremental Backup core functionality (incremental backup conversion) is
delivered in Oracle Database 11.2.0.4 and later. If the target database is 11.2.0.4 or later, the
target database can perform this function. If the destination database version is 11.2.0.3 or
earlier, to perform incremental backup conversion, a separate 11.2.0.4 software home (the
incremental convert home) must be installed, and an instance (the incremental convert instance)
must be started in nomount state using that home. The incremental convert home and incremental
convert instance are temporary and are used only during the migration.
Traditional XTTS steps
[Figure: traditional XTTS flow: set tablespaces read only on source; transfer and convert
datafiles; export metadata from source; import metadata into target; set tablespaces read
write on target]
The source tablespaces must be made read only before the datafiles are copied to the target
system. That copy can take a very long time for a large, multi-terabyte database, which is a
problem when the downtime window is limited.
With the incremental approach, the initial copy of the datafiles occurs while the source database
remains online. Incremental backups of the source database are then taken, transferred to the
target, converted to the target endian format, and applied to the datafile copies on the target.
Migration method
There are two primary scripts:
• Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with
Cross Platform Incremental Backup procedure.
• Parameter file xtt.properties - the file that contains your site-specific configuration.
During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the
destination system and converted by the xttdriver.pl script. There are two possible methods:
DBMS_FILE_TRANSFER
The dbms_file_transfer method has two advantages:
1) it does not require staging area space on either the source or destination system;
2) datafile conversion occurs automatically during transfer - there is no separate conversion
step. The dbms_file_transfer method requires the following:
• A destination database running 11.2.0.4. Note that an incremental convert home or instance does
not participate in dbms_file_transfer file transfers.
• A database directory object in the source database from where the datafiles are copied.
• A database directory object in the destination database to where the datafiles are placed.
• A database link in the destination database referencing the source database.
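As an illustration only, a minimal setup for these requirements could look like the following; the
directory paths and connection details are assumptions, and the names SOURCEDIR, DESTDIR, and
TTSLINK match the srcdir/dstdir/srclink examples used later in this document:
-- on the source database: directory object over the current datafile location (path is hypothetical)
create directory sourcedir as '/oradata/prod';
-- on the destination database: directory object for the new datafile location
create directory destdir as '+DATA';
-- on the destination database: database link back to the source (credentials and TNS alias are placeholders)
create database link ttslink connect to system identified by manager using 'srcdb';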
RMAN
The RMAN backup method runs RMAN on the source system to create backups on the source
system of the datafiles to be transported. The backup files must then be manually transferred
over the network to the destination system. On the destination system the datafiles are
converted by RMAN, if necessary. The output of the RMAN conversion places the datafiles in
their final location where they will be used by the destination database. In the original version
of xttdriver.pl, this was the only method supported. The RMAN backup method requires the
following:
• Staging areas are required on both the source and destination systems for the datafile
copies created by RMAN. The staging areas are referenced in the xtt.properties file
using the parameters dfcopydir and stageondest. The final destination where
converted datafiles are placed is referenced in the xtt.properties file using the
parameter storageondest. Refer to the Description of Parameters in Configuration File
xtt.properties section for details and sizing guidelines.
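For example, the manual transfer could be done with scp; the host name and paths below are
hypothetical, matching the dfcopydir/stageondest examples later in this document:
scp /stage_source/* oracle@exadata01:/stage_dest/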
Details of using each of these methods are provided in the instructions below. The
recommended method is the dbms_file_transfer method.
Parameters
Parameter                  DFT              RMAN
tablespaces                √                √
platformid                 √                √
srcdir                     √
dstdir                     √
srclink                    √ (mandatory)    √ (optional)
dfcopydir                                   √
backupformat               √                √   (incremental backup location; backup parallelism depends on the RMAN DEVICE TYPE DISK PARALLELISM configuration)
stageondest                                 √
storageondest              √                √   (final datafile location)
backupondest (incremental) √                √
cnvinst_home (optional)    √                √   (used if the target database is lower than 11.2.0.4)
cnvinst_sid (optional)     √                √   (used if the target database is lower than 11.2.0.4)
asm_home (optional)        √                √   (used if the target database is lower than 11.2.0.4)
asm_sid (optional)         √                √   (used if backupondest is an ASM disk group)
parallel                                    √   (prepare phase; default 8)
rollparallel               √                √   (parallelism for the -r roll forward operation)
getfileparallel            √                    (prepare phase; current maximum value is 8)
metatransfer               √                √   (if metatransfer=1, the temporary files and backups are transferred from source to destination; desthost and desttmpdir must be defined)
destuser                   √                √
desthost                   √                √
desttmpdir                 √                √
allowstandby               √                √   (set allowstandby=1 to allow)
Troubleshooting
To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable
XTTDEBUG=1 before running xttdriver.pl. Debug mode enables additional screen output and
causes all RMAN executions to be performed with the debug command line option.
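For example:
./xttdriver.pl -p -d      # debug via the command-line flag
export XTTDEBUG=1         # or via the environment variable
./xttdriver.pl -p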
Pre-requisites
• All steps in this procedure are run as the oracle user that is a member of the OSDBA
group. OS authentication is used to connect to both the source and destination
databases.
• If the Prepare Phase method selected is dbms_file_transfer, then the destination
database must be 11.2.0.4. See the Selecting the Prepare Phase Method section for
details.
• If the Prepare Phase method selected is RMAN backup, then staging areas are required
on both the source and destination systems. See the Selecting the Prepare Phase
Method section for details.
• It is not supported to execute this procedure against standby or snapshot standby
databases.
• If the destination database version is 11.2.0.3 or lower, then a separate database home
containing 11.2.0.4 running an 11.2.0.4 instance on the destination system is required to
perform the incremental backup conversion. See the Destination Database 11.2.0.3 and
Earlier Requires a Separate Incremental Convert Home and Instance section for details.
If using ASM for 11.2.0.4 Convert Home, then ASM needs to be on 11.2.0.4, else error
ORA-15295 (e.g. ORA-15295: ASM instance software version 11.2.0.3.0 less than client
version 11.2.0.4.0) is raised.
• Tablespaces to be transported must not already exist on the target; if they do, rename
them on the target first.
• Make sure that the schema users required for the tablespace transport exist in the
target database.
• It is better to create a database link on the target to connect to the source.
• Enable block change tracking in the source database (a sketch follows the note below).
• Download the supporting scripts for Cross Platform Incremental Backup from Doc ID
1389592.1 / 2005729.1.
• The dfcopydir, backupformat, stageondest and backupondest directories should be created
before using the XTTS migration tool; make sure they are large enough to hold all backups
and datafiles.
Note: Only database objects that are physically located in the tablespaces being transported
will be copied to the destination system. If you need other objects that are located in
different tablespaces (for example, PL/SQL objects, sequences, and so on that live in the
SYSTEM tablespace) to be transported, you can use Data Pump to copy those objects to the
destination system.
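A sketch of enabling block change tracking on the source database; the tracking file path is an
assumption:
alter database enable block change tracking using file '/oradata/prod/bct.chg';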
Building staging area
On Exadata
mkdir /xttstage
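The AIX mount below expects the staging area to be NFS-exported by host 192.168.10.12 (here
assumed to be the Exadata node; adjust if a separate NFS server is used). A minimal export
sketch, with an assumed client subnet:
echo '/xttstage 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra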
On AIX
# nfso -a | grep nfs_use_reserved_ports
nfs_use_reserved_ports = 0
# nfso -o nfs_use_reserved_ports=1
Setting nfs_use_reserved_ports to 1
mkdir /xttstage
chown -R oracle:oinstall /xttstage
mount -o rw,bg,hard,noac,nointr,proto=tcp,vers=3,rsize=131072,wsize=131072,timeo=600 192.168.10.12:/xttstage /xttstage
Database check
Compare the following between the source and the target:

Item                       Source    Target
Character set
National character set
Endian format
Platform_id
Platform_name

Source (AIX):
PLATFORM_NAME                  ENDIAN_FORMAT
------------------------------ --------------
AIX-Based Systems (64-bit)     Big

Target (Exadata):
PLATFORM_NAME                  ENDIAN_FORMAT
------------------------------ --------------
Linux x86 64-bit               Little
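The values in this checklist can be collected on each database with queries along these lines:
-- character sets
select property_name, property_value from database_properties
where property_name in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
-- platform id/name and endian format of the current database
select tp.platform_id, tp.platform_name, tp.endian_format
from v$transportable_platform tp, v$database d
where tp.platform_name = d.platform_name;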
Check self-containment
Perform a self-containment check on the required tablespace(s) to see whether the set has any
violations; a sketch is shown below.
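A minimal check, using the tablespace names from this document's example:
exec dbms_tts.transport_set_check('TS1,TS2', TRUE);
select * from transport_set_violations;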
Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain
XMLTypes, but you must use the IMP and EXP utilities, not Data Pump; when using EXP, ensure
that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default). For versions 11.1
and higher, you must use only Data Pump, not the IMP and EXP Utilities. This restriction on
XML types and tables with binary XML storage continues in 12.1.
The following query (from the Oracle documentation) lists tablespaces that contain XMLTypes:
select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name=x.table_name
and t.tablespace_name=p.tablespace_name
and x.owner=u.username;
Restriction: be aware that TTS across different endian platforms is not supported for spatial
indexes in 10gR1 and 10gR2; this limitation was lifted in 11g
- specific Spatial packages must be run before exporting and after transportation; please see
the Oracle Spatial documentation.
Transportable tablespace import using IMPDP fails when the tablespace contains a spatial index
defined in it.
Objects with underlying objects (such as materialized views) or contained objects (such as
partitioned tables) are not transportable unless all of the underlying or contained objects are in
the tablespace set.
Check Timezone
If the source is Oracle Database 11g release 2 (11.2.0.2) or later and there are tables in the
transportable set that use TIMESTAMP WITH TIMEZONE (TSTZ) columns, then the time zone file
version on the target database must exactly match the time zone file version on the source
database.
If the source is earlier than Oracle Database 11g release 2 (11.2.0.2), then the time zone file
version must be the same on the source and target database for all transportable jobs
regardless of whether the transportable set uses TSTZ columns.
If these requirements are not met, then the import job aborts before anything is imported. This
is because if the import job were allowed to import the
objects, there might be inconsistent results when tables with TSTZ columns were read.
To identify the time zone file version of a database, you can execute the following SQL
statement:
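For example:
select version from v$timezone_file;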
Check options
set linesize 156 pages 156
select parameter,value from v$option order by 2;
PARAMETER VALUE
---------------------------------------------------------------- ------------
Real Application Clusters FALSE
Automatic Storage Management FALSE
Oracle Label Security FALSE
ASM Proxy Instance FALSE
Unified Auditing FALSE
Management Database FALSE
I/O Server FALSE
Oracle Database Vault FALSE
Partitioning TRUE
Objects TRUE
Advanced replication TRUE
Bit-mapped indexes TRUE
Connection multiplexing TRUE
Connection pooling TRUE
Database queuing TRUE
Incremental backup and recovery TRUE
Instead-of triggers TRUE
Parallel backup and recovery TRUE
Parallel execution TRUE
Parallel load TRUE
Point-in-time tablespace recovery TRUE
Fine-grained access control TRUE
Proxy authentication/authorization TRUE
Change Data Capture TRUE
Plan Stability TRUE
Online Index Build TRUE
Coalesce Index TRUE
Managed Standby TRUE
Materialized view rewrite TRUE
Database resource manager TRUE
Spatial TRUE
Export transportable tablespaces TRUE
Transparent Application Failover TRUE
Fast-Start Fault Recovery TRUE
Sample Scan TRUE
Duplexed backups TRUE
Java TRUE
OLAP Window Functions TRUE
Block Media Recovery TRUE
Fine-grained Auditing TRUE
Application Role TRUE
Enterprise User Security TRUE
Oracle Data Guard TRUE
OLAP TRUE
Basic Compression TRUE
Join index TRUE
Trial Recovery TRUE
Advanced Analytics TRUE
Online Redefinition TRUE
Streams Capture TRUE
File Mapping TRUE
Block Change Tracking TRUE
Flashback Table TRUE
Flashback Database TRUE
Transparent Data Encryption TRUE
Backup Encryption TRUE
Unused Block Compression TRUE
Result Cache TRUE
SQL Plan Management TRUE
SecureFiles Encryption TRUE
86 rows selected.
SQL>
Check additional
If the owner/s of tablespace objects does not exist on target database, the usernames need to
be created manually before starting the transportable
tablespace import.
Opaque Types Types(such as RAW, BFILE, and the AnyTypes) can be transported, but they are
not converted as part of the cross-platform transport operation. Their actual structure is known
only to the application, so the application must address any endianness issues after these types
are moved to the new platform.
Before performing a TTS procedure, it is important to be aware that the use of traditional
EXP/IMP with 11.2 is desupported. Original Export is desupported for general use as of Oracle
Database 11g. The only supported use of original Export in Oracle Database 11g is backward
migration of XMLType data to Oracle Database 10g release 2 (10.2) or earlier. Therefore, Oracle
recommends that you use the Data Pump Export and Import utilities, except in the situations
that require original Export and Import.
You cannot transport an encrypted tablespace to a database that already has an Oracle wallet for transparent data
encryption. In this case, you must use Oracle Data Pump to export the tablespace's schema objects and then import
them to the destination database. You can optionally take advantage of Oracle Data Pump features that enable you to
maintain encryption for the data while it is being exported and imported. See Oracle Database Utilities for more
information.
Tablespaces that do not use block encryption but that contain tables with encrypted columns cannot be transported.
You must use Oracle Data Pump to export and import the tablespace's schema objects. You can take advantage of
Oracle Data Pump features that enable you to maintain encryption for the data while it is being exported and
imported. See Oracle Database Utilities for more information.
The character set requirement is that one of the following must be true:
- The database character sets of the source and the target databases are the same.
- The source database character set is a strict (binary) subset of the target database character set, and the
following three conditions are true:
+ The source database is in version 10.1.0.3 or higher.
+ The tablespaces to be transported contain no table columns with character length semantics or the
maximum character width is the same in both the source and target database character sets.
+ The tablespaces to be transported contain no columns with the CLOB data type, or the source and the
target database character sets are both single-byte or both multibyte.
- The source database character set is a strict (binary) subset of the target database character set, and the
following two conditions are true:
+ The source database is in a version lower than 10.1.0.3.
+ The maximum character width is the same in the source and target database character sets.
https://mikedietrichde.com/2016/05/25/transportable-tablespaces-characters-sets-same-same-but-different/
Prepare exadata
Clean up all unnecessary files and databases before go-live, create the new database, and adjust
the database initialization parameters. Make sure the system is in its best state.
Migration summary
We will migrate tablespaces from a 10.2.0.5 database running on IBM AIX 5.3 (POWER, big endian)
to a 12c database running on Oracle Exadata (OEL 6.9, little endian). We will use the RMAN
method in this document.
Time summary

Migration Section                Time       Markers       Done

                           Source              Target
DB Size                    6+ TB               6+ TB
Running DB Instances
Platform
Model
Operating System
# CPU
Backup on disk/tape
Backup time
Sessions
Processes
Components
Inmemory_size
col comp_id for a10
col version for a12
col comp_name for a35
select comp_id,comp_name,version,status from dba_registry;

COMP_ID    COMP_NAME                           VERSION      STATUS
...
15 rows selected.
SQL>
Owner(s)
Tablespace(s)

Source
Owner                      Invalid objects
Invalid objects summary
Objects summary
Prerequisites verification
Please make sure your environment satisfies all of the above-mentioned prerequisites.
Note: As we put the XTTS scripts on the stage directory (an NFS directory), we just unzip them
on the source.
Modify environment
export TMPDIR=/xttstage/xtts
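Each xttdriver.pl step also expects ORACLE_HOME and ORACLE_SID to point at the database it runs
against (see the option reference later in this document); for example, on the source (values are
hypothetical):
export TMPDIR=/xttstage/xtts
export ORACLE_SID=PROD
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.5/dbhome_1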
Configure xtt.properties
tablespaces=TS1,TS2
platformid=6
srclink=
dfcopydir=/xttstage
backupformat=/xttstage
stageondest=/xttstage
storageondest=+DATA
backupondest=/xttstage
parallel=6
rollparallel=8
Note: storageondest is the final location where you put the datafiles. If a pluggable database is
used, place them under the PDB's directory, for example:
storageondest=+DATA/pcdb/760270D22244401B933A5144B8F0D553
Purge recyclebin
sqlplus / as sysdba
purge dba_recyclebin;
Run the prepare step on the source system:
./xttdriver.pl -p
Output
Run the roll forward step on the destination system, once each incremental backup taken with -i
on the source has been transferred:
./xttdriver.pl -r
Output
NOTE: If a datafile has been added to a tablespace since the last incremental backup, and/or a
new tablespace name has been added to xtt.properties, the following will appear:
Error:
------
The incremental backup was not taken as a datafile has been added to the tablespace:
2. Copy backups:
<backup list>
from <source location> to the <stage_dest> in destination
NOTE: Before running the incremental backup, delete the FAILED file in the source temp dir or
run xttdriver.pl with the -L option.
These instructions must be followed exactly as listed. The next incremental backup will include
the new datafile.
Run the generate Data Pump TTS command step on the destination system:
./xttdriver.pl -e
It will generate the file xttplugin.txt.
Output
You have three ways to do the import; choose one (traditional import is desupported since
11gR2). The network-mode import command generated in xttplugin.txt follows this pattern (the
directory, logfile and network_link values shown here are placeholders to complete for your
site):
impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \
transport_full_check=no \
exclude=table_statistics,index_statistics \
transport_tablespaces=TS1,TS2 \
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \
'+DATA/prod/datafile/ts2.286.771686723', \
'+DATA/prod/datafile/ts2.287.771686743'
Output
Import manually
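A sketch of the manual (dump file based) variant, assuming the tablespaces and datafile names
used above; the directory and file names are placeholders:
# on the source (tablespaces must already be read only):
expdp \"/ as sysdba\" transport_tablespaces=TS1,TS2 transport_full_check=no \
  directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp logfile=tts_exp.log
# transfer tts_meta.dmp to the destination, then:
impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp logfile=tts_imp.log \
  transport_datafiles='+DATA/prod/datafile/ts1.285.771686721','+DATA/prod/datafile/ts2.286.771686723','+DATA/prod/datafile/ts2.287.771686743'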
Import traditional (run exp on the source to produce the metadata dump, then imp on the target)
exp \'/ as sysdba\'
transport_tablespace=y
tablespaces=''
statistics=none
file=
log=
RMAN validate
RMAN> validate tablespace TS1, TS2 check logical;
Output
                           Source    Target
Owner                      Invalid objects
Invalid objects summary
Objects summary
Sequence verify
col sequence_owner for a10
col sequence_name for a25
select a.sequence_owner, a.sequence_name, a.last_number target_last_number,
b.last_number source_last_number from dba_sequences a, dba_sequences@ttslink b
where a.sequence_owner=b.sequence_owner
and a.sequence_name=b.sequence_name
order by a.sequence_owner,a.sequence_name;
xttdriver.pl option reference

-S (prepare source for transfer)
The prepare step is run once on the source system during Phase 2A with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database. This step creates the files
xttnewdatafiles.txt and getfile.sql.

-G (get datafiles from source)
The -G option is used only when the Prepare phase method is dbms_file_transfer.
The get datafiles step is run once on the destination system during Phase 2A with the
environment (ORACLE_HOME and ORACLE_SID) set to the destination database. The -S
option must be run beforehand and the files xttnewdatafiles.txt and getfile.sql transferred to
the destination system.
This option connects to the destination database and runs the script getfile.sql, which invokes
the dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source
database directory object (defined by parameter srcdir) to the destination database directory
object (defined by parameter dstdir) over a database link (defined by parameter srclink).

-p (prepare source for backup)
The -p option is used only when the Prepare phase method is RMAN backup.
The prepare step is run once on the source system during Phase 2B with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database.
This step connects to the source database and runs the xttpreparesrc.sql script once for each
tablespace to be transported, as configured in xtt.properties. xttpreparesrc.sql does the
following:
1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline
datafiles.
2. Identifies the SCN that will be used for the first iteration of the incremental backup
step and writes it into the file $TMPDIR/xttplan.txt.
3. Creates the initial datafile copies on the destination system in the location specified
by the parameter dfcopydir set in xtt.properties. These datafile copies must be
transferred manually to the destination system.
4. Creates the RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the
datafile copies to the required endian format on the destination system.

-c (convert datafiles)
The convert datafiles step is run once on the destination system during Phase 2B with the
environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step uses the rmanconvert.cmd file created in the prepare step to convert the datafile
copies to the proper endian format. Converted datafile copies are written on the destination
system to the location specified by the parameter storageondest set in xtt.properties.

-i (create incremental)
The create incremental step is run one or more times on the source system with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database.
This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup
that will be used to roll forward the datafile copies on the destination system.

-r (rollforward datafiles)
The rollforward datafiles step is run once for every incremental backup created, with the
environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step connects to the incremental convert instance using the parameters cnvinst_home
and cnvinst_sid, converts the incremental backup pieces created by the create incremental
step, then connects to the destination database and rolls forward the datafile copies by
applying the incremental for each tablespace being transported.

-s (determine new FROM_SCN)
The determine new FROM_SCN step is run one or more times with the environment
(ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN
when the next incremental backup is created in step 3.1. It reports the mapping of the new
FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental
backup will be.

-e (generate Data Pump TTS command)
The generate Data Pump TTS command step is run once on the destination system with the
environment (ORACLE_HOME and ORACLE_SID) set to the destination database.
This step creates the template of a Data Pump import command that uses a network_link to
import metadata of objects that are in the tablespaces being transported.

-d (debug)
The -d option enables debug mode for xttdriver.pl and the RMAN commands it executes. Debug
mode can also be enabled by setting environment variable XTTDEBUG=1.
Since table and index statistics were excluded from the metadata import, regather optimizer
statistics on the target, for example:
begin
dbms_stats.set_global_prefs('CONCURRENT','TRUE');
end;
/
exec dbms_stats.gather_schema_stats(ownname => '<>',degree => 6);
Phase 5 – Cleanup
Clean up all files created for this process.
Clean up the staging area if not done already.
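For example, using the staging paths from this document (verify before deleting):
rm -rf /xttstage/xtts      # working files (TMPDIR)
rm -f /xttstage/*          # staged datafile copies and incremental backups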
XTTS Reference
xttdriver.pl options
The options are described in the xttdriver.pl option reference earlier in this document.
xtt.properties parameters
srcdir
Directory object in the source database that defines where the source datafiles currently
reside. Multiple locations can be used, separated by ",". The srcdir to dstdir mapping can be
either N:1 or N:N, i.e. there can be multiple source directories and the files will be written
to a single destination directory, or files from a particular source directory can be written
to a particular destination directory.
Example: srcdir=SOURCEDIR or srcdir=SRC1,SRC2

dfcopydir
Location on the source system where the initial datafile copies are created (RMAN backup
method). This location must have sufficient free space to hold copies of all datafiles being
transported. It may be an NFS-mounted filesystem that is shared with the destination system,
in which case it should reference the same NFS location as the stageondest parameter for the
destination system. See Note 359515.1 for mount option guidelines.
Example: dfcopydir=/stage_source

backupformat
Location where incremental backups are created. This location must have sufficient free space
to hold the incremental backups created for one iteration through the process documented
above.
Example: backupformat=/stage_source

stageondest
Location on the destination system where datafile copies are placed by the user when they are
transferred manually from the source system. This location must have sufficient free space to
hold copies of all datafiles being transported. This is also the location from where datafile
copies and incremental backups are read when they are converted in the "-c conversion of
datafiles" and "-r roll forward datafiles" steps.
Example: stageondest=/stage_dest

storageondest
The final location of the datafiles where they will be used by the destination database. This
location must have sufficient free space to permanently hold the datafiles that are
transported.
Example: storageondest=+DATA or storageondest=/oradata/prod/%U

backupondest
Location on the destination system where converted incremental backups are written. This
location must have sufficient free space to hold the incremental backups created for one
iteration through the process documented above.
Example: backupondest=+RECO

asm_sid
ORACLE_SID for the ASM instance that runs on the destination system.
Example: asm_sid=+ASM1

rollparallel
Defines the level of parallelism for the -r roll forward operation.
NOTE: The RMAN parallelism used for the datafile copies created in the RMAN backup prepare
phase and for the incremental backup created in the rollforward phase is controlled by the
RMAN configuration on the source system. It is not controlled by this parameter.
Example: rollparallel=2
##
## See documentation below and My Oracle Support Notes 1389592.1 (11g) and 2005729.1 (12c) for details.
##
##
## Tablespaces to transport
## ========================
##
## tablespaces
## -----------
## Comma separated list of tablespaces to transport from source database
## to destination database.
## Specify tablespace names in CAPITAL letters.
tablespaces=TS1,TS2
## dstdir
## ------
## Directory object in the destination database that defines where the
## destination datafiles will be created.
## Feb 2015: Ver2: We support multiple DESTDIR's.
## SOURCEDIR1 will map to DESTDIR1 and SOURCEDIR2 to DESTDIR2 and so on
## Refer to above parameter for more examples
#dstdir=DESTDIR1,DESTDIR2
## srclink
## -------
## Database link in the destination database that refers to the source
## database. Datafiles will be transferred over this database link using
## dbms_file_transfer.
srclink=TTSLINK
## backupformat
## ------------
## Location where incremental backups are created.
##
## This location may be an NFS-mounted filesystem that is shared with the
## destination system, in which case it should reference the same NFS location
## as the stageondest property for the destination system.
backupformat=/stage_source
## storageondest
## -------------
## This parameter is used only when Prepare phase method is RMAN backup.
##
## Location where the converted datafile copies will be written during the
## "-c conversion of datafiles" step. This is the final location of the
## datafiles where they will be used by the destination database.
storageondest=+DATA
## backupondest
## ------------
## Location where converted incremental backups on the destination system
## will be written during the "-r roll forward datafiles" step.
##
## NOTE: If this is set to an ASM location then define properties
## asm_home and asm_sid below. If this is set to a file system
## location, then comment out asm_home and asm_sid below
backupondest=+RECO
## asm_home, asm_sid
## -----------------
## Grid home and SID for the ASM instance that runs on the destination
## system.
##
## NOTE: If backupondest is set to a file system location, then comment out
## both asm_home and asm_sid.
#asm_home=/u01/app/11.2.0.4/grid
#asm_sid=+ASM1
## Parallel parameters
## ===================
##
## parallel
## --------
## Parallel defines the channel parallelism used in copying (prepare phase) and
## converting datafiles.
##
## Note: Incremental backup creation parallelism is defined by RMAN
## configuration for DEVICE TYPE DISK PARALLELISM.
##
## If undefined, default value is 8.
parallel=3
## rollparallel
## ------------
## Defines the level of parallelism for the -r roll forward operation.
## rollparallel=2
## getfileparallel
## ---------------
## Defines the level of parallelism for the -G operation
##
## If undefined, default value is 1. Max value supported is 8.
## This will be enhanced in the future to support more than 8
## depending on the destination system resources.
getfileparallel=4
## metatransfer
## ---------------
## If passwordless ssh is enabled between the source and the destination, the
## script can automatically transfer the temporary files and the backups from
## source to destination. Other parameters like desthost, desttmpdir needs to
## be defined for this to work. destuser is optional
## metatransfer=1
## destuser
## ---------
## The username that will be used for copying the files from source to dest
## using scp. This is optional
## destuser=username
## desthost
## --------
## This will be the name of the destination host.
## desthost=machinename
## desttmpdir
## ---------------
## This should be defined to same directory as TMPDIR for getting the
## temporary files. The incremental backups will be copied to directory pointed
## by stageondest parameter.
## desttmpdir=/tmp
## dumpdir
## ---------
## The directory in which the dump file be restored to. If this is not specified
## then TMPDIR is used.
## dumpdir=/tmp
## allowstandby
## ---------
## This will allow the script to be run from standby database.
## allowstandby=1
## END
Scripts Reference
tts_verify.sql
tts_verify.sql is a sample script to compare segment, object, and invalid object counts between the
source and target databases.
REM
REM Script to compare segment, object, and invalid object counts
REM between two databases. This script should be run on the target
REM database.
REM
REM This script requires a database link named ttslink between the
REM source and target databases.
REM
set heading off feedback off trimspool on linesize 500
spool tts_verify.out
prompt
prompt Segment count comparison across dblink
prompt
select r.owner, r.segment_type, r.remote_cnt Source_Cnt, l.local_cnt Target_Cnt
from ( select owner, segment_type, count(owner) remote_cnt
from dba_segments@ttslink
where owner not in
(select name
from system.logstdby$skip_support
where action=0) group by owner, segment_type ) r
, ( select owner, segment_type, count(owner) local_cnt
from dba_segments
where owner not in
(select name
from system.logstdby$skip_support
where action=0) group by owner, segment_type ) l
where l.owner (+) = r.owner
and l.segment_type (+) = r.segment_type
order by 1, 2;
spool off
tts_create_seq.sql
tts_create_seq.sql is a sample script that generates DDL to drop and re-create sequences. Since
sequence definitions are not carried over by the tablespace transport, run it on the source and
execute the generated tts_create_seq.sql output on the target.
set heading off feedback off trimspool on escape off
set long 1000 linesize 1000 pagesize 0
col SEQDDL format A300
spool tts_create_seq.sql
prompt /* ========================= */
prompt /* Drop and create sequences */
prompt /* ========================= */
select regexp_replace(
dbms_metadata.get_ddl('SEQUENCE',sequence_name,sequence_owner),
'^.*(CREATE SEQUENCE.*CYCLE).*$',
'DROP SEQUENCE "'||sequence_owner||'"."'||sequence_name
||'";'||chr(10)||'\1;') SEQDDL
from dba_sequences
where sequence_owner not in
(select name
from system.logstdby$skip_support
where action=0)
;
spool off
Reference
https://mikedietrichde.com/2017/10/05/oow-2017-slides-download-migrate-100-tb-databases-less-one-day/
http://docs.oracle.com/database/121/ADMIN/transport.htm#ADMIN13721
http://docs.oracle.com/database/122/ADMIN/transporting-data.htm#ADMIN13721
Doc ID 413484.1 - Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration
Doc ID 1902618.1 - Migrate Database to Exadata with DBMS_FILE_TRANSFER
Doc ID 371556.1 - How to Migrate to a Different Endian Platform Using Transportable Tablespaces With RMAN
Doc ID 1389592.1 - 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
Doc ID 1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
Doc ID 2005729.1 - 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
Doc ID 1166564.1 - Master Note for Transportable Tablespaces Common Questions and Issues