APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.1.0.2 to 12.2.0.1 [Release 12.1
to 12.2]
Oracle Database - Enterprise Edition - Version 9.2.0.3 to 9.2.0.3 [Release 9.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version
N/A and later
Information in this document applies to any platform.
GOAL
Starting with Oracle Database 10g, you can transport tablespaces across platforms.
This note provides a step-by-step guide on how to do it, both with ASM datafiles
and with OS filesystem datafiles.
You can also convert the datafiles on the source platform and, once converted,
transfer them to the destination platform.
http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-
platformmigrationtts-129269.pdf
From 11.2.0.4 and 12c onwards, if converting to Linux x86-64, consider following
this document:
SOLUTION
Supported platforms
You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are
supported and to determine each platform's endian format (byte ordering).
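For example, to list all supported platforms and their endianness (PLATFORM_ID,
PLATFORM_NAME and ENDIAN_FORMAT are the columns of V$TRANSPORTABLE_PLATFORM):
SQL> SELECT platform_id, platform_name, endian_format
     FROM v$transportable_platform
     ORDER BY platform_id;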
If the source platform and the target platform are of different endianness, then an
additional step must be done on either the source or target platform to convert the
tablespace being transported to the target format. If they are of the same
endianness, then no conversion is necessary and tablespaces can be transported as
if they were on the same platform.
Check that the tablespace set is self-contained; any violations must be resolved
before the tablespaces can be transported.
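One way to run this check, using the DBMS_TTS package and the tablespaces TBS1 and
TBS2 used throughout this note (the second argument set to TRUE also checks
constraints):
SQL> EXECUTE SYS.DBMS_TTS.TRANSPORT_SET_CHECK('TBS1,TBS2', TRUE);
SQL> SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;
If no rows are returned, the tablespace set is self-contained.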
The tablespaces need to be in READ ONLY mode in order to successfully run a
transport tablespace export:
SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;
Export the metadata.
Using the original export utility:
exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log
transport_tablespace=y tablespaces=TBS1,TBS2
Using Datapump export:
First create the directory object to be used for Datapump, like in:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir' ;
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
Then initiate Datapump Export:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2
If the tablespace set being transported is not self-contained then the export will
fail.
Use V$TRANSPORTABLE_PLATFORM to determine the endianness of each platform. You can
execute the following query on each platform instance:
SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
If the endian formats are different, then a conversion is necessary before
transporting the tablespace set:
RMAN> convert tablespace TBS1 to platform="Linux IA (32-bit)" FORMAT '/tmp/%U';
Then copy the datafiles as well as the export dump file to the target environment.
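Any binary-safe OS copy can be used for this. A minimal sketch with scp, where the
target host name and file names are placeholders (the converted datafiles carry
whatever names the FORMAT '/tmp/%U' clause generated, and the Datapump dump file
sits in the dpump_dir location '/tmp/subdir'):
$ scp /tmp/<converted_datafiles> oracle@targethost:/tmp/
$ scp /tmp/subdir/expdat.dmp oracle@targethost:/tmp/subdir/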
Import the transportable tablespace.
Using the original import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log
transport_tablespace=y datafiles='/tmp/....','/tmp/...'
Using Datapump:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
Followed by:
impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target)
REMAP_SCHEMA=(source_sch2:target_schema_sch2)
You can use REMAP_SCHEMA if you want to change the ownership of the transported
database objects.
Put the tablespaces in read/write mode:
SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;
Using DBMS_FILE_TRANSFER
You can also use DBMS_FILE_TRANSFER to copy datafiles to another host.
From 12c onwards, and in 11.2.0.4, DBMS_FILE_TRANSFER does the conversion by
default: the destination database converts each block as it receives a file from a
platform with different endianness. The datafiles can therefore be imported after
they are moved to the destination database as part of a transportable operation,
without an RMAN conversion.
In releases lower than 11.2.0.4 you need to follow the same steps specified above
for ASM files, but if the endian formats are different you must run the RMAN
convert AFTER transferring the files. The files cannot be copied directly between
two ASM instances on different platforms.
The same example, but here showing the destination being an +ASM diskgroup:
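A minimal sketch of such a convert on the destination host, assuming the
transferred file landed as '/tmp/tbs1.dbf', the source platform was
'Solaris[tm] OE (64-bit)' and the destination diskgroup is +DGROUPA (all three are
placeholders for your own values):
RMAN> CONVERT DATAFILE '/tmp/tbs1.dbf'
      FROM PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '+DGROUPA';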
Index Organized Tables (IOT) can become corrupt when using Transportable Tablespace
(TTS) from Solaris, Linux or AIX to HP/UX.
This is a restriction caused by BUG:9816640.
Currently there is no patch for this issue, the Index Organized Tables (IOT) need
to be recreated after the TTS.
See Document 1334152.1 Corrupt IOT when using Transportable Tablespace to HP from
different OS.
For tables with dropped columns, Bug 13001379 (Datapump transport_tablespaces
produces wrong dictionary metadata for some tables) can occur. See Document
1440203.1 for details on this alert.
Known issue Using DBMS_FILE_TRANSFER
=> Unpublished Bug 13636964 - ORA-19563 from RMAN convert on datafile copy
transferred with DBMS_FILE_TRANSFER (Doc ID 13636964.8)
Versions confirmed as being affected: 11.2.0.3
This issue is fixed in: 12.1.0.1 (Base Release), 11.2.0.4 (Future Patch Set)
Rediscovery Notes: if RMAN convert fails on a file transferred using
DBMS_FILE_TRANSFER, then it may be due to this bug.
Workaround: transfer the file using OS facilities.
=> Dbms_file_transfer Corrupts Dbf File When Copying between endians (Doc ID
1262965.1)
The following steps apply when the datafiles are stored in ASM. Check, as above,
that the tablespace set is self-contained; any violations must be resolved before
the tablespaces can be transported.
The tablespaces need to be in READ ONLY mode in order to successfully run a
transport tablespace export:
SQL> ALTER TABLESPACE TBS1 READ ONLY;
SQL> ALTER TABLESPACE TBS2 READ ONLY;
Export the metadata.
Using the original export utility:
exp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_exp.log
transport_tablespace=y tablespaces=TBS1,TBS2
Using Datapump Export:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
followed by:
expdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir TRANSPORT_TABLESPACES=TBS1,TBS2
If the tablespace set being transported is not self-contained, then the export will
fail.
Use V$TRANSPORTABLE_PLATFORM to find the exact platform name of the target
database. You can execute the following query on the target platform instance:
SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;
Generate an OS file from the ASM file, in target platform format:
RMAN> CONVERT TABLESPACE TBS1
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
RMAN> CONVERT TABLESPACE TBS2
TO PLATFORM 'HP-UX (64-bit)' FORMAT '/tmp/%U';
Copy the generated files to the target server if it is different from the source.
Import the transportable tablespace
Using the original import utility:
imp userid=\'sys/sys as sysdba\' file=tbs_exp.dmp log=tba_imp.log
transport_tablespace=y datafiles='/tmp/....','/tmp/...'
Using Datapump Import:
CREATE OR REPLACE DIRECTORY dpump_dir AS '/tmp/subdir';
GRANT READ,WRITE ON DIRECTORY dpump_dir TO system;
followed by:
impdp system/password DUMPFILE=expdat.dmp DIRECTORY=dpump_dir
TRANSPORT_DATAFILES='/tmp/....','/tmp/...' REMAP_SCHEMA=(source:target)
REMAP_SCHEMA=(source_sch2:target_schema_sch2)
You can use REMAP_SCHEMA if you want to change the ownership of the transported
database objects.
Put the tablespaces in read/write mode:
SQL> ALTER TABLESPACE TBS1 READ WRITE;
SQL> ALTER TABLESPACE TBS2 READ WRITE;
If you want to transport the datafiles from the ASM area to a filesystem, you are
finished after the above steps. But if you want to transport tablespaces between
two ASM areas, you must continue.
Copy the datafile '/tmp/....dbf' into the ASM area using rman:
rman nocatalog target /
RMAN> backup as copy datafile '/tmp/....dbf' format '+DGROUPA';
Note down the name of the copy created in the +DGROUPA diskgroup, e.g.
'+DGROUPA/s101/datafile/tts.270.5'.
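Before the copy can be recovered and brought online, the database must be pointed
at the ASM copy instead of the filesystem datafile. A minimal sketch, assuming the
filesystem datafile is '/tmp/tbs1.dbf' (a placeholder for the real name) and using
the copy name noted above:
SQL> ALTER DATABASE DATAFILE '/tmp/tbs1.dbf' OFFLINE;
RMAN> SWITCH DATAFILE '/tmp/tbs1.dbf' TO COPY;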
To put the datafile online again, we first need to recover it:
SQL> recover datafile '+DGROUPA/SID/datafile/tts.270.5';
SQL> alter database datafile '+DGROUPA/SID/datafile/tts.270.5' online;
Check if datafile is indeed part of the ASM area and online:
SQL> select name, status from v$datafile;
You can also use DBMS_FILE_TRANSFER to copy datafiles from one ASM disk group to
another, even on another host. Starting with 10g Release 2 you can also use
DBMS_FILE_TRANSFER to copy datafiles from ASM to filesystem and from filesystem
to ASM.
The PUT_FILE procedure reads a local file (on a filesystem or in ASM) and contacts a remote database to
create a copy of the file in the remote file system. The file that is copied is the
source file, and the new file that results from the copy is the destination file.
The destination file is not closed until the procedure completes successfully.
Syntax:
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object IN VARCHAR2,
source_file_name IN VARCHAR2,
destination_directory_object IN VARCHAR2,
destination_file_name IN VARCHAR2,
destination_database IN VARCHAR2);
Where:
source_directory_object: The directory object from which the file is copied at the
local source site. This directory object must exist at the source site.
source_file_name: The name of the file that is copied from the local file system.
This file must exist in the local file system in the directory associated with the
source directory object.
destination_directory_object: The directory object into which the file is placed at
the destination site. This directory object must exist in the remote file system.
destination_file_name: The name of the file placed in the remote file system. A
file with the same name must not exist in the destination directory in the remote
file system.
destination_database: The name of a database link to the remote database to which
the file is copied.
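Such a database link can be created on the source database. A minimal sketch,
assuming the link name dbs2 that is used in the items below (user, password and
target_connect are placeholders):
SQL> CREATE DATABASE LINK dbs2
     CONNECT TO user IDENTIFIED BY password
     USING 'target_connect';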
where target_connect is the connect string for the target database and user is the
user that we are going to use to transfer the datafiles.
Connect to source instance. The following items are used:
dbs1: Connect string to source database
dbs2: dblink to target database
a1.dat: Filename at source database
a4.dat: Filename at target database
CONNECT user/password@dbs1
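A minimal sketch of the transfer itself, using the PUT_FILE syntax shown above; the
directory object names SOURCE_DIR and DEST_DIR are assumptions and must exist on
the source and target database respectively, mapped to the locations of a1.dat and
a4.dat:
BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'SOURCE_DIR',
    source_file_name             => 'a1.dat',
    destination_directory_object => 'DEST_DIR',
    destination_file_name        => 'a4.dat',
    destination_database         => 'dbs2');
END;
/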
REFERENCES
BUG:9816640 - ORA-600 [6200] ORA-600 [KDDUMMY_BLKCHK] IOT CORRUPTION CODE 6401
AFTER TTS
Document Details
Type: HOWTO
Status: PUBLISHED
Last Major Update: Sep 29, 2020
Last Update: Jun 29, 2021
Language: English
Document References
Bug 13636964 - ORA-19563 from RMAN convert on datafile copy transferred with
DBMS_FILE_TRANSFER [13636964.8]
Keywords
CHANGE; CONVERSION; CONVERT; CROSS PLATFORM; DATAPUMP; ENDIAN; EXPDP; IMPDP;
KOREAN; MIGRATION; OPERATING SYSTEM; RMAN; TRANSPORTABLE; TRANSPORTATION;
TRANSPORT_TABLESPACE; TTS; V$TRANSPORTABLE_PLATFORM