Moving Data Between Different Database Versions: See Also
Versions
Because most Data Pump operations are performed on the server side, if you are targeting any database version other than the current value of the COMPATIBLE initialization parameter, you must provide the server with specific version information. Otherwise, errors may occur. To specify version information, use the VERSION parameter.
See Also:
VERSION for information about the Data Pump Export VERSION parameter
VERSION for information about the Data Pump Import VERSION parameter
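For example, an export taken on a current database but intended for an older release can name that release explicitly through the VERSION parameter. A minimal sketch (the schema, directory object, and dump file names below are illustrative, not from this document; the directory object is assumed to exist already):

```shell
# Export the HR schema so the resulting dump file set can be
# imported into a 10.1 database.
expdp hr SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr_v101.dmp VERSION=10.1
```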
Keep the following information in mind when you are using Data Pump Export and Import to move data
between different database versions:
If you specify a database version that is older than the current database version, certain features
may be unavailable. For example, specifying VERSION=10.1 will cause an error if data
compression is also specified for the job because compression was not supported in 10.1.
On a Data Pump export, if you specify a database version that is older than the current database
version, then a dump file set is created that you can import into that older version of the database.
However, the dump file set will not contain any objects that the older database version does not
support. For example, if you export from a version 10.2 database to a version 10.1 database,
comments on indextypes will not be exported into the dump file set.
Data Pump Import can always read dump file sets created by older versions of the database.
Data Pump Import cannot read dump file sets created by a database version that is newer than the
current database version, unless those dump file sets were created with the VERSION parameter set
to the version of the target database. Therefore, the best way to perform a downgrade is to perform
your Data Pump export with the VERSION parameter set to the version of the target database.
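The downgrade path just described has two steps: run the export on the newer database with VERSION set to the older target, then run the import using the older database's own impdp. A sketch under assumed illustrative names (user, directory object, and file name are not from this document):

```shell
# On the 10.2 source: create a dump file set readable by a 10.1 database.
expdp system FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full_v101.dmp VERSION=10.1

# On the 10.1 target: import with that release's impdp.
impdp system FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full_v101.dmp
```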
When operating across a network link, Data Pump requires that the remote database version be
either the same as the local database or one version older, at the most. For example, if the local
database is version 10.2, the remote database must be either version 10.1 or 10.2. If the local
database is version 10.1, then 10.1 is the only version supported for the remote database.
Data Pump Export and Import operate on a group of files called a dump file set rather than on a
single sequential dump file.
Data Pump Export and Import access files on the server rather than on the client. This results in
improved performance. It also means that directory objects are required when you specify file
locations.
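Because file locations must be given as directory objects rather than client-side paths, a privileged user typically creates the directory object on the server and grants access before any expdp or impdp run. A minimal sketch (the path, directory name, and grantee are illustrative assumptions):

```shell
# Create the directory object once, as a privileged user on the database server.
sqlplus / as sysdba <<EOF
CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dumpfiles';
GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
EOF
```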
The Data Pump Export and Import modes operate symmetrically, whereas original export and
import did not always exhibit this behavior.
For example, suppose you perform an export with FULL=Y, followed by an import
using SCHEMAS=HR. This will produce the same results as if you performed an export
with SCHEMAS=HR, followed by an import with FULL=Y.
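The symmetric behavior above can be sketched as two equivalent command sequences (user, directory object, and file names are illustrative):

```shell
# Sequence 1: full export, then import only the HR schema.
expdp system FULL=y DIRECTORY=dpump_dir1 DUMPFILE=full.dmp
impdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=full.dmp

# Sequence 2: export only the HR schema, then full import of that dump file set.
expdp system SCHEMAS=hr DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
impdp system FULL=y DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp
```

Both sequences load the same HR objects into the target database.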
Data Pump Export and Import use parallel execution rather than a single stream of execution, for
improved performance. This means that the order of data within dump file sets and the
information in the log files is more variable.
Data Pump Export and Import represent metadata in the dump file set as XML documents rather
than as DDL commands. This provides improved flexibility for transforming the metadata at
import time.
Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in
original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor
supported by Data Pump Export and Import.
At import time there is no option to perform interim commits during the restoration of a partition.
This was provided by the COMMIT parameter in original Import.
There is no option to merge extents when you re-create tables. In original Import, this was
provided by the COMPRESS parameter. Instead, extents are reallocated according to storage
parameters for the target table.
Sequential media, such as tapes and pipes, are not supported.
The Data Pump method for moving data between different database versions is different than the
method used by original Export/Import. With original Export, you had to run an older version of
Export (exp) to produce a dump file that was compatible with an older database version. With
Data Pump, you can use the current Export (expdp) version and simply use
the VERSION parameter to specify the target database version. See Moving Data Between
Different Database Versions.
When you are importing data into an existing table using either TABLE_EXISTS_ACTION=APPEND
or TABLE_EXISTS_ACTION=TRUNCATE, if any row violates an active constraint, the load is
discontinued and no data is loaded. This is different from original Import, which logs any rows
that are in violation and continues with the load.
Data Pump Export and Import consume more undo tablespace than original Export and Import.
This is due to additional metadata queries during export and some relatively long-running master
table queries during import. As a result, for databases with large amounts of metadata, you may
receive an ORA-01555: snapshot too old error. To avoid this, consider adding
additional undo tablespace or increasing the value of the UNDO_RETENTION initialization
parameter for the database.
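The suggested remedy can be applied, for example, by raising the undo retention target (the value shown is an illustrative assumption; an appropriately sized undo tablespace is also assumed):

```shell
sqlplus / as sysdba <<EOF
-- Keep undo for up to 2 hours (7200 seconds) to reduce ORA-01555 risk
-- during long-running Data Pump metadata queries.
ALTER SYSTEM SET UNDO_RETENTION = 7200;
EOF
```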
If a table has compression enabled, Data Pump Import attempts to compress the data being loaded.
By contrast, the original Import utility loaded data in such a way that even if a table had
compression enabled, the data was not compressed upon import.
Data Pump supports character set conversion for both direct path and external tables. Most of the
restrictions that exist for character set conversions in the original Import utility do not apply to
Data Pump. The one case in which character set conversions are not supported under the Data
Pump is when using transportable tablespaces. [This info was added per mail from Simon
Law/Bill Fisher on 2/23/05]