
export

The export utility (exp) enables DBAs to extract information from the database.
Exports play a vital role in a backup and recovery strategy, and are a convenient way to duplicate production data in test environments.
They are used for many reasons, including:
· Logical database backups.
· Copying data from one database to another (used in combination with import).
· Reorganization of segments (utilizing compress=y).
The export utility extracts objects (along with their dependent objects) and their data from the database and writes the data to an export file. This file is in binary format and can reside either on disk or tape.
Exporting to disk is much faster, but may not be an option for large databases where free disk space is limited. Export files are only used by the import utility to load the data into a different database or back into the same one. The version of the import utility cannot be older than the export version used.
Export Modes
There are four data export modes:
· Full exports the contents of the entire database. This can be time consuming and requires substantial disk space, depending on the size of the database.
· User exports all objects within a particular schema. If user SCOTT performs a user mode export, all of the objects belonging to SCOTT will be exported.
· Table exports the DDL and data (optional) of any listed tables, table partitions, or subpartitions. If the table specified is partitioned, then all of its partitions will be exported. Table names can be specified using wildcard characters. In the example below, all of user DAVE's tables will be exported, along with any table owned by SCOTT that contains a D, and any table owned by AARON that contains an S in the table name.
TABLES=(scott.%D%,dave.%,aaron.%S%)
· Tablespace exports all the tables residing in the specified tablespace(s). All indexes on the corresponding tables will also be exported, regardless of their tablespace (which should always be different from the table's). If a table has one partition stored in the specified tablespace, the entire table (all partitions) will be exported. This option requires the exp_full_database role.
Each export mode addresses a different requirement and should be used appropriately. For instance, full exports should not be performed when only a few schemas contain the data that needs to be exported.
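As a sketch of how each mode might be invoked from the command line (the connect strings and file names below are illustrative, not from the original text):
exp system/manager full=y file=full_db.dmp log=full_db.log       # Full mode
exp scott/tiger owner=scott file=scott_schema.dmp                # User mode
exp scott/tiger tables=(emp,dept) file=scott_tables.dmp          # Table mode
exp system/manager tablespaces=(users) file=users_ts.dmp         # Tablespace mode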
Export Options
In addition to export modes, the export utility enables the user to specify runtime parameters interactively, on the command line, or in a parameter file (PARFILE). These options include:
· buffer Specifies the size, in bytes, of the buffer used to fetch the rows. If 0 is specified, only one row is fetched at a time. This parameter only applies to conventional (non-direct) exports.
· compress [Y] When Y, export will mark the table to be loaded as one extent for the import utility. If N, the current storage options defined for the table will be used. Although this option is only implemented on import, it can only be specified on export.
· consistent [N] Specifies the set transaction read only statement for export, ensuring data consistency. This option should be set to Y if activity is anticipated while the exp command is executing. If Y is set, confirm that there is sufficient undo segment space to avoid the export session getting the ORA-1555 Snapshot too old error.
· constraints [Y] Specifies whether table constraints should be exported with table data.
· direct [N] Determines whether to use direct or conventional path export. Direct path exports bypass the SQL evaluation buffer, thereby enhancing performance.
· feedback [0] Determines how often feedback is displayed. A value of feedback=n displays a dot for every n rows processed. The display applies to all tables exported, not individual ones. In the output below, each of the 20 dots represents 50,000 rows, totaling 1 million rows for the table.
About to export specified tables via Direct Path ...
. . exporting table TABLE_WITH_ONE_MILLION_ROWS
....................
1000000 rows exported
· file The name of the export file. Multiple files can be listed, separated by commas. When export fills the filesize, it will begin writing to the next file in the list.
· filesize The maximum file size, specified in bytes.
· flashback_scn The system change number (SCN) that export uses to enable flashback.
· flashback_time Export will discover the SCN that is closest to the specified time. This SCN is used to enable flashback.
· full The entire database is exported.
· grants [Y] Specifies whether object grants are exported.
· help Shows command line options for export.
· indexes [Y] Determines whether index definitions are exported. The index data is never exported.
· log The filename used by export to write messages. The same messages that appear on the screen are written to this file:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Direct Path ...
. . exporting table TABLE_WITH_ONE_MILLION_ROWS 1000000 rows exported
Export terminated successfully without warnings.
· object_consistent [N] Specifies whether export uses SET TRANSACTION READ ONLY to ensure that the data being exported is consistent.
· owner Only the listed owner's objects will be exported.
· parfile The name of the file that contains the export parameter options. This file can be used instead of specifying all the options on the command line for each export.
· query Allows a subset of rows from a table to be exported, based on a SQL where clause (discussed later in this chapter).
· recordlength Specifies the length of the file record in bytes. This parameter affects the amount of data that accumulates before it is written to disk. If not specified, this parameter defaults to the value specific to that platform. The highest value is 64KB.
· resumable [N] Enables and disables resumable space allocation. When Y, the parameters resumable_name and resumable_timeout are utilized.
· resumable_name User-defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable=Y.
· resumable_timeout [7200 seconds] The time period in which an export error must be fixed. This parameter is ignored unless resumable=Y.
· rows [Y] Indicates whether or not the table rows should be exported.
· statistics [ESTIMATE] Indicates the level of statistics generated when the data is imported. Other options include COMPUTE and NONE.
· tables Indicates that the type of export is table-mode and lists the tables to be exported. Table partitions and subpartitions can also be specified.
· tablespaces Indicates that the type of export is tablespace-mode, in which all tables assigned to the listed tablespaces will be exported. This option requires the EXP_FULL_DATABASE role.
· transport_tablespace [N] Enables the export of metadata needed for transportable tablespaces.
· triggers [Y] Indicates whether triggers defined on export tables will also be exported.
· tts_full_check [FALSE] When TRUE, export will verify that a transportable tablespace set being created is a consistent set of objects.
· userid Specifies the userid/password of the user performing the export.
· volsize Specifies the maximum number of bytes in an export file on each tape volume.
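Several of these options are commonly combined on one command line. As a hedged sketch, assuming user scott/tiger and illustrative file names:
exp scott/tiger tables=(emp) file=(emp_1.dmp,emp_2.dmp) filesize=100m \
    consistent=y feedback=10000 log=emp_exp.log
When emp_1.dmp reaches the specified filesize, export continues writing to emp_2.dmp, and a dot is displayed for every 10,000 rows processed.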
The functionality of the export utility has been significantly enhanced in recent versions of Oracle. To check which options are available in any release, use:
exp help=y
Parameter Files
Parameter files are a convenient way to consolidate in one file all the options that will be used when executing the utility. The benefit of the parameter file is that it allows the options to be specified once and reused by all utility jobs. Three of the utilities discussed in this chapter (export, import, SQL*Loader) support parameter files.
Below is an example of an export parameter file, export_options.par:
compress=n
direct=n
buffer=1000
tables=table_with_one_million_rows
userid=scott/tiger
Using this parameter file, the export command line is executed by the following:
exp parfile=export_options.par
Specifying options in a file makes it much easier to implement the options with any utility that accepts a parameter file. In addition, these options are not revealed on the command line, and therefore not exposed to commands (such as the UNIX ps command) that would reveal the username and password had they been specified on the command line.
Exporting Data Subsets
The export utility allows the DBA to limit the number of rows exported, based on a SQL where clause. This is very useful when only a portion of the table needs to be exported. For example, to export only those rows from the table whose order_number > 873737:
QUERY=\"WHERE order_number \> 873737\"
The specification of the query text must compensate for special characters that are specific to the operating system. The query text above places the entire string in double quotes and places an escape character (\) in front of special characters. The query specification above would result in the following SQL being executed:
select * from orders where order_number > 873737;
Subsetting is just one method by which export performance can be improved. The next section discusses other ways to optimize export performance.
Maximizing Export Performance
Many DBAs are faced with the challenge of speeding up utility functions such as export. Typically, an organization has only a small window for maintenance, and utility jobs must complete within that timeframe. Fortunately, there are a few things a DBA can do to expedite exports. These include:
· Use Direct Path Direct path exports (DIRECT=Y) allow the export utility to skip the SQL evaluation buffer, whereas the conventional path export executes SQL SELECT statements. With direct path, the data is read from disk into the buffer cache, returning rows directly to the export client. This can offer substantial performance gains, depending on the actual data. When using the direct path, the recordlength parameter should also be used to optimize performance.
· Use Subsets By subsetting the data using the QUERY option, the export process is only executed against the data that needs to be exported. If tables have old rows that are never updated, the old data should be exported once, and from that point only the newer data subsets should be exported. Subsets cannot be specified with direct path exports, since SQL is necessary to create the subset.
· Use a Larger Buffer For conventional path exports, a larger buffer will increase the number of rows that are processed between each physical write to the export file. Fewer physical writes equals greater performance. The following formula can be used to determine a proper buffer size:
buffer size = rows in array * max row size
· Separate Tables Separate those tables that require consistent=y from those that don't, in order to expedite the export. This way, the performance penalty will only be incurred for those tables that actually require it.
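The buffer formula above can be sketched as a small shell calculation; the row count and row size below are assumed values for illustration only:

```shell
# Sketch of: buffer size = rows in array * max row size
rows_in_array=5000       # assumed number of rows fetched per array operation
max_row_size=200         # assumed maximum row size, in bytes
buffer=$((rows_in_array * max_row_size))
echo "buffer=$buffer"    # value to pass on the exp command line
```

The resulting value (1,000,000 bytes here) would then be supplied as buffer=1000000 on a conventional path export.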
For the table with one million rows, the following benchmark tests were performed using the different export options.

Export Type                     Elapsed Time (seconds)   Time Reduction
conventional                    55                       -
buffer=2800000                  50                       9%
direct=y                        55                       0%
direct=y recordlength=50000     41                       25%

Table 4.1 - Benchmark tests performed using the different export options.
The table above reveals that a small improvement in performance was obtained by increasing the buffer size on a conventional export. Using direct=y offered no performance boost over conventional until it was accompanied by recordlength, which reduced the elapsed time by 25 percent.
Once data has been successfully copied to an export file, it can then be used by the import utility, as described in the next section.

Copyright 2003, Rampant Tech Press, Dave Moore - All Rights Reserved. All product names and trademarks are property of their respective owners.

import
The import utility (imp) reads files generated by the export utility and loads the objects and data into the database. Tables are created, their data is loaded, and the indexes are built. Following these objects, triggers are imported, constraints are enabled, and bitmap indexes are created.
This sequence is appropriate for a number of reasons. First, rows are inserted before triggers are enabled, to prevent the firing of the triggers for each new row. Constraints are loaded last, due to referential integrity relationships and dependencies among tables. If each EMPLOYEE row required a valid DEPT row and no rows were in the DEPT table, errors would occur. If both of these tables already existed, the constraints should be disabled during the import and enabled after import, for the same reason.
Import Options
The import modes are the same as the export modes (Full, User, Table, Tablespace) previously described. Import supports the following options:
· buffer Specifies the size, in bytes, of the buffer used to insert the data.
· commit [N] Specifies whether import should commit after each array insert. By default, import commits after each table is loaded; however, this can be quite taxing on the rollback segments or undo space for extremely large tables.
· compile [Y] Tells import to compile procedural objects as they are imported.
· constraints [Y] Specifies whether table constraints should also be imported with table data.
· datafiles Used only with transport_tablespace, this parameter lists datafiles to be transported to the database.
· destroy [N] Determines if existing datafiles should be reused. A value of Y will cause import to include the reuse option in the datafile clause of the create tablespace statement.
· feedback [0] Determines how often feedback is displayed. A value of feedback=10 displays a dot for every 10 rows processed. This option applies to the total tables imported, not individual ones. Another way to measure the number of rows that have been processed is to execute the following query while the import is active:
select rows_processed
from v$sqlarea
where sql_text like 'INSERT %INTO "%'
and command_type = 2
and open_versions > 0;
· file The name of the export file to import. Multiple files can be listed, separated by commas. When import reaches the filesize, it will begin reading from the next file in the list.
· filesize The maximum file size, specified in bytes.
· fromuser A comma-delimited list of schemas from which to import. If the export file contains many users or even the entire database, the fromuser option enables only a subset of those objects (and data) to be imported.
· full The entire export file is imported.
· grants [Y] Specifies whether to import object grants.
· help Shows command line options for import.
· ignore [N] Specifies how object creation errors should be handled. If a table already exists and ignore=y, then the rows are imported to the existing tables; otherwise, errors will be reported and no rows are loaded into the table.
· indexes [Y] Determines whether indexes are imported.
· indexfile Specifies a filename to which index creation statements are written. This file can be used to build the indexes after the import has completed.
· log The filename used by import to write messages.
· parfile The name of the file that contains the import parameter options. This file can be used instead of specifying all the options on the command line.
· recordlength Specifies the length of the file record in bytes. This parameter is only used when transferring export files between operating systems that use different default values.
· resumable [N] Enables and disables resumable space allocation. When Y, the parameters resumable_name and resumable_timeout are utilized.
· resumable_name User-defined string that helps identify a resumable statement that has been suspended. This parameter is ignored unless resumable=Y.
· resumable_timeout [7200 seconds] The time period in which an error must be fixed. This parameter is ignored unless resumable=Y.
· rows [Y] Indicates whether or not the table rows should be imported.
· show [N] When show=y, the DDL within the export file is displayed.
· skip_unusable_indexes [N] Determines whether import skips the building of indexes that are in an unusable state.
· statistics [ALWAYS] Determines the level of optimizer statistics that are generated on import. The options include ALWAYS, NONE, SAFE and RECALCULATE. ALWAYS imports statistics regardless of their validity. NONE does not import or recalculate any optimizer statistics. SAFE will import the statistics if they appear to be valid; otherwise, it will recompute them after import. RECALCULATE always generates new statistics after import.
· streams_configuration [Y] Determines whether or not any streams metadata present in the export file will be imported.
· streams_instantiation [N] Specifies whether or not to import streams instantiation metadata present in the export file.
· tables Indicates that the type of import is table-mode and lists the tables to be imported. Table partitions and subpartitions can also be specified.
· tablespaces When transport_tablespace=y, this parameter provides a list of tablespaces.
· toid_novalidate Specifies whether or not type validation should occur on import. Import compares the type's unique ID (TOID) with the ID stored in the export file. No table rows will be imported if the TOIDs do not match. This parameter can be used to specify types to exclude from TOID comparison.
· touser Specifies a list of user schemas that will be targets for imports.
· transport_tablespace [N] When Y, transportable tablespace metadata will be imported from the export file.
· tts_owners When transport_tablespace=Y, this parameter lists the users who own the data in the transportable tablespace set.
· userid Specifies the userid/password of the user performing the import.
· volsize Specifies the maximum number of bytes in an export file on each tape volume.
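As a sketch, a user-mode import that pulls one schema out of a full export file might combine several of these options (credentials, schema names, and file names below are illustrative):
imp system/manager file=full_db.dmp fromuser=scott touser=scott_copy \
    ignore=y log=scott_imp.log
Here the objects exported from SCOTT are loaded into the SCOTT_COPY schema, and ignore=y allows rows to be loaded into tables that already exist.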
To check which options are available in any release of import use:
imp help=y
Maximizing Import Performance
The options used when the data was exported have no influence on how the data is imported. For example, it is irrelevant to the import process whether a direct path export was performed or not, since the result is a plain export file, be it generated by direct or conventional means.
Unfortunately, there is no direct option available for imports (only for export and SQL*Loader). The import process has more tuning limitations than other utilities. The DBA should consider the following when trying to optimize import performance:
· Set commit=n For tables that can afford not to commit until the end of the load, this option provides a significant performance increase. Larger tables may not be suitable for this option, due to the required rollback/undo space.
· Set indexes=n Index creation can be postponed until after import completes, by specifying indexes=n. If indexes for the target table already exist at the time of execution, import performs index maintenance when data is inserted into the table. Setting indexes=n eliminates this maintenance overhead.
· Use the buffer parameter By using a larger buffer setting, import can do more work before disk access is performed.
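Combining the suggestions above, a tuned import of a single large table might look like the following sketch (file and table names are illustrative):
imp scott/tiger file=big_table.dmp tables=(big_table) \
    commit=n buffer=64000 indexes=n log=big_table_imp.log
The indexes can then be built afterward, once the rows are loaded.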
When tuning import, emphasize reducing the amount of work that import needs to do. This can be accomplished by committing less frequently, not importing indexes, not generating statistics, or by using the buffer parameter to reduce disk access.
For the table with one million rows, the following benchmark tests were performed using the different import options. The table was truncated after each import.

Import Option            Elapsed Time (seconds)   Time Reduction
commit=y                 120                      -
commit=y buffer=64000    100                      17%
commit=n buffer=30720    72                       40%
commit=n buffer=64000    67                       44%

Table 4.2 - Benchmark tests performed using the different import options.
The table above shows that increasing the size of the buffer has a positive performance impact. However, the most dramatic increase in performance was obtained when setting commit=n. The increase in the size of the buffer resulted in only a marginal improvement when commit=n.
Before devising a strategy for using export/import to copy data from one database to another, the SQL*Plus copy command should be considered.

