7.2 Netezza Data Loading Guide
IBM Netezza
Release 7.2
Note
Before using this information and the product it supports, read the information in “Notices” on page D-1
Electronic emission notices
When you attach a monitor to the equipment, you must use the designated
monitor cable and any interference suppression devices that are supplied with the
monitor.
This equipment was tested and found to comply with the limits for a Class A
digital device, according to Part 15 of the FCC Rules. These limits are designed to
provide reasonable protection against harmful interference when the equipment is
operated in a commercial environment. This equipment generates, uses, and can
radiate radio frequency energy and, if not installed and used in accordance with
the instruction manual, might cause harmful interference to radio communications.
Operation of this equipment in a residential area is likely to cause harmful
interference, in which case the user is required to correct the interference at their
own expense.
Properly shielded and grounded cables and connectors must be used to meet FCC
emission limits. IBM® is not responsible for any radio or television interference
caused by using other than recommended cables and connectors or by
unauthorized changes or modifications to this equipment. Unauthorized changes
or modifications might void the authority of the user to operate the equipment.
This device complies with Part 15 of the FCC Rules. Operation is subject to the
following two conditions: (1) this device may not cause harmful interference, and
(2) this device must accept any interference received, including interference that
might cause undesired operation.
This device is authorized to bear the CE mark of EC conformity in accordance
with the German EMVG.
The manufacturer responsible for compliance with the EMC regulations is:
IBM Deutschland
Technical Regulations, Department M456
IBM-Allee 1, 71137 Ehningen, Germany
Telephone: +49 7032 15-2937
Email: [email protected]
This product is a Class A product based on the standard of the Voluntary Control
Council for Interference (VCCI). If this equipment is used in a domestic
environment, radio interference might occur, in which case the user might be
required to take corrective actions.
This is Class A electromagnetic compatibility equipment for business use. Sellers
and users should be aware that this equipment is intended for use in areas other
than the home.
Install the NPS® system in a restricted-access location. Ensure that only those
people trained to operate or service the equipment have physical access to it.
Install each AC power outlet near the NPS rack that plugs into it, and keep it
freely accessible.
High leakage current. Earth connection essential before connecting supply.
Homologation Statement
This product may not be certified in your country for connection by any means
whatsoever to interfaces of public telecommunications networks. Further
certification may be required by law prior to making any such connection. Contact
an IBM representative or reseller for any questions.
These topics often reference SQL commands that are used for tasks such as
creating external tables, inserting data, and running selects for reporting. IBM
Netezza SQL is the Netezza Structured Query Language (SQL), which runs on the
Netezza data warehouse appliance. Throughout this publication, the term SQL
refers to the SQL implementation by Netezza.
This section provides general information about the data loading methods that are
available for the IBM Netezza appliance. Data loading can consume a significant
share of system resources, which can affect system performance. Schedule loads
during times when the system is less busy to avoid impacts to user activity and
scheduled reports.
For the text-delimited format, and for unloading data, this option is available only
at the table level.
For the fixed-length format, you can specify this option at the column level,
making it possible to have a mix of comma and decimal separators.
The option is available for the following data types, for both text-delimited and
fixed-length formats:
v Numeric
v Float
v Double
v Time
v Timetz
v Timestamp
How the option applies to each data type is explained in the section that
describes that data type.
Related concepts:
Appendix A, “Examples and grammar,” on page A-1
In the IBM Netezza environment, there are the following types of tables:
System tables
Stored on the host
User tables
Stored on the disks in the storage arrays
External tables
Stored as flat files on the host or client systems
Related concepts:
Chapter 3, “External table load options,” on page 3-1
Appendix A, “Examples and grammar,” on page A-1
After you create the external table definition, you can use INSERT INTO
statements to load data from the external file into a database table, or SELECT
FROM statements to query the external table.
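For example, the following sketch (table, column, and file names are hypothetical)
creates an external table definition, loads from it, and queries it directly:
CREATE EXTERNAL TABLE ext_sales (sale_id integer, amount numeric(10,2))
USING (DATAOBJECT ('/tmp/sales.dat') DELIMITER '|');
INSERT INTO sales SELECT * FROM ext_sales;
SELECT count(*) FROM ext_sales;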
Privileges required
To create an external table, you must have the CREATE EXTERNAL TABLE
administration privilege and List privilege on the database where you are defining
the table. If the schema where the table is defined is not the default schema, you
must have List privilege on the schema as well.
The database user who issues the CREATE EXTERNAL TABLE command owns the
external table.
When you create an external table, you must specify the location where the
external table data object is stored. The nz operating system user must have
permission to read from the data object location to support SELECT operations
from the table, and to write to the location if you use commands such as INSERT
to add rows to the external table.
Log files
By default, loading errors are written to the following log files:
v nzbad: <tablename>.<schema>.<dbname>.nzbad
v nzlog: <tablename>.<schema>.<dbname>.nzlog
You can override the default by specifying the file for errors by using the following
options with a file name:
v bf <filename> for nzbad
v lf <filename> for nzlog
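For example, a hypothetical nzload invocation that overrides both defaults might
look like the following (database, table, and file names are illustrative):
nzload -db dev -t sales -df /tmp/sales.dat -bf /tmp/sales.nzbad -lf /tmp/sales.nzlog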
To load data from an external data file into the Netezza appliance, reference the
external table in either of the following clauses:
v The FROM clause of a SELECT SQL statement, like any normal table.
v The WHERE clause of an UPDATE or DELETE SQL statement.
To unload data to an external data file, use the external table as the target
table in any of the following SQL statements:
v INSERT
v SELECT INTO
v CREATE TABLE AS SELECT
References to columns of the external table can be complex SQL expressions,
which transform the external data during a load or unload.
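For example (table names hypothetical), the first statement below loads rows
while transforming a column, and the second unloads rows to the external file:
INSERT INTO sales SELECT sale_id, amount * 1.1 FROM ext_sales;
INSERT INTO ext_sales SELECT sale_id, amount FROM sales;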
Related concepts:
“Back up and restore” on page 2-3
Parse rows
For loads, the rows are parsed one by one from the external data file and
converted into internal records of the external table. Errors can occur during the
parsing of each row or column. For example, there can be errors in identifying the
column value itself, as in the case of a missing delimiter. Or there can be errors
during the conversion from external format to internal records of the external
table, such as alphabetic characters in an integer column in text-delimited format.
Each error is logged in detail in an nzlog file, and bad rows are logged in an nzbad
file. These files help users identify bad rows in the external data file and correct
them for reloading. Depending on the load options of the external table in use,
each bad row either is skipped or causes the entire load to be aborted. Similarly,
each bad column of a bad row can cause the rest of the row to be ignored, or, if
recovery is possible, the load can continue to parse subsequent columns of the
same row.
If there is an error in the project-expression on the external table columns, then the
entire load is aborted and the transaction rolled back. Errors of this nature are not
logged in nzbad or nzlog files, as they are outside of the scope of the external table
load mechanism. When the processing reaches the normal SQL engine, the external
table is treated as if it is a normal table.
Unlike an external table that has external rows in an ordered sequence, normal
user tables have no implicit row order other than hidden rowid columns. So there
is no way for a user who is not using rowids to identify the bad row in a SQL
engine. In this case, the IBM Netezza system returns an error that a particular
column caused an error, without identifying the bad row. It is as if the query was
selecting from a normal table and inserting into another normal table, with some
row that caused the error during insertion.
To back up table data by using an external table, create external table definitions
for each user table and then use SQL to insert into the external table. When you
restore table data, create a table definition (if it does not exist) and then use SQL to
insert into the table from an external table.
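As a sketch (table and file names are hypothetical), a backup and restore cycle
might look like this, using option names described in the external table options
sections:
-- Back up: write the table rows to the external data file.
CREATE EXTERNAL TABLE ext_backup SAMEAS employee_table
USING (DATAOBJECT ('/tmp/employee.bak') FORMAT 'internal' COMPRESS true);
INSERT INTO ext_backup SELECT * FROM employee_table;
-- Restore: insert the saved rows back into the user table.
INSERT INTO employee_table SELECT * FROM ext_backup;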
Related concepts:
“External table usage” on page 2-2
Note: The system allows and maintains PRIMARY KEY, DEFAULT, UNIQUE, and
REFERENCES. UNIQUE, PRIMARY KEY, and REFERENCES are ignored for
external tables. The system does not support constraint checks and referential
integrity. The user must ensure constraint checks and referential integrity.
Related concepts:
“Column constraint rules for empty strings” on page 2-9
Transient external tables have the same capabilities and limitations as normal
external tables. A special feature of a TET is that the table schema does not need to
be defined when the TET is used to load data into a table or when the TET is
created as the target of a SELECT statement.
Syntax
The table schema of a transient external table can be explicitly defined in a query.
When defined this way, the table schema definition is the same as is used when
defining a table schema by using CREATE TABLE.
SELECT x, y, NVL(dt, current_date) AS dt FROM EXTERNAL '/tmp/test.txt'
(x integer, y numeric(18,4), dt date) USING (DELIM ',');
The explicit schema definition feature can be used to specify fixed-length formats.
SELECT * FROM EXTERNAL '/tmp/fixed.txt' (x integer, y numeric(18,4),
dt date) USING (FORMAT 'fixed' LAYOUT (bytes 4, bytes 20, bytes 10));
The SAMEAS keyword can also be used to specify that the schema of the external
table is identical to some other table that currently exists in the database.
SELECT * FROM EXTERNAL '/tmp/test.txt' SAMEAS test_table
USING (DELIM ',');
If the transient external table schema is not explicitly defined, the schema is
determined based on the query that is executing. When a TET is used as a data
source for an INSERT statement, the external table uses the schema of the target
table.
The external table in this INSERT statement uses the schema of the target table.
The columns in the external data file must be in the same order as the target table,
and every column in the target table must also exist in the external table data file.
INSERT INTO target SELECT * FROM external '/tmp/data.txt'
USING (DELIM '|');
A transient external table can also be used to export data out of the database. In
this case, the schema of the external table is based on the query that is executing.
For example:
CREATE EXTERNAL TABLE '/tmp/export.csv' USING (DELIM ',') AS
SELECT foo.x, bar.y, bar.dt FROM foo, bar WHERE foo.x = bar.x;
A session connected to IBM Netezza using ODBC, JDBC, or OLE DB from a client
system can import and export data by using a remote transient external table,
which is defined by using the REMOTESOURCE option in the USING clause.
For example, the following SQL statement loads data from a file on a Windows
system into a TEMP table on Netezza by using an ODBC connection.
CREATE TEMP TABLE mydata AS SELECT cust_id, upper(cust_name) as name
from external 'c:\customer\data.csv' (cust_id integer, cust_name
varchar(100)) USING (DELIM ',' REMOTESOURCE 'ODBC');
Remote external table loads work by sending the contents of a file from the client
system to the Netezza server where the data is then parsed. This method
minimizes CPU usage on the client system during a remote external table load.
double precision
char(n)              salary               “Character strings” on page 2-9 and “Column constraint rules for empty strings” on page 2-9.
time                 23:00:01
time with time zone  01:15:33 -05         “Time with time zone” on page 2-11.
timestamp            2002-02-04 01:15:33  “Timestamp” on page 2-12.
The syntax of fixed-point values is the same as the syntax of integer values with
the addition of an optional decimal point, which can occur anywhere from before
the first decimal digit to after the last decimal digit.
The optional decimal point can be followed by zero or more decimal digits, if there
is at least one decimal digit before the decimal point; followed by one or more
decimal digits if there are no decimal digits before the decimal point.
You can also specify a comma as a separator by using it in place of the decimal point.
Precision Representation
P ≤ 9 4 bytes, signed
9 < P ≤ 18 8 bytes, signed
18 < P ≤ 36 16 bytes, signed
Note: Because the fixed-point data type is an exact data type, when there are too
many digits that follow the decimal point, the system does not round the number.
Related concepts:
“Decimal delimiter examples” on page A-3
The syntax of floating-point values is the same as the syntax of fixed-point values
augmented by an optional trailing exponent specification.
The optional decimal point can be followed by zero or more decimal digits, if there
is at least one decimal digit before the decimal point; followed by one or more
decimal digits if there are no decimal digits before the decimal point.
You can also specify a comma as a separator by using it in place of the decimal point.
Character strings
Char(n)/nchar(n) are character strings of length n. Varchar(n)/nvarchar(n) are
variable-length character strings of maximum length n. Valid characters are in the
range of ASCII values 32 - 255.
varchar/nvarchar: an empty input field is loaded as a zero-length string.
Bool, Date, Int (1,2,4,8), Numeric(), Float (4,8), Time, Timestamp, Timetz:
v If the column allows nulls, an empty input field is loaded as NULL.
v If the column is declared NOT NULL, an empty input field causes an ERROR.
If the record contains fewer data values than the actual columns defined in the
schema of the table, the system writes an error to the nzlog file and discards the
record. To override this behavior, use the -fillRecord option, which applies to the
entire load operation.
The -fillRecord option tells the system to use a null value in place of any missing
fields. You can use this option if the columns whose values are missing allow
nulls. If these columns are defined as not null, the system writes an error to the
nzlog file and discards the record. You must resolve this conflict by changing the
schema to allow null values or modifying the data file to include a valid non-null
value.
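For example, assuming a hypothetical table whose trailing columns allow nulls,
an nzload invocation might enable the option as follows:
nzload -db dev -t target_tbl -df data.txt -delim '|' -fillRecord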
Related concepts:
“CREATE EXTERNAL TABLE command syntax” on page 2-3
Related reference:
“The NullValue option” on page 3-10
You can also specify a comma as a separator in time data types by using it in
place of the decimal point.
Related concepts:
“Decimal delimiter examples” on page A-3
Time
The IBM Netezza appliance time is an exact, 8-byte data type stored internally as a
signed integer that represents the number of microseconds since midnight.
The system accepts both 24 hour and 12 hour a.m. and p.m. time values. You can
specify the format with the -timeStyle option. The default is the 24-hour format.
The time options have the following formats. The delimited examples use the
default time delimiter, which is a colon (:).
v 12-hour delimited HH:MM:SS.FFF [AM | PM] (such as 10:12 PM, or
1:02:46.12345 AM)
v 12-hour undelimited HHMMSS.FFF [AM | PM] (such as 1012 PM or
010246.12345 PM)
v 24-hour delimited HH:MM:SS.FFF (such as 19:15 or 1:15:00.1234)
v 24-hour undelimited HHMMSS.FFF (such as 1915 or 010246.12345)
Syntax
<time> ( '+' | '-' ) <digit> [ <digit> [ ':' <digit> [ <digit> ] ] ]
The input format of time with time zone value is identical to that of simple time
followed by a trailing signed offset from Coordinated Universal Time (UTC,
formerly Greenwich Mean Time GMT). The time section must conform to the
-timeStyle and -timeDelim in effect during the nzload job.
You must specify a signed, time-zone hour, whereas the time-zone minute is
optional. If you use the minute, separate it with a colon (the default timeDelim
character).
Errors
Syntax
timestamp <date> <time>
The input format of a timestamp value is a date value followed by a time value.
You can have optional spaces between the date and the time. The date section
must conform to the -dateStyle and -dateDelim in effect during the load job.
Errors
Restrictions
The following restrictions and considerations are for use with external tables:
v Always consider your source and target systems, and whether the data is
properly formatted for loading.
v To insert and drop an external table, use the INSERT and DROP commands.
v You cannot delete, truncate, or update an external table. After creating an
external table, you can alter and drop the table definition. (Dropping an external
table deletes the table definition, but it does not delete the data file that is
associated with the table.) You can select the rows in the table and insert rows
into the table (following a table truncation).
v While you cannot select from more than one external table at a time in a query
or subquery, you can move data from one external table to another, such as
using SELECT and INSERT. The system displays an error if you incorrectly
specify multiple external tables in a SQL query, or if you reference the same
external table more than once in a query:
ERROR: Multiple external table references in a query not allowed
To reference more than one external table in a query, load the data into a
non-external table and specify that table in the query.
v You cannot use a union operation that involves two or more external tables.
v Using the nzbackup command to back up external tables backs up the schema
but not the data.
v Host-side operations, such as selects and rowsetlimit user and group property
interactions, are not supported for compressed external tables.
v The DecimalDelim option is not supported for compressed external tables.
v There is a maximum limit of 300 concurrent loads for multiple loads.
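For example, to combine data from two external tables (names hypothetical), the
workaround is to stage one of them into a normal table first, so that each query
references at most one external table:
CREATE TEMP TABLE stage_a AS SELECT * FROM ext_a;
SELECT s.x, b.y FROM stage_a s, ext_b b WHERE s.x = b.x;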
Related concepts:
“External table usage” on page 2-2
Before you reload an external table, verify that the destination table in the database
is empty or that it does not contain the rows in the external table that you are
about to reload. If the destination table contains the rows contained in the external
table, problems might occur. These problems can also occur if you accidentally
reload the external table more than once.
For example, loading a text-format external table into a destination table that
contains the same data creates duplicate data in the database. The rows will have
unique row IDs, but the data is duplicated. To fix this problem, you would need to
delete the duplicate rows or truncate the database table and reload the external
table again (but only once).
If you load a compressed binary format external table into a destination table that
has the same rows, you create duplicate rows with duplicate row IDs in the
database table. The system restores the rows by using the same row IDs saved in
the compressed binary format file.
Duplicate row IDs can cause incorrect query results and can lead to problems in
the database. You can check for duplicate rowIDs by using the rowid keyword as
follows:
SELECT rowid FROM employee_table GROUP BY rowid HAVING count(rowid)>1;
If the query returns multiple rows that share the row ID, truncate the database
table and reload the external table (but only once).
After you load data from an external table into a user table, run GENERATE
STATISTICS to update the statistics for the user table. This improves the
performance of queries that run against that table.
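For example, after loading into a hypothetical table named employee_table:
GENERATE STATISTICS ON employee_table;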
Note: The best method to verify that the load processing has been successful is to
review the errors, if any, in the nzlog and nzbad files. Check these files occasionally
during and after the load operations.
The following table lists the external table options, their values, and data types.
The sections after the table describe each option. In the Valid formats column, Text
indicates the text-delimited format and Fixed is the fixed-length format. In the
Data type column, enumeration indicates that the system accepts a specified set of
quoted or unquoted string values.
Table 3-1. External table options
Option        Valid formats  Values                Default         Unload Y/N  Data type
BoolStyle     Text, Fixed    1_0/T_F/Y_N...        NULL, 1_0       Y           enumeration
Compress      Text, Fixed    True/False            False           Y           boolean
CRinString    Text, Fixed    True/False            NULL, False     Y           boolean
CtrlChars     Text, Fixed    True/False            NULL, False     N           boolean
DataObject    Text, Fixed    Existing file path    No default      Y           file name
DateDelim     Text, Fixed    1 byte                NULL, "-"       Y           string
DateStyle     Text, Fixed    YMD/MDY/DMY...        NULL, YMD       Y           enumeration
DecimalDelim  Text, Fixed    1 byte                '.'             Y           string
Delimiter     Text           1 byte                NULL, "|"       Y           string
Encoding      Text           Internal/Latin9/Utf8  NULL, Internal  Y           enumeration
EscapeChar    Text           1 byte                NULL            Y           string
FillRecord    Text           True/False            NULL, False     N           boolean
Format        Text, Fixed    Text/Internal/Fixed   Text            Y           enumeration
IgnoreZero    Text           True/False            NULL, False     N           boolean
IncludeHeader Text           True/False            NULL, False     N           boolean
Option details
The following sections describe each of the external table load options.
The following table lists the boolean styles and their values.
Table 3-2. Boolean values
Style name Value
1_0 1 or 0
T_F T or F
Y_N Y or N
YES_NO YES or NO
TRUE_FALSE TRUE or FALSE
The default style is 1_0. The values can be specified in mixed case, so you could
specify a value of true, True, TRUE, or tRuE.
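For example, a hypothetical external table whose data file stores boolean values
as YES and NO might be defined as follows:
CREATE EXTERNAL TABLE ext_flags (id integer, active boolean)
USING (DATAOBJECT ('/tmp/flags.txt') DELIMITER ',' BOOLSTYLE 'YES_NO');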
If you specify the YES_NO option on the command line, the system assumes that the
data in the boolean field is in the form yes or no. If the data is any of the other
The valid values are true or on, false or off. The default is false. This option can
be true only if the format is set to 'internal'.
Acceptable values are true or false, on or off. Do not put quotation marks around
the value.
v False is the default value and treats all CR or CRLF as end-of-record.
v True accepts unescaped CR in char/varchar fields (LF becomes only end of row).
You must escape NULL, CR, and LF characters. Acceptable values are true or
false, on or off. The default is false. Do not insert quotation marks around the
value.
Note: This option is different for fixed-length format. For more information, see
“Fixed-length option changes” on page 6-2.
Related concepts:
“Format options” on page 6-2
You must specify a value for the data object path name. There is no default value
for the external table data object. When the RemoteSource option is not set (or set
to empty string), this path must be an absolute path and not a relative path. The
file name must be a valid UTF-8 string.
v For loads, this file must be an existing file with read permission for the OS user
that initiates the load.
v For unloads, the parent directory of this file must have read and write
permissions for the OS user that initiates the unload, and the data file is
overwritten if it exists. Typically, the unloads are owned by the nz user, so the
nz user must have permission to read and write files in the target path.
As a best practice, the external table locations should not be within the /nz
directory or its subdirectories because the data object files might accidentally
Starting in Release 7.1.0.1, the admin user can specify and manage the locations on
the IBM Netezza host where users can store the external table data object files.
Users who have the Manage System privilege can also manage the locations for the
external table object files.
Note: When you change or restrict the external table locations, the restrictions
apply only to the new external tables that are created on the system. Any existing
external tables continue to use their current data object path name.
You use the SHOW EXTERNAL TABLE LOCATION command to display the
current table locations. By default, data objects can be created in any of the paths
on the Netezza host that are accessible by the nz user account.
TESTDB.ADMIN(ADMIN)=> SHOW EXTERNAL TABLE LOCATION;
ALLOWDIRECTORY
----------------
*
(1 row)
The asterisk indicates that there are no restrictions on the locations for the external
table object files.
To restrict the locations for the external table data objects, the admin user or any
privileged database user can add and remove table locations using the following
steps.
1. Connect to a Netezza database as the admin user or any database user with
Manage System privilege.
2. Use the SHOW EXTERNAL TABLE LOCATION command to review the
current table location path names.
3. Delete the '*' wildcard location to remove access to all the paths that the nz
user can access.
TESTDB.ADMIN(ADMIN)=> REMOVE EXTERNAL TABLE LOCATION '*';
REMOVE EXTERNAL TABLE LOCATION
4. Add the locations where the external table objects are allowed using the ADD
EXTERNAL TABLE LOCATION command. Any new external tables created on
the system must be stored in a permitted directory.
TESTDB.ADMIN(ADMIN)=> ADD EXTERNAL TABLE LOCATION '/export/home/nz/ext_tbl';
ADD EXTERNAL TABLE LOCATION
TESTDB.ADMIN(ADMIN)=> ADD EXTERNAL TABLE LOCATION '/tmp/ext_tbl';
ADD EXTERNAL TABLE LOCATION
The locations and the object file must exist on the system and be accessible by
the nz user account before you can insert to or read from the external table.
After you specify the external table locations, you can use the SHOW EXTERNAL
TABLE LOCATION command to review the list of supported table locations. After
you restrict the external table locations, the restrictions apply when you create new
external tables. Any existing external tables continue to use their specified data
object locations.
When a user creates an external table and specifies a data object path that is not
part of the allowed location list, the command fails with an error:
When a user creates an external table and specifies a data object path that is in the
allowed locations list, but the nz user does not have read or write access to the file,
the CREATE EXTERNAL TABLE command succeeds, but commands to insert data
to the table will fail with a permission error:
TESTDB.ADMIN(ADMIN)=> CREATE EXTERNAL TABLE my_ext_tbl SAMEAS tbl_retail
USING (DATAOBJECT ('/tmp/ext_tbl'));
CREATE EXTERNAL TABLE
TESTDB.ADMIN(ADMIN)=> INSERT INTO my_ext_tbl VALUES (1,2,3,4);
ERROR: /tmp/ext_tbl : Permission denied
The default is a dash '-' for all dateStyle types except MonDY[2], where the
default is ' ' (space). This option is a single-byte string.
v If you specify the option as an empty string, which means that there is no
delimiter between the date components, you must specify days and months as
two-digit numbers. Single-digit months and days are not supported.
v With MonDY or MonDY2, the default dateDelim option is space.
v With days and months less than 10, use either one or two digits, or a space
followed by a single digit.
v With the dateDelim option as a space, the system allows a comma after the day.
v With any component (day, month, year) as zero, or any day/month
inconsistency, such as August 32 or February 30, the system returns an error.
Note: If you are not using delimiters, the date is determined as in the following
example for June 12, 2009: 06122009
The possible values for the DateStyle option are shown in the following table. The
example shows how the date 21 March 2014 would be represented without a date
delimiter.
Table 3-4. DateStyle
Value Description Example
YMD 4-digit year, 2-digit month, 2-digit day. This is the default. 20140321
DMY 2-digit day, 2-digit month, 4-digit year. 21032014
The 4-digit years are in the range 0001 - 9999. There is no provision for years
before 0001 CE or after 9999 CE.
For example:
v In a control file, to specify the date format MM-DD-YY (for example, 03-21-14),
set datestyle to MDY2 and datedelim to '-'.
v In the command line, if the data file jan-01.data contains records in the
following format (the date format is shown in bold):
14255932|30/06/2002|20238|20127|40662|157|
because the date value uses the DD/MM/YYYY format, load that file by
specifying the following nzload command:
nzload -t agg_month -df jan-01.data -delim '|' -dateStyle DMY -dateDelim '/'
The default is the pipe character '|'. You can specify characters in the 7-bit ASCII
range by using either a quoted value (for example: delimiter ’|’) or by its
unquoted decimal number (delimiter 124). To specify a byte value above 127, use
the decimal number. This option is a single-byte string. This option is not
supported for fixed-length format.
The system processes an input row by identifying the successive fields within that
row. A single character field delimiter separates adjacent fields. The lack of a field
To use a character other than a 7-bit-ASCII character as a delimiter, make sure that
you specify it as a decimal or hex number. Do not specify a character literal, which
can result in errors from encoding transformation. For example, to use the hex
value 0xe9 as a delimiter (which is é in Latin9), use -d 0xe9 as the value. Do not
use -d 'é'.
Note: When you are using the nzload command, you can enter escape characters
on the command line, such as \b. If you use the CREATE EXTERNAL TABLE
command, the only special character that you can specify is \t ("\t").
Use the nzconvert command to convert character encoding before loading data
from external tables, if necessary.
Although efficient, this representation has the drawback that string fields cannot
contain instances of the field delimiter. In addition, one value typically becomes
inexpressible because you use it to convey the absence of any value (that is, that
the column is null).
One solution is to use an escape character for the delimiter. For example, the
following command line demonstrates the use the escapeChar option.
nzload -escapeChar '\' -nullValue 'NULL' -delim '|'
v |NULL| is null input field
v |\NULL| is a non-null input field that contains the text NULL
v |\|| is a non-null input field that contains the single character |
v |\\| is a non-null input field that contains the single character \
The system expects one input field for every column in the schema of a target
table, and rejects a row with fewer fields. If you specify the fillRecord option, the
system allows omitting one or more trailing (rightmost) fields if all corresponding
columns can be null.
The default is false. If true, the command accepts binary value zeros in input fields
and discards them.
By default, the setting is false to omit the table column names from the file. Set the
variable to true to add the column names as header values to the external table
file.
For example, a time value such as 12:34:00 or 12:34 is unloaded to the external
table in the format 12:34:00. The default is false.
Note: This option is not supported for fixed-length format, and is only for
unloading.
Note: This option is required for and used only with the fixed-length format. For
more information, see “Fixed-length only options” on page 6-2.
Related concepts:
“Format options” on page 6-2
The default value is '/tmp'. When users run remote loads from Windows clients
(through ODBC/JDBC), the default output directory is mapped to "C:\". The
directory name must be a valid UTF-8 string.
The default value is 1. This default causes the system to commit a load only if it
contains no errors. A maxErrors value n (where n is greater than 1) allows the first
n-1 row rejections to be recoverable errors, not including the number of rows
processed in the skipped row range.
This option is different for fixed-length format. For more information, see
“Fixed-length option changes” on page 6-2.
Related concepts:
After processing a row (whether inserted, skipped or rejected), the system uses
these guidelines to look for another input row:
v If you did not specify the maxRows option, the system attempts to locate the next
input row.
v If you specified the maxRows option and the input row counter is equal to the
maxRows count, the system ends the load and commits all inserted records, not
including the rows processed in the skipped row range. Otherwise, the system
attempts to locate the next input row.
You can specify a value such as a space (' ') or any string up to four characters.
Conceptually a field contains either a value or an indication that there is no value.
The system provides some flexibility for how you indicate that a field contains no
value.
The system determines the type of a field and whether it is null by inspecting the
corresponding column declaration:
v If there is no value, the system sets the corresponding value in the candidate
binary record to null.
v If you declared the target column “not null,” then an absence of a value is an
error.
v If a field does not indicate null, the system assumes that it contains a value. The
system analyzes the contents of that field, converts its textual input
representation to binary, and sets the corresponding value in the candidate
binary record to that value.
Related concepts:
“Column constraint rules for empty strings” on page 2-9
The system recognizes a quoted value when the first non-space character is the
quotation character specified in the quotedValue option. If the first non-space
character is not the specified quotation character, then the system handles it
according to the normal rules. In particular, leading or trailing spaces in string
fields are considered part of the value of the string.
Unlike the escapeChar option, the quotedValue option cannot force the system
to accept the nullValue token as a valid non-null input value. The system
overhead for processing quoted-value syntax is much greater than for the
default unquoted syntax. In addition, except for strings that contain three or
more field delimiters that would need to be escaped and no embedded quotation
marks, using the quotedValue option results in more bytes of input data than
the escapeChar option. When you have a choice, use unquoted syntax.
If you expect all values in all input fields (string or otherwise) to be
uniformly enclosed in quotation marks, use the requireQuotes option to cause
the system to enforce this usage. The requireQuotes option reduces parsing
overhead and provides extra robustness.
Note: This option is used only with the fixed-length format. For more information,
see “Fixed-length only options” on page 6-2.
Related concepts:
“Format options” on page 6-2
Note: This option is used only with the fixed-length format. For more information,
see “Fixed-length only options” on page 6-2.
Related concepts:
“Format options” on page 6-2
External tables created with the remote source value set to ODBC, JDBC, or
OLE-DB are usable only through clients of the corresponding type. External
tables created with the remote source not set (or set to an empty string) are
usable from any client; the source data file path is assumed to be on the IBM
Netezza host, even if the load or unload is initiated remotely from a
different host.
The nzsql command does not support remote loads or unloads to external tables.
You can only create external tables remotely. The command supports loads and
unloads locally on the host.
This option is automatically set to ODBC if the host name option is set to
anything other than localhost or the reserved IP address (127.0.0.1).
The default is false. If set to true, the quoted value must be set to YES, SINGLE, or
DOUBLE.
The default is 0 (none). After the system has a candidate binary record from an
input row, it determines whether to insert that record into the target table:
v If you did not specify this option, the system inserts every record.
v If you specified this option and the input row counter is less than or equal to
the skipRows count, the system discards the candidate binary record (skipped).
Otherwise, the system inserts the record.
Note: If you use the skipRows option, the system skips that number of rows, and
then begins the count for the maxErrors option, the maxRows option, or both if you
specify them.
This option cannot be used to skip 'header' rows in a data file, because even
the skipped rows are parsed first; the data in the header rows must therefore
be valid with respect to the external table definition.
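The interplay of the skipRows, maxRows, and maxErrors options described above can be sketched as a simple counter model. This is a simplified interpretation of the documented behavior, not the product implementation; the function and parameter names are invented for illustration.

```python
def simulate_load(rows, is_valid, skip_rows=0, max_rows=None, max_errors=1):
    """Model of skipRows/maxRows/maxErrors interaction.

    Returns (inserted, rejected, committed). Skipped rows are parsed
    but discarded, and the maxErrors/maxRows counts begin only after
    the skipped row range, as the note above describes.
    """
    inserted = rejected = processed = 0
    for n, row in enumerate(rows, start=1):
        if n <= skip_rows:
            continue                      # skipped: parsed but discarded
        processed += 1
        if is_valid(row):
            inserted += 1
        else:
            rejected += 1
            if rejected >= max_errors:    # nth error aborts; no commit
                return inserted, rejected, False
        if max_rows is not None and processed >= max_rows:
            break                         # maxRows reached: end the load
    return inserted, rejected, True       # commit all inserted records
```

For example, with the default maxErrors of 1, the first rejected row aborts the load, while skipping past a bad row with skipRows lets the load commit.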
This option can be helpful for testing purposes. If you set this option to a
maximum value, you can validate that the data file is correct before loading the
rows into a user table.
The system checks syntax and range errors. If an error occurs, the system discards
the record to the nzbad file and logs an error with the record number in nzlog file.
Note: This option is not supported for fixed-length format. It is also referred to as
the TimeExtraZeros option.
When a string is larger than its declared storage size, you can use this option to
define how to process records with those strings.
v A value of True causes the system to truncate any string value that exceeds its
declared char/varchar storage.
v A value of False causes the system to report an error when a string exceeds its
declared storage. This is the default behavior.
The following table provides some examples of date ranges and their
corresponding input values.
Table 3-5. The -y2Base option

Wanted range   1900...1999    1923...2022    1976...2075    2000...2999
Option         -y2Base 1900   -y2Base 1923   -y2Base 1976   -y2Base 2000

Y2 input
00             1900           2000           2000           2000
01             1901           2001           2001           2001
02             1902           2002           2002           2002
...
24             1924           1924           2024           2024
25             1925           1925           2025           2025
...
76             1976           1976           1976           2076
77             1977           1977           1977           2077
...
98             1998           1998           1998           2098
99             1999           1999           1999           2099
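The mapping in Table 3-5 suggests a simple rule: a two-digit year is placed in the 100-year window that begins at the -y2Base value. The following sketch is an inference from the table, not the product source.

```python
def expand_y2(two_digit_year, y2base):
    """Expand a two-digit year into the 100-year window starting at
    y2base, matching the behavior shown in Table 3-5 (inferred rule)."""
    century_start = y2base - (y2base % 100)
    year = century_start + two_digit_year
    if year < y2base:
        year += 100   # before the window start: move up one century
    return year
```

For example, with -y2Base 1923, the input 00 expands to 2000 while 24 expands to 1924, as in the second column of the table.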
Row Counts
The system uses a line-oriented input format where one line of text is an input
row. It operates by isolating successive rows in the input stream. For each new
row, the system increments a row counter (starting at 1) and analyzes the contents
of the row.
If a row contains no errors, the system converts the row into a candidate binary
record.
Bad rows
When the system encounters an error processing a row, it stops analyzing the row,
appends the row to the bad rows file, writes a supporting diagnostic message to
the nzlog file that describes the position and nature of the error, and increments a
rejected rows counter.
The <CR><CR> or <LF><LF> pairs are not valid end-of-line sequences. Instead
each pair encloses an empty row that contains no values. The system considers
such an empty row valid only if you specified the fillRecord option, and you
specified that every column in the target table is able to be set to null.
Note: It is an error for a row to contain more fields than the number of columns in
the target table.
Note: An empty field or a field that contains only spaces can represent a legitimate
string value, but can never be a legitimate non-string value.
The system uses the following rules based on whether the field is a string field:
v For a string field, all characters from the beginning of the field to the
terminating delimiter or end of row sequence contribute to the value of the field.
v For a non-string field, the system skips any leading spaces, interprets or converts
the contents of the field, and skips any trailing spaces.
The string and non-string distinction also affects the details of how a field indicates
that it is null. For more information, see “Handle the absence of a value.”
Related concepts:
“Handle the absence of a value”
Load continuation
If you enable load continuation with the allowReplay option, or set the session
variable LOAD_REPLAY_REGION to true, the system ensures that a simple load
that uses external tables continues after the system is paused and resumed. You do
not have to stop and resubmit the load.
If no value is specified for the allowReplay option, or the option setting is 0, the
system defaults to the Postgres default setting. If the setting is a valid non-zero
number, it specifies the number of allowable restarts.
When you enable load continuation, the system holds the records to be sent to the
SPU in the replay region in host memory. After the system sends the data in this
region to the SPUs, it does a partial commit that forces all the unwritten data to
the disks and allows the system to reuse the data buffers of the reload region. If a
SPU reboots or resets, the system rolls back to the last partial commit, and
reprocesses and resends the data.
Note: This option has a performance impact which depends on the speed of the
incoming data. In addition, system memory is used for the data buffering that
enables loads to be continued. When the buffer memory is exhausted, new loads
will pend until needed memory becomes available.
Load continuation cannot operate on any table that has one or more materialized
views in an active state. Before enabling load continuation, suspend the associated
materialized views. You can suspend active materialized views either through the
NzAdmin tool or by issuing the ALTER VIEWS command. Sample syntax for
ALTER VIEWS follows.
ALTER VIEWS ON <table> MATERIALIZE SUSPEND
After loading is completed, you can update and activate the materialized views for
the table. Sample syntax follows.
ALTER VIEWS ON <table> MATERIALIZE REFRESH
Related concepts:
“Session variables” on page 3-17
Legal characters
Input is composed of the printing characters (bytes 33-255), space (byte 32),
horizontal tab (byte 9), line feed (byte 10), and carriage return (byte 13). By default
you cannot use the nonprinting control characters.
The following table lists the end-of-row sequences and the control characters
that are allowed with the different nzload options.
Table 3-6. Control characters and end-of-record characters
Options specified            End of record         Control characters allowed within strings
(neither)                    lf, cr, crlf, lfcr    ht
-crInString                  lf                    ht, cr
-ctrlChars                   lf, cr, crlf, lfcr    1-8, ht, 11, 12, 14-31
-crInString and -ctrlChars   lf                    1-8, ht, 11, 12, cr, 14-31
Session variables
You can use the following session variables as nzload options.
v LOAD_REPLAY_REGION
Specifies that a simple load that uses external tables can continue after
the system is paused and resumed.
v MAX_QUERY_RESTARTS
Specifies the number of restarts allowed for load continuation.
v LOAD_LOG_MAX_FILESIZE
Specifies the maximum allowed size in MB for the log file.
v NZ_SCHEMA
For Netezza systems that run Release 7.0.3 or later that support multiple
schemas in a database, specifies the schema into which data should be loaded.
The NZ_SCHEMA value can be helpful for users who must support loads into
release 7.0.3 multiple-schema systems as well as systems that do not support
multiple schemas. You can keep the same nzload commands, set NZ_SCHEMA when
connecting to multiple-schema systems, and unset the variable before using the
load scripts against single-schema systems.
Related concepts:
“Load continuation” on page 3-16
The nzload command processes command-line load options to send queries to the
host to create an external table definition, run the insert/select query to load data,
and when the load completes, drop the external table.
The nzload command connects to a database with a database user name and
password, like any other IBM Netezza client application. The user name specifies
an account with a particular set of privileges, and the system uses this account to
verify access.
Note: Although you can use the nzload command as an ODBC client application, it
neither requires nor works with a Data Source Name (DSN). It bypasses the
ODBC Driver Manager and connects directly to the Netezza ODBC driver.
If you issue the nzload command from the IBM Netezza appliance host itself, and
the user who issues the command is not the user nz, you must do one of the
following tasks:
v Ensure that the GROUP nz has Read permissions for the data file to load.
v Use the -host option with the nzload command (such as nzload -host
<hostname>).
For more information, see the IBM Netezza System Administrator’s Guide.
The nzload command conducts all insertions into the target table within a single
transaction. The nzload command commits the transaction at the end of the job,
provided it does not detect any unrecoverable errors. Only after the nzload
command commits the transaction are the newly loaded records visible to other
queries. If a load error occurs while multiple concurrent loads are running,
only the load with the error fails to complete.
If the nzload command cannot commit the transaction, the storage resources
that were allocated for the inserted rows remain allocated. To free up this
disk space, use the nzreclaim command on the
specific table or database. For more information about the nzreclaim command, see
the IBM Netezza System Administrator’s Guide.
If you cancel an nzload job, the nzload command does not commit the transaction.
Program invocation
The nzload command is a command-line program that accepts input values from
multiple sources. The precedence order of the input values is as follows:
v Command line
v Control file. Without a control file, you can only do one load at a time, and the
use of a control file allows multiple loads. For more information about the
control file, see “The nzload control file” on page 4-5.
v Environmental variables (only used for user, password, database, and host)
v Built-in defaults
Option names are not case-sensitive. Every option has a standard name for use in
either the command line or the control file. For more information about the input
values, see Table 4-1 on page 4-3.
Many options include a token argument, which you can enclose in either single or
double quotation marks. The nzload command ignores letter casing for the
characters in option token arguments (for example -boolStyle YES_NO is equivalent
to -boolStyle yes_no).
Note: You must use quotation marks for options that require a punctuation
character as a token, and use an escape character if quotation marks are part of the
argument.
You can query the system view _v_load_status to display details about the
progress of loads that are running on the system. The view shows information
about the load operations such as the table name, database name, data file, number
of processed rows, and number of rejected rows. More information has been added
to the load log file for performance-related details about the load operation.
A sample view query follows. The output has been reformatted to fit the page
width.
SYSTEM.ADMIN(ADMIN)=> select * from _v_load_status;
PLANID | DATABASENAME | TABLENAME | SCHEMANAME | USERNAME | BYTESPROCESSED | ROWSINSERTED |
--------+--------------+-----------+------------+----------+----------------+--------------+
ROWSREJECTED | BYTESDOWNLOADED
-------------+-----------------
2932 | SYSTEM | LINEITEM | ADMIN | ADMIN | 142606226 | 1136931 |
4 | 131911476
(1 row)
Syntax
Inputs
The nzload command uses many of the options for external tables as described in
Chapter 3, “External table load options,” on page 3-1. Particular options for nzload
are shown in the following table.
Table 4-1. The nzload options
Option Description
-cf filename Specifies the control file. For more information, see “The nzload
control file” on page 4-5.
-df filename Specifies the data file to load. If you do not specify a data
file, the system reads from stdin and uses the special token
<stdin> as the file path string. Corresponds to the DataObject
external table option.
-lf filename Specifies the log file name. If the file exists, append to it.
-bf filename Specifies the bad or rejected rows file name (overwrite if the file
exists).
-outputDir dir Specifies the output directory for the log and bad or rejected rows
files. Corresponds to the LogDir external table option.
-logFileSize n Session variable (LOAD_LOG_MAX_FILESIZE) that specifies the
size (in MB) of the log and bad or rejected rows files. The default is
2000 MB (2 GB).
-fileBufSize n Specifies the chunk size (MB for fileBufSize, or bytes for
-fileBufByteSize n fileBufByteSize) at which to read the data from the source
file. Corresponds to the SocketBufSize external table option.
-allowReplay Session variables (LOAD_REPLAY_REGION and
-allowReplay n MAX_QUERY_RESTARTS) that specify the number of query
restarts for load continuation if a SPU is reset or failed over.
If n is a valid non-zero number, it specifies the number of
allowable query restarts. If no value is specified, or n is 0,
the system defaults to the Postgres default setting.
Additional options
You can also use control files to run multiple concurrent loads, with different
options, in one command instance. Each load is a different transaction. If a load
fails, the command continues to run the other load operations in the file. The
command displays messages to inform you of the success or failure of each load
operation.
Options
Within a control file, you can specify any of the valid options for an external table.
You can specify the long format name of the option or the short format name.
The options in a control file are not case-sensitive. For example, you can specify
the option in letter formats such as database, DataBase, Database, or DATABASE.
Syntax
The syntax for using a control file is as follows, where each sequence can be
another load:
DATAFILE <filename>
{
[<option name> <option value>]*
}
For example, the following control file options load the data from customer.dat
into the customer table:
DATAFILE /home/operation/data/customer.dat
{
Database dev
schema sales
TableName customer
}
If you save the control file contents as a text file (named cust_control.txt in this
example) you can specify it by using the nzload command as follows:
nzload -cf /home/nz/sample/cust_control.txt
Load session of table ’CUSTOMER’ completed successfully
When you use the nzload command, you cannot specify both the -cf and -df
options in the same command. You can load from a specified data file, or load
from a control file, but not both in one command.
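As an illustration of the control-file grammar and the case-insensitive option names described above, the following is a minimal parser sketch. It is a hypothetical helper written for this example, not part of nzload.

```python
import re

def parse_control_file(text):
    """Parse DATAFILE { option value } blocks.

    Returns a list of (filename, {option: value}) pairs. Option names
    are lowercased because they are not case-sensitive. Sketch only.
    """
    loads = []
    for match in re.finditer(r'DATAFILE\s+(\S+)\s*\{([^}]*)\}', text):
        filename, body = match.group(1), match.group(2)
        options = {}
        for line in body.splitlines():
            line = line.strip()
            if line:
                name, _, value = line.partition(' ')
                options[name.lower()] = value.strip()
        loads.append((filename, options))
    return loads
```

Applied to the customer.dat example above, the sketch yields one load whose options map contains database, schema, and tablename entries regardless of the letter casing used in the file.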
The following control file options define two data sets to load. The options can
vary for each data set. The examples show the schema option, but if your Netezza
system supports only one schema in a database, omit that option.
DATAFILE /home/operation/data/customer.dat
{
Database dev
Schema sales
TableName customer
Delimiter ’|’
Logfile operation.log
Badfile customer.bad
}
DATAFILE /home/imports/data/inventory.dat
{
Database dev
Schema ops
TableName inventory
}
If you save these control file contents as a text file (named import_def.txt in this
example) you can specify it by using the nzload command as follows:
nzload -cf /home/nz/sample/import_def.txt
Load session of table ’CUSTOMER’ completed successfully
Load session of table ’INVENTORY’ completed successfully
Related concepts:
“Error reporting” on page B-4
The IncludeZeroSeconds external table option is used only for unloads. The
two-digit format of the DateStyle external table option is not supported for
unloads.
Related concepts:
Chapter 3, “External table load options,” on page 3-1
You can unload data to any of the supported Netezza clients, which include
Windows, Linux, Solaris, AIX®, and HP-UX. You can unload all data types
(including Unicode) and file types (uncompressed and compressed formats).
Note: You must be the admin user or have the Create External Table
administration privilege to create an external table, and you must have permission
to write to the path of the data object.
Format background
All data is a series of byte-sequences and has an associated data type, used as a
conceptual or abstract attribute of the data. Without an associated data type, a
byte-sequence can be interpreted in numerous ways.
A single data type can be represented in different forms. For example, an integer
data type can be represented or stored in various types of binary format, or in
human-readable text or character format (typically ASCII). Similarly, dates, times,
and other data types have multiple representations used by different programs,
languages, and environments. At some point, though, these data types must be
represented in readable form, so users can do something with the data. Data for
loading into the data warehouse typically is presented in either delimited format or
fixed-length format by using either ASCII or UTF-8.
Loading fixed-format data into the database requires that you define the target
data type for the field and the location within the record.
Not all fields in a fixed-length format file need to be loaded; fields can be
skipped by using the ‘filler’ specification. The order of fields in the data
file must match the order of columns in the target table, or an external table
definition must be defined that specifies the order of the fields as database
columns. An external table definition in combination with an insert-select
statement allows the field order to be changed.
Unknown or null values are typically represented by known data patterns, which
are classified as representing null. The IBM Netezza system identifies and acts on
these values.
Data attributes
The typical data attributes in fixed-length format files are as follows:
Data type
The data at a given offset in a record is always of the same type.
Representation
The representation is constant, and each field has a fixed width. Data
within a field is always presented in the same way. Certain items such as
radix points, time separators, and date delimiters are always at the same
place and are typically implied, rather than being present in the data file.
Format options
The following sections describe the format options that are valid only for
fixed-length data loads, and those that have a different behavior when used for
fixed-length format loads.
The following external table options are valid only for the fixed-length format.
Table 6-1. Fixed-length only options
Option Meaning
RecordLength The length of the entire record, including null-indicator bytes (if any) and
excluding record-delimiter (if any).
v No default value
v Constant integer
RecordDelim The row/record delimiter.
v Default is ‘\n’ (newline). The value is interpreted literally, so entering
‘\n’ matches a backslash followed by the letter n, not a newline character.
v The end-of-record delimiter is entered between single quotation marks.
The end-of-record indicator can be up to a maximum 8 bytes long.
v The omission of a record delimiter is indicated by side-by-side single
quotation marks.
Layout Mandatory for fixed-length format. Used to define the location of the fields
of the input record.
v No default value
v Comma-separated zone definitions within braces
The following external table options have a different meaning for the fixed-length
format:
Unsupported options
The following external table options are not supported for fixed-length format, and
if set, result in an error:
v Encoding
v FillRecord
v IgnoreZero
v TimeExtraZeros
v TruncString
v AdjustDistZeroInt
v IncludeZeroSeconds
v Delimiter
v EscapeChar
v QuotedValue
v RequireQuotes
The following existing external table options work as default values for zone
definitions:
NullValue
Default for the ‘NULLIF’ clause of all zones.
DateStyle, DateDelim, TimeStyle, TimeDelim, BoolStyle
Default for zone style for corresponding date, time, and boolean zones.
Related reference:
“The CRinString option” on page 3-3
“The CtrlChars option” on page 3-3
“The MaxErrors option” on page 3-9
“The Layout option” on page 3-9
“The RecordDelim option” on page 3-11
“The RecordLength option” on page 3-11
Layout definitions
Layout is an ordered collection of zone (field) definitions, and is a required option
for fixed-length format. Each zone definition is made up of mutually exclusive
(non-overlapping) clauses.
These clauses must be in the following order, although some are optional and can
be empty:
Use-type
Indicates whether a zone is a normal (data) zone or a filler zone. For data
zones, this value is omitted. Filler zones can only be specified in bytes.
Other use-types exist, but are not used for fixed-length format data.
Name The name of the zone. Duplicate zone names are not allowed. This
definition is not currently used, but is typically provided to identify the
field.
Type Defines the zone type. When not specified, type is defaulted to the
corresponding type of a table column. Filler-zones have no default type.
Valid values are as follows:
v CHAR
v VARCHAR
v NCHAR
v NVARCHAR
v INT1
v INT2
v INT4
v INT8
v INT
v UINT1
v UINT2
v UINT4
v UINT8
v UINT
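Conceptually, a layout slices each record into zones by byte width, with filler zones consumed but discarded. The following is a simplified sketch of that idea; the tuple-based layout is an invention for illustration, and the real Layout option uses the clause syntax described above.

```python
def split_fixed_record(record, layout):
    """Slice one fixed-length record into named zones by byte width.

    `layout` is a list of (name, width, is_filler) tuples, a simplified
    stand-in for the Layout option's zone definitions. Filler zones are
    consumed but not returned. Illustrative sketch only.
    """
    zones, offset = {}, 0
    for name, width, is_filler in layout:
        value = record[offset:offset + width]
        if len(value) != width:
            raise ValueError('record shorter than layout')
        if not is_filler:
            zones[name] = value
        offset += width
    return zones
```

For example, a layout of an 8-byte date, a 1-byte filler, and a 6-byte time splits a 15-byte record into two named zones, skipping the filler byte.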
End-of-record
When fixed-format records end in a newline character, no action is required.
Newline is the default end-of-record delimiter. When there is no record separator,
use single quotation marks side by side, as in the following example:
RecordDelim ’’
Record Length
Record Length is optional, but can provide feedback that the format definition has
the correct length. This excludes the end-of-record delimiter. The following is an
example:
Recordlength NNN
Skip fields
The preferred method is to declare the field as a filler zone to indicate that
it is being skipped, as in the following example:
“filler fld_name char(4) bytes 4”
Temporal values
Temporal values in fixed-length format files often omit delimiters. The following
table shows clauses that load dates, times, and timestamps without delimiters.
Table 6-4. Temporal values
Data type Value Format clause
Date 20101231 date1 date YMD’’ bytes 8
Time 231559 time1 time(6) 24hour ’’ bytes 6
Timestamp 20101231231559 stamp1 timestamp(6) 24hour ’’ bytes 14
to_timestamp(col,’YYYYMMDDHH24MISSUS’)
Date 2010-12-31 date2 date YMD’-’ bytes 10
Time 23.15.59 time2 time(6) 24hour ’.’ bytes 8
Timestamp 2010-12-31 23:15:59 tms2 timestamp(6) YMD ’-’ 24hour ’:’ bytes 19
Timestamp 2010-12-31 23:15:59.0001 tms3 timestamp(6) YMD ’-’ 24hour ’:’ bytes 26
Timetz 12:30:45+03:00 Tz1 TIMETZ(6) 24HOUR ’:’ bytes 14
Timetz 123045+0300 (Load as char(11) then use insert-select)
(substring(col1,1,2)||’:’||
substring(col1,3,2)||’:’||substring(col1,5,5)||’:’||
substring(col1,10,2))::timetz
Numeric values
Logical values
Fixed-length format files typically use ‘magic’ values to represent nulls. Adding a
nullif clause to any specification allows the column to be checked for null. A nullif
clause has the following parts:
v The keyword “nullif”
v The column reference
v The test expression
In addition to &=, which requires the string to exactly match, the nullif
clause also supports &&=, which allows substring matching. This is useful in
cases where the string might occur anywhere in a field with space padding. For
example, nullif &&=’N’ matches the expressions “ N “, “N “, and “ N”.
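The two nullif comparison operators can be sketched as follows. This is an interpretation of the text above, not the product implementation; the function name is hypothetical.

```python
def is_null(field, token, substring=False):
    """Sketch of the two nullif comparison operators:
    &=  : whole field must match the token, with surrounding spaces skipped
    &&= : token may occur anywhere in the space-padded field
    """
    if substring:                  # &&= behavior
        return token in field
    return field.strip() == token  # &= behavior
```

With this sketch, ' 123 ' matches '123' under &=, while ' N ' matches 'N' only under &&= when padding places the token mid-field.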
Reference examples
The following table shows examples for references.
Table A-1. Reference examples
Reference Meaning
BYTES &2 Error: only an internal @ reference is allowed in a
length-clause (in any format or zone-type).
BYTES @ Error: a length-clause cannot refer to itself.
NULLIF & = ’123’ Self-reference (no number) is valid in a null-clause.
Matches ’123’, ’ 123 ’, and so on, in text format, with
spaces skipped.
NULLIF @ = ’2000-01-01’ Valid for date zones.
The following is an example of the IBM Netezza external table definition for this
data:
CREATE EXTERNAL TABLE sample_ext (
Col01 DATE ,
Col09 BOOL ,
/* Skipped col10 */
Col11 TIMESTAMP,
Col26 Char(12),
Col38 Char(10),
Col48 Char(2),
Col50 Int4,
Col56 CHAR(10),
Col67 CHAR(3) /* Numeric(3,2) cannot be loaded directly */
)
USING (
dataobject(’/home/test/sample.fixed’)
logdir ’/home/test’
recordlength 72 /* does not include end of record delimiter */
recorddelim ’
’ /* This is actually a newline between the single quotes, really not needed as newline is default */
format ’fixed’
layout (
Col01 DATE YMD ’’ bytes 8 nullif &=’99991231’,
Col09 BOOL Y_N bytes 1 nullif &=’ ’,
FILLER Char(1) Bytes 1, /* was col10 space */
Col11 TIMESTAMP YMD ’’ 24HOUR ’’ bytes 14 nullif &=’99991231000000’,
Col26 CHAR(15) bytes 15 nullif &=’ ’, /* 15 spaces */
Col38 CHAR(13) bytes 13 nullif &=’****NULL*****’ ,
Col48 CHAR(2) bytes 2 nullif &=’##’ ,
Col50 INT4 bytes 5 nullif &=’00000’ ,
Col56 CHAR(10) bytes 10 nullif &=’0000000000’,
Col67 CHAR(3) bytes 3 /* We cannot load this directly, so we use an insert-select */
) /* end layout */
); /* end external table definition. */
INSERT INTO sampleTable
SELECT
Col01,
Col09,
Col11,
Col26,
Col38,
Col48,
Col50,
Col56,
(Col67/100)::numeric(3,2) as Col67 /* convert char to numeric(3,2) */
FROM sample_ext ;
# Set these variables for your environment before running:
#   NZSQL="nzsql -d test -c"
#   DIR=<data file directory>   LOGDIR=<log directory>
function CreateDb()
{
nzsql -c "create database test"
}
function CleanUp()
{
$NZSQL "drop table textDelim_tbl"
$NZSQL "drop table textFixed_tbl"
}
function CreateTable()
{
$NZSQL "create table textDelim_tbl(col1 int, col2 char(10), col3 date)"
$NZSQL "create table textFixed_tbl(col1 int, col2 char(10), col3 date)"
}
function CreateDataFile()
{
: # (data file creation omitted from this example)
}
function LoadData()
{
# nzload using text format
nzload -t textDelim_tbl -df $DIR/delimData -db test -outputDir $LOGDIR \
  -delim '|' -dateStyle MDY -dateDelim '/'
# nzload using fixed format
nzload -t textFixed_tbl -df $DIR/fixedData -db test -outputDir $LOGDIR \
  -format fixed -layout "col1 int bytes 1, col2 char(10) bytes 10, col3 date YMD '-' bytes 10"
}
function UnloadData()
{
$NZSQL "insert into textDelim_tbl select * from external '$DIR/delimData' using (Delimiter '|' DateStyle 'MDY' DateDelim '/');"
}
CreateDb
CleanUp
CreateTable
CreateDataFile
LoadData
UnloadData
Look at the file. Is the file on an NFS mounted directory? If so, remember that
your load performance is constrained by the speed of the network.
Stage the data before you move it to a production system. Create a table, load it,
validate it, then use the ALTER TABLE command to move the tables to production.
For example:
ALTER TABLE loan RENAME TO loan_lastmonth;
ALTER TABLE loan_stage RENAME TO loan;
If you are running multiple nzload jobs to load into a table, use unique names for
your nzbad files. The nzload command generates the default file name by using the
<tablename>.<databasename> format and appending the extension .nzbad. Loading
into the data table of the dev database uses the default file name data.dev.nzbad
for the nzbad file. Each instance of the nzload command overwrites the existing
file. If you want to preserve the bad records that are stored in this file, use the -bf
<file name> option to specify a different name for each nzload job.
Note: If your default system case is uppercase, the system displays lowercase table
names as uppercase in nzlog files, for example, DATA.DEV.nzlog and
DATA.DEV.nzbad.
Run the Linux top command on the host to monitor CPU resources. Consider
running more loads concurrently if resources are available.
Troubleshoot
If you see the error message Too many data fields for table, use the Linux
command head -1 on the data file to get the first row, which might contain the
extracted column names. Compare these names to the DDL of the table you
created and see whether their physical positions match.
If you see the error message Data type mismatch on column 5, use the Linux
command cut -d^ -f5 inputfile | more to look at the individual data values in
the source file. Compare these values to the DDL of the table you created and
see whether their physical positions match.
Handle exceptions
Repeat the load on the -bf file. If there are many exceptions, fix them and
re-extract from the source system. If they are few, use a text editor to change data.
To make large substitutions, use the Linux sed or awk commands.
Count the number of rows and select min/max/sum of each numeric and
min/max of each date column in the table.
Generate statistics
Remember to run the GENERATE STATISTICS command on your tables or
databases after you load new data.
Test performance
If your data is evenly distributed, you should see peak loading performance of at
least 75 percent CPU utilization on the host. You can monitor utilization by
running the Linux top command during the load. Lower CPU utilization means
either that the data is skewed, so that not all SPUs share the workload, or
that the parser is waiting for data.
If your input data is skewed, that is, all records are being sent to only a few SPUs,
those SPUs become the performance bottleneck.
If your CPU utilization is less than 75 percent and the data is evenly distributed,
you might have a streaming problem:
v If the load is running from the local host, determine the source of the data.
Look for other concurrent database activities such as activities that are
SPU-to-SPU broadcast intensive or SPU disk I/O intensive.
v If the data is not locally staged or is on a SAN or NFS mount, determine
whether the bottleneck is the remote source of the data or the network.
The performance of the IBM Netezza appliance system depends on the number
of SPUs. If, however, data is being streamed across an external network, then the
performance is limited by the speed of the network.
Test the network by using the FTP command to send a file between the source
and the local host, and measure the transfer rate. Under optimal conditions, a
Gig-E network transfers at a rate of approximately 1000 Mb/second (about 125
MB/second, or roughly 450 GB/hour).
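Those figures are straight unit conversions, which can be checked with shell arithmetic:

```shell
# 1000 Mb/s divided by 8 bits per byte gives MB/s; multiplied by
# 3600 seconds and divided by 1000 MB per GB, it gives GB per hour.
echo $((1000 / 8))                  # prints 125
echo $((1000 / 8 * 3600 / 1000))    # prints 450
```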
The nzload command writes high-level errors to the terminal (stderr), nzlog file,
and nzbad file. You can specify the nzlog and nzbad file names on the command
line or by using a control file.
The system appends to the nzlog file for every nzload command that loads the
same table into the same database. The system names the nzlog file after the
table and database names, with the extension .nzlog. So, in this example, the
file name is member_profile.dev.nzlog.
There is also a member_profile.dev.nzbad file that contains any record that had an
error. The system overwrites this file each time you run the nzload command for
the same table and database name (unlike the behavior of the nzlog file).
Specify options
The following table shows how to enter the external table options when you use
the nzload command-line method, in a control file, or as part of a SQL command.
Table C-1. Specify external table options

Option            Command line        Control file      SQL
AllowReplay       -allowreplay        Not applicable    LOAD_REPLAY_REGION, MAX_QUERY_RESTARTS
BadFile           -bf                 badfile           Not applicable
BoolStyle         -boolStyle          boolstyle         BOOLSTYLE
Compress          -compress           compress          COMPRESS
CRinString        -crInString         crinstring        CRINSTRING
CtrlChars         -ctrlChars          ctrlchars         CTRLCHARS
Database          -db                 database          Not applicable
Datafile          -df                 datafile          DATAOBJECT
DateDelim         -dateDelim          datedelim         DATEDELIM
DateStyle         -dateStyle          datestyle         DATESTYLE
DecimalDelim      -decimalDelim       decimaldelim      DECIMALDELIM
Delimiter         -delim              delim             DELIM
FileBufByteSize   -fileBufByteSize    filebufbytesize
SuspendMviews     -suspendMviews      Not applicable    Not applicable
Tablename         -t                  tablename         Not applicable
TimeDelim         -timeDelim          timedelim         TIMEDELIM
TimeRoundNanos    -timeRoundNanos     timeroundnanos    TIMEROUNDNANOS
TimeExtraZeros    -timeExtraZeros     timeextrazeros    TIMEEXTRAZEROS
TimeStyle         -timeStyle          timestyle         TIMESTYLE
TruncString       -truncString        truncstring       TRUNCSTRING
Y2Base            -y2Base             y2base            Y2BASE
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
This information was developed for products and services offered in the U.S.A.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
IBM Corporation
Software Interoperability Coordinator, Department 49XA
3605 Highway 52 N
Rochester, MN 55901
U.S.A.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject
to change without notice. Dealer prices may vary.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, ibm.com® and Netezza are trademarks or registered trademarks
of International Business Machines Corporation in the United States, other
countries, or both. If these and other IBM trademarked terms are marked on their
first occurrence in this information with a trademark symbol (® or ™), these
symbols indicate U.S. registered or common law trademarks owned by IBM at the
time this information was published. Such trademarks may also be registered or
common law trademarks in other countries. A current list of IBM trademarks is
available on the web at "Copyright and trademark information" at
http://www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other
names may be trademarks of their respective owners.
Red Hat is a trademark or registered trademark of Red Hat, Inc. in the United
States and/or other countries.
D-CC, D-C++, Diab+, FastJ, pSOS+, SingleStep, Tornado, VxWorks, Wind River,
and the Wind River logo are trademarks, registered trademarks, or service marks
of Wind River Systems, Inc. Tornado patent pending.
APC and the APC logo are trademarks or registered trademarks of American
Power Conversion Corporation.
Index
Special characters
  _v_load_status 4-2

A
  allowreplay 4-3, C-1
  attributes
    data 6-1

B
  backup
    external tables 2-3
    nzload B-2
  badfile 4-5, C-1
  best practices
    external tables 2-12
  boolstyle C-1
  boolStyle option 3-2

C
  character strings
    char 2-9
    varchar 2-9
  column constraint 2-9
  compress C-1
  compress option 3-3
  compressed binary 1-2
  concurrency 4-1
  control file
    using 4-5
  counting rows 3-14
  CREATE EXTERNAL TABLE
    dropping an external table 2-13
    examples 2-13
  CRinString option 3-3
  crinstring C-1
  ctrlchars C-1
  ctrlChars option 3-3

D
  data attributes 6-1
  data file 4-5
  data loading
    components 1-1
    formats 1-2
  data types
    fixed-point 2-7
    floating-point 2-8
    for external tables 2-5
    integer 2-6
    temporal 2-10
  database C-1
  datafile C-1
  dataObject option 3-3
  datedelim C-1
  dateDelim option 3-5
  datestyle C-1
  DateStyle option 3-5
  decimaldelim 1-2, 3-1, 6-4
  decimalDelim option 3-6
  delim C-1
  delimiter C-1
  delimiter option 3-6

E
  encoding C-1
  encoding option 3-7
  errors
    nzload handling B-4
  escape C-1
  escapechar C-1
  escapeChar option 3-8
  external table
    about 2-1
    backup and restore 2-3
    displaying information 2-1
    examples 2-13
    options 3-1
    parsing 2-3
    privileges 2-1
    restrictions 2-12

F
  filebufbytesize C-1
  fileBufByteSize 4-3
  filebufsize C-1
  fileBufSize 4-3
  fillrecord C-1
  fillRecord option 3-8
  fixed point 2-7
  floating point 2-8
  format C-1
    background 6-1
  format option 3-8
  format options 6-2

H
  host 4-3, C-1

I
  ignorezero C-1
  ignoreZero option 3-8
  includeHeader option 3-9
  includezeroseconds C-1
  includeZeroSeconds option 3-9

L
  layout 3-9
    definitions 6-4
  legal characters 3-16
  load continuation 3-16
  LOAD_LOG_MAX_FILESIZE 4-3
  LOAD_REPLAY_REGION 4-3, C-1
  load. See also nzload 4-1
  loading, success tips B-1
  log file 4-5
  log files 2-2
  logdir C-1
  logDir option 3-9
  logfile
    size C-1
  logfilesize 4-3

M
  matching input fields 3-15
  MAX_QUERY_RESTARTS 4-3, C-1
  maxerrors C-1
  maxErrors option 3-9
  maxrows 3-10, C-1

N
  NOT NULL 3-10
  nullvalue C-1
  nullValue option 3-10
  nz_migrate 1-1
  nzload command
    backup B-2
    boolStyle 4-2
    error reporting B-4
    examples A-1
    inputs 4-3
    privileges 4-1
    program invocation 4-2
    specifying arguments A-1
    syntax 4-3
    tips B-1
    uncommitted jobs 4-1
    using 4-1
  nzreclaim command
    nzload jobs 4-1

O
  options
    external table 3-1
    fixed-length only 6-2
    fixed-length unsupported 6-2
    names C-1
    processing 3-2
  outputdir 4-3, C-1

P
  pipes A-1
  privileges, load session 4-1

R
  recdelim C-1
  recordDelim option 3-11
  recordLength option 3-11
  references
    examples A-3
  remote client, unloading 5-1
  remotesource C-1
  remoteSource option 3-12
  requirequotes C-1
  requireQuotes option 3-12
  rows
    bad 3-15
    counting 3-14
    input 3-15
    skipping 3-12

S
  session variables 3-17
  skiprows C-1
  skipRows option 3-12
  socketbufsize C-1
  socketBufsize option 3-13
  SQL grammar A-5
  string versus non-string 3-15
  supported data types
    for external tables 2-5
  suspendmviews C-1

T
  tablename C-1
  temporal data types 2-10
  textfixed
    using 6-1
  timedelim C-1
  timeDelim option 3-13
  timeextrazeros C-1
  timeroundnanos C-1
  timeRoundNanos option 3-13
  timestamp 2-12
  timestyle C-1
  timeStyle option 3-13
  timetz 2-11
  transactions, nzload jobs 4-1
  troubleshooting B-1
  truncString option 3-13

U
  unload
    options 5-1
  unloading
    examples 2-15
    remote client 5-1

Y
  y2base C-1
  y2Base option 3-14

Z
  zone definition, default values 6-2
  zones
    default values 6-2
Printed in USA