IBM DataStage V11.5.x Database Transaction Processing
Score: 24/30
Correction 5 (2 doubts)
Select one:
A. Upsert
B. Insert
C. Modify
D. Join
The Connector stage offers two types of insert plus update (sometimes called
an "upsert") statements. For the Insert then update write mode, the insert
statement is executed first. If the insert fails with a unique-constraint
violation, the update statement is executed. Update then insert is the
reverse. Choose Insert then update or Update then insert based on the
expected ratio of inserts to updates. For example, if you expect more
updates than inserts, choose Update then insert.
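As an illustration, for an Insert then update write mode against a hypothetical TARGET_TBL, the generated statement pair might look like the following sketch (the table and column names are invented; ORCHESTRATE.column is the placeholder syntax the connector uses for input-link fields):

    INSERT INTO TARGET_TBL (CUST_ID, CUST_NAME)
    VALUES (ORCHESTRATE.CUST_ID, ORCHESTRATE.CUST_NAME);

    UPDATE TARGET_TBL
    SET CUST_NAME = ORCHESTRATE.CUST_NAME
    WHERE CUST_ID = ORCHESTRATE.CUST_ID;

The connector runs the INSERT first and falls back to the UPDATE only for rows whose INSERT fails with a unique-constraint violation.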
Select one:
A. Array count
B. Upsert
C. Array size
D. Index
This slide lists how commits are handled. The Autocommit mode property
can be set to On or Off. When On, a commit is made after each write
operation, which slows performance considerably. When Off, commits are
made after the number of records specified by the Record count property
has been processed. This is the default setting.
The Array size property specifies how many rows are bundled into each
physical write operation to the database. The higher the number, the better
the performance, because fewer physical writes occur.
Record count must be a multiple of Array size.
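For example, with a hypothetical Array size of 500 and Record count of 2000, each physical write sends 500 rows to the database, and a commit is issued after every 2000 rows, that is, after every fourth array write.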
4. How does the DB2 Fast Loader mark a table in the event of a failure during a bulk loading operation?
A. Locked
B. Quiesced exclusive
C. Roll Back
In the event of a failure during a DB2 bulk loading operation, the DB2 Fast
Loader marks the table inaccessible (quiesced exclusive or load pending
state).
You can reset the target table to the normal mode by rerunning the job with
the Cleanup on failure option turned on.
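Outside of DataStage, a DBA could also clear these states manually. A sketch of the DB2 commands involved, assuming a placeholder table MYSCHEMA.MYTABLE:

    -- Clear the load pending state by terminating the failed load
    db2 "LOAD FROM /dev/null OF DEL TERMINATE INTO MYSCHEMA.MYTABLE"

    -- Release a quiesced exclusive state on the table's table spaces
    db2 "QUIESCE TABLESPACES FOR TABLE MYSCHEMA.MYTABLE RESET"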
Select one:
A. Connector stage
C. Repository object
D. Parameter sets
6. With a Teradata database, what does the sessionsperplayer property indicate?
Select one:
[SessionsPerPlayer = <num_sessions>]
[RequestedSessions = <num_requested>]
The sessions per player value [SessionsPerPlayer] determines the number of connections that
each player has to Teradata. Indirectly, it also determines the number of players. The
selected value should be such that (sessions per player * number of nodes * number of
players per node) equals the total requested sessions. The default value is 2.
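As a worked example, assume a 4-node Teradata system with 2 players per node and the default SessionsPerPlayer of 2 (the node and player counts are invented for illustration):

    2 sessions per player * 4 nodes * 2 players per node = 16 sessions

so RequestedSessions would be set to 16.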
A. DataStage
B. Transformer stage
C. Connector stage
D. Universe stage
E. UniData stage
The server tool palette organizes stages into the following groups:
Database. These stages read or write data that is contained in a database.
File. These stages read or write data that is contained in a file or set of files.
Processing. These stages perform some processing on the data that is passed
through them.
Stage: Oracle 7 Load (see Oracle databases)
Function: Generates control and data files for bulk loading data into a single table in an Oracle database.

Stage: Sybase BCP Load (see Sybase BCP Load stage)
Function: Uses the BCP (Bulk Copy Program) utility to bulk load data into a single table in a Microsoft SQL Server or Sybase database.

Stage: UniData (see IBM UniVerse and UniData)
Function: Reads data from or writes data to a UniData database.

Stage: UniVerse (see IBM UniVerse and UniData)
Function: Reads data from or writes data to a UniVerse database.
A. DSOpenJob
B. WebHDFS
C. HttpFS
D. TFile
E. datatransfer.sasl
F. Hdfs
The File Connector stage can read and write files locally on the Engine system
or on Hadoop HDFS file systems. When a local file system is specified, the
stage functions like a Sequential File stage.
You can use either of two APIs to read and write to HDFS: WebHDFS and
HttpFS.
The stage properties are basically the same regardless of which API is used.
The Big Data File stage can also be used to read and write files on HDFS, but
it is more limited. The main limitation is that the Big Data File stage requires
IBM InfoSphere BigInsights.
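Both APIs expose the same REST path; only the endpoint differs. A sketch, with invented host names and the usual default ports (which vary by Hadoop distribution and version):

    http://namenode-host:50070/webhdfs/v1/user/dsadm/data.txt?op=OPEN   (WebHDFS, served by the NameNode)
    http://httpfs-host:14000/webhdfs/v1/user/dsadm/data.txt?op=OPEN     (HttpFS gateway)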
9. What can be added to Connector stages to specify how rows experiencing SQL errors are handled?
Select one:
A. Balanced Optimization tool
B. Runtime errors
C. Reject links
D. SQL Builder tool
For lineage to continue the flow through databases that are referred to by
different data source names, you must map the data connection object to the
imported database. Lineage uses this information to create relationships
between stages and imported database tables. If this mapping is not made,
the lineage for that job shows the database tables as virtual assets that
might not link correctly to the rest of the lineage flow.
Sometimes you might need to map data connection objects to a database that
cannot be imported because of internal security policies. In this case, you
can identify those data connection objects as the same and select one of
them as the preferred name to display in lineage reports.
Click the Properties tab.
a. Specify the Write mode.
b. In the Table name field, specify the name of the destination table that is used
in the SQL statements that are meant for writing data.
For the write mode, the table must exist, unless you create the table at
runtime by using the Create or Replace table actions. The table name is used to
generate Data Definition Language (DDL) statements. You must specify a
table name if Write mode is set to Bulk Load, the Generate SQL property is
set to Yes, or the Table action property is set to Create, Drop, or Truncate.
c. Specify whether you want SQL statements generated at run time in
the Generate SQL field.
d. In the Enable quoted identifiers field, specify Yes to retain the case of all of
the object names in DDL and DML statements.
The default is No.
e. In the SQL field, specify the appropriate SQL statements.
f. In the Table action field, specify how you want tables to be created, or how
you want rows to be edited or inserted in an existing destination table.
g. In the Before or After SQL field, specify whether an SQL statement runs
before or after data processing.
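As a sketch of how these properties interact, suppose Table action is Create, Generate SQL is Yes, and Enable quoted identifiers is Yes; the generated DDL and DML might resemble the following (schema, table, and column names are invented, and the exact statements depend on the target database):

    CREATE TABLE "DSADM"."CustOrders" ("OrderId" INTEGER, "CustName" VARCHAR(50));

    INSERT INTO "DSADM"."CustOrders" ("OrderId", "CustName")
    VALUES (ORCHESTRATE.OrderId, ORCHESTRATE.CustName);

The quoted identifiers preserve the mixed case of the object names; without them, most databases would fold the names to upper case.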
Select one:
A. Type of information to include with the rejected rows
B. Abort conditions
C. Rejection Conditions
D. ODBC Connector
Select one:
A. Rejection Conditions
B. Abort conditions
C. The type of information to include with the rejected rows
D. ODBC Connector
In the window on the right, you can specify whether to include error
information along with the rejected row. If, for example, you check
ERRORCODE, a column named ERRORCODE is added to each reject row.
This new column contains the SQL error code that occurred.
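For example, on DB2 a duplicate-key insert fails with SQLCODE -803, so a rejected row that entered the stage as (1001, Smith) would arrive on the reject link as (1001, Smith, -803), with the appended ERRORCODE column holding the code. (The row values here are invented, and the error code values depend on the target database.)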
The stage attempted to insert a row into the table, but the insert failed because
a row with a matching key value already existed in the table.
Connector stages contain a utility called SQL Builder that can be used to build the SQL used by the
stage. SQL is built using GUI operations such as drag-and-drop in a canvas area. Using SQL Builder,
you can construct complex SQL statements without knowing how to construct them manually.
Alternatively, this is where you would manually type or paste in an SQL statement.
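For example, dragging the PROJECT and PROJACT tables onto the canvas and joining them on the project number might produce SQL along these lines (the column names are invented for illustration):

    SELECT P.PROJNO, P.PROJNAME, A.ACTNO
    FROM PROJECT P
    INNER JOIN PROJACT A ON P.PROJNO = A.PROJNO
    WHERE A.ACSTDATE > '2015-01-01';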
You can import table definitions from the following data sources:
Assembler files
COBOL files
DCLGen files
ODBC tables
Orchestrate® schema definitions
PL/1 files
Data sources accessed using certain connectivity stages.
Sequential files
Stored procedures
UniData files
UniVerse files
UniVerse tables
Web services WSDL definitions
XML table definitions
When you import metadata from a data source you can choose to save the
connection information that you used to connect to that data source as a data
connection object.
You can choose to do this when performing the following types of metadata import:
Import via connectors
Import via supplementary stages (Import > Table Definitions > Plug-in Metadata
Definitions)
Import via ODBC definitions (Import > Table Definitions > ODBC Table
Definitions)
Import from UniVerse table (Import > Table Definitions > UniVerse Table
Definitions)
Import from UniData® 6 table (Import > Table Definitions > UniData 6 Table
Definitions)
Import from Orchestrate® Schema (Import > Table Definitions > Orchestrate
Schema Definitions)
Select one:
A. No errors are detected using Peek_ActivityRejects
B. The corresponding project records are written
C. All the records are written into the Insert_Project_Activities
D. No errors are detected using PeekProjectRejects
In this example, the top input connector link writes records to the PROJECT database
table. The bottom input connector link writes project activity records to the PROJACT
database table. To guarantee referential integrity between the two tables, the activity
records for a particular project are not written until the corresponding project record
has been written. This can be accomplished by setting the Record ordering property to
All records and specifying that the input link to the PROJECT table is first in the
link order.
If you define a PL/SQL block for a sparse lookup operation, the connector runs the
specified PL/SQL block once for each record on the input link to the Lookup stage.
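A minimal sketch of such a block, assuming an invented CUSTOMERS table and DataStage's ORCHESTRATE.column placeholder syntax for binding the input-link key and the returned column:

    BEGIN
      -- Look up the customer name for the key arriving on the input link
      SELECT CUST_NAME INTO ORCHESTRATE.CUST_NAME
      FROM CUSTOMERS
      WHERE CUST_ID = ORCHESTRATE.CUST_ID;
    END;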