Updating The Master Reports List
Create a Job which has a Row Generator stage linking to an ODBC or DB2 stage. Change the parameter in the Row Generator stage to output zero records. Write the SQL to update the MRL table as 'After SQL' in the output ODBC or DB2 stage.
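For example, the 'After SQL' statement might look like the following. The table and column names here are illustrative only; substitute the actual MRL table and columns used in your project.

    -- Hypothetical MRL update; column names are examples only
    UPDATE MRL
    SET    LAST_RUN_DATE = CURRENT DATE,
           RUN_STATUS    = 'COMPLETE'
    WHERE  REPORT_ID     = 'RPT_1500'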
Running SQL Commands in Datastage
Create a Job which has a Row Generator stage linking to an ODBC or DB2 stage. Change the parameter in the Row Generator stage to output zero records. Write the SQL as 'After SQL' in the output ODBC or DB2 stage.
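Any standalone SQL statement the target database accepts can be run this way. As an illustrative (hypothetical) example, a housekeeping statement such as the following could be entered in the 'After SQL' property; the table name is a placeholder:

    -- Hypothetical housekeeping statement; table name is an example only
    DELETE FROM STAGING_AUDIT_LOG
    WHERE  LOAD_DATE < CURRENT DATE - 30 DAYS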
Emptying a Dataset
Create a Job which has a Row Generator stage linking to a Dataset stage. Change the parameter in the Row Generator stage to output zero records. Set the parameter for the Dataset stage to 'Override'.
How To Generate a Key for An Output Table
The preferred way is to use the Surrogate Key stage. However, this can be impractical if Datastage is not solely responsible for the allocation of key values (for example, if you are sharing a file with another Job Sequence, or Data Integrator is also generating keys). In this case you have to ensure that the keys you use are not duplicated by the parallel processing engine, so you can't just increment by one. Determine the existing highest key on the table using a Sort stage, and then read one value of the existing key. Then use the following derivation in a Transformer stage (a worked example follows):

    highestexistingkey + ((@INROWNUM - 1) * @NUMPARTITIONS + @PARTITIONNUM + 1)
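As a worked illustration, suppose the highest existing key is 5000 and the job runs on 4 partitions (@NUMPARTITIONS = 4). The derivation then yields:

    Partition 0, row 1:  5000 + ((1 - 1) * 4 + 0 + 1) = 5001
    Partition 1, row 1:  5000 + ((1 - 1) * 4 + 1 + 1) = 5002
    Partition 3, row 1:  5000 + ((1 - 1) * 4 + 3 + 1) = 5004
    Partition 0, row 2:  5000 + ((2 - 1) * 4 + 0 + 1) = 5005

Each partition generates values in its own stride, so no two partitions can produce the same key.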
Importing Data from Lotus Notes
(from a third-party external forum entry on the internet) .... Well, it's been a while ... we've had 'fun' working with IBM Support to try and solve this. We could actually run a sample OSH file at the command line and connect to a Lotus Notes file, but not through the GUI. Finally, we are now able to import metadata from a Lotus Notes 'database' (i.e. a .nsf file) and read from it. Besides the DataStage service, the dsrpc service must be set to "Allow service to interact with Desktop". Once the change was made and the server rebooted, we were able to connect. I can read from a Lotus Notes database in a parallel job, but get multiple ODBC-related errors if I try to write to a Lotus Notes database in a parallel job. I've tried with both the ODBC Connector and with the ODBC Enterprise stage. However, in a Server job, I can write to a Lotus Notes database without warning or error. Go figure. In any event, if anyone else is 'fortunate' enough to have to use DataStage against Lotus Notes, hope this has been of help.
Resolve deadlocks when writing to AAZD
Situation 1 - Inserting new records into an AAZD table. Write mode = INSERT.
Potential Issue: The job hangs indefinitely or runs into a deadlock if you use the default settings of the DB2 UDB stage or ODBC Connector stage. By default, DataStage commits 2000 rows per transaction. When two parallel loaders each try to commit 2000 rows to a table in AAZD, they run into each other and cause deadlocks/hanging jobs.
Resolution: Change the commit rate to 1000 rows per transaction. To do this, set the following properties on the DB2 UDB/ODBC Connector stages:
Record Count = 1000
Array Size = 1000
Situation 2 - Emptying a table, then inserting new records. Write mode = DELETE THEN INSERT.
Potential Issue: The job dies! It appears two parallel processes will try to delete the table at the same time, causing one process to fail and hence the whole job to fail.
Resolution:
1. Change Write Mode to INSERT.
2. In the "Before SQL" property, manually enter a SQL statement to delete the contents of the table (see the example below).
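For example, the "Before SQL" property could contain a statement like the following; the schema and table name are placeholders for the actual AAZD target table:

    -- Placeholder schema/table; runs once before the parallel insert begins
    DELETE FROM AAZD_SCHEMA.TARGET_TABLE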
How To Fix Error Selecting from log file RT_LOG
Problem (Abstract)
When trying to view a job log in DataStage Director, an error similar to the following is received:
Error selecting from log file RT_LOGnn
Command was: SSELECT RT_LOGnn WITH @ID LIKE '1N0N' COUNT.SUP
Error was: Internal data error. File '<path_to>/RT_LOGnn/<filename>': Computed blink of 0xnnnn does not match expected blink of 0xnnnn! Detected within group starting at address 0xnnnnnn!

Cause
The error message received indicates that the log file for the job is corrupted.

Resolving the problem
To resolve the problem, the log file must be discarded and recreated. There are a number of ways to accomplish this:
Reimport the job and overwrite the existing job.
Rename the job or do a File --> Save As in Designer. You can then delete the old job and rename your new job back to the original name.
Manually recreate the RT_LOGxxx file.
Note: The job with the associated RT_LOG may need to be re-compiled before it can be run again.
When attempting to overwrite the job, or when deleting the original, you may get the error message: Cannot get exclusive access to the job. If this happens, make sure that the job is not open in the Designer or Director client tools. If the error still appears, take the following steps (an example command sequence is shown after the steps):
1. Log in to DataStage Administrator and go to the "Projects" tab. Select the project in question and click the "Command" button.
2. Type "LIST.READU EVERY" in the command line. In the results, look for active record locks under the "Item Id" column for RT_LOGnn (where nn is the description number as seen in the error).
3. Write down the inode number and user number for the lock in question.
4. Enter the command "CHDIR <path to DSEngine directory>". This is necessary because the subsequent UNLOCK command lives in the UV account.
5. Enter "UNLOCK INODE inode# USER user# ALL". This will unlock the lock held on this file (inode#) and held by this user (user#) for file locks, group locks and record locks.
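As an illustration, a session in the Administrator command window might look like the following. The DSEngine path, inode number and user number are examples only; use the values reported on your own system.

    LIST.READU EVERY
    CHDIR /opt/IBM/InformationServer/Server/DSEngine
    UNLOCK INODE 48123 USER 9 ALL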
Steps to manually recreate the RT_LOGxxx file (an example is shown after the steps):
1. Log on to DS Administrator and go to the Projects tab. Select the project in question and click on the "Command" button.
2. Delete the existing damaged log file with the following command, where xxx is the description number as seen in the error: DELETE.FILE DATA RT_LOGxxx
3. Create a new log file with the following command, where xxx is the description number as seen in the error: CREATE.FILE DATA RT_LOGxxx 30
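For example, if the description number shown in the error were 123 (an illustrative value only), the commands entered in the Administrator command window would be:

    DELETE.FILE DATA RT_LOG123
    CREATE.FILE DATA RT_LOG123 30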
Datastage QA Checklist
Job Sequences should use the same numbering as the current DI jobs, e.g. JS_1500_TPA
For every job that will be scheduled, there should be a Job Sequence which permits emails on success or failure and restarts to occur. The Job Sequence should contain at least an error handling step and a corresponding email to IBIS upon failure, and an email to the IBIS mailbox upon success. If the Job Sequence fails, the email should have a subject of JS_nnnn_XXXXXX has failed. If the Job Sequence completes, the email should have a subject of JS_nnnn_XXXXXX has completed successfully.
Job includes QA checks for record count checks, missing keys, etc.
Job includes null handling.
Job updates pd_data_sources.
Job keeps at least one _old version of the tables.
Job is automated (little or no intervention required).
Temporary text files, Datasets and Filesets are to be kept in GSASpace for the duration of the job, but must be removed at the end of each run as we get charged for it.